title | uuid | pmc_id | search_term | text
---|---|---|---|---|
Analysis on personnel costs and working time for implementing a more person-centred care approach: a case study with embedded units in a Swedish region | b5bfb2c0-0021-49d8-8bba-ef7d05f79e3f | 10582865 | Patient-Centered Care[mh] | Healthcare systems in many countries experience increasing economic demands, both through the development of new technologies and treatments and through a changing age distribution in the population resulting in more people with multiple chronic conditions. As in many countries, legislators and healthcare organisations in Sweden have put person-centred care (PCC) high on the development agenda, and management-control efforts are increasingly aimed at implementing PCC. PCC acknowledges and endorses every person’s resources, interests and needs, comprising shared responsibility and power, as well as coordinated care and treatment. PCC strives towards a meaningful life, which differs from similar concepts (eg, patient-centred care) that focus on a functional life. It is related to the integrated people-centred health services promoted by the WHO, although without the community perspective embedded in that framework. PCC has been promoted to address patient dissatisfaction with healthcare access and delivery and as a potentially cost-saving or cost-containing measure through more effective use of resources. Several studies suggest cost savings as one argument for PCC. Thus, PCC is expected to both improve care quality and contain costs. However, the knowledge about costs associated with introducing a more PCC approach is limited and scattered. Some studies have included training costs in intervention costs or indicated costs of transferring staff between organisational units, but to the best of our knowledge, its implementation costs have not been reported. Despite the limited knowledge of the costs associated with its implementation, PCC has a growing impact on the healthcare industry in many countries. Overlooking such costs, however, implies that the time added for this implementation is minimal and can be ignored, an assumption that to some extent contradicts previous findings that implementation of PCC was associated with increased job strain. Thus, this study aimed to describe the time and costs used during the implementation of a more PCC approach as part of ordinary practice. As far as we know, the current study is one of few reporting health economic aspects of a more PCC approach in ordinary practice, that is, not as part of an intervention study.
Region Dalarna decided in 2015 to promote a more PCC approach throughout the health system. The implementation process was initiated and managed by the healthcare organisation Region Dalarna as an essential part of continuous quality improvement. The Implementing PCC: Process evaluation of strategies, leadership and health economy (IMPROVE) project was conducted in parallel by university researchers. The process evaluation focused on the implementation process rather than on PCC (the innovation being implemented). This study thus reports on the resources used for implementing a more PCC approach in one Swedish region, using data from a case study with seven embedded units (study protocol available as Supporting information). Previous publications from the IMPROVE project have investigated how patients’ perception of PCC can be measured, how the concept of PCC was perceived by healthcare staff, how the concept of PCC was operationalised by the units, the implementation strategies used, and the congruence of managers’ perceptions and understanding of PCC across organisational levels.
Region Dalarna is situated in the middle of Sweden. The region covers 6% of the Swedish land area and contains 3% of its population. Four hospitals and 25 healthcare clinics provide public healthcare for the region’s population. The Swedish health system provides universal coverage for all legal residents and, on some premises, also for visitors from other parts of the European Union, asylum seekers and undocumented immigrants. The health system is divided by geographic area into 21 regions providing healthcare services and 290 municipalities providing care for the elderly and disabled. Approximately 84% of all health expenditures were tax-based in 2016, with care prioritised based on an ethical platform including the principles of human dignity, needs and solidarity, and cost-effectiveness, in that order.

The innovation

The implementation of more PCC was conducted as part of the region’s work towards efficient healthcare practices, in parallel with its ‘structure and change work’ that included projects related to priority setting and resource allocation in the regional healthcare system. The vision was to put equal emphasis on the patient and professional perspectives throughout the care process. The approach chosen was based on the Gothenburg model of PCC, which has been shown in clinical trials to be cost-effective for several care settings and patient groups. The main feature of the model is its focus on the partnership between the patient and the healthcare provider, built during the cocreation of a written health plan. It has previously been reported that, although some ambiguity remained in their description, core practices related to all three cornerstones of PCC—creating, safeguarding and documenting the partnership—were identified in all the embedded units. Important practices introduced in the units were routines to elucidate the patient narrative during admission or throughout the care pathway, and the use of communication techniques such as motivational interviewing.

The implementation process

As part of the implementation process, changes were made at the region’s micro, meso and macro levels. These changes included the commission of staff in the region’s Department for Development (hereafter called the DD; time committed to the project corresponding to 80% of a full-time employment), as well as a budget for expenses associated with the implementation. The DD staff assigned to lead this process were engaged as operational support provided centrally from the region to the included healthcare units. Among the DD tasks were organising learning seminars and supporting the staff at the healthcare units during the implementation process. The participating healthcare units chose their representatives to participate in the learning seminars. Thereafter, each healthcare unit was expected to manage its own implementation process. There was no joint implementation support programme other than the learning seminars, but the DD could provide support on demand. The implementation strategies used have previously been described as mainly falling into two clusters, that is, ‘train and educate stakeholders’ and ‘develop stakeholder interrelationships’.

The process evaluation

Interventions or programmes directed at changing healthcare provision, such as the implementation of PCC, are often described as complex interventions. Complex interventions are interventions that contain several interacting components.
This interaction can include elements tailored to each participant (eg, patients or healthcare staff), sometimes with varying outcomes and goals. In this study, complexity included differences in the training of participating staff, differences in organisational structures among units and each unit defining on its own how an increase in PCC would be interpreted and implemented. It has been recommended that research into the use and effects of complex interventions address the complexity involved. This can be done through process evaluations that provide an understanding of how the innovation (in this case PCC) is implemented. In a conceptual model for implementation research, Proctor and colleagues elaborated on a number of implementation outcomes relevant for such an evaluation, including economic aspects of implementation. The process evaluation was conducted in the six healthcare units that participated in the first round of learning seminars and in the DD. All units consented to participate in the evaluation. The healthcare units included were specialised in nephrology, geriatric care and rehabilitation, psychiatry, and primary care. Costs related to the implementation included staff and the DD’s costs, the training of staff and any support provided locally in the healthcare units. The study examines resources spent on making this change in healthcare but does not include any control units. Therefore, the study approach is in line with an observational study of a natural experiment in a real-world setting; that is, the research team has no control over the intervention, there is no control condition and the knowledge and availability of data for evaluation are partial.

Data collection

Data for this study were collected through logbooks, recording the date and type of activity, how much time each activity took (in minutes, hours or workdays) and for how many people, as well as information about who was involved. Each unit selected one person in a leading position in the implementation to complete the logbooks, either as each activity was conducted or retrospectively. These persons received the same instructions on how to use the logbooks. However, in order not to influence the choice of strategies, we did not give any guidance regarding taxonomies that could have been used to choose or describe the activities carried out to support the implementation. Reporters were encouraged to report short and often recurring activities (eg, discussions about implementation between colleagues during the workday) as weekly estimates. The persons responsible for the logbooks were encouraged to report on a weekly to monthly basis, but some had difficulties adhering to this recommendation due to a high workload and were instead encouraged to use their calendars on a half-year basis to track their activities. In some instances, representatives from the research group met with the person responsible for the logbook and assisted in filling out the logs. Information from the logbooks was used to identify which activities were perceived by unit personnel to be related to implementing a more PCC approach and to estimate the time used for this implementation. Only time was reported in the logbooks; no equipment or other expenses were tracked.
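To make the logbook record structure concrete, the sketch below shows one way such a record could be represented and converted into person-days. The dataclass and its field names are our own illustration rather than the study's actual form, and it assumes the 8-hour workday used later in the cost calculations.

```python
from dataclasses import dataclass

# Illustrative schema for a single logbook record, based on the fields
# described in the text (date, type of activity, time taken, number of
# people, who was involved). Field names are hypothetical.
@dataclass
class LogbookEntry:
    date: str              # when the activity took place
    activity: str          # free-text description, categorised later
    duration_hours: float  # reported in minutes, hours or workdays
    n_participants: int    # how many staff took part
    involved: str          # who was involved (eg, 'ward nurses, unit manager')

    def person_days(self, hours_per_day: float = 8.0) -> float:
        """Total person-days, using the study's 8-hour workday assumption."""
        return self.duration_hours * self.n_participants / hours_per_day

# Example: a 2-hour team discussion involving 6 staff equals 1.5 person-days
entry = LogbookEntry("2017-03-14", "discussion on PCC routines", 2.0, 6, "ward staff")
print(entry.person_days())  # 1.5
```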
In addition, units were asked to provide suggestions (hereafter called unit-specific measures) for evaluating the economic impact of implementing a more PCC approach ( ) that leaders in these units viewed as important for understanding the changes in practice induced by PCC. Relevant data on these proposed outcomes were collected from each unit retrospectively, and the units were asked to comment on any potential time trends in the data. 10.1136/bmjopen-2023-073829.supp1 Supplementary data Data collection was planned to cover a 3-year study period (June 2016–May 2019), but logbook data continued into the autumn of 2019 for the psychiatric units and the DD due to delays in previously planned activities. Activities conducted before the ethics approval (in 2017) were filled in retrospectively.

Resource use and cost estimation

In analysing implementation programmes, four categories of costs have previously been suggested in the literature: (1) costs for executing implementation strategies, (2) excess costs for service delivery as it changes, (3) opportunity costs to providers and patients and (4) research/development costs. These categories were used retrospectively by the research group to categorise activities reported in the logbooks. In this study, costs for executing implementation strategies were mainly directed towards centrally organised processes (eg, seminars). In contrast, the costs for service delivery included activities the units used in operationalising PCC in ordinary practice. This study did not reflect foregone opportunity costs in the care of patients because data collection was not designed for patient-level follow-up. However, it needs to be acknowledged that all resource use in the health system can potentially have an opportunity cost related to an alternative use of existing resources. Activities reported in logbooks were first categorised inductively to identify less aggregated types of resource use and thereafter deductively according to the above categories. Conversions were made assuming 8-hour workdays and 46 full weeks of work each year. Costs for the resources used were calculated based on the time reported for each activity in the logbooks. The corresponding costs to the health system in 2019 values were calculated by multiplying the time spent by the wage and the related mandatory and negotiated social insurance contribution (37.14%). The mean wage for a nurse employed by a Swedish region was SEK 34 100, which was used as an approximation for the wage of staff in the healthcare units as we seldom knew the distribution of staff categories in logbook recordings (ie, SEK 2440 per working day, after adjusting for holidays and social insurance contributions, calculated as SEK 34 100×1.3714×12 months divided by 46 working weeks and 5 days per week). The approximate wage for staff employed in the DD was SEK 40 000–45 000, as reported by the development leaders, which in the analysis was approximated to SEK 43 000 (resulting in SEK 3077 per working day). Costs for the DD were also calculated using information from the regional budget documentation for this function, including a set budget for each year during the study. The set budget was intended, among other things, to cover 80% of one full-time employment. The budget was further contextualised using the total population of Region Dalarna (approximately 281 000 inhabitants) and the total healthcare spending during the study period. Year-end reports from the region reported costs of SEK 8985–11 106 million during the years 2016–2018.
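The wage-to-cost conversion above can be reproduced directly from the stated assumptions (a 37.14% social insurance contribution and 46 working weeks of 5 eight-hour days); the following is a minimal sketch of that arithmetic, with variable names of our own choosing.

```python
# Reproduces the per-working-day cost figures reported in the text.
SOCIAL_CONTRIBUTION = 1.3714  # wage multiplier for the 37.14% contribution
WEEKS_PER_YEAR = 46           # full working weeks after holiday adjustment
DAYS_PER_WEEK = 5

def cost_per_working_day(monthly_wage_sek: float) -> float:
    """Employer cost per working day (SEK) under the study's assumptions."""
    annual_cost = monthly_wage_sek * SOCIAL_CONTRIBUTION * 12
    return annual_cost / (WEEKS_PER_YEAR * DAYS_PER_WEEK)

print(round(cost_per_working_day(34_100)))  # 2440 -> healthcare unit staff
print(round(cost_per_working_day(43_000)))  # 3077 -> DD staff
```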
Costs were analysed from the health system’s perspective, including all costs to the care organisation, and expressed in 2019 values. No discounting was deemed necessary because costs were only reported descriptively.

Statistical analyses

Time reported in the logbooks was summarised descriptively by type of activity associated with implementing a more PCC approach ( ). The unit-specific outcomes were reported separately for each unit, using graphs to illustrate trends over time. Linear regression was used to indicate trends in the length of stay, while survival analysis was used to examine time to readmission. Where applicable, analyses were adjusted for overcrowding during the initial hospitalisation. All analyses were conducted using Stata Statistical Software, Release 17.0 (StataCorp LLC, College Station, TX). 10.1136/bmjopen-2023-073829.supp2 Supplementary data

Patient and public involvement

This project did not include patient or public involvement in developing the research questions, design, conduct, choice of outcome measures or recruitment.
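The authors conducted the analyses above in Stata 17; purely to illustrate the two analysis types named under statistical analyses (a linear trend in length of stay and a time-to-event analysis of readmission), here is a hypothetical Python analogue using placeholder data, not the study dataset.

```python
import numpy as np
from lifelines import KaplanMeierFitter  # assumes the lifelines package

# Linear trend in mean length of stay across study years (placeholder values)
years = np.array([2016, 2017, 2018, 2019])
mean_los_days = np.array([14.2, 13.5, 13.1, 12.8])
slope, intercept = np.polyfit(years, mean_los_days, deg=1)
print(f"trend: {slope:.2f} days per year")  # negative slope = decreasing stay

# Kaplan-Meier estimate of time to readmission, censored at ~10 months
days_to_readmission = np.array([30, 90, 304, 150, 304, 60])  # placeholder
readmitted = np.array([1, 1, 0, 1, 0, 1])                    # 0 = censored
kmf = KaplanMeierFitter()
kmf.fit(days_to_readmission, event_observed=readmitted)
print(kmf.median_survival_time_)
```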
The time reported in logbooks was between 3 and 13 working days per staff member in the participating units ( ), although the time spent was not equally distributed among the teams in each unit (mean 5.5 working days; median 4 working days). In total, time reported in logbooks from the DD corresponded to 267 full days of work (accounting for work done by the development unit overall, including the time used by the two assigned development leaders) and 95–275 full days of work per unit for the healthcare units, over the 3-year study period. The cumulative distribution of time reported in logbooks indicates that some units reported a workload evenly distributed across years, while others reported a more varied pattern of workload ( ). Activities categorised as being related to the implementation strategies (88% of time reported by the DD, 6%–57% for the other units) included planning and preparatory work for the learning seminars, as well as conducting/participating in those seminars. For the DD, service delivery (2% of their total reported time) was interpreted to include participation in regional decision making, administrative work, reporting to the region and collaboration with other organisations (eg, unions). For the units, it included IT solutions, reporting, educational activities, development of teams and care development (corresponding to 40%–90% of their reported time). Research and development costs comprised interactions with the research team and external collaboration with other regions (10% of the time reported by the DD, 2%–12% for the other units). A more detailed list of the activities reported by the units and the distribution of reported time between activities are reported in eTable 1 ( ). Training of staff was a major part of this implementation programme. For some employees, the training included up to three learning seminars (7–8 hours per seminar, the difference depending on transportation), that is, implementation strategies. In most units, additional training sessions were added (for a selected group or all employees) to facilitate the implementation of a more PCC approach, such as training in communication techniques, that is, service delivery categorised under either care development or team development depending on the type of training chosen ( ). 10.1136/bmjopen-2023-073829.supp3 Supplementary data In , only salary costs are listed, estimated based on the approximate wage per day of a nurse (healthcare units) or of DD staff, respectively. For the healthcare units, salary costs ranged from SEK 231 582 to SEK 669 922. For the DD, the salary costs for activities reported in the logbooks were calculated at SEK 822 633. However, the DD included both personnel (budgeted as 80% of one full-time employment over the whole period, corresponding to 552 working days (80% of full-time over 3 years, with full time being 46 weeks per year)) and resources for training (seminars and workshops), development work and IT resources, as well as internal and external communication. For 2016, 2017, 2018 and 2019, the annual budget was SEK 500 000, SEK 600 000, SEK 600 000 and SEK 880 000, respectively.
Thus, for the work organised by the DD during the period 2016–2019, the total budget was SEK 2 580 000, of which approximately SEK 1 698 342 were salary costs for the 80% employment supporting the units. This makes it clear that the data collection through logbooks (SEK 822 633) did not capture all activities conducted by the DD (ie, the 80% of a full-time employment over the whole study period, SEK 1 698 342). Based on the target population of Region Dalarna, the total cost of the DD (salaries and other expenses) corresponds to SEK 2.30 per citizen per year. Thus, the DD budget corresponded to 0.009% of the total healthcare budget over the studied period (June 2016–May 2019). The approximate exchange rate is SEK 10 ≈ EUR 1 (the mean exchange rate in 2019 was SEK 10.5892 (range SEK 10.1874–10.9056) per EUR 1), which means the total budget for the DD was approximately EUR 258 000 during 2016–2019. Salary costs associated with time reported in logbooks indicated that 23% of the costs for this implementation (SEK 822 633 of a total SEK 3 619 552, ) occurred in the DD. Including the total budgeted funding for the DD (SEK 2 580 000), its proportion increased to 48% of the total cost.

Unit-specific measures

The first geriatric unit and the psychiatric units (combined in ) reported a decrease in the average length of stay among their patient populations during the study period. Further examination of data from the psychiatric units showed that this trend was not explained by overcrowding (results not shown). Time to readmission within the first 10 months after discharge was similar between years in the psychiatric units (combined in ). The apparent increase in length of stay in the psychiatric units ( ) is affected by data availability and how the graph was created, with longer hospitalisations that started before the end date of the data collection continuing into the second half of 2019, while no shorter stays were added. The second geriatric unit reported a similar number of discharges across years (range 675–710) but refrained from providing further unit-specific data due to significant changes in the organisation ( ). The nephrology and primary care units also refrained from delivering the planned outcome data due to other major changes in their work processes conducted during the same period as the implementation of a more PCC approach. Consequently, these data are only slightly related to the change under study. The primary care unit had approximately 14 000 listed patients throughout the study period.

Box 1. Staff turnover and changing methods for prioritising patients: an example from geriatric care and rehabilitation (geriatric unit 2)

Initial contacts to discuss the evaluation of resource use identified several aspects that could be relevant to follow during the development towards a more PCC approach, including work hours, number of patients, work environment follow-up and information that should be accessible through the administrative registers. These aspects were considered especially important due to the shortage of registered nurses. When the project was nearing its end, new contacts were made. None (or very few) of the people involved in the project’s launch remained in the organisation in 2019 due to changing roles or retirement. Concerns were expressed that they had not had time to actively work on person-centredness due to staff shortages and the related downsizing of patient beds.
The reduction in patient beds was described as having a budget for 18 patients but only enough staff to admit 10. They had handled the lack of nurses by changing from a registered nurse and an assistant nurse working in pairs to each nurse working with two assistants and by transferring tasks to the physicians. Daily discussions were held to ensure that those in most need of the services provided by the geriatrics unit were cared for at the unit and not moved to other sections of the hospital due to overcrowding. Thus, it was concluded that it would not be relevant to evaluate any of the initially planned unit-specific outcomes given that the implementation process of a more PCC approach had been given a secondary role compared with other changes in the unit. Table 2 shows that this unit logged among the lowest number of working days per person of all included units and had the highest percentage (57%) of that work distributed to the initial planning and participation in learning seminars (cost category implementation strategies). Combined, the perceived secondary role of the implementation process and the discrepant time distribution compared with other included units may indicate that the implementation process was not completed. PCC, person-centred care.
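As a cross-check of the headline figures reported above, the cost aggregation can be reproduced from the inputs given in the text (annual DD budgets, the regional population and the logged salary costs); the sketch below is illustrative arithmetic only.

```python
# Inputs taken from the text; derived values should match the reported figures.
dd_budget = {2016: 500_000, 2017: 600_000, 2018: 600_000, 2019: 880_000}
total_dd_budget = sum(dd_budget.values())
print(total_dd_budget)  # SEK 2 580 000

population = 281_000
print(total_dd_budget / (population * 4))  # ~SEK 2.30 per citizen per year
                                           # (four budget years, 2016-2019)

logged_salary_total = 3_619_552  # all logbook-based salary costs, incl. DD
dd_logged_salary = 822_633
print(dd_logged_salary / logged_salary_total)  # ~0.23 -> 23% in the DD

# Replacing the DD's logged costs with its full budget raises the DD share
units_only = logged_salary_total - dd_logged_salary
print(total_dd_budget / (units_only + total_dd_budget))  # ~0.48 -> 48%
```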
The healthcare units logged on average 5.5 working days per staff member for implementing a more PCC approach, but the number of days varied widely between units (range 3–13 working days). In the healthcare units, 6%–57% of the time reported in logbooks was assessed as being used for implementation strategies, 40%–90% for service delivery and 2%–12% for research/development. As expected, the distribution of time used by the DD staff differed considerably from that of the other participating units, with most of the logged time (88%) assigned to implementation strategies. While the time spent and salary costs associated with the implementation process were considerable, usually corresponding to at least 0.5–1 year of full-time employment per unit, the total cost was small compared with the entire healthcare budget. Although budgeting for this implementation was only available for the DD, at least half of the costs occurred in the other healthcare units. Unit-specific outcomes from three of the units showed no clear effect of the implementation, and in general, the healthcare units reported that other factors had affected their throughput more during this study period than the implementation of a more PCC approach. To the best of our knowledge, this is the first study investigating the different components contributing to the time and costs spent on implementing a more PCC approach. Several studies have described how implementation costs should be measured, but there is a shortage of studies measuring the costs from an implementation standpoint. A strength of the study is that it was conducted by an independent research body that included researchers from different disciplines relevant to the interpretation of the results. Another strength was that the included units were given multiple opportunities to clarify and correct any oversights in their records (such as not noting the number of participants for a specific action) and potential misunderstandings regarding interpretation. Although the independence of the researchers was a strength, it also contributed to the main limitation of the study: the data collection process. The staff had no time set aside for the logbooks and were not provided with extra time or staff to conduct the implementation process. Some reporters filled in large parts of the logbooks retrospectively (not only 2016 data), potentially resulting in recall bias, and in some cases, one of our research group members (MT or HF) took part in this work to record previous activities. It should be noted that for one of the units with the highest working day estimate in relation to their staff, one of the researchers supported the staff member in filling out the logbooks, which could indicate that the staff member was reminded of more tasks having been conducted than would otherwise have been the case. However, the higher estimate could also be the result of this being one of two units that were merged during part of the study, making the division of time spent between these units difficult to distinguish. While all unit-specific outcomes had been identified by each healthcare unit as important aspects to follow during this change process, most units in the end chose not to provide the data because they were more affected by other changes in the workplace. For the units providing these data, there is still an assumption that most of the observed changes were due to factors other than the implementation of a more PCC approach.
The reason for initiating the collection of unit-specific outcomes was to make the evaluation more relevant to the participating healthcare units and similar units elsewhere, and to ensure that the implementation of a more PCC approach was at least not associated with any large negative impact on patient throughput. However, due to other parallel changes in these units, it did not provide any conclusive results. The commitment shown by staff in the participating units is exemplified by their participation in the change process and provision of materials and responses to questions during analysis, despite the lack of time provided for their participation. This commitment can also be discussed in relation to the planned observational approach of the study. Here, it can be argued that by engaging in research and data collection, the staff involved in the implementation may have been affected (ie, through social desirability bias or by being reminded of the implementation process by the researchers) and thus interacted with it to a larger extent. Although initially intended, we did not obtain logbooks from leaders (chief executives) of this implementation process at the main organisational level in the region, but that should have implied a small cost compared with that of changing care practice. It should also be acknowledged that the division of costs by categories is not self-evident. Costs for changes in service delivery could also, to some extent, be seen as costs for implementation strategies. For example, the initial learning seminars were assumed to be part of the implementation strategies. However, if the healthcare unit later decided that, in its work to change practice, the staff needed special training (eg, in interprofessional rounds or motivational interviewing), we interpreted this as part of changing how service was delivered. An alternative interpretation would have been to see this training as part of an iterative process of implementation strategies conducted at different levels of the organisation; that is, the study distinguishes between centrally planned implementation strategies and strategies conducted by the healthcare delivery organisation. When considering the limitations of the study, our estimated working days and costs should be interpreted with caution. These findings are only part of a picture that needs to be further developed in future studies and frameworks to assess the economic impact of implementation. Using the cost components suggested by Wagner et al as a basis, additional costs would include overhead costs (eg, facilities). Moreover, the included research and development costs should be broken down into what would be sunk costs (ie, one-time investments) versus costs for development and scaling up (such as communications within the region to support others) versus costs that are solely for research purposes (ie, the research interviews). However, it has been argued that the costs of research should be reported, not to inform clinical practice but to assess the costs associated with evaluation. Here, we are only reporting on the costs for the region to participate in the research study. We hope our approach and findings might help others design similar studies to follow implementation processes in more depth. Changes in healthcare can aim to improve effectiveness (ie, save money while producing at least as much health as before), to increase care quality, or a combination of the two.
Today, there is a growing body of evidence, from randomised controlled trials and quasi-experimental studies in Sweden and internationally, of improved patient outcomes and possibilities for cost savings by shifting to PCC. In comparison, little is known about the ‘hidden’ costs of preparatory work, training and monitoring outcomes during implementation, the costs that are investigated in this study. In a recent systematic review of the literature, only six studies were identified that specified such costs. It has been argued that any implementation effort should be preceded by ex-ante modelling to compare the expected returns of implementing the intended change with the predicted costs of implementing the change. However, this is not always the case, or the results are at least not available to the research community. One crucial aspect of the studied change process was that it was made clear that healthcare units would not be provided with extra time or other resources for the implementation. This was reported by both the central regional organisation and the management in each healthcare unit, with staff shortage being the main reason. However, we found that units implementing more PCC used a considerable amount of time for this implementation process. Thus, the time reported in logbooks could be interpreted as referring to time that otherwise would have been used for other work in the healthcare units. Had the implementation of more PCC not been undertaken, the reported time would still have been used in the units. A hypothetical comparator, an alternative intervention, could have used the same amount of time to implement some other change in practice, such as developing and implementing clinical guidelines or care paths (which thus would suggest the opportunity cost of this implementation process). Because several units had assigned quality developers or specialist nurse students to tasks associated with the implementation process, which equalled several working days spent per staff member, the time used likely displaced other tasks otherwise conducted by the staff. In addition, staff in some settings expressed that it was time-consuming to provide PCC in the immediate time frame but that it could potentially be beneficial later in the care process. Together with recent reporting that increased person-centredness was associated with higher job strain, it is likely that additional resources during the initial period would have resulted in improved uptake. It should also be noted that the possibility of influencing resource use probably depends on the patient groups in each healthcare unit and on the extent to which work with these patients is already streamlined. Considering a patient group for which clinical guidelines determine the frequency of follow-up, there is less opportunity to change the number of visits, and thus costs will be similar even if the care changes. If there is instead a patient group experiencing unmet needs and much acute unplanned care, it can be assumed that changing how patients experience their healthcare can change how many visits are needed. While several of the healthcare units expressed that they had not completed the implementation within the study period, assessing its success needs to be based on still-ongoing studies of patients’ experiences. However, the findings clearly demonstrate that there is a non-zero cost of implementing a more PCC approach; such costs should be acknowledged in future research and implementation processes.
The study also points towards potential improvements in how to study implementation costs, through, for example, recurrent questionnaires instead of logbooks collected at the end of the study period. Furthermore, due to the reported high staff turnover, the costs for changes in service delivery may to some extent continue in the training of new staff. Considering our findings in light of recent updates on the use of economic evaluation in implementation science to guide decision-makers, future studies should thus distinguish all costs associated with implementation science, including implementation costs, intervention costs and downstream costs, for a more PCC approach as well as for other healthcare programmes.
The study found that a large part of the resources used for this implementation of more PCC occurred in the DD, although at least half of the costs occurred in the healthcare units. Our findings suggest that the main costs associated with implementing a more PCC approach in ordinary practice resulted from implementation strategies and service delivery. In contrast, research and development costs were small by comparison. Moreover, the cost of providing a central support function corresponded to a tiny proportion of the total health budget. While there are limitations in how the study was conducted, it clearly demonstrates a non-zero cost of implementing a more PCC approach, implying that future research should capture these costs. Not accounting for the added strain on healthcare units can result in delays or an inability to implement the new care model.
|
Aggressive anticancer treatment in the last 2 weeks of life | 06bc49f3-d3cb-44db-8c3a-aa24a509fcae | 10944113 | Internal Medicine[mh] | Anticancer treatment can be recommended to patients with advanced cancer with an aim to improve quality of life (QoL), irrespective of its impact on survival. However, it is well known that anticancer treatment, such as palliative chemotherapy (ChT), can have a detrimental effect on QoL at the end of life (EoL). The two most common defining features of EoL are a life-limiting disease with irreversible decline and an expected survival of months or less. Clinicians’ predictions of survival in patients with advanced cancer are often inaccurate and too optimistic. Although several prognostic tools have been developed and validated to reduce the inaccuracy of clinicians’ predictions of survival, there is currently no consensus on the most appropriate tool for everyday clinical practice. Inaccurate assessment of survival may lead to aggressive anticancer treatment in patients with advanced cancer. Recently, the armamentarium of anticancer drugs used in patients with advanced cancer has expanded enormously. Therefore, there is a growing concern about aggressive anticancer treatment and other health care at the EoL. Such aggressive treatment may be inconsistent with patients’ EoL preferences and thus make caregivers’ bereavement more difficult; it is also of low socioeconomic value for the health system itself. Previously, several research groups found that administration of palliative ChT to terminally ill patients has become more common over time. Moreover, there are several known factors related to patients (e.g. younger age, male sex), cancer (e.g. specific tumour types such as breast cancer, general consideration of increased chemosensitivity) and the health system (e.g. enrolment into palliative care, being cared for in a teaching hospital) which are associated with increased administration of ChT at the EoL. The indicators of Earle et al., which reflect overuse of anticancer treatment near death, unplanned medical encounters and hospice care, are the most widely accepted for evaluating the aggressiveness of EoL anticancer treatment and care. There is a valid concern that the increased use of palliative ChT and other novel systemic therapies (STs), such as small-molecule targeted agents and immune checkpoint inhibitors, might set off a domino effect with increasing use of other treatment modalities, such as palliative radiotherapy (RT) and surgery (SRG), in terminally ill cancer patients. The aim of our study was to evaluate the association between anticancer treatment in the last 2 weeks of life and year of death, age at death, sex, prognosis of cancer and enrolment into specialist palliative care (SPC).

Data sources and patient cohort

This retrospective cohort study analysed the aggressiveness of anticancer treatment at the EoL in adult patients with advanced solid cancers who were treated at the Institute of Oncology Ljubljana (IOL) and died of cancer between January 2015 and December 2019. IOL is the central and main teaching tertiary cancer centre in Slovenia. The demographic characteristics and diagnoses of patients with cancer who lived in the broader Ljubljana area and died between 2015 and 2019 due to cancer were identified at the Slovenian Cancer Registry. At the IOL, the electronic health records (EHRs) of the identified patients were accessed and checked against the eligibility criteria.
The analytic cohort included individuals who met the following criteria: (i) age ≥18 years at the time of death, (ii) residency in the broader area of Ljubljana, including eight municipalities with ∼340 000 residents, (iii) death between 1 January 2015 and 31 December 2019 due to cancer and (iv) locally advanced or metastatic breast, gastrointestinal, genitourinary, gynaecological, lung or other cancer (i.e. head/neck cancer, germline cell carcinoma and sarcoma) at the time of death. This study was approved by the National Medical Ethics Committee of the Republic of Slovenia on 7 January 2021 (0120-484/2020/4).

Outcome measures and statistical analysis

According to the indicators of Earle et al., an anticancer therapy is considered aggressive when ≥10% of patients receive ChT in the last 2 weeks of life. In this study, the aggressiveness of anticancer therapy was assessed as the proportion of patients who received at least one modality of anticancer therapy, including ST (ChT, small-molecule targeted therapy, immunotherapy and other biological therapies; hormonal therapy excluded), RT and/or SRG, in the last 2 weeks of life at the IOL. All data collected from the EHRs were double-checked and inconsistencies resolved. Analysis began with descriptive summaries of demographic and clinical variables. A multiple logistic regression model was used to assess the association between the aggressiveness of anticancer treatment (i.e. ST, RT and SRG) in the last 2 weeks of life and year of death, age at death, sex, prognosis of cancer and enrolment into the SPC. Prognosis of the included solid cancers was defined on the basis of the 5-year net survival data for these cancer types in Slovenia during 2012-2016. Three categories of prognosis were defined: (i) good, with a 5-year net survival of 72.1%-96.9% (melanoma, thymus, thyroid, breast, uterine, cervical, prostate, testicle and penile cancer), (ii) intermediate, with a 5-year net survival of 43.3%-65.8% (head/neck, adrenal gland, kidney, bladder, ovary, colorectal cancer, bone and soft-tissue sarcoma) and (iii) poor, with a 5-year net survival of 6.8%-35.55% (pharynx, oesophagus, stomach, lung cancer, mesothelioma, pancreas, biliary tract, liver, cancer of unknown primary and glioblastoma). We conducted statistical analyses using IBM® SPSS® version 29.0. Odds ratios (ORs) and the corresponding 95% confidence intervals (CIs) are provided. P values of <0.05 were deemed statistically significant. No adjustments for multiple comparisons were made.
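The authors ran the analysis in IBM SPSS 29; to make the modelling step concrete, here is a hypothetical Python analogue, where the input file and column names are assumptions for illustration rather than the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eol_cohort.csv")  # hypothetical patient-level table

# Earle-style indicator: treatment counts as aggressive at cohort level
# when >=10% of patients received ChT in the last 2 weeks of life.
cht_share = df["cht_last_2_weeks"].mean()
print(f"ChT in last 2 weeks: {cht_share:.1%} (threshold: 10%)")

# Multiple logistic regression of any anticancer treatment in the last
# 2 weeks of life on year of death, age at death, sex, prognosis and SPC.
model = smf.logit(
    "treated_last_2_weeks ~ year_of_death + age_at_death + C(sex)"
    " + C(prognosis) + C(spc_enrolled)",
    data=df,
).fit()
odds_ratios = np.exp(model.params)  # ORs per unit of each predictor
ci = np.exp(model.conf_int())       # 95% CIs on the OR scale
print(pd.concat([odds_ratios, ci], axis=1))
```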
Eligible patient cohort

The initial search identified 4029 potentially eligible patients for the analysis. After review of the EHRs, 2293 patients were excluded for the following reasons: (i) 429 patients were diagnosed with other types of cancers, including haematological malignancies and lymphoma, (ii) 1484 patients did not have locally advanced or metastatic cancer at the time of death, (iii) 133 patients did not receive complete treatment/management at the IOL, (iv) 192 patients had missing data in their EHRs, (v) 23 patients died of reasons not related to cancer and (vi) 32 patients refused treatment of their cancer ( , available at https://doi.org/10.1016/j.esmoop.2024.102937 ).

Patients’ characteristics

We included 1736 patients in our analysis; of these, 868 (50.0%) were women. Their median age at the time of death was 70.0 years (interquartile range 62.0-78.0 years). The youngest and the oldest patients in our cohort died at the ages of 18 and 98 years, respectively.
A distribution of the number of deaths by year during the observed period is presented in . Overall, 542 (31.2%), 320 (18.4%), 288 (16.6%), 274 (15.8%), 108 (6.2%) and 204 (11.8%) patients died from lung, gastrointestinal, genitourinary, breast, gynaecological and other cancers, respectively. Prognosis was good, intermediate and poor in 572 (32.9%), 424 (24.4%) and 740 (42.6%) of the included patients, respectively ( , available at https://doi.org/10.1016/j.esmoop.2024.102937 ). None of the patients participated in a clinical trial. Overall, 237/1736 (13.7%) patients were enrolled into the SPC; of these, 44.3%, 32.5% and 23.2% had a good, intermediate and poor prognosis, respectively.

Anticancer treatment in the last 2 weeks of life

Overall, 14.4% (250/1736) of patients received at least one modality of anticancer treatment (i.e. ST, RT or SRG) in the last 2 weeks of life. The proportion of patients who received anticancer treatment was 12.7% (50/395) in 2015 and increased to 17.3% (54/313) in 2019. In total, these 250 patients received 252 courses of anticancer treatment, as two patients received both ST and RT in the last 2 weeks of life. Of these courses, 125 (49.6%) were ST, 118 (46.8%) RT and 9 (3.6%) SRG. The proportions of patients who received RT were 6.3% (25/395) in 2015 and 6.7% (21/313) in 2019. No patient in 2015 and only one (0.3%) patient in 2019 underwent SRG. Overall, 125 patients received ST in the last 2 weeks of life; six of them received two different types of ST. The proportion of patients who received ChT did not change substantially over time: it was 5.1% (20/395) in 2015 and 5.1% (16/313) in 2019. In contrast, the proportion of patients who received novel STs increased from 1.5% (6/395) in 2015 to 5.4% (17/313) in 2019 (P = 0.006).

Predictors of anticancer treatment in the last 2 weeks of life

The odds of receiving anticancer therapy in the last 2 weeks of life increased by 15% with each successive year (OR 1.15, 95% CI 1.04-1.27). Older patients had significantly lower odds of receiving anticancer treatment in the last 2 weeks of life than younger patients (OR 0.96, 95% CI 0.95-0.98). Compared with patients receiving only standard oncology care, those also enrolled into the SPC had significantly lower odds of anticancer treatment in the last 2 weeks of life (OR 0.22, 95% CI 0.12-0.43). Sex and prognosis of cancer were not significantly associated with receipt of anticancer treatment in the last 2 weeks of life.
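To make the year-of-death estimate concrete: an OR of 1.15 per year compounds multiplicatively, so, holding the other covariates fixed, the model implies that the odds of receiving anticancer treatment in the last 2 weeks of life in 2019 were roughly 1.15^4 ≈ 1.75 times the 2015 odds, a back-of-the-envelope reading broadly consistent with the observed rise from 12.7% to 17.3%.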
Discussion

The problem of receiving aggressive anticancer treatment and other care at the EoL has been recognized and is well defined in the scientific literature. However, with rapidly evolving new anticancer therapies, concern about aggressive anticancer treatment in terminally ill and dying patients in oncology remains. The results of our study show that the use of anticancer treatment in the last 2 weeks of life increased significantly from 2015 to 2019. While the use of ChT, RT and SRG did not change substantially over time, there was a trend of increasing use of novel, very costly ST (i.e. small-molecule targeted agents, immune checkpoint inhibitors and other biological agents). Younger patients and those not enrolled into the SPC had a higher probability of receiving aggressive anticancer treatment at the EoL than older patients and patients receiving only standard oncology care, respectively.

In general, ChT is still the mainstay of ST in patients with advanced cancer. In our study, the proportion of patients who received ChT did not change substantially over the studied period.
According to the results of published studies, administration of ChT in the last 2 weeks of life varies between 5% and 13%. The proportion of our patients who received ChT is reassuringly lower than that previously reported in the literature and lower than the 10% margin that serves as an indicator of aggressive treatment with ChT. However, our results need to be interpreted in the broader context of a rapidly changing landscape of different types of ST, not only ChT. In our study we observed a trend of increasing use of novel ST (i.e. small-molecule targeted therapy, immune checkpoint inhibitors and other biological agents); while only 1.5% of patients received novel ST in 2015, this proportion increased to 5.4% in 2019. A recent similar but larger study from the United States showed that, overall, ST use at the EoL did not change from 2015 to 2019; however, ChT was used less often and immunotherapy more often. Recent studies showed increasing use of immune checkpoint inhibitors at the EoL in patients with metastatic urothelial cancer, non-small-cell lung cancer and melanoma, despite no evidence that this practice benefits patients. Similarly, it has previously been reported that novel ST such as targeted agents became widely used in the last few months of life of patients with advanced cancer. The discovery of novel ST with specific toxicity profiles and the possibility of oral administration has blurred the boundary between active and palliative interventions, as oncologists, patients and their families may perceive oral targeted agents as less aggressive than ChT. However, it is well known that costly novel ST can cause substantial toxicity, including toxic deaths, in patients with advanced cancer. Moreover, in practice patients sometimes receive these agents continuously despite progressive disease or are re-challenged with them after a period of treatment. Published studies show that targeted agents are prescribed twice as commonly as non-targeted agents at the EoL; the use of targeted agents was reported even in palliative care units. However, evidence shows that metronomic therapy, which is based on repeated administration of relatively low-cost and safe low doses of anti-neoplastic drugs, might be a reasonable treatment option in some patients with very advanced cancer. In summary, the increasing use of novel STs, including small-molecule targeted agents and immunotherapy, near the EoL is becoming problematic. Such practice can be detrimental to patients and may waste financial and human resources in health care systems. We propose that the use of novel STs become an additional quality-of-care indicator at the EoL.

In our study, the proportions of patients who received RT or underwent SRG in the last 2 weeks of life were 6.8% and 0.5%, respectively. In contrast to ST, the use of RT and SRG did not change substantially over time. RT is commonly used to palliate symptoms in patients with advanced cancer and to prevent impending severe morbidity. According to the American Society for Radiation Oncology consensus statement, palliative RT is safe and effective. However, despite its important role in the management of symptoms in patients with advanced cancer, recommendations to guide its use at the EoL are lacking.
Such recommendations would be useful because RT may cause short-term side-effects and sometimes requires weeks to show its palliative effect; it may therefore be futile or even detrimental when administered in the last 2 weeks of life. In the large Surveillance, Epidemiology, and End Results (SEER)-Medicare study, 7.6% of patients received RT in the last month of life. Despite impending death, a substantial proportion of patients receives prolonged irradiation schedules, which are clearly not beneficial at the EoL. Decisions about palliative surgical procedures may be even more challenging in this setting. In the EoL care literature, overtreatment is defined as a medical intervention that is extremely unlikely to help the patient, that is misaligned with the patient's wishes, or both. In fact, surgical procedures carried out for symptomatic relief, such as for malignant bowel obstruction in a patient facing life-threatening cancer, are in accordance with the priorities of palliative care. However, advance care planning and discussions about goals of care could prevent aggressive surgical treatment at the EoL, especially in the last 2 weeks of life.

In our study, the use of aggressive anticancer treatment at the EoL was significantly associated with younger age. This finding is in line with a large body of evidence showing that older patients receive ChT less often than younger patients. However, we did not find any significant association of sex or prognosis of cancer with anticancer treatment in the last 2 weeks of life. Previous studies reported that patients with advanced breast, lung and gynaecological cancers were more likely to undergo ChT at the EoL than patients with other types of solid cancer. In contrast to our findings, there is some evidence that women receive fewer treatments and medical interventions at the EoL than men. An explanation could lie in treatment preferences, family support and terminal illness at older age.

We also showed that patients enrolled into the SPC had significantly lower odds of receiving anticancer treatment at the EoL than patients receiving only standard oncology care (OR 0.22, 95% CI 0.12-0.43). Of note, standard oncology care in Slovenia usually also involves palliative care (i.e. non-SPC), which is provided by the teams of treating oncologists and general practitioners. However, the quality of current non-SPC in our country is very likely not comparable to the well-developed palliative care in the Western world. Evidence shows that prescription of ChT at the EoL is strongly associated with access to palliative care: in hospitals where patients have access to the SPC, prescription of ChT is declining. Earlier cessation of anticancer treatment and concurrent inclusion of palliative care can contribute to higher QoL and longer survival compared with standard oncology care. EoL discussions are also associated with fewer life-sustaining procedures and lower rates of admission to the intensive care unit. There is also evidence that medical expenses are very high in the last month of life of patients with advanced cancer. In summary, the available evidence and the results of our study suggest that enrolment of patients with advanced cancer into palliative care decreases the risk of aggressive anticancer treatment at the EoL. Access to high-quality palliative care has important implications for patients' lives and for lowering medical expenses.
Cessation of treatment in terminally ill cancer patients is a complex topic that touches personal, social and psychological dimensions. There may be several reasons why treating oncologists do not discontinue treatment at the EoL. Firstly, active treatment may give patients and their caregivers a sense of control over the disease and of actively fighting it. Secondly, recommending a new course of treatment may be an easier option for the oncologist than emotionally difficult discussions about cessation of treatment and transition to palliative care. Decisions about treatment are complex and usually depend on the relationship between the oncologist and the patient, on the patient's and caregivers' expectations and priorities, and on the social environment and perspectives. Thirdly, oncologists' predictions of the length of survival are often overly optimistic. However, more accurate prognostication is feasible and can be achieved by combining clinical experience with evidence from the literature based on well-defined prognostic factors. For example, poor performance status (PS) and indices of limited activity and functional autonomy are major predictors of approaching death. Additionally, symptoms such as dysphagia, xerostomia, weight loss, anorexia, cachexia, dyspnoea, delirium and cognitive impairment, as well as some laboratory parameters (e.g. elevated bilirubin and/or C-reactive protein, lymphocytopenia, leucocytosis), often characterize the terminal phase of disease. Various prognostic tools based on these prognostic factors, symptoms and laboratory parameters can predict survival more accurately and may be especially helpful for inexperienced clinicians. For example, the palliative performance scale (PPS) and the Prognosis in Palliative Care Study (PiPS) models were specifically designed to estimate 14-day survival in patients with advanced cancer. Fourthly, use of aggressive treatment at the EoL is associated with poor access to palliative care. Ceasing aggressive cancer treatments earlier by introducing palliative care can increase survival time and QoL in patients with advanced cancer. Furthermore, hospice care is beneficial at the EoL, as it offers all-important symptom control and time to accept the finality of the diagnosis without the distractions of active intervention. The suboptimal access to palliative care in Slovenia and other Eastern European countries may be associated with more aggressive anticancer treatment at the EoL. Finally, financial incentives may have a substantial impact on oncologists' treatment decisions. For example, in the United States and Australia, oncologists receive financial reimbursement for the administration of ChT but little or no reimbursement for emotionally demanding and time-consuming EoL discussions with patients and caregivers. In Slovenia, however, all cancer patients have access to cancer care within the public health system, and medical oncologists do not receive any financial reimbursement for the administration of ST at any stage of cancer care.

For the first time, we have shown that in Slovenia a substantial proportion of cancer patients receive aggressive anticancer treatment in the last 2 weeks of life. Our study included patients from a single academic cancer centre where ∼60% of all Slovenian cancer patients are treated. However, our study has several limitations.
Firstly, our study was retrospective, and the results are therefore highly dependent on the accuracy of the data entered into the EHRs by the treating oncologists. Secondly, additional explanatory variables could have been included in the multivariable analysis. However, data on such variables in the patients' EHRs are either not applicable to our environment (e.g. ethnicity and place of living) or might not be accurate or complete (e.g. PS, symptoms of impending death, comorbidities and social status). Due to the lack of relevant information, we were also not able to calculate the PPS or PiPS or to examine an association between symptoms of impending death and anticancer treatment at the EoL. For the same reason, we could not assess the toxicity of anticancer therapy in this retrospective study. Thirdly, our results should be interpreted cautiously because the dates of death of included patients were not known when anticancer treatment was prescribed/administered. A prospective study, in which anticancer treatment could be assessed only in patients with clear indicators of the terminal phase of cancer, including deteriorating PS, might lead to different conclusions. Moreover, a study of patients' (e.g. palliation of symptoms and patients' values) and caregivers' perspectives at approaching death could give additional insight into anticancer treatment at the EoL. Future prospective studies should also pay more attention to the cost-effectiveness of anticancer treatment in terminally ill patients with cancer, taking into account also the indirect costs related to the toxicity of systemic anticancer therapy. Fourthly, only a small number of patients who received novel ST were included in our study. This limitation could be alleviated by a larger sample size achieved through the inclusion of other cancer centres and/or a longer study period. Finally, as the IOL is an academic institution, our results may not be generalizable to non-academic cancer centres in Slovenia and other European countries. Nevertheless, our findings might be an important signal of the risk of aggressive anticancer treatment at the EoL in the rapidly evolving field of medical oncology, especially in countries with suboptimally developed palliative care where novel STs are available.

Conclusions

Aggressive anticancer treatment at the EoL is a well-recognized problem. The results of our study show that anticancer treatment in the last 2 weeks of life became more aggressive, mainly due to the increasing use of novel ST. General awareness of this problem and further efforts to mitigate it, including the development of palliative care, are required.
Urinary proteomics identifies distinct immunological profiles of sepsis-associated AKI sub-phenotypes

Acute kidney injury (AKI) is the most common form of organ failure in sepsis. Persons who develop sepsis-induced AKI are at greater risk of inpatient need for renal replacement therapy and of future chronic kidney disease (CKD) and end-stage kidney disease (ESKD). The mechanisms underlying AKI in sepsis are diverse, involving macrovascular and microvascular dysfunction, inflammatory injury leading to tubular cell dysfunction, cellular apoptosis, cell-cycle arrest and others. Despite the substantial clinical impact of sepsis-induced AKI, multiple clinical trials have yet to identify effective pharmacotherapy for its prevention or treatment. One reason for the lack of therapeutics may be the presence of biologically distinct AKI subtypes that conceal identification of therapeutic targets to prevent and treat sepsis-induced AKI in clinical populations. Traditional models of pharmacotherapy in AKI have focused on developing therapies that could be provided to broad AKI populations. In contrast, an AKI sub-phenotype model presumes the existence of physiologic subtypes that require different therapies targeting distinct disease pathways. Our group and others have identified two distinct AKI sub-phenotypes (AKI-SP1 and AKI-SP2). To ease clinical identification of these AKI sub-phenotypes, we developed and validated a 3-variable model that included plasma markers of endothelial dysfunction (angiopoietin-1 (Ang-1) and angiopoietin-2 (Ang-2)) and inflammation (soluble tumor necrosis factor receptor-1 (sTNFR-1)). These AKI sub-phenotypes demonstrated different risks of short- and long-term clinical outcomes and also had distinct genetic risk. We also leveraged these AKI sub-phenotypes to demonstrate that existing therapies in sepsis (early addition of vasopressin) may preferentially lead to renal recovery in one AKI sub-phenotype (AKI-SP1) compared with the other (AKI-SP2). However, identifying new therapies tailored to AKI sub-phenotypes requires a deeper understanding of the biological pathways characterizing each AKI sub-phenotype.

In this study, we leverage urine sampling on ICU admission paired with detailed clinical phenotyping. Urine is an ideal biofluid for the study of kidney diseases because it can be collected non-invasively and a majority of the urinary proteome derives from the kidney. We use the SomaScan aptamer platform to measure ~5000 proteins to understand key reparative, inflammatory and fibrotic pathways underlying AKI sub-phenotypes. The following analytical steps were completed: first, AKI sub-phenotypes were defined using a 3-variable plasma prediction model. Second, urine proteomics were compared between AKI sub-phenotypes. Third, pathway analyses were completed to understand the distinct biological processes involved in AKI sub-phenotypes. Fourth, associations between urinary proteins and risk of RRT were tested to identify shared biology between AKI sub-phenotypes and kidney-related clinical outcomes. Finally, we developed a urinary proteomic prediction model using Least Absolute Shrinkage and Selection Operator (LASSO) regression to classify AKI sub-phenotypes and overcome the need for blood sampling.
Study population

We conducted a prospective cohort study of critically ill patients admitted to three hospitals affiliated with the University of Washington (Seattle, WA). Patients were enrolled between March 2020 and May 2021. Patients were eligible if admitted to an ICU with signs or symptoms of acute respiratory infection (fever, respiratory symptoms including cough/shortness of breath or sore throat) and had one of the following: 1) initiation of supplemental oxygen; 2) oxygen saturation <94% on ambient air; or 3) new opacities on chest radiograph. We excluded patients who were younger than 18 years, incarcerated, pregnant, or on chronic maintenance hemodialysis. For this ancillary study, we selected all enrolled participants who had spot urine and blood samples available within 24 h of ICU admission. All urine samples were collected from the sampling port of an indwelling urinary catheter, and all urine and blood samples were collected within 2 h of each other. Collecting urine from the sampling port ensured recently produced urine. Urine was centrifuged and aliquoted within 2 h of collection. All urine samples were then frozen and underwent a single freeze-thaw cycle before urine proteomics. The University of Washington Human Subjects Division granted a waiver of informed consent given minimal risk, the urgency of COVID-19 research in this period, and supply limitations in personal protective equipment preventing nonessential staff from approaching patients (STUDY #9763).

Sample collection, proteomic platform, and quality control

Peripheral blood was collected into EDTA anticoagulant tubes within 24 h of ICU admission. Plasma was isolated by centrifugation (10 min, 2000 g, room temperature). We measured Ang-1, Ang-2 and sTNFR-1 using electrochemiluminescence-based immunoassays (Meso Scale Discovery, Rockville, MD). Biomarkers were measured in 2 batches; the inter-plate coefficient of variation was 6.2% for Ang-1, 10.3% for Ang-2 and 7.3% for sTNFR-1. All samples underwent 1 freeze-thaw cycle prior to analysis. Urine proteomic profiling was completed using the SomaScan Platform (Somalogic), which contains SOMAmer single-stranded DNA aptamers that bind protein analytes with high specificity. The assays were performed as previously described. For each sample, the platform reported a relative fluorescence units (RFU) value for each aptamer-protein pair, providing a scale-free measure of protein abundance. Median intra- and inter-assay coefficients of variation are approximately 5%. Samples were analyzed in two batches, on the Somalogic v4 platform with 5212 aptamers that bind 4925 unique proteins, and the v4.1 platform with 7548 aptamers that bind 6399 unique proteins. Analyses were conducted on the set of 5212 aptamers shared between the v4 and v4.1 platforms.

AKI and AKI sub-phenotype identification

AKI was defined as an increase in serum creatinine (SCr) at the time of study enrollment of ≥0.3 mg/dl or ≥50% from a baseline SCr, consistent with the KDIGO guidelines. Patients receiving dialysis prior to study enrollment were excluded. Among the patients without AKI on study enrollment, a subset subsequently developed AKI during hospitalization. However, since our primary analysis linked differences in urinary protein concentrations with AKI sub-phenotypes, we did not include patients who developed AKI after study enrollment in the AKI sub-phenotype classification.
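As a minimal sketch, the SCr criterion above can be expressed as a small helper function; the function name is hypothetical, values are in mg/dl, and urine-output criteria are deliberately not modelled (they were not used in this study):

```r
# Minimal sketch of the KDIGO serum-creatinine criterion described above.
# Helper name is hypothetical; SCr values are in mg/dl.
has_aki <- function(scr_enrollment, scr_baseline) {
  (scr_enrollment - scr_baseline >= 0.3) |   # absolute rise >= 0.3 mg/dl
    (scr_enrollment >= 1.5 * scr_baseline)   # relative rise >= 50%
}

has_aki(scr_enrollment = 1.9, scr_baseline = 1.2)  # TRUE: +0.7 mg/dl and +58%
```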
The outcome of renal replacement therapy (RRT) was defined as new initiation of RRT during hospitalization. The baseline SCr was a pre-hospitalization SCr within 365 days; if a pre-hospitalization SCr was missing, the baseline was the lowest SCr value within 7 days of study enrollment. Among patients with AKI at study enrollment, we used plasma biomarker concentrations of Ang-1, Ang-2 and sTNFR-1 to identify two AKI sub-phenotypes (AKI-SP1 and AKI-SP2). The model to identify AKI sub-phenotypes is a previously reported 3-variable prediction model: Logit(P(AKI sub-phenotype membership)) = -41.246 + 5.241*log(Ang-2/Ang-1) + 3.242*log(TNFR-1). A Youden's index cutoff of 0.403 was used, with greater values classifying patients as AKI-SP2 and lower values classifying patients as AKI-SP1.

Statistical analysis

We summarized baseline participant characteristics across the groups with AKI-SP1, AKI-SP2, no AKI on ICU admission, and no AKI on ICU admission with subsequent AKI during hospitalization, using mean (SD) values for continuous variables and number and percentage for categorical variables. We used Cox proportional hazards regression to evaluate the association of the AKI subgroups (AKI-SP1 and AKI-SP2) with incident RRT, accounting for the competing risk of death. Additional methods can be found in the online supplement.

The RFU value for each aptamer-protein measurement in each sample was scaled by dividing it by the mean of all aptamer-protein RFUs reported in that sample, to account for urine sample dilution (mean normalization). Similar methods have been used to normalize urinary metabolomics data. Prior to this, Somalogic also normalized the amount of protein loaded in its standard urine workflow. This produces an RFU abundance corrected in a manner similar to creatinine adjustment of urine biomarkers. The log2 transformation of mean-normalized RFU values was used in regression, which yields a regression beta estimate for the independent categorical variable equal to the log2 fold change of the protein between comparison groups, adjusted for age, sex, BMI and COVID-19 status. Significance was assessed at an FDR <0.05.

To complete a pathway analysis, we used gene term enrichment over-representation analysis implemented in WebGestalt (webgestalt.org) on the 312 significant proteins that differed between AKI sub-phenotypes. We tested the proteins significantly upregulated in AKI-SP1 separately from those upregulated in AKI-SP2 to characterize the two phenotypes' pathways independently.

To develop a urinary proteomic classification model identifying patients with AKI-SP2, we made 1000 bootstrapped 75% training and 25% test splits of the data, with balanced random sampling performed within the respective groups being classified (i.e. AKI-SP1 and AKI-SP2). The training sets were prescreened for candidate proteins using the same regression methods reported above to identify differentially abundant proteins, and those with an FDR <0.2 were selected. We then used the least absolute shrinkage and selection operator (LASSO) tenfold cross-validation regression implementation in the glmnet (v4.1-4) R package to perform feature selection among this reduced set. We performed this for 1000 bootstrapped iterations with random splits to obtain the average area under the curve (AUC) classification performance and confidence intervals of the test sets across these iterations.
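For illustration, the published classifier and the mean normalization described above can be sketched in R; the helper names are ours, we assume the 0.403 Youden cutoff is applied on the probability scale, and biomarker values are assumed to be in the units of the original model:

```r
# Sketch of the published 3-variable AKI sub-phenotype classifier. We
# assume the 0.403 Youden cutoff applies on the probability scale and that
# Ang-1, Ang-2 and sTNFR-1 are supplied in the units of the original model.
classify_aki_sp <- function(ang1, ang2, stnfr1) {
  logit_p <- -41.246 + 5.241 * log(ang2 / ang1) + 3.242 * log(stnfr1)
  p <- plogis(logit_p)                       # inverse logit
  ifelse(p > 0.403, "AKI-SP2", "AKI-SP1")
}

# Mean normalization of the aptamer data: divide each RFU by the mean of
# all RFUs in that sample (samples in rows), then log2-transform for the
# regression models.
normalize_rfu <- function(rfu) log2(sweep(rfu, 1, rowMeans(rfu), "/"))
```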
Description of cohort

Among 173 ICU patients with sepsis from a suspected respiratory infection, 87 had no AKI, 66 had AKI-SP1 and 20 had AKI-SP2 at study enrollment (Table ). Among patients without AKI on study enrollment, 38 (44%) subsequently developed AKI, on average 8 (SD ± 9) days after ICU presentation. Overall, the mean (SD) age was 53 (16) years, 61% were men, and 57% identified as White, 17% as Black, 12% as Asian and 21% as LatinX. Compared with patients with AKI-SP1, patients with AKI-SP2 had lower rates of COVID-19 (35% vs 68%), lower rates of diabetes mellitus (30% vs 38%) and higher rates of CKD (30% vs 14%). Table provides a summary of the sTNFR-1, Ang-1 and Ang-2 biomarkers used to classify participants into AKI-SP1 and AKI-SP2.

Risk of clinical outcomes between AKI sub-phenotypes

Consistent with previous studies, we found that patients with AKI-SP2 had a higher risk of RRT than those with AKI-SP1. The proportion of patients receiving RRT during hospitalization was 10.6% among AKI-SP1 and 30% among AKI-SP2 (Table and Table ). The Kaplan-Meier curve for risk of RRT demonstrates that the majority of new RRT occurred in the first two weeks after ICU admission (Figure ). We saw no significant difference in hospital mortality between AKI sub-phenotypes (Table ).

Urinary proteomic profiles between patients with AKI-SP1 and AKI-SP2

Next, we sought to determine whether urinary proteins differed between these AKI sub-phenotypes. In total, 117 urine proteins were higher in AKI-SP2, while 195 urine proteins were higher in AKI-SP1 (FDR < 0.05) (Fig. ). In a sensitivity analysis, we compared the raw urinary protein RFUs between AKI sub-phenotypes. Using the raw urinary RFU values, we found a set of top proteins significantly different between AKI sub-phenotypes similar to that obtained with the mean-normalized RFU values (Figure ). We also completed a sensitivity analysis adjusting for baseline CKD and found a similar set of top proteins between AKI sub-phenotypes (Figure ). Proteins involved in collagen deposition (GP6), podocyte-derived proteins (SPOCK2), proteins involved in proliferation of mesenchymal cells (IL11RA) and anti-inflammatory proteins (IL10RB and TREM2) were among those abundant in AKI-SP1. Urinary proteins involved in inflammation (TNFRSF11B), chemoattraction of neutrophils and monocytes (CXCL1 and REG3A) and oxidative stress (SOD2) were significantly associated with AKI-SP2. See Supplemental File for summary statistics for each protein aptamer in each comparison.

Next, we compared the urinary proteomic profile between all patients without AKI on study enrollment (n = 88) and AKI-SP1 and found that no urinary proteins were significantly different (Figure ). In contrast, patients with AKI-SP2 had significantly different urinary proteomic profiles compared with patients without AKI on study enrollment (Figure ). In direct comparisons, we found that the log2 adjusted fold changes in urinary proteins for AKI-SP2 vs no AKI and for AKI-SP2 vs AKI-SP1 were highly correlated (Pearson's r = 0.91, Figure ). Moreover, a majority of the proteins overlapped (n = 243), suggesting that AKI-SP1 and no AKI have very similar urinary proteomic profiles on study enrollment (Figure ).
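The per-aptamer regressions behind these comparisons can be sketched as below; the objects norm_rfu (samples × aptamers matrix of log2 mean-normalized RFUs) and covars (a data frame with an sp2 indicator plus age, sex, bmi and covid) are hypothetical:

```r
# Sketch of the differential-abundance analysis described in the Methods:
# per-aptamer linear models on log2 mean-normalized RFUs, adjusted for age,
# sex, BMI and COVID-19 status, with Benjamini-Hochberg FDR across aptamers.
# norm_rfu: samples x aptamers matrix; covars: data frame with sp2 (0/1
# indicator for AKI-SP2), age, sex, bmi, covid. Names are hypothetical.
res <- do.call(rbind, lapply(colnames(norm_rfu), function(apt) {
  fit <- lm(norm_rfu[, apt] ~ sp2 + age + sex + bmi + covid, data = covars)
  est <- summary(fit)$coefficients["sp2", ]
  data.frame(aptamer = apt,
             log2FC  = unname(est["Estimate"]),    # AKI-SP2 vs AKI-SP1
             p.value = unname(est["Pr(>|t|)"]))
}))
res$FDR <- p.adjust(res$p.value, method = "BH")
sig <- subset(res, FDR < 0.05)   # the study reports 312 such aptamers
```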
Pathway analyses

We completed pathway analyses to annotate urinary proteins with differential abundance between AKI-SP1 and AKI-SP2. In the WebGestalt pathway analysis, gene-protein name term enrichment showed that 17 pathways were significant for the proteins upregulated in AKI-SP2, while 3 pathways were significant for the proteins upregulated in AKI-SP1. Among these were pathways related to immune response, complement activation and chemokine signaling in AKI-SP2, whereas pathways of cell and biological adhesion were enriched in AKI-SP1 (Fig. and Supplemental File ).

Association of urinary proteins with risk of RRT

Among 86 patients with either AKI-SP1 or AKI-SP2, 13 (15%) developed RRT during hospitalization. In total, greater abundance of 206 urinary proteins was associated with development of RRT, while higher abundance of 179 urinary proteins was associated with a lower risk of RRT (FDR < 0.05) (Supplemental File ). We found substantial overlap between the proteins that differentiated AKI sub-phenotypes and those associated with risk of RRT. For example, of the 179 proteins associated with a lower risk of subsequent RRT, 108 were also associated with AKI-SP1. Similarly, of the 206 urinary proteins associated with a greater risk of subsequent RRT, 85 were associated with AKI-SP2. This overlap highlights the shared urinary protein biology between AKI sub-phenotypes and subsequent risk of RRT (Fig. ).

Rates of bacteremia in patients with AKI sub-phenotypes

With proteins of immune and complement activation and TLR expression increased in the urinary proteomic profile of patients with AKI-SP2, we sought to determine whether blood cultures positive for bacteria were more common in patients with AKI-SP2. We reviewed blood culture results in the first week after study enrollment and found that patients with AKI-SP2 were more likely to have detectable bacteria in their blood (35%) than patients with AKI-SP1 (2%) (p = 0.007).

Classification of AKI sub-phenotypes using the urinary proteome

To facilitate identification of AKI-SP2 at study enrollment through non-invasive urinary sampling, we developed urinary proteomic prediction models. We iteratively split the cohort into 1000 bootstrap training (75%) and test (25%) sets within the classification groups and used LASSO regression to develop urinary proteome prediction models for AKI-SP2 versus patients without AKI and/or AKI-SP1, for AKI-SP1 versus AKI-SP2, and for AKI-SP1 versus no AKI. We then combined the training and test sets for a final model with selected proteins for AKI-SP2 compared with patients without AKI and/or AKI-SP1. The bootstrap test datasets had a mean area under the curve (AUC) of 0.84 (95% CI: 0.66-0.98) for predicting AKI-SP2 in comparison to no AKI and AKI-SP1 (Table ). The final LASSO model for AKI-SP2 versus the AKI-SP1 and no AKI groups included 30 different urinary proteins (Supplemental File ). Similar performance (AUC = 0.80, 95% CI: 0.56-0.99) was seen when distinguishing AKI-SP2 from participants with AKI-SP1. The difference between the two AUC bootstrap samplings was significant (two-sided t-test p-value < 2.2 × 10^-16). We were not able to usefully predict classification differences between AKI-SP1 and no AKI (Table ), consistent with our regression analyses, in which no proteins were significantly different between no AKI and AKI-SP1.
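A condensed R sketch of this bootstrap scheme using glmnet (the package named in the Methods) might look like the following; the FDR <0.2 prescreening step is omitted for brevity, pROC is our choice here for the AUC, and x (samples × proteins matrix) and y (0/1 indicator of AKI-SP2) are hypothetical objects:

```r
# Sketch of the bootstrapped LASSO classifier described above. The FDR
# < 0.2 protein prescreen is omitted for brevity; x (samples x proteins)
# and y (1 = AKI-SP2, 0 = comparison group) are hypothetical objects.
library(glmnet)  # LASSO via cv.glmnet, as in the Methods
library(pROC)    # test-set AUC (our choice for this sketch)

set.seed(2023)
aucs <- replicate(1000, {
  # stratified 75/25 train/test split within each class
  train <- unlist(lapply(split(seq_along(y), y),
                         function(i) sample(i, floor(0.75 * length(i)))))
  cvfit <- cv.glmnet(x[train, ], y[train], family = "binomial",
                     alpha = 1, nfolds = 10)
  pred  <- predict(cvfit, newx = x[-train, ], s = "lambda.min",
                   type = "response")
  as.numeric(auc(response = y[-train], predictor = as.vector(pred)))
})
mean(aucs)                        # the study reports 0.84 for AKI-SP2 vs rest
quantile(aucs, c(0.025, 0.975))   # bootstrap 95% interval
```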
Validation of aptamer specificity with ELISA measurements

We compared aptamer-based measurements with corresponding Meso Scale Discovery immunoassay-based measurements of proteins found to be significantly associated with AKI sub-phenotypes (REG3A, MMP2, HAMP, RBP4, PRDX6) and of candidate urinary biomarkers of kidney injury, such as KIM-1, NGAL, EGF, IL-18 and Ang-2 (Table ). Prior to completing proteomics, Somalogic normalizes all urine samples to a similar total protein concentration by diluting the urine sample. We applied this dilution factor to the RFUs to calculate a neat (undiluted) value comparable to an immunoassay-based protein measurement. Among the five kidney injury biomarkers, we found higher correlations for Ang-2 (Pearson's r = 0.74) and KIM-1 (r = 0.6), moderate correlation for NGAL (r = 0.43) and no correlation for EGF (r = -0.04) or IL-18 (r = -0.01). Among the proteins associated with AKI sub-phenotypes, we found a higher correlation for REG3A (r = 0.86) and moderate correlations for MMP2 (r = 0.46), HAMP (r = 0.42) and PRDX6 (r = 0.53). RBP4 was not correlated (r = 0.12). The combination of four biomarkers measured using an immunoassay (REG3A, MMP2, HAMP and PRDX6) with the clinical variables of age and sex had an AUC of 0.69 (0.5-0.93) for predicting AKI-SP2 compared with AKI-SP1 and no AKI.
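The back-calculation and correlation step can be sketched in a few lines; the vector names and paired immunoassay values below are hypothetical:

```r
# Sketch of the aptamer-vs-immunoassay comparison: recover "neat"
# (undiluted) aptamer values from the dilution factor applied in the urine
# workflow, then compute Pearson's r against the Meso Scale Discovery
# measurement of the same protein. Vector names are hypothetical.
neat_rfu <- rfu * dilution_factor
cor(neat_rfu, msd_concentration, method = "pearson",
    use = "pairwise.complete.obs")
```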
Discussion

In sepsis-induced AKI, it has been particularly problematic to identify clinical subgroups with biologically distinct signatures. For example, clinicians have historically separated AKI into prerenal AKI and acute tubular necrosis. However, multiple studies have shown poor reliability in clinicians' identification of these two groups, and enriching for acute tubular necrosis in AKI clinical trials has yet to show a benefit. In our previous work, we developed a validated 3-variable molecular classifier to identify two AKI sub-phenotypes. Here we demonstrate, through measurement of ~5000 proteins in urine collected within 24 h of ICU admission, that these two AKI sub-phenotypes have distinct urinary profiles: AKI-SP1 is characterized by a reparative, regenerative profile and AKI-SP2 by an immune-activation and inflammatory profile. We also demonstrate shared urinary protein biology between AKI sub-phenotypes and subsequent risk of RRT and highlight potential future therapeutic targets.

Relatively few studies have evaluated the urine proteome in ICU patients with sepsis-induced AKI. The largest previous study included twelve patients with early recovery of kidney function matched to 12 patients with late or no recovery. Mass spectrometry was completed on urine samples and identified 8 differentially abundant proteins. Among these, higher urinary concentrations of neutrophil gelatinase-associated lipocalin (NGAL) were associated with greater risk of late or no recovery of renal function. In our work, we also found that higher urinary NGAL was associated with AKI-SP2 (FDR = 0.035) and with eventual risk of RRT (FDR = 0.003). These shared findings demonstrate the external generalizability of our cohort, and our work adds to previous urinary proteomic analyses by including a larger sample of patients with AKI, measuring 5000 proteins and leveraging the identification of AKI sub-phenotypes.

The identification of distinct urinary proteomic profiles between AKI sub-phenotypes may inform future testing of successful pre-clinical therapeutics tailored to the underlying biology. For example, among the top proteins most abundant in AKI-SP1 was IL11RA. IL11 and IL11RA have been shown to be crucial to the development of fibrosis in AKI and CKD models, and anti-IL11 treatment as well as knockout of the IL11RA gene are protective. Another example is that a number of pro-inflammatory cytokines and pathways of inflammation were upregulated in patients with AKI-SP2. Thus, results of trials of anti-inflammatory therapies in AKI may be mixed partly because of the inclusion of AKI-SP1 patients with a different pathophysiology from that of AKI-SP2. In addition, urine chordin-like 2 (CHRDL2) was increased in AKI-SP2, and higher urine CHRDL2 was associated with risk of RRT. The chordin proteins are antagonists of bone morphogenetic protein/transforming growth factor-beta (BMP/TGF-beta) signaling, a critical mediator of renal fibrosis, inflammation and apoptosis after kidney injury. Preclinical studies have shown that modulation of BMP-7 prevents kidney fibrosis and improves survival in rodent kidney ischemia, but application to clinical AKI has been less promising. Our urine proteomic findings suggest that including all types of patients with AKI with diverse biology in therapeutic studies may prevent translation of promising pre-clinical signals to clinical AKI.

The ideal method to model protein measurements in a spot urine sample is debatable.
One method is to index (i.e., divide) protein measurements by urine creatinine (UCr) concentration. However, UCr concentration decreases as glomerular filtration rate falls, and thus proteins indexed to UCr may overestimate or underestimate the true association of these proteins with clinical outcomes; empirical data have supported this. For this reason, we normalized urine aptamer protein RFUs to the mean of the total RFUs of all proteins within the sample. In a patient with concentrated urine, we would expect all protein RFUs to be increased, and in a patient with dilute urine, all protein RFUs to be decreased. Thus, normalizing to the mean of the total RFUs maintains the relative differences among protein RFUs within a sample while also accounting for urine dilution. This approach has been used to normalize urine metabolomics data. In a set of sensitivity analyses, we also present analyses using the raw urine protein-aptamer RFUs and demonstrate high overlap in proteins between the two methods.

The strengths of our study include the use of a multiplex urinary proteomics platform on urine collected early after hospitalization, capturing molecular signatures of AKI early after injury that are potentially modifiable. We also leverage the identification of AKI sub-phenotypes to improve the ability to detect differences in urinary proteomics. Another strength is the generalizability of our findings: we were able to replicate previously known protein-outcome associations, such as NGAL and risk of RRT, and to leverage the large sample size to identify several novel proteins associated with AKI sub-phenotypes and risk of RRT.

Our study has several limitations. First, urine output was not used to identify patients with AKI because of missingness in urine output data with early enrollment after ICU admission. The absence of urine output might have selected patients with potentially higher AKI severity. Second, the AKI-SP2 population is small and demonstrated notable clinical differences from AKI-SP1, and given the population size we were unable to account for these differences. Third, we did not have access to an external validation cohort with urinary proteomic data; future work will seek to test differences in urinary proteomics in diverse sepsis-induced AKI populations. Fourth, aptamer-based proteomic methods may be affected by probe cross-reactivity and nonspecific binding. Moreover, few published datasets present the correlation of urine proteomic data with immunoassay measurements, which we show is present in seven of the 10 proteins evaluated. Fifth, patients who had severe AKI and were anuric on study enrollment could not contribute urine, and thus urine proteomics cannot capture AKI pathology in this population.

In summary, among a population of patients admitted to the ICU with sepsis, we reproduce two AKI sub-phenotypes originally derived using plasma biomarkers, demonstrate distinct urinary proteomic signatures between AKI sub-phenotypes and highlight the shared urinary protein biology between AKI sub-phenotypes and risk of RRT. We also found that, in terms of the urinary proteome on study enrollment, AKI-SP1 is not significantly different from patients without AKI. We highlight several key biological pathways in human AKI with corresponding pre-clinical studies, demonstrating a future path for targeted therapeutics. We show that AKI-SP2 is associated with bacteremia.
Finally, we show that AKI-SP2 can be classified using prediction modeling from urinary protein abundances. A deeper understanding of the human pathophysiology in sepsis-induced AKI may allow tailoring the study of potential therapeutics to AKI sub-phenotypes.
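To make the total-signal normalization described above concrete, here is a minimal sketch assuming a samples-by-proteins matrix of raw aptamer RFUs; the protein names and values are toy placeholders, not the study's data or pipeline (real data would have roughly 5,000 protein columns).

```python
import pandas as pd

# Toy samples-by-proteins matrix of raw aptamer RFUs (hypothetical values).
rfu = pd.DataFrame(
    {"NGAL": [1200.0, 300.0], "CHRDL2": [80.0, 20.0], "IL11RA": [40.0, 10.0]},
    index=["concentrated_urine", "dilute_urine"],
)

# Divide every RFU by the mean RFU across all proteins in the same sample:
# relative differences among proteins within a sample are preserved, while
# overall urine concentration/dilution is factored out.
normalized = rfu.div(rfu.mean(axis=1), axis=0)

print(normalized)
```

In this toy example the second sample is a uniform 4-fold dilution of the first, so after normalization the two profiles coincide, which is exactly the dilution-invariance the normalization is meant to provide.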
Enhancing patient safety and risk management through clinical pathways in oncology

The emphasis on risk management and quality assessment has become a pivotal aspect of hospital management endeavours. Audit is a powerful tool for monitoring adherence to quality and patient safety standards, and an audit model that integrates patient safety, quality and risk management elements allows a hospital to track the critical areas affecting patient care. This study presents the description and application of an innovative audit methodology, the INTegrated Audit Model.

The emphasis on risk management and quality assessment has become a pivotal aspect of hospital management endeavours. In order to ensure the provision of high standards of care, organisations must comply with strict requirements, standards and guidelines from government bodies and international organisations. This adherence is a prerequisite for obtaining certifications such as UNI EN ISO 9001:2015 and Joint Commission International (JCI) accreditation. The monitoring of both quality and clinical risk is also regularly conducted through audits, overseen by internal and external auditor groups. The audit process facilitates the oversight of internal operations and verifies their alignment with quality standards and guidelines.

Over time, the audit process has undergone significant transformations, propelled by methodological advances that have opened new avenues for scrutinising various facets of healthcare delivery. Beyond organisational considerations, contemporary audit methodologies have expanded their scope to include an in-depth examination of patient safety measures. This reflects the dynamic nature of healthcare governance and underlines the drive to optimise patient-centred care through rigorous audit frameworks.

During its peer-review assessments, JCI employs the Tracer Methodology as one of its audit techniques. As stated by the Joint Commission, the Tracer Methodology is 'a way to analyse the organisation's system of providing care, treatment or services using actual patients as the framework for assessing standards compliance' and aims to 'evaluate the system or process, including the integration of related processes and the coordination and communication among disciplines and departments'. According to this methodology, the audit tracks the patient's experience of care and services, using information on the organisation's healthcare delivery process, with the objective of pinpointing risk issues and safety concerns across organisational levels.

Aim

This article presents a case study, following the experience of the IRCCS (Scientific Institute for Research, Hospitalization and Healthcare) Fondazione "Istituto Nazionale dei Tumori" (INT). The specific aim of this article is to illustrate how the audit model at INT has evolved, shifting from a model focused primarily on internal unit activities and the verification of compliance with ISO 9001:2015 standards to a model that places the patient's journey at the centre of the evaluation.
This new approach includes the assessment of both quality assurance and patient safety aspects, through the monitoring of ISO (International Organization for Standardization) standards, JCI standards and internationally recognised benchmarks. In this article, we will present the application and benefits of the newly adopted model, the INTegrated Audit Model (INTAM). The article will describe how audits are conducted within the institute and analyse audit reports to identify the key aspects that can be investigated through the application of this new model.

Context

Situated in Milan, Italy, the INT serves as a prominent oncological hospital and a cornerstone of cancer treatment in Italy. Boasting 482 bed spaces, 12 operating rooms and 140 exam rooms for visits and diagnostics, the institution is organised into 7 departments, housing 106 clinical units of different complexity. Annually, it delivers 1 267 747 outpatient services, 141 transplant procedures, 5069 day-care treatments and manages 11 569 hospital admissions. With a workforce comprising 2035 employees, of which 241 are actively engaged in research pursuits, the INT contributes significantly to advancing cancer-related knowledge.

Design

This project marks the culmination of a 6-year journey aimed at assessing the effectiveness and significance of the audit survey process, a decade postimplementation within the hospital setting. A mapping of the as-is audit process was conducted. The initial situation involved the creation of a list of units to be audited each year during the audit planning phase. The selection criteria required that 45% of the departments be audited annually, with priority given to those presenting known issues highlighted in previous audits. Each unit is audited at least once every 3 years.
The audit team receives training focused on the ISO standard and audit management methods (ISO 19011:2018 and ISO 9001:2015). During the on-site visit, the focus of the inspection, the questions asked and the areas investigated are left to the discretion of the auditor, who designs the audit by concentrating on the internal activities of the audited department and verifying compliance with ISO 9001:2015. The review of medical records is limited to checking the completeness of required legal documents and forms. At the end of the on-site visit, the auditors produce a report highlighting strengths, recommendations and non-conformities, which is shared with the audited department. Based on this report, a corrective action plan is developed.

Between 2016 and 2020, a thorough context analysis was conducted. The initial phase involved a survey among 22 auditors to evaluate the structure and effectiveness of the audit system; this was followed by a focus group session to discuss the survey results and by semistructured interviews with both auditors and auditees regarding the audit process and its usefulness. This evaluation highlighted that the audit analysis process primarily focuses on the internal activities of the audited units: the auditors investigate aspects limited to the individual unit's activity, without adopting a systemic perspective that links the unit's activity to the others. Additionally, there is a lack of standardisation in audit management procedures. Audits are conducted according to the ISO model, allowing auditors to frame questions based on their own expertise; while this is a strength of the current process, as it enables the auditor to tailor the audit to the unit's activities, it makes each on-site visit unique and does not ensure standardisation of the topics being analysed. This creates the opportunity to use the audit not only to verify ISO compliance but also to integrate the assessment of quality management, risk management and patient safety. Furthermore, the need emerged to train auditors further on these topics, enhancing their expertise in internationally recognised standards for patient safety and in JCI standards. Lastly, there is an opportunity to shift the audit approach from a decentralised, unit-specific focus to a patient-centred and multidisciplinary one. By introducing the Tracer Methodology, the audit plan is designed around the patient journey (represented by the integrated care pathway (ICP)), taking into account the patient's experience across all units.

In response, starting in 2019, a customised training programme was delivered to auditors on a revised version of the Tracer Methodology applied to clinical pathways. The subsequent step was planning the 2022 and 2023 audit programme accordingly, connecting the audit survey to integrated clinical pathways and internationally recognised good practices. This resulted in the auditing of a total of 31 units and 6 Tracers associated with 6 ICPs. In this initial phase, attention focused mainly on three good practices: the correct identification of patients, surgical site and procedure; the prevention of transfusion reactions; and the prevention and management of patient falls in healthcare facilities.
In 2022, as part of the Tracer 4 initiative, a comprehensive Failure Mode and Effect Analysis was carried out to evaluate the management and administration of chemotherapeutic drugs at the clinical day hospital.

Measurement

To evaluate the impact of structured monitoring of internationally recognised best practices, specifically ISO and JCI standards, reports from 31 audit visits were analysed. These audits encompassed 31 units within 6 Tracers conducted during 2022/2023. For each report, the areas examined during the on-site visits were documented. Each topic was catalogued and linked to the corresponding ISO 9001:2015 and JCI Standard codes. Seventeen topics were identified, tracked and ranked by their frequency of appearance across the 31 reports. The primary areas where non-conformities and observations were noted were then identified: non-conformities were associated with 8 ISO standards, while observations spanned 17 ISO standards. To ensure ongoing monitoring of these standards and to evaluate the continued impact of this new model, systematic analysis of reports from future Tracers is planned, with comparisons of results over time.
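To make the cataloguing and ranking step concrete, here is a minimal sketch of how report topics can be tallied and mapped to standard codes. The topic names, report contents and topic-to-JCI pairings are hypothetical placeholders rather than the institute's actual data; the ISO clause numbers follow the ISO 9001:2015 numbering also cited in the Tracer 1 example below.

```python
from collections import Counter

# Hypothetical audit findings: one list of coded topics per audit report.
reports = [
    ["documented_information", "personnel_training"],
    ["documented_information", "management_review"],
    ["continuous_improvement"],
]

# Hypothetical mapping from audit topic to (ISO 9001:2015 clause, JCI area).
standard_codes = {
    "documented_information": ("7.5.3", "Management of Information"),
    "personnel_training": ("7.2", "Staff Qualifications and Education"),
    "management_review": ("9.3", "Governance, Leadership and Direction"),
    "continuous_improvement": ("10.3", "Quality Improvement and Patient Safety"),
}

# Rank topics from most to least frequently addressed across all reports.
frequency = Counter(topic for report in reports for topic in report)
for topic, count in frequency.most_common():
    iso, jci = standard_codes[topic]
    print(f"{topic}: {count} report(s) -> ISO {iso} / JCI '{jci}'")
```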
Audit process as is and to be

In response to the identified needs, the audit process was revised; the new process is described below. The goal was to integrate three key elements: the integration of multiple standards, combining ISO, JCI and international patient safety guidelines; a focus on ICPs, to monitor real clinical practice against documented protocols; and a patient-centred approach, to enhance patient safety and care quality.
The INTAM framework is structured according to the Plan-Do-Check-Act cycle, ensuring a continuous process of planning, execution, review and improvement. A central feature of the INTAM model is the use of ICPs, which align with internal clinical procedures. Using a revised version of the JCI's Tracer Methodology, the audits focus on specific hospital facilities involved in a selected clinical pathway. This approach allows auditors to identify gaps between evidence-based care, as outlined in internal documentation, and actual clinical practice. In doing so, the model helps to identify inefficiencies and opportunities for improvement, ensuring a patient-centred approach to care.

In addition to focusing on ICPs, INTAM emphasises compliance with two key international standards: ISO 9001:2015 and the JCI accreditation standards for excellence in healthcare. Auditors assess compliance with these standards through on-site visits, to ensure that internal processes meet the required international benchmarks. This serves both to maintain high-quality patient care and to prepare the units for external audits related to JCI accreditation and ISO certification. Moreover, the INTAM model requires that each hospital unit demonstrate adherence to 19 ministerial recommendations issued by the Italian Ministry of Health, which cover critical aspects of patient safety such as medication management, patient identification and operating room safety. Compliance with these national safety standards is monitored annually by the ministry to ensure ongoing adherence to best practices in patient care.

Continuous training for auditors

The training programme was developed to provide auditors with the necessary tools to implement the new audit model effectively. In response to emerging training needs, a continuous training process for auditors was initiated between 2019 and 2023, focusing on risk management, audit execution and international standards for quality and patient safety. Training also addressed the need for innovation in audit methodology through the JCI Tracer Methodology. After auditors gained foundational knowledge on integrated audits, patient safety and clinical risk management, practical simulations were conducted to evaluate their skills in organising audits. These simulations ensured auditors were prepared for real-world application of the new auditing techniques. From 2022, specific training was provided on the JCI audit process, particularly the integration of ICPs into the audit. This approach traces a patient's care pathway across various hospital units, auditing each unit as part of the overall patient journey. Auditors then practised conducting audits based on these methods, focusing on audit preparation, execution and reporting, in line with ISO 19011:2018.

The auditors have been actively involved in the ongoing evaluation of the model. Semistructured interviews were conducted to assess their satisfaction with the model and the new training they received, as well as to identify ways to improve the training offerings based on their specific needs. They were asked to share how the training has supported them and how their perception of their role during audits has changed as a result.

The units involved in the Tracer procedure

Each Tracer follows the journey of a selected patient through the facilities; to do so, a medical record number related to the ICP is identified in advance and communicated to the auditor group.
The Tracer process entails selecting an average of six units after analysing each ICP; in some cases, it was not possible to audit all six selected clinical units due to external constraints. The study involved six different Tracers, each focused on a specific ICP. The Tracers and the number of selected units for each are as follows:

Tracer 1: Colorectal cancer surgery, with five audited units.
Tracer 2: Breast-Unit (outpatient pathway), with six audited units.
Tracer 3: Renal-testicular cancer, with six audited units.
Tracer 4: Breast-Unit (inpatient pathway), with six audited units.
Tracer 5: Liver cancer, gastric cancer, pancreatic cancer surgery, with four audited units.
Tracer 6: Thoracic cancer, with four audited units.

The 31 audits were carried out between 2022 and 2023. Every year, at least 45% of the certified units of the hospital are subjected to an internal evaluation following the INTAM. Each on-site visit to a unit lasts about 2 hours and follows the guidelines outlined in an internal checklist, which will be standardised starting in 2024. The entire audit process for each unit, including preparation and report writing, takes up to 6 hours. To better explain the methodology applied to the collection of the data discussed in the results, the following paragraphs show the audit survey process applied to Tracer 1, 'Colorectal Cancer Surgery'; the same process was applied to each Tracer. The units selected for Tracer 1 were the Colorectal Cancer Surgery Unit, Endoscopy, Intensive Care Unit, Hospital Public Relations Office, Anaesthesia and Reanimation, and Hospital Pharmacy.

Planning the audit

The aim of the audit survey is to examine the pathway of a cancer patient who has undergone major elective surgery, evaluating each of the units involved in this process through the observation of documentary evidence and data. To do so, the ICP related to colorectal cancer surgery, as outlined in the hospital's internal procedure, was analysed. Of the six clinical units identified for analysis, five were actively included in the Tracer process, while one was excluded due to organisational factors.

Auditing

Each unit was asked to provide documentation related to the procedures followed during the treatment of a selected patient who had been treated by every unit during their hospital stay. The clinical report number was communicated to the selected units in advance. During each visit, the auditor teams tracked information related to UNI EN ISO 9001 standards and JCI standards; the most frequent points investigated in Tracer 1 related to 'collection and storage of documented information', 'management review', 'continuous improvement' and 'personnel training' (ISO clauses 7.5.3, 9.3, 10.3 and 7.2). Particular attention was paid to monitoring each unit's adherence to the internationally recognised good practices for the prevention of sentinel events.

Reporting

After each audit visit, the auditor team submitted an audit report to the Quality and Risk Management Unit, detailing key findings, instances of non-conformity, observations and strengths. The audit reports were then collected and examined by the Quality and Risk Management Unit.
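As a rough illustration of how a single Tracer could be represented as a data structure for planning and reporting, consider the sketch below; the class design, field names and the example finding are assumptions made for illustration, not the institute's actual checklist or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class TracerAudit:
    """Illustrative container for one Tracer built around a clinical pathway."""
    icp: str                           # integrated care pathway under review
    medical_record_number: str         # selected patient's record (anonymised)
    units: list[str]                   # facilities along the patient journey
    findings: dict[str, list[str]] = field(default_factory=dict)

    def add_finding(self, unit: str, note: str) -> None:
        # Collect non-conformities and observations per audited unit.
        self.findings.setdefault(unit, []).append(note)

# Hypothetical Tracer 1 plan mirroring the walkthrough above.
tracer1 = TracerAudit(
    icp="Colorectal cancer surgery",
    medical_record_number="MRN-EXAMPLE-001",
    units=["Colorectal Cancer Surgery Unit", "Endoscopy", "Intensive Care Unit",
           "Hospital Public Relations Office", "Anaesthesia and Reanimation",
           "Hospital Pharmacy"],
)
tracer1.add_finding("Hospital Pharmacy", "Observation: medication storage")
```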
The most frequently addressed topics in the six Tracers were synthesised in a table and ranked from the most discussed issue to the least; each topic was connected to the equivalent ISO standard and, where one exists, the equivalent JCI standard. Across the six Tracers, the auditors identified a total of 11 instances of non-conformity and provided 46 observations for enhancing the performance of the units. The instances of non-conformity were mostly detected in the areas of collection and storage of documented information, skill mapping and personnel training, management review, monitoring of outcomes and evaluation of performance. The same areas were also subject to observations from the auditors, alongside other suggestions for improvement in the areas of customer satisfaction, service organisation, definition and mapping of the unit's objectives, definition of organisational responsibilities and roles, and medication storage. Following the mock-Tracer Methodology, 27 ISO standards were investigated in 31 units, in compliance with 8 areas of the JCI standards: patient-centred care, assessment of patients, medication management and use, quality improvement and patient safety, governance, leadership and direction, facility management and safety, staff qualifications and education, and management of information.
The revision of the audit process has proven highly beneficial in addressing some of the key limitations of the previous system. Originally, the audit focused primarily on the internal activities of individual units, resulting in a fragmented and isolated assessment. By shifting towards a more integrated model, the audit process now offers the opportunity to assess quality management, risk management and patient safety in a cohesive manner. This broader perspective enhances the ability to identify systemic issues across departments, rather than focusing solely on isolated non-conformities.
The introduction of the Tracer Methodology represents a critical shift in audit design, transforming it from a unit-specific focus to a patient-centred and multidisciplinary approach. By structuring the audit around the patient journey and following the ICP, this model ensures that the patient's experience is evaluated across all units involved in their care. This provides a more coordinated assessment of the hospital's services, improving care continuity and safety standards. By equipping auditors with deeper knowledge of international standards for patient safety and JCI guidelines, the audit process becomes more robust and aligned with globally recognised best practices. Standardising audit management procedures also addresses the previous inconsistency in audit execution, ensuring that all key areas are systematically evaluated.

The INTAM enabled the audit team to gather structured information across each clinical pathway. This facilitated the identification of prevalent topics of interest for each unit, as well as of the most significant issues that directly and indirectly affect the patient's journey through the hospital and their perception of the quality of care. These topics have also been recognised as key areas for internal quality improvement efforts aimed at enhancing the delivered services. Through the different Tracers, we were able to monitor adherence to the clinical pathways of some of the most common cancers in Lombardy and Italy, which represent a relevant volume of patients treated every year; we can therefore assume that the implementation of specific measures in these areas could lead to a general improvement in the quality of care, enhanced personnel performance and better service organisation. These targeted measures are the result of an extensive evaluation of critical areas such as patient safety, clinical risk management, continuous quality improvement, personnel training and medication management, made possible by the application of the new process.

By mapping the most frequently addressed topics, non-conformities and observations from the audit team, and organising all of them into thematic groups aligned with ISO and JCI standard codes, we were able to obtain a realistic overview of compliance with international standards. This approach enables the units to understand their current situation and identify areas for improvement in adhering to international guidelines; it also helps prepare them to undergo external audit surveys successfully.

Both auditees and auditors' teams have shown high compliance and satisfaction with the new audit process and the related continuous training. This topic will be extensively discussed in a paper in progress (Milanesi et al, in prep.), along with the detailed process that led to the creation of the INTAM. Audited unit staff appreciate the methodology for enhancing consistency with real-world operations, fostering collaboration, increasing engagement, optimising time management and aligning with international accreditation standards. Auditors value the training's alignment with their educational needs, the introduction of new evaluation tools and the opportunities for a deeper understanding of internal procedures and pathways. Overall, there is a sense of organisational belonging and enhanced teamwork within a cooperative environment.
Limitations

While certain aspects pertinent to these practices have been examined directly and indirectly during audit surveys, notably encompassing service delivery, pharmaceutical management, prescription protocols, patient identification and protocol adherence, blood transfusion management, and prevention of patient falls and associated adverse events, there remains a need to integrate all applicable observations more comprehensively into the survey protocol through dedicated training on these subjects.

In conclusion, the new audit model not only verifies compliance with international standards for quality and patient safety but also offers a valuable tool for improving patient care and operational efficiency, ensuring that healthcare delivery is safe, standardised and centred on the patient's needs. The findings demonstrate that the implementation of INTAM facilitated effective monitoring of unit activities and their adherence to both clinical pathways and international standards and guidelines. Through audits, an extensive assessment was conducted focusing on critical areas and topics affecting the patient journey, as well as relevant aspects of clinical governance, including patient safety and risk management. This approach provided a comprehensive understanding of service delivery to end-users. Furthermore, both auditors and auditees expressed satisfaction with the audit process and the methodology applied.

The methodology described in this paper lends itself to adaptation in diverse clinical contexts, tailoring the design of audits to alternative clinical pathways or structured patient journeys within the organisation. The Tracer approach also presents opportunities for extending the methodology beyond individual clinical pathways to thematic pathways. For example, conducting a Tracer focused on significant issues such as the pharmacovigilance system, or on patient education throughout the hospital stay, presents an intriguing prospect. This broadens the scope of mock-Tracer methodology application, enhancing its utility in addressing healthcare concerns beyond traditional clinical pathways.

Sustainability and future development

The project is designed to be sustainable over time, and it will require ongoing commitment in the coming years to refine the method and make it increasingly tailored to the organisation's operations. To achieve this, the auditors' skills must be periodically updated, and their expertise in applying the method continuously reinforced. It is essential to maintain practical competencies by participating in a minimum number of quality audits each year and attending training events on key topics. To assess the system's effectiveness, a survey is currently underway to evaluate staff perceptions of the new method's efficacy within the audited structures.
In addition, feedback continues to be gathered through ongoing dialogue with the audit team to identify challenges and opportunities for improvement in audit organisation, execution and training activities. Anticipating the progression of this initiative, a checklist is slated for development in the near future. This checklist will incorporate standardised elements that will form the foundational components of each audit survey and will be shared among the auditing team. It will ideally serve as a tool for obtaining a comprehensive overview of each unit regarding the critical topics to monitor. Additionally, it will provide an opportunity to gain a better understanding of the overall application of international quality standards within the hospital, and it will aid in identifying critical areas that require specific staff training and in determining shared improvement and strategic goals.
Quality of reporting of otorhinolaryngology articles using animal models with the ARRIVE statement

Journal selection

The quality of reporting of articles describing animal experiments in otorhinolaryngology research was compared between two journal categories: ENT journals and multidisciplinary journals. Based on ISI Web of Knowledge impact factors (www.webofknowledge.com, date inspected: 12 June 2015), the five ENT journals with the highest impact factors in 2012 were selected: Ear & Hearing (Ear Hear), Journal of the Association for Research in Otolaryngology (JARO), Head & Neck – Journal for the Sciences and Specialties of the Head and Neck (Head Neck), Hearing Research (Hear Res), and Audiology & Neurotology (Audiol Neurotol). None of these journals implemented the ARRIVE guidelines in their 'Instructions to Authors' (date inspected: 12 June 2015). The top five multidisciplinary journals in 2013 were Nature, Science, Nature Communications (Nat Commun), Proceedings of the National Academy of Sciences of the United States of America (PNAS) and Scientific Reports (Sci Rep). Two of these journals (Nature and Nat Commun) recommended the ARRIVE guidelines when documenting animal studies (date inspected: 12 June 2015). The included journals and their impact factors are summarized in the accompanying table.

Search strategy

A PubMed database search was conducted on 12 June 2015 using four predefined filters. First, an adapted version of the ENT filter developed by the Cochrane ENT group was used to retrieve articles reporting research in otorhinolaryngology. Second, a filter was applied to retrieve only research using animal models. Subsequently, date restrictions were applied per journal category to limit the number of retrieved articles. We searched PubMed for articles published in ENT journals in the year 2014. Since fewer otorhinolaryngology-related articles are published in multidisciplinary journals, we searched for articles reporting animal experiments in otorhinolaryngology research published in multidisciplinary journals from 2010 to 2014. It is important to note that the ARRIVE guidelines were first published in 2010; studies published in multidisciplinary journals in 2010 might therefore have been written prior to the publication of these guidelines. An analysis was performed to investigate correlations between year of publication and quality of reporting. The complete search syntax with specific filters is outlined in Supplemental digital content 1 (see http://journals.sagepub.com/doi/full/10.1177/0023677217718862 for all supplementary materials in this article).

Study selection

Two authors (SFLK and JPMP) independently screened titles, abstracts and full texts of the retrieved articles and selected those reporting in vivo animal experiments. To be considered for inclusion, studies must have assessed preclinical phases of diseases or disorders commonly treated by otorhinolaryngologists. Discrepancies between the two reviewers were discussed until consensus was reached.

Scoring articles

To assess the quality of reporting of articles, two authors (AB and SFLK) independently scored articles using the checklist from the ARRIVE guidelines. The checklist contains 20 points, some with subsections (a, b, c or d). Subsections were considered as separate items for scoring, yielding a total of 38 items. Two items on the ARRIVE checklist (10c and 15b) were optional and were rarely applicable.
Therefore, to standardize our assessment of quality of reporting in all articles, these two items were excluded from the analysis. In total, 36 items were scored for each article. Supplemental digital content 2 summarizes the scoring criteria per item. Articles were reviewed in order to extract all provided information, including supplementary information available online or in appendices. No more than five articles per journal category were scored consecutively, to distribute possible learning effects evenly across the two journal categories.

Inter-observer agreement

Cohen's kappa value for inter-observer agreement was evaluated to analyze discrepancies among the scorers. Cohen's kappa was calculated for the complete dataset and per item.

Data analysis

Descriptive statistics, including median and mean scores of adequately reported ARRIVE items, were calculated. A two-tailed Mann–Whitney U-test for two independent samples was used to evaluate significant differences between the two journal categories. Chi-square analysis was used to compare each ARRIVE item between the two journal categories. For the articles published in multidisciplinary journals, the correlation between year of publication and quality of reporting was investigated using Spearman's rank correlation coefficient (Spearman's rho). Statistical tests were performed using the SPSS v20 statistics package (IBM, Armonk, NY, USA). Statistical significance was set at 5%.
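The analyses described above can also be reproduced outside SPSS; the following is a minimal sketch using SciPy and scikit-learn on simulated scores (0 = inadequately reported, 1 = adequately reported), not the study's actual data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency, spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Simulated binary scores: rows = articles, columns = 36 ARRIVE items.
ent = rng.integers(0, 2, size=(35, 36))    # 35 ENT-journal articles
multi = rng.integers(0, 2, size=(36, 36))  # 36 multidisciplinary articles

# Per-article percentage of adequately reported items, compared between
# journal categories with a two-tailed Mann-Whitney U-test.
ent_pct = ent.mean(axis=1) * 100
multi_pct = multi.mean(axis=1) * 100
u, p = mannwhitneyu(ent_pct, multi_pct, alternative="two-sided")

# Chi-square test on one ARRIVE item (adequate vs inadequate per category).
item = 0
table = [[ent[:, item].sum(), 35 - ent[:, item].sum()],
         [multi[:, item].sum(), 36 - multi[:, item].sum()]]
chi2, p_item, dof, _ = chi2_contingency(table)

# Spearman's rho between publication year and reporting quality.
years = rng.integers(2010, 2015, size=36)
rho, p_rho = spearmanr(years, multi_pct)

# Cohen's kappa between two scorers; flip ~6% of items to mimic disagreement.
scorer_a = rng.integers(0, 2, size=2556)
scorer_b = np.where(rng.random(2556) < 0.94, scorer_a, 1 - scorer_a)
kappa = cohen_kappa_score(scorer_a, scorer_b)

print(f"Mann-Whitney U p-value: {p:.3f}")
print(f"Item {item} chi-square p-value: {p_item:.3f}")
print(f"Spearman rho: {rho:.2f} (p = {p_rho:.3f})")
print(f"Cohen's kappa between scorers: {kappa:.2f}")
```

With about 6% of the simulated items flipped between scorers, the resulting kappa lands near 0.88, in the same range as the 0.87 reported in the results below.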
Search and study selection

The combined search syntaxes (Supplemental digital content 1) yielded 51 articles published in ENT journals and 63 articles in multidisciplinary journals; the accompanying flow chart summarizes the search and study selection process. Of the 51 articles retrieved from ENT journals, 11 were not primary research articles and five did not involve in vivo animal experiments. Therefore, 35 articles from ENT journals were included in the analysis (Hear Res: n = 15, JARO: n = 11, Head Neck: n = 9). Of the 63 articles retrieved from multidisciplinary journals, 18 were not related to otorhinolaryngology research, five did not report on primary research and four did not include in vivo animal experiments. Therefore, 36 articles were included in the analysis (PNAS: n = 24, Nature: n = 4, Nat Commun: n = 4, Sci Rep: n = 3, Science: n = 1). Eight articles were published in multidisciplinary journals that endorse the ARRIVE guidelines. Six articles were published in multidisciplinary journals in 2010, eight in 2011, eight in 2012, nine in 2013 and five in 2014. The numbers of retrieved and selected articles per journal are summarized in the accompanying table.

Overall quality of reporting scores

The 35 articles published in ENT journals reported a mean of 57.1% adequately scored items (95% confidence interval [CI]: 53.4–60.9%; median: 58.3%). The 36 articles published in multidisciplinary journals reported a mean of 49.1% adequately scored items (95% CI: 46.2–52.0%; median: 50.0%). The overall difference between the journal categories was statistically significant (Mann–Whitney U-test, P = 0.001), suggesting that ENT journals adhered better to the ARRIVE guidelines. For the articles published in multidisciplinary journals, there was no statistically significant correlation between the year of publication and the number of adequately reported ARRIVE items (P = 0.083). Moreover, there was no significant difference in the quality of reporting between the eight articles published in multidisciplinary journals that endorsed the ARRIVE guidelines (Nature, Nat Commun) and the 28 articles published in journals that did not endorse the ARRIVE guidelines: 51.4% (95% CI: 45.6–57.2%, median: 54.2%) compared with 48.4% (95% CI: 45.1–51.7%, median: 50.0%), respectively.

Quality of reporting for specific items

When examining ARRIVE items separately, five items (6a, 7a, 9c, 14, 18b) were scored significantly higher in the articles published in ENT journals. These items assessed whether the study reported the number of experimental and control groups (6a); information on the drug dose, site and route of administration, and the surgical procedure and equipment used (7a); welfare-related assessments and interventions carried out prior to, during, or after experiments (9c); information on the health status of animals prior to treatment or testing (14); and study limitations (18b). Several items were not adequately reported in either journal category: 10 items were adequately reported in fewer than 20% of articles in both journal categories.
These items include the time of day when experiments were carried out (7b), the rationale behind the choice of the specific anesthetic, its dose and route of administration opted for (7d), information regarding housing of animals (9a), sample size calculation (10b), allocation of the animals to groups (11a,b), methods used to assess whether the data met the assumptions of the statistical approach (13c), reporting of adverse events (17a,b), and implications of the experimental methods or findings for the replacement, refinement or reduction of the use of animals in research (18c). Inter-observer agreement Out of a total number of 2556 scored items, 158 (6.1%) were scored differently. Cohen's kappa value for inter-observer agreement was 0.87 (standard error = 0.10). A Cohen's kappa score between 0.61 and 0.80 suggests a good agreement between independent scorers. Cohen's kappa value for inter-observer agreement per item is presented in Supplemental digital content 3. The inter-observer agreement was high for most items, and there were no Cohen's kappa values lower than 0.3.
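Cohen's kappa corrects the raw per-item agreement for agreement expected by chance. Below is a minimal sketch of the computation; the two score vectors are invented for illustration and do not come from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the raters' marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative input: 1 = item reported adequately, 0 = not.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```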
The present study evaluated the quality of reporting of scientific publications using animal models in otorhinolaryngology research. Articles published in ENT journals adhered better to the ARRIVE guidelines than articles published in multidisciplinary journals. Therefore, articles published in multidisciplinary journals with high impact factors do not have a superior overall quality of reporting in otorhinolaryngology research using animal models. Similarly, MacLeod et al. identified significantly less frequent reporting of randomization in articles published in journals with high impact factors. Our findings contrast with reports investigating the quality of reporting of randomized controlled trials and systematic reviews in otorhinolaryngology research, where ENT journals underperformed.

Interpretation of results

Although ENT journals showed better quality of reporting, adherence to the ARRIVE guidelines is generally poor in otorhinolaryngology research for both journal categories. Items such as the choice of the specific anesthetic and its dose and route of administration (7d) and information regarding the housing of animals (9a) were rarely (<20% of all studies) reported. This information is essential for accurate replication of animal experiments, as it may influence study outcomes. Prager et al. reported that housing and husbandry conditions have the potential to influence the responses of rodents, and thus alter study outcomes. Our findings also revealed that sample size calculation for the number of animals chosen per group (10b) and allocation of the animals to groups (11a,b) were rarely reported (<20% of all studies). These two items are essential for optimizing statistical design and for fulfilling ethical obligations, as they aim to reduce potential bias and the number of animals used in research. Articles published in multidisciplinary journals often described additional experiments alongside the animal model, so the animal experiment may not have been the primary focus of the study. Nevertheless, all multidisciplinary journals included had a methodology section containing information relating to the animal experiments, and these sections do not have word limits that might have justified the missing information. Similar outcomes have been found in other disciplines. Gulin et al. performed a quality assessment review of animal studies for Chagas disease by comparing studies published before and after the publication of the ARRIVE guidelines. In line with our findings, their study revealed that items such as randomization (16%) and sample size calculations (7%) were rarely reported. Ting et al. investigated interventional animal studies in rheumatology and reported missing information such as randomization (17%), sample size calculation (0%), allocation (0%), housing, husbandry and welfare-related information (5%), and implications for replacement, refinement or reduction of the use of animals (0%). These items are essential to reduce bias in scientific research and to make experiments transparent and replicable. Furthermore, Schwarz et al. reviewed publications on preclinical research for the treatment of mucositis/peri-implantitis, Freshwater et al. conducted reviews of animal research published in plastic surgery journals, and Tsilidis et al. investigated the reporting of animal models for neurological diseases. All these studies concluded that there is an urgent need to improve the quality of reporting when using animal models.
Methodological considerations

Strengths of our study include a search strategy that could be reproduced to evaluate the quality of reporting of animal studies in other disciplines. To account for learning effects, the two authors who independently scored 2556 items did not score more than five articles consecutively per journal category. The limitations of the study include, first, the subjective assessment by the two independent scorers, who were also not blinded to which journal category a paper belonged. However, the high inter-observer agreement demonstrated that both reviewers had fairly similar judgment (Supplemental digital content 3). Second, in order to obtain a sufficient number of articles, we included articles published in multidisciplinary journals from 2010 to 2014, whereas we included articles published in ENT journals in 2014 only. Since the ARRIVE guidelines were first published in 2010, articles published that year could not have had access to these guidelines. Nevertheless, a subanalysis revealed no correlation between the year of publication and the quality of reporting. A third limitation is that Nature and Nat Commun have recommended that authors use the ARRIVE checklist (date inspected: 12 June 2015). However, no statistical difference was found in the quality of reporting between articles published in multidisciplinary journals that endorsed the ARRIVE guidelines and those that did not. Finally, our findings were based on only eight journals (ENT journals: Hear Res: n = 15, JARO: n = 11, Head Neck: n = 9; multidisciplinary journals: PNAS: n = 24, Nature: n = 4, Nat Commun: n = 4, Sci Rep: n = 3, Science: n = 1).

Reporting guidelines

Evidence that clinical trials lacked crucial methodological information led to the development of the Consolidated Standards of Reporting Trials (CONSORT) statement, which is now implemented by many journals and funding agencies. Implementing the CONSORT statement has been shown to markedly improve the quality of reporting of clinical trials. By contrast, the publication of the ARRIVE guidelines did not enhance the quality of reporting when comparing articles appearing before and after the guidelines were published. Baker et al. showed that reporting of animal research in PLoS journals, which have been early proponents of the ARRIVE guidelines, still remained low. In our sample, we also showed that there was no improvement in the quality of reporting with increasing year of publication (2010–2014). Therefore, we recommend a stronger endorsement of the ARRIVE guidelines by authors, journal editors and funding agencies.
Although articles using animal models published in ENT journals have better quality of reporting scores than those published in multidisciplinary journals, adherence to the ARRIVE guidelines is generally poor in otorhinolaryngology research. There is an urgent need to improve the quality of reporting in otorhinolaryngology research using animal models. Stronger endorsement of the ARRIVE guidelines by authors, research and academic institutions, editorial offices, and funding agencies is warranted to optimize the quality of reporting.
Fake gunshot wounds in the skull—post-mortem artifact caused by steel probe during police search for a missing body

In the case of a hidden corpse and a long interval between death and body discovery, one should take into account the possibility of post-mortem injuries caused by environmental conditions (taphonomic changes), by the action of living organisms (insects and their larvae or other animals), and by human actions during the search for or recovery of the corpse. The distinction between vital injuries and post-mortem artifacts may sometimes be a great challenge for the forensic pathologist performing the post-mortem examination. So far, the literature has described various interesting cases of post-mortem injuries differentiated from vital ones, mainly caused by animals. However, no case comparable to the one described below was found in the available literature. A young woman, who had gone missing a month before, was murdered by her boyfriend, who then decided to hide the body and complicate its identification, if found. For this purpose, he dismembered the body by cutting off the head, torso and limbs, and removing large areas of skin from the body parts, including the face and fingertips. He buried fragments of her body separately in different places and sank the skinless head in a shallow water reservoir with a very muddy bottom. When, after the family had reported the woman missing, the police began searching for her and found traces indicating that the man may have murdered his partner, he confessed to the murder and indicated where the body fragments were hidden. The murder was supposed to have happened about 1 month earlier. The body and limb fragments were dug out quickly and then subjected to an autopsy. However, the search for the missing head continued, because the head had been sunk in water. After 2 days, the head, with part of the neck, was found and also sent for post-mortem examination. The head was key in determining the cause of death, as it revealed injuries, including a fracture of the mandible and damage to the hyoid bone, which indicated manual strangulation as the cause of death and correlated with the suspect's testimony. During the post-mortem examination of the head, after separation of the preserved soft tissues exhibiting decomposition changes, apart from the above-mentioned mandibular injuries, an atypical finding was revealed in the form of three round holes, each 0.5 cm in diameter, with smooth external edges, located in the right temporal bone and slightly posteriorly in the occipital bone (Fig. ). At first glance, the holes resembled gunshot wounds from a small-caliber weapon. However, there was no damage to the dura mater, which was flaccid and separated from the inner surface of the skull but had preserved continuity. No damage to the brain structure or presence of foreign bodies inside the skull, such as bullet fragments, was found. Due to the unusual finding in the skull, it was macerated and then reassessed (Fig. ). The holes in the bones were almost identical, located a few centimeters from each other. During detailed examination of the macerated skull bone, a funnel-shaped inward extension of the bone was noted at the holes, which is common for entrance gunshot wounds. Also, small, slightly recessed bone fragments were visible on the inside of the skull at the hole margins.
There were no fractures in the skull presenting as fissures diverging radially from the holes in the bone (Fig. ). In light of the above, several hypotheses arose regarding the time of occurrence and cause of the three mysterious holes in the skull. Due to the lack of damage to the dura mater and brain, with the damage covering the entire thickness of the skull (0.3 cm), an origin from a shot from a firearm or pneumatic weapon was excluded. It is difficult to imagine a situation in which a bullet damages the entire thickness of the skull but stops at the dura mater and does not enter the cranial cavity. In the case of gunshot wounds, besides the presence of a funnel-shaped inward extension of the bone, radial fracture fissures often appear around the wound in the bone. Such fissures were not present in our case. Only a small, linear bone fracture was found at the edge of one of the holes (Fig. , hole no. 1 visible from the inside). The remaining hypotheses concerned other possible causes of the skull bone injuries. One of them was the possibility of a neurosurgical procedure in the past, despite the hole size being uncharacteristic for such procedures. Nevertheless, this hypothesis was excluded by the family of the deceased. Another hypothesis was that the suspect could have damaged the skull with a drill, either while the victim was still alive or after her death. He did not confess to such deeds, no drill was found, and it would have been very unlikely for him to have drilled through such a thin bone completely three times without damaging the dura mater or brain in any of them. So this version also remained very doubtful. When speculating with the police officers about the possible causes of the puzzling holes, a question arose about the search for the head in the muddy water reservoir and how it was eventually discovered. It turned out that during the search for buried corpses, special steel probes are used (similar to the probes used to search for avalanche victims), which are punched into the ground. These probes have a twofold use: they create a thin channel in the ground so that police dogs can smell a corpse, or they can be used to find a corpse when the probe end hits the corpse directly and stops at it (Fig. ). People who took part in the search for the head using steel probes were interviewed, and one of them confirmed that while punching the probe into the bottom of the water reservoir, shortly before the head was found, he felt resistance. Furthermore, a recording of the search was obtained, on which the manner of searching for the head in the water reservoir with the use of a steel probe had been recorded (Fig. ). After this information was provided, the steel probe was brought to the morgue and its structure was compared with the holes in the skull, which solved the puzzle of the mysterious holes. Therefore, taking into account the above, it can be assumed that during the search for the body using the above-mentioned steel probes, the right side of the skull, which was hidden in silt in shallow water, was punctured three times before it was recovered. Due to the characteristic structure of the steel probe, i.e., a sharp but short tip with a significantly widening conical shaft, the probe did not penetrate deeper into the skull. The only damage caused by the sharp tip of the probe was the round hole in the 0.3-cm-thick skull bone (Fig. ).
This case shows how important and helpful the cooperation between forensic pathologists performing post-mortem examinations and law enforcement and judicial investigators is at every stage of an investigation. Thanks to a joint effort, it was possible to find the cause of initially very unusual-looking skull injuries in a murder case and establish that they were a post-mortem artifact, not inflicted by the perpetrator. The presented case was facilitated by the fact that the head was found about 1 month after the murder, when soft tissues, including the dura mater and the brain, were still preserved. The post-mortem examination would have been much more difficult if, despite the head punctures caused during the search, the head had not been found at the time of the investigation but, for example, several months or years later, when there would be no more soft tissue, only bone. Then, one could come to the wrong conclusion that the cause of death was gunshot wounds to the head and that the bullets or their fragments were washed out by water or lost while recovering the skull from the water. This case also highlights the variety of post-mortem injuries and draws attention to the vigilance necessary when diagnosing injuries, especially on a body found some time after death.
Improving Patient Understanding of Emergency Department Discharge Instructions

Several studies have analyzed the effectiveness of discharge instructions given to emergency department (ED) patients at the time of discharge and have identified areas for improvement. These studies recommend that key components of discharge instructions include diagnosis, expected duration of illness, at-home care, return precautions, and follow-up plan. Nonetheless, many ED patients do not receive discharge instructions that include all these components. In addition to being incomplete, discharge instructions are often difficult to read. In fact, discharge information given to trauma patients at one institution was written at least four grade levels higher on average than the National Institutes of Health-recommended sixth grade reading level. They noted that after improving readability by breaking up complex sentences, using simple words, and using bullet points and subheadings, there was a significant decrease in post-discharge return phone calls and readmissions. Additionally, having a good understanding of one's discharge instructions can help promote optimal health and recovery following an ED visit. Patients may also have fewer unnecessary return visits to the ED if they better understand their discharge instructions. Currently, discharge instructions at this urban Veterans Administration (VA) hospital include a section at the beginning of the instructions where clinicians can free-text any specific instructions they have for the patient. This section may also be kept blank. There is also standardized information about the discharge diagnosis, which is included in all instructions. In this pilot study we aimed to determine whether implementing discharge instructions that are standardized at an appropriate reading level and include key components would improve patient understanding of discharge instructions (measured by patient-clinician correlation).
We conducted this pilot study at a 20-bed urban VA hospital ED. This study did not collect any personal patient data and was thus deemed exempt by the VA institutional review board office. Study participants were approached by nursing staff, clinicians, or study staff and asked whether they would be willing to participate in a short interview to help a quality improvement project focused on discharge instructions. If the patient agreed, they were interviewed by study staff regarding the key components of discharge instructions. They were asked to state their diagnosis, what (if any) new medications were prescribed, what they needed to do at home to take care of their illness, the expected duration of illness, reasons to return to the ED, and who to follow up with. Study staff recorded their answers. Patients were permitted to look at their discharge instructions at any time during the interview to help answer the questions and were reminded of this at the start of the interview. Study staff then asked the clinician (physician or advanced practice practitioner [APP]) the same questions. For the initial control group, clinicians were free to include whatever they wanted in the free-text portion of the discharge instructions. This group of 25 patients had the following discharge diagnoses: edema; motor vehicle collision; concussion; strain; acute psychosis; constipation; fracture; shingles; hyperglycemia; cystic acne; cervical radiculopathy; oral mucosal lesions; conjunctivitis; sinusitis; pneumonia; ear infection; cellulitis; fatigue; diarrhea; chest pain; back pain; balanoposthitis; chronic obstructive pulmonary disease (COPD); and dehydration. The clinicians treating this group included 10 physicians and two APPs. Data were again collected by study staff (Russell) in the form of in-person interviews addressing the six key components. A set of standardized discharge instructions was developed for 12 common ED diagnoses and edited to contain the six key components. These templates were created with subheadings and bullet points to make the instructions easier to follow and understand. The discharge diagnoses addressed in this group included many of the most common emergency department diagnoses: abdominal pain; back pain; cellulitis; chest pain; congestive heart failure; COPD; concussion; fracture; headache; no fracture (sprain/strain); rib fracture; and vertigo. These discharge instruction templates were reviewed for accuracy and completeness by three board-certified emergency physicians, including one member of the study staff, the director of ED operations, and the educational director. A convenience sample of emergency clinicians, including both board-certified physicians and physician assistants, voluntarily participated in the post-standardized intervention phase. Volunteer clinicians had the standardized discharge instructions uploaded into their dictation software, Dragon (Nuance Communications, Inc, Burlington, MA), and used these standardized instructions when study staff were on site to conduct interviews. Study staff then collected data via in-person interviews for these clinicians and for the 20 patients for whom the standardized discharge instructions were used. In both groups, patient responses were compared to their own clinician's responses and coded as incorrect (0), partially correct (0.5), or correct (1), with a maximum total score of six.
Results were scored by each member of the study team independently, as well as by a third board-certified emergency physician, the director of ED operations. We performed a Mann-Whitney U test on the total interview scores in the control and intervention groups and conducted a sub-analysis of the individual scores for each of the six key components.
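As an illustration of the scoring and comparison described above, the following Python sketch codes one hypothetical interview on the six key components (0 / 0.5 / 1 per component, maximum 6) and compares invented total scores for the two groups with a two-sided Mann-Whitney U test. None of the values are from the study.

```python
from scipy.stats import mannwhitneyu

# Hypothetical scoring of one interview against the clinician's answers:
# each of the six key components is marked 0 (incorrect),
# 0.5 (partially correct), or 1 (correct), for a maximum total of 6.
components = ["diagnosis", "new medications", "home care",
              "duration of illness", "reasons to return", "follow-up"]
patient_scores = {"diagnosis": 1, "new medications": 1, "home care": 0.5,
                  "duration of illness": 0, "reasons to return": 0.5,
                  "follow-up": 1}
total = sum(patient_scores[c] for c in components)
print(f"total concordance score: {total}/6")

# Hypothetical total scores for the control and intervention groups,
# compared with a two-sided Mann-Whitney U test as in the study.
control = [3.0, 4.5, 2.5, 4.0, 3.5, 5.0, 3.0, 4.0]
standardized = [5.0, 5.5, 4.5, 6.0, 5.0, 4.5, 5.5, 6.0]
u, p = mannwhitneyu(control, standardized, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```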
Demographics: The treating clinicians for the patients in the baseline group included 10 physicians and two APPs. The treating clinicians in the post-standardized intervention group included three physicians and two APPs. Note that some clinicians were involved in both groups. Patients in the pre-standardization group already showed high levels of understanding (concordance above 0.75) in three areas: their diagnosis, new medications, and who to follow up with. The patients in the post-standardized group overall demonstrated a statistically significant increase in patient-clinician concordance compared to the patients in the baseline group ( P < 0.05), and two of the three low-understanding areas (duration of illness and reasons to return) had statistically significant increases in patient-clinician concordance in the baseline vs post-standardized comparison.
The data from this pilot study suggest that implementing discharge instructions standardized to increase readability and include key components improved patient understanding compared to discharge instructions entered via free text by the clinician. Like other studies, our study demonstrated that reasons to return were among the most poorly understood components. As seen in the Figure, there is clear improvement in this area with the implementation of standardized instructions. This is essential to patient care in the ED. Transitions of care have been identified as critically important times for the transfer of information. This is especially true when patients are transitioning from hospital-based care in the ED to home. Indeed, patient understanding of discharge instructions has been shown to improve health outcomes, including fewer return visits, better follow-up, and improved at-home compliance with the clinician's plan of care. Further, institutions such as the Centers for Medicare and Medicaid Services have identified patient understanding of discharge instructions as a key domain of patient experience, and patients are asked how well they were able to understand the discharge instructions provided during their ED visit on the ED Consumer Assessment of Healthcare Providers and Systems Survey. One recent study implemented a mnemonic, "DC HOME" (discharge diagnosis, care rendered, health and lifestyle modifications, obstacles after discharge, prescribed medications, and expectations), and formalized education regarding its implementation among resident physicians, which demonstrated success in both the inclusion of these components and patient satisfaction. This intervention covered several of the components addressed in our standardized written instructions. Having a good understanding of one's discharge instructions is important for many reasons, including enabling patients to achieve optimal health and recovery following their ED visit. Better understanding of discharge instructions can also decrease unnecessary return visits to the ED by empowering patients with the information they need to make appropriate follow-up appointments and to better understand the expected course of their illness, which may spare the patient the cost of an additional ED visit.
One limitation of this study is that inter-rater reliability was not assessed within the data collection and statistical analysis. We did not collect these data, and therefore it is unclear how closely the raters' scores correlated with one another. Future analyses and interventions would benefit from having two raters score patient understanding and then computing kappa statistics to measure the level of agreement between them. An additional limitation of this study is its small sample size, which we accepted because this was a pilot study with the goal of assessing both significant impact and feasibility of implementation. As this pilot demonstrates statistical significance and a clear beneficial impact on patient understanding, we now have a foundation for future expansion and additional research in this area. Based on this pilot study, we recognize several future opportunities. While this study focused on standardizing 12 common discharge diagnoses, future work could expand the number of diagnoses as well as the number of clinicians. There is an opportunity to examine patient-centered outcomes, including following patients after discharge to assess knowledge retention, return ED visits, and adherence to recommended follow-up. This pilot study is a first step toward better understanding these patient-centered outcomes potentially impacted by discharge instructions. Further, nursing staff were the primary individuals distributing the written discharge instructions to the patients and explaining them one final time prior to discharge. There is currently widely variable practice in how nursing staff provide and discuss these instructions with patients. This study did not address this variability, as our goal was to evaluate how changing the single variable of the written discharge instructions would affect patient understanding. Future work may include standardizing how clinicians or nursing staff deliver discharge instructions, as this has also been shown to impact patient understanding and satisfaction.
Overall, this pilot study suggests that standardized discharge instructions significantly improve patients' understanding of their instructions overall and, specifically, of the expected duration of illness and reasons to return. This intervention is easy to implement and cost-effective, empowers patients to better understand their health condition, impacts core ED quality measures, and should be further studied.
Influence of pain neuroscience education and exercises for the management of neck pain: A meta-analysis of randomized controlled trials

Chronic musculoskeletal pain affects approximately 40% of adolescents, leading to increased school absenteeism, reduced social and recreational engagement, diminished educational attainment, impaired vocational functioning, and social difficulties. Neck pain, particularly prevalent among adolescents aged 16 to 18 years with rates up to 29.5%, emerges as 1 of the most common manifestations. This condition induces both biological and psychosocial impairments, including reduced pressure pain thresholds, decreased neck and scapular muscle endurance, sleep disturbances, heightened pain catastrophizing, kinesiophobia, diminished self-efficacy, and central sensitization symptoms. Multimodal interventions, such as self-management and active movement-based strategies, have been developed to control neck pain; however, many focus only on psychological and pharmacological interventions. Pain neuroscience education is an educational approach that explains pain as a neural output emerging from complex biological, psychological, and social interactions. This intervention aims to facilitate patients' understanding of pain mechanisms and promote the reconceptualization of maladaptive pain beliefs. In clinical settings, pain neuroscience education helps patients comprehend the underlying biological and physiological processes that shape their pain experience. Pain neuroscience education emphasizes that pain is not merely a response to tissue damage but also a complex interaction of the nervous system, psychological factors, and social influences. It is commonly used for patients with chronic pain conditions such as low back pain, fibromyalgia, or complex regional pain syndrome, where pain perception can be heightened due to nervous system sensitivity rather than direct injury. Its combination with exercise holds potential for the relief of neck pain. Recently, several studies have explored the efficacy of pain neuroscience education plus exercise for neck pain, but its routine use is not well established. Therefore, we performed this meta-analysis to study the effects of pain neuroscience education plus exercise in patients with neck pain.
This meta-analysis was performed according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Since this was a meta-analysis of previously published studies, ethical approval and patient consent were not required.

2.1. Search strategy and study selection

We searched the PubMed, EMbase, Web of Science, EBSCO, and Cochrane Library databases from inception to August 2023 using the keywords: "pain neuroscience education" AND "exercise" OR "physical training" AND "neck pain." The inclusion criteria were as follows: the study design was an RCT; patients were diagnosed with neck pain; and the compared treatments were pain neuroscience education plus exercise versus exercise alone. Pain neuroscience education was delivered in addition to exercise and included discussion of acute pain, the transition from acute to chronic pain, central sensitization, brain plasticity, pain modulation and the importance of exercise, and the role of cognitions, emotions and sleep in pain.

2.2. Data extraction and outcome measures

The following information was collected: author, number of women, age, body mass index, pain scores, and detailed methods of the 2 groups. Data were extracted independently by 2 investigators, and discrepancies were resolved by consensus. The primary outcomes were the visual analog scale (VAS) score after treatment and after 3 months. Secondary outcomes included the functional disability index and the pain catastrophizing scale.

2.3. Quality assessment in individual studies

We evaluated the methodological quality of the included studies using the modified Jadad scale, which comprises 3 items: randomization (0–2 points), blinding (0–2 points), and dropouts and withdrawals (0–1 points). The Jadad scale score varies from 0 to 5 points; a Jadad score ≤ 2 suggests low quality, while a Jadad score ≥ 3 indicates high quality.

2.4. Statistical analysis

We calculated the mean difference (MD) with 95% confidence interval (CI) for continuous outcomes. The I² statistic was applied to assess heterogeneity, with I² > 50% indicating significant heterogeneity. A random-effects model was used in the presence of significant heterogeneity; otherwise, a fixed-effect model was applied. We explored potential sources of heterogeneity by omitting 1 study in turn from the meta-analysis or by performing subgroup analyses. Publication bias was not evaluated because of the limited number (<10) of included studies. All statistical analyses were performed using Review Manager Version 5.3 (The Cochrane Collaboration, Software Update, Oxford, UK).
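Review Manager performs the pooling internally, but the underlying arithmetic is straightforward. The sketch below is a minimal Python implementation of inverse-variance pooling of mean differences with Cochran's Q, I², and a DerSimonian-Laird random-effects estimate; the two study-level inputs are hypothetical, not the extracted trial data.

```python
import numpy as np

def pool_mean_differences(md, se):
    """Inverse-variance pooling of mean differences with Cochran's Q,
    I^2, and a DerSimonian-Laird random-effects estimate."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                          # fixed-effect weights
    md_fixed = np.sum(w * md) / np.sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic.
    q = np.sum(w * (md - md_fixed) ** 2)
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2.
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    md_re = np.sum(w_re * md) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (md_re - 1.96 * se_re, md_re + 1.96 * se_re)
    return md_re, ci, i2

# Hypothetical study-level mean differences in VAS and standard errors.
md_est, ci95, i2 = pool_mean_differences([-1.1, -1.3], [0.25, 0.30])
print(f"MD = {md_est:.2f}, "
      f"95% CI = ({ci95[0]:.2f}, {ci95[1]:.2f}), I2 = {i2:.0f}%")
```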
3.1. Literature search, study characteristics, and quality assessment

Figure shows the flowchart of the search and selection results. Initially, 184 relevant articles were found, and finally 4 eligible randomized controlled trials (RCTs) were included in the meta-analysis. Table shows the baseline characteristics of the eligible RCTs. They were published between 2021 and 2022, and the total sample size was 246. The treatment duration varied from 6 weeks to 6 months. Among the 4 included studies, 2 reported VAS after treatment and VAS after 3 months, 3 reported the functional disability index, and 2 reported the pain catastrophizing scale. All included studies were regarded as high quality because their Jadad scores varied from 3 to 4.

3.2. Primary outcomes: VAS after treatment and VAS after 3 months

Compared with exercise alone for neck pain, pain neuroscience education plus exercise was associated with significantly reduced VAS after treatment (MD = −1.12; 95% CI = −1.51 to −0.73; P < .00001), with no heterogeneity among the studies (I² = 0%, heterogeneity P = .83, Fig. ), and significantly reduced VAS after 3 months (MD = −1.24; 95% CI = −2.26 to −0.22; P = .02), with significant heterogeneity among the studies (I² = 62%, heterogeneity P = .11, Fig. ).

3.3. Sensitivity analysis

Significant heterogeneity remained among the included studies for VAS after 3 months. However, we did not perform a sensitivity analysis by omitting 1 study in turn because only 2 RCTs were included in that meta-analysis.

3.4. Secondary outcomes

Compared with exercise alone for neck pain, pain neuroscience education plus exercise substantially reduced the functional disability index (MD = −1.22; 95% CI = −1.46 to −0.97; P < .00001; Fig. ) and the pain catastrophizing scale (MD = −4.25; 95% CI = −5.50 to −3.00; P < .00001; Fig. ).
To study the influence of pain neuroscience education plus exercise on the management of neck pain, our meta-analysis included 4 RCTs and 246 patients with neck pain. The results showed that pain neuroscience education plus exercise significantly reduced VAS after treatment, VAS after 3 months, the functional disability index, and the pain catastrophizing scale. These findings suggest the efficacy of pain neuroscience education plus exercise in improving pain relief and functional recovery in patients with neck pain. Regarding the sensitivity analysis, significant heterogeneity was seen for VAS after 3 months. Several factors may account for this heterogeneity. First, the severity and duration of neck pain differed across studies, which may have affected the efficacy assessment. Second, the treatment duration of pain neuroscience education plus exercise varied from 6 weeks to 6 months. Third, the varying educational backgrounds of patients may have affected their understanding and recognition of pain neuroscience education. In patients with neck pain, pain neuroscience education and exercise helped control catastrophizing and anxiety and increased muscle endurance. Adolescents who received this combined intervention found pain neuroscience education to be a facilitator of pain reconceptualization and of a positive attitude towards, as well as performance of, exercise. Blended learning of pain neuroscience education combines face-to-face and online educational sessions using electronic devices or platforms, which may improve implementation and efficacy. We should also consider several potential limitations. First, our analysis is based on only 4 RCTs, and more RCTs are needed to confirm our findings. Second, significant heterogeneity was observed in this meta-analysis, which may have been caused by differences in the procedures and duration of pain neuroscience education plus exercise. Third, different educational backgrounds may have affected the understanding and recognition of pain neuroscience education.
This meta-analysis suggests that pain neuroscience education plus exercise can improve pain relief and functional recovery in patients with neck pain.
Data curation: Chao Yang. Formal analysis: Chao Yang. Funding acquisition: Chao Yang. Resources: Yue Zhang. Software: Yue Zhang. Supervision: Yue Zhang.
Evaluating CK20 and MCPyV Antibody Clones in Diagnosing Merkel Cell Carcinoma

Merkel cell carcinoma (MCC) is a rare and aggressive primary neuroendocrine carcinoma of the skin. The highest number of patients is reported from North America, while Australia has the highest incidence in the world, with a rate of 1.6 per 100,000 population. The incidence rate in Europe is much lower, at 0.13 per 100,000. However, a common trend seen in all of these regions is that the incidence of MCC increases with societal ageing and the prevalence of immunosuppressive drug use. Merkel cell polyomavirus (MCPyV) and sun damage are the main factors in the etiology of MCC, and the contribution of these factors varies by geographical region. In Australia and New Zealand, sun damage predominantly contributes to the etiology, while in other parts of the world, MCPyV is known to be more influential. Skin lesions are most commonly localized in the head and neck region and the extremities, but patients may rarely present with lymph node metastasis without a primary skin tumor. Since the histopathology of MCC is similar to that of other small round cell tumors, immunohistochemical stains including neuroendocrine markers should be used for proper identification. Unlike other neuroendocrine tumors, perinuclear dot-like staining with Cytokeratin 20 (CK20) is MCC's distinguishing diagnostic hallmark. In addition to the dot-like staining, crescent-shaped and, less frequently, membranous staining may also be observed. In nearly all series reported to date, the clone Ks20.8 was used, and CK20 negativity was rarely encountered (approximately 10%), which may cause difficulty in excluding metastatic neuroendocrine carcinoma originating from other organs. Especially in these cases, clinical and radiological investigation of other potential origins of neuroendocrine tumors is crucial. Another helpful diagnostic tool is the detection of MCPyV, which is a specific feature of MCC. Using various viral pathogen identification techniques, most commonly quantitative real-time PCR, MCPyV has been shown to be positive in approximately 24–85% of MCC cases. The virus was first demonstrated immunohistochemically in 2009, using the MCPyV antibody clone CM2B4. Subsequent studies comparing the immunohistochemical method with PCR demonstrated a good correlation between the two methods. In 2012, a new clone of the MCPyV antibody, termed Ab3, was found to show higher sensitivity than CM2B4. Both clones were developed based on the peptide sequence of exon 2 in the LTag of the virus, which is unique to MCPyV. CM2B4 binds to amino acids 116–129, while Ab3 targets the region spanning amino acids 79–260. The advantages and disadvantages of these two commercially available clones have been investigated in only a limited number of studies, and a gold-standard method for MCPyV detection has yet to be established. Moreover, no comparative study of CK20 antibody clones has been conducted. In our study, we used clone SP33 alongside clone Ks20.8 to assess whether the infrequently observed CK20 negativity was related to the antibody clone used. Additionally, we examined the advantages and disadvantages of two commercially available MCPyV antibody clones, namely Ab3 and CM2B4.
Case Selection

The single-center cross-sectional study received institutional review board approval (Istanbul Tip Fakultesi Klinik Arastirmalari Etik Kurulu, number 256889, date 24.06.2021). Between 2002 and 2022, 67 biopsy and/or excision materials diagnosed as MCC were identified in our archive. Among these, 54 cases with available paraffin blocks and fixation adequate for morphologic and immunohistochemical evaluation were included in the study. In 42 cases, the primary tumor was located in the skin, while in 12 cases it was located in a lymph node. In these 12 cases, no significant skin lesion was found upon detailed physical examination and no other neuroendocrine carcinoma was detected on radiological investigations. The diagnosis was further supported by the previously described "ELECTHIP criteria" of MCC of lymph nodes (nodal MCC).

Histopathological Evaluation

Hematoxylin–eosin (HE) stained slides of the cases were re-examined. Cell shape, cytoplasmic features, nucleolar prominence and chromatin structure, structural features of the tumor, and distinct areas of differentiation, if any, were evaluated and noted by two pathologists (BYE, SOS).

Immunohistochemical Method

In all cases, epithelial and neuroendocrine markers were routinely used in the diagnostic workup. For the study, a paraffin block with the most representative tumor and optimum fixation was determined for each case. Immunohistochemistry was performed using the automated Ventana Medical System-Benchmark XT IHC/ISH Staining System. Control tissue blocks were created in order to optimize the staining method. Colonic adenocarcinoma tissue was used as a positive external control for CK20. Since CK20 clone Ks20.8 had been routinely applied to all cases in the initial diagnostic workup, only negative cases were stained again. There were no controls other than MCC for the MCPyV antibodies. Optimization trials were performed on several blocks of three study cases, and ideal dilution and incubation times were determined. Table shows the characteristics of the markers.

Immunohistochemical Evaluation

CK20 (for Both Clones Ks20.8 and SP33)

We looked for perinuclear dot-like (DL), crescent-shaped (CS) and membranous (M) staining patterns. Staining patterns, intensity, and extent of staining were evaluated separately in all cases.

MCPyV (for Both Clones Ab3 and CM2B4)

Nuclear staining was evaluated. In some cases, cytoplasmic staining was observed alongside nuclear staining; cytoplasmic staining alone was considered negative. Cases were considered positive according to the criterion used in the literature of at least weak staining in more than 1% of cells (Allred score > 2).

Statistical Analysis

The Statistical Package for the Social Sciences (SPSS) for Mac, version 24.0, was used for statistical analyses. Descriptive statistics were presented as frequencies (n) and percentages for categorical variables, while numerical variables were expressed using the mean, minimum, and maximum values.
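For clarity, the positivity criterion can be expressed as a small function. The sketch below encodes the standard Allred scheme (proportion score 0–5 plus intensity score 0–3) and applies the study's cutoff of a total score greater than 2; the example inputs are hypothetical.

```python
def allred_score(percent_positive, intensity):
    """Allred score = proportion score (0-5) + intensity score (0-3).
    intensity: 0 = none, 1 = weak, 2 = intermediate, 3 = strong."""
    if percent_positive == 0:
        proportion = 0
    elif percent_positive < 1:
        proportion = 1
    elif percent_positive <= 10:
        proportion = 2
    elif percent_positive <= 33:
        proportion = 3
    elif percent_positive <= 66:
        proportion = 4
    else:
        proportion = 5
    return proportion + intensity

# Positivity criterion used in the study: Allred score > 2, i.e. at
# least weak nuclear staining in more than 1% of tumor cells.
for pct, intensity in [(0.5, 1), (5, 1), (40, 2), (90, 3)]:
    score = allred_score(pct, intensity)
    print(f"{pct}% cells, intensity {intensity}: Allred {score}, "
          f"positive = {score > 2}")
```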
Clinical Findings

Fifty-four patients (26 female, 28 male) were between the ages of 24 and 91 years, with a mean age of 67 years. Among the patients, five were immunosuppressed, including the 24-year-old patient. Four of these were immunosuppressed due to organ transplantation, and one due to treatment for rheumatoid arthritis. The clinical features of three patients with organ transplantation were previously reported in a multicenter study. In 42 cases of primary cutaneous MCC, the most common site was the extremities (50%), with the remainder distributed in the head and neck (33%) and trunk (17%). The involved lymph nodes of the 12 nodal MCC patients were inguinal (83.3%), axillary (one case), and intraparotid (one case) lymph nodes.
Histopathological Findings
Tumor cells were mostly round, with scant cytoplasm. Salt-and-pepper chromatin was visible in all cases, and nucleoli were indistinct. Solid and trabecular arrangements were present in the tumors at varying rates. Necrosis was observed in approximately 70% (36/54) of cases. All cases were pure MCC, except for one case in which MCC was associated with SCC in situ (Bowen disease) (Fig.).

Immunohistochemical Findings
CK20: Only two (3.7%) cases were negative with both clones of CK20 (Ks20.8 and SP33); all remaining cases were positive with both clones. The MCC associated with Bowen disease was CK20 positive. Among the positive cases, the CS + M + DL pattern was observed in 73.1% (38/52) and 59.6% (31/52) of cases for clones Ks20.8 and SP33, respectively. In 35 cases, the M, CS, and DL stainings were identical for both clones, distributed as follows: M + CS + DL in 7 cases, CS + DL in 14 cases, and DL only in 14 cases. In the remaining 17 cases, the staining patterns differed between the clones: in 7 cases, CS + DL staining was observed with Ks20.8 but only DL staining with SP33, and in 10 cases, M + CS + DL staining was observed with Ks20.8 but CS + DL staining with SP33. In terms of staining intensity, 23 cases were strongly positive with both clones; staining was stronger with Ks20.8 in 22 cases and with SP33 in three cases, and in four cases the staining was weak with both clones. In reviewing the 17 cases with discordant staining patterns, we noticed that in 13 of them the staining was stronger with Ks20.8, with CS + M staining added to the DL staining. Thirty-eight cases showed diffuse staining with both clones of CK20. In 6 cases, staining with SP33 was patchy whereas Ks20.8 was diffuse; 7 cases showed patchy staining with both clones, and one case showed focal staining with both clones. Because background staining may complicate evaluation, we note that SP33 stained areas of necrosis in 15 cases (Fig.); within the same areas, there was no aberrant staining with Ks20.8.
MCPyV: MCPyV clone Ab3 was positive in 44 cases (81.5%) and clone CM2B4 in 39 cases (72.2%). The comparison of cases stained with the two clones is presented in Table. There were no cases that stained with CM2B4 but not with Ab3. Most of the Ab3-positive cases showed diffuse and strong staining, while only 44% of CM2B4-positive cases did. In some strongly staining cases, cytoplasmic staining was also present alongside nuclear staining with both clones. Ab3 did not show cytoplasmic staining alone in any case, whereas CM2B4 showed cytoplasmic staining alone in one case; this case, considered negative with CM2B4, displayed nuclear staining with Ab3 (Fig.). No aberrant staining was found in any non-tumoral cell with either clone of MCPyV. Table summarizes the clinicopathological characteristics of the study cohort based on MCPyV positivity with the two antibody clones. Tumor locations on a body figure according to MCPyV clone Ab3 are shown in Fig.
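The agreement between the two MCPyV clones can also be laid out as a 2 × 2 contingency table. The sketch below simply reconstructs that table from the counts reported above (44 Ab3-positive and 39 CM2B4-positive cases out of 54, with no case positive for CM2B4 but negative for Ab3); it is a worked illustration, not an analysis script from the study.

```python
import pandas as pd

# Counts reported above: 44/54 Ab3-positive, 39/54 CM2B4-positive,
# and no case that stained with CM2B4 but not with Ab3.
n_total, n_ab3_pos, n_cm2b4_pos = 54, 44, 39

both_pos = n_cm2b4_pos            # every CM2B4-positive case was also Ab3-positive
ab3_only = n_ab3_pos - both_pos   # the 5 cases CM2B4 missed
both_neg = n_total - n_ab3_pos    # cases negative with both clones

table = pd.DataFrame(
    [[both_pos, ab3_only], [0, both_neg]],
    index=["Ab3 positive", "Ab3 negative"],
    columns=["CM2B4 positive", "CM2B4 negative"],
)
print(table)
#               CM2B4 positive  CM2B4 negative
# Ab3 positive              39               5
# Ab3 negative               0              10
```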
In this study, we reviewed 54 MCC cases and evaluated the MCPyV and CK20 antibody clones used in their diagnosis. MCPyV clone Ab3, although relatively understudied in the literature, demonstrated a higher positivity rate and was easier to evaluate than CM2B4. CK20 clone SP33, which has not previously been reported in MCC series, demonstrated aberrant staining in areas of necrosis. We observed no difference between clones SP33 and Ks20.8 in their ability to detect positive cases. MCC, a rare and aggressive neuroendocrine skin carcinoma, has an etiology linked to MCPyV and sun damage, with notable geographical variations.
In our series, MCPyV positivity was observed in nearly 80% of cases, a rate comparable to those reported in Europe, where MCPyV is regarded as the major etiologic agent; this may suggest that MCC in our region is influenced more by MCPyV than by sun damage. A gold-standard method for MCPyV detection has not been established. Since its discovery by Feng et al. in 2008, PCR-based techniques have been widely used owing to their sensitivity in detecting MCPyV. It has been demonstrated that MCPyV contributes to tumorigenesis only after integrating into the host genome and undergoing specific mutations in MCC. However, MCPyV has also been detected in non-MCC skin tumors and other malignancies where it does not play a role in pathogenesis, raising concerns about the specificity of PCR in these cases. The MCPyV antibody clone CM2B4, introduced in 2009, showed no reactivity in non-MCC tumors according to current studies. The lack of false positivity with CM2B4 (i.e., MCPyV PCR negativity in CM2B4-positive tumors) supported the clinical utility of this antibody in distinguishing MCPyV-related MCC. However, the sensitivity of CM2B4 was slightly lower, as it identified approximately 70% of PCR-positive cases. In 2012, the newer MCPyV antibody clone Ab3 was designed to enhance sensitivity. In their initial study, Rodig et al. evaluated the two MCPyV antibody clones in 57 cases and emphasized that Ab3 exhibited significantly greater sensitivity; with the new clone, they detected 9 additional cases that CM2B4 had missed. Nevertheless, CM2B4 remains commonly used in studies and routine practice, while reports on Ab3 are limited. The largest series comparing the two antibody clones reported 90% positivity with Ab3 and 70% with CM2B4. In our series, approximately 80% of cases were positive with MCPyV clone Ab3 and 70% with clone CM2B4, and Ab3 detected 5 additional cases that CM2B4 had missed, consistent with previous findings. As in our study, no cases staining with CM2B4 but not with Ab3 have been reported in studies conducted to date. In terms of specificity, studies indicate that both clones can occasionally exhibit immunoreactivity in non-neoplastic components. With Ab3, a small subset of non-MCC skin cancers demonstrated focal and weak staining. Although we did not test the antibodies on non-MCC tumors, we did not observe any aberrant staining in tumor-surrounding tissues with either clone. The extent and intensity of staining differ between the two clones: Ab3 has been reported to show greater extent and intensity, facilitating evaluation, while partial or weak staining has been reported in up to 25% of CM2B4-positive cases in the literature. In our series, approximately 30% of CM2B4-positive cases showed low percentages or weak staining, complicating interpretation. Additionally, two of three cases with partial or weak Ab3 staining were negative with CM2B4. Similar findings of focal and weak staining with Ab3, yet negativity with CM2B4, have also been reported, and one study even described certain CM2B4 staining results as "uninterpretable". Based on these findings and our experience, we consider Ab3 to be easier to evaluate and more reliable than CM2B4, with higher sensitivity supported by current evidence. Another feature that may vary geographically and etiologically in MCC is the expression status of CK20. CK20 plays a fundamental role in the diagnosis of MCC, and its perinuclear dot-like staining pattern is a distinctive and diagnostic feature of MCC.
However, it has been reported that approximately 10% of cases, particularly those negative for MCPyV, are CK20 negative. In our study, there were only two CK20-negative cases, which were also MCPyV negative. A study using next-generation sequencing reported that CK20- and MCPyV-negative cases exhibit an ultraviolet (UV)-signature mutational profile. Similarly, combined neuroendocrine carcinomas, which are mostly associated with squamous cell carcinoma, are MCPyV negative and show a UV-signature mutational profile. Likewise, one of our MCPyV-negative cases had Bowen disease overlying the MCC, indicating UV radiation exposure. Considering the key role of CK20 in MCC diagnosis, we investigated whether CK20 negativity might be influenced by factors beyond etiology. For this, we compared the commonly used Ks20.8 clone with the SP33 clone. To date, no studies have compared CK20 clones in MCC. In our series, two cases were negative with both clones, and no significant difference was observed in detecting positive cases. Diffuse staining was observed in a similar proportion of cases with both clones, although Ks20.8 showed stronger staining in about 40% of cases. It is known that the typical DL staining pattern of CK20 in MCC can be accompanied by CS and M staining. In one-third of our cases, staining patterns differed between the clones; among these, Ks20.8 revealed additional CS staining in 40% and additional M staining in 60%. These variations highlight subtle differences in staining between the clones. DL staining can be challenging to interpret, especially when weak or focal. In our series, we identified two cases in which DL staining was missed and incorrectly reported during routine practice (data not shown; authors' observation). This underscores the importance of selecting clones that produce sharper, more reliable staining to ensure accurate diagnosis. We also noticed that SP33 stained necrotic areas in addition to viable tumor cells in 15 of our cases, whereas no aberrant staining was observed in the same areas with Ks20.8. Necrosis, observed in 70% of cases in our study, is a common feature of MCC owing to its high grade. This aberrant SP33 staining could complicate evaluations, especially in resection specimens after neoadjuvant immunotherapy, where CK20 may play a role. These findings highlight the need for careful consideration in clone selection. This study has several limitations. The antibodies were tested only on skin and lymph nodes, not on other tissues or non-MCC tumors. We also did not compare MCPyV antibody performance with PCR, which could have provided insights into their diagnostic accuracy. Lastly, our sample size may not fully represent the variability seen in routine practice. Future studies with larger cohorts and clone comparisons are needed to validate and expand these findings. In conclusion, this study highlights the importance of selecting appropriate clones of MCPyV and CK20 antibodies in diagnosing MCC to enhance diagnostic accuracy. MCPyV antibody clone Ab3 demonstrated superior sensitivity and ease of interpretation compared with CM2B4, confirming its value in routine practice. While both CK20 clones Ks20.8 and SP33 showed comparable diagnostic performance in detecting positive cases, Ks20.8 exhibited stronger and more consistent staining patterns, which may aid in distinguishing challenging cases. The aberrant staining observed with SP33 in necrotic areas raises concerns, particularly in specimens from patients undergoing neoadjuvant therapy.
These findings emphasize the need for further comparative studies to establish consensus on the optimal antibody clones for routine use in MCC diagnosis.
Conventional and living guideline for schizophrenia: barriers and facilitating factors in guideline implementation

Schizophrenia is a severe and often life-long disorder that ranks among the 20 leading causes of disability and 20th in terms of years lived with disability (YLDs) overall, according to the recent Global Burden of Disease report. Given the high burden of the disease for patients living with schizophrenia and their relatives, as well as the high economic costs, evidence-based guidelines are crucial for ensuring that patients receive the treatment they need. However, implementation of treatment guidelines into clinical practice faces many difficulties and is insufficient worldwide, as well as in Germany, where a recent study indicated unsatisfactory implementation of the German evidence- and consensus-based guideline for schizophrenia published in 2019. Consequently, the question arises of how the implementation of guidelines can be improved, thereby reducing the evidence-practice gap. First, behavioral changes among healthcare professionals are required. The sequence of behavior change ideally preceding guideline adherence is described by Cabana's Knowledge-Attitude-Behavior Framework, according to which physicians' knowledge is affected first, then attitudes, and finally behavior. Each of these categories is associated with various barriers impeding guideline adherence, which underlines the importance of identifying obstacles and possible facilitating factors. In this way, the reasons why physicians do not adhere to clinical guidelines can be revealed, and targeted approaches to improve guideline adherence can be developed. Barriers regarding physicians' knowledge include, for example, lack of awareness or lack of experience, while obstacles with respect to physicians' attitude relate to, for example, lack of motivation or insufficient benefit for everyday clinical work. Clinicians' behavior is influenced by patient-, guideline-, or environment-related factors, such as rejection of the guideline by patients or lack of time resources. Second, so-called living guidelines could address the problem of rapidly increasing medical knowledge, which means that guidelines are often out of date by the time they are published. Guideline adherence is thus hampered when recommendations do not correspond to the current state of the art. In contrast, with living guidelines, individual recommendations can be updated as soon as relevant new evidence is available. In this regard, users' perceptions of the concept of living guidelines have not yet been explored. The current German guideline for schizophrenia is to be converted gradually into a living guideline; the guideline was therefore integrated into the web-based evidence ecosystem MAGICapp, which facilitates the entire process of creating a living guideline. This study aims to elaborate, for the first time, the anticipated barriers and facilitating factors to guideline adherence for both the classical print version of the German guideline for schizophrenia and an upcoming living guideline. Moreover, the preferences of healthcare professionals in the use of living guidelines will be presented.
Subjects and recruitment
A cross-sectional online survey was conducted from January 2022 to April 2022 in the context of a larger project (Structured implementation of digital, systematically updated guideline recommendations for enhanced therapeutic adherence in schizophrenia, SISYPHOS project). The focus of our preceding paper on this topic was the implementation status of the guideline for schizophrenia and the attitude toward an upcoming living guideline; there are no duplications of results between the two papers. In total, 17 hospitals for psychiatry, psychotherapy, and psychosomatic medicine in Southern Germany (see Supplementary Table 1) and one professional association of German neurologists and psychiatrists (BVDN: Berufsverband Deutscher Nervenärzte e. V.) took part in the study by forwarding the link to their clinical staff (medical doctors, psychologists/psychotherapists, psychosocial therapists, caregivers (e.g., nurses)) and members. We used the licensed LimeSurveyR version 5.3.4+ (LMU hospital) to create the questionnaire, perform the survey, and ensure anonymous participation. A reminder mail was sent to the participating hospitals after approximately three weeks. The data protection officer of the University Hospital Munich reviewed the survey, and the local ethics committee approved the project (reference number 21-0780). The trial was performed according to the latest version of the Declaration of Helsinki. If not defined otherwise, the term "schizophrenia guideline" refers to the current German evidence- and consensus-based guideline for schizophrenia 2019. Figure shows the recruitment and study flowchart.

Survey structure
The survey aimed to evaluate the implementation of the general guideline for schizophrenia as well as of four key recommendations. Moreover, the survey was designed to investigate the attitude toward an upcoming living guideline for schizophrenia and to explore perceived barriers (questions 42–55) and facilitators (questions 56–70) regarding knowledge, attitude, and behavior in the implementation of the schizophrenia guideline. For our analysis, we allocated these questions to the three sequences of behavior change preceding guideline adherence according to Cabana's Knowledge–Attitude–Behavior framework. Examples from the questionnaire examining knowledge-related barriers include "I have heard of the corresponding guideline format before", and for knowledge-related facilitators, "I would like to have (more) training/education on working with the guideline format". Attitude-related barriers were illustrated with, e.g., "I lack motivation to deal with the guideline format", and attitude-related facilitators with, e.g., "I would like to have clinical conditions more considered (e.g., comorbidities, complex courses) in the content of the guideline". Behavior-related barriers were investigated with statements such as "Due to lack of time resources (e.g., due to a high workload) the use of the guideline format seems to be difficult", whereas behavior-related facilitators were represented with, e.g., "I would like to have short, clear treatment checklists". Barriers and facilitators were examined on a five-point Likert scale (agreement: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) for both formats: print and living.
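As a rough illustration of how such Likert items can be grouped by Cabana's stages and summarized, consider the sketch below. The item numbers, the item-to-category mapping, and the mock responses are hypothetical; only the 1–5 agreement coding follows the questionnaire described above.

```python
import pandas as pd

# Hypothetical illustration: item numbers, the item-to-category mapping, and the
# mock responses are invented; only the 1-5 agreement coding follows the survey.
cabana_map = {
    "q42": "knowledge",  # e.g., lack of awareness of the guideline format
    "q43": "knowledge",  # e.g., lack of experience with the guideline format
    "q48": "attitude",   # e.g., lack of motivation
    "q53": "behavior",   # e.g., lack of time resources
}

responses = pd.DataFrame(
    {"q42": [4, 5, 2, 4], "q43": [5, 4, 3, 4], "q48": [2, 3, 2, 4], "q53": [5, 4, 4, 5]}
)

for category in ("knowledge", "attitude", "behavior"):
    items = [q for q, c in cabana_map.items() if c == category]
    mean_score = responses[items].to_numpy().mean()
    agreement = (responses[items] >= 4).to_numpy().mean()  # share of (strongly) agree
    print(f"{category:9s} mean = {mean_score:.2f}, agreement rate = {agreement:.0%}")
```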
As no living guideline for mental disorders was available at the time of the study, the concept of a living guideline was introduced to the participants (1) by an explanatory text and (2) by screenshots of the unpublished living guideline for schizophrenia. The presented text, the screenshots, and the whole questionnaire are displayed in the supplement. Moreover, the preferences of healthcare providers when using living guidelines were investigated (questions 71–79). The questionnaire was provided in German and translated into English by the authors for this publication.

Statistical analysis
All analyses were carried out in IBM SPSS for Windows (version 29) with a significance level of α = 0.05. Descriptive statistics are displayed as frequency and percentage distributions for binary data. For continuous data, means and standard deviations are presented, with medians additionally reported for categorical data. Intergroup differences were assessed using Chi² tests in the case of binary data. For categorical data (e.g., Likert scales), Kruskal–Wallis tests were used for between-group analyses (with Dunn–Bonferroni post hoc tests to account for multiple testing in the case of significant intergroup differences), and Wilcoxon signed-rank tests in the case of dependent samples within subjects. In addition to age groups [young (20–34 years) vs. middle-aged (35–49 years) vs. older mental healthcare professionals (50–66 years)], professional groups were compared (medical doctors vs. psychotherapists/psychologists vs. psychosocial therapists vs. caregivers (e.g., nurses)). See Table for a detailed listing of the associated occupational profiles.
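The group comparisons described above map onto standard SciPy routines. The sketch below, run on simulated Likert-type scores, shows the Wilcoxon signed-rank test for the within-subject print-versus-living contrast and the Kruskal–Wallis test across three age groups; Bonferroni-adjusted pairwise Mann–Whitney tests are used here as a simple stand-in for the Dunn–Bonferroni post hoc procedure (the exact Dunn test is available in the scikit-posthocs package).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated Likert-type category scores as stand-ins for the survey data.
print_scores = rng.integers(1, 6, size=120)   # e.g., attitude barriers, print format
living_scores = rng.integers(1, 6, size=120)  # same participants, living format

# Within-subject contrast (print vs. living): Wilcoxon signed-rank test.
print("Wilcoxon:", stats.wilcoxon(print_scores, living_scores))

# Between-group contrast (three age groups): Kruskal-Wallis omnibus test.
young, middle, older = (rng.integers(1, 6, size=40) for _ in range(3))
print("Kruskal-Wallis:", stats.kruskal(young, middle, older))

# Pairwise follow-up with Bonferroni correction; Mann-Whitney U serves here as a
# simple stand-in for Dunn-Bonferroni (scikit-posthocs offers the exact Dunn test).
pairs = {"young vs middle": (young, middle),
         "young vs older": (young, older),
         "middle vs older": (middle, older)}
alpha_adj = 0.05 / len(pairs)
for name, (a, b) in pairs.items():
    p = stats.mannwhitneyu(a, b).pvalue
    print(f"{name}: p = {p:.3f} (Bonferroni-adjusted alpha = {alpha_adj:.4f})")
```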
Participants' characteristics
In total, 524 participants originally took part in the study, meaning they responded to at least one question. Eighty-five respondents were excluded from the analyses owing to missing experience in the treatment of mental disorders (n = 22) or not answering at least one content-related question (n = 63). Only participants who completed the demographic questions and answered at least one content-related item were included in the analysis (N = 439; see Fig.). Participants were counted as "drop-outs" if they did not complete the content-related survey (n = 130); however, all available data up to a participant's drop-out were used for the analysis. Table depicts demographic information on the participants. Supplementary Tables 2, 3, and 4 provide further demographic information on comparisons between included and excluded participants as well as between professions and age groups.

Barriers to guideline implementation
The investigated barriers for both the print format and the living guideline for schizophrenia were categorized according to the three sequences of behavioral change by Cabana et al.: knowledge, attitude, and behavior. More than two-thirds considered "lack of experience" (80%) and "lack of awareness" (64%) of living guidelines in general as barriers to the use of the upcoming living guideline for schizophrenia. Moreover, 64% of respondents anticipated difficulties in accessing the living guideline for schizophrenia once published. Regarding the use of the schizophrenia guideline as a print version, the most important barrier appeared to be "lack of time resources" (63%), followed by "lack of trainings" (53%). See Table for an overview of the presented barriers and related descriptive information.
Group comparisons—print versus living: Wilcoxon tests for dependent samples indicated an increased occurrence of knowledge-related implementation barriers in the context of the living guideline and of external and attitude-related barriers in the implementation of the print format. Higher agreement scores on knowledge-related barriers were detected for the living compared with the print format (p < 0.001). In contrast, external and attitude-related barriers exhibited higher agreement levels for the print compared with the living format of the schizophrenia guideline (p < 0.001). For complete test statistics, see Table.
Group comparisons—age: Kruskal–Wallis tests showed significant differences between age groups concerning attitude-related barriers toward the concept of a living guideline. Younger healthcare professionals (20–34 years) perceived fewer attitude-related barriers to the living guideline than older healthcare professionals (50–66 years), p = 0.019. No significant differences were found between age groups regarding knowledge-related and external barriers (p ≥ 0.070). For complete test statistics, see Table and Supplementary Table 5 for post hoc tests.
Group comparisons—profession: Kruskal–Wallis tests indicated significant differences among professions regarding knowledge-related barriers of the print version (see Table). Psychosocial therapists and caregivers were more influenced by knowledge-related barriers than medical doctors and psychologists/psychotherapists regarding the print version (ps < 0.001). In terms of attitude-related barriers (print format), psychosocial therapists exhibited higher confirmation rates than psychologists/psychotherapists (p = 0.002) and medical doctors (p = 0.002).
No significant differences among professions were found for external barriers of the print version. Concerning the living guideline, psychosocial therapists and caregivers reported more attitude-related barriers than medical doctors and psychologists/psychotherapists (p ≤ 0.004). Additionally, caregivers appeared to be more constrained by external barriers than medical doctors (p = 0.001). No significant differences among professions were found for knowledge-related barriers of the living guideline. For complete test statistics, see Table and Supplementary Table 6.

Facilitating factors in guideline implementation
The explored facilitating factors were, analogous to the barriers, assigned to Cabana's knowledge–attitude–behavior framework. See Table for an overview. The surveyed mental healthcare professionals considered the provision of treatment checklists (living: 90%; print: 88%) the main facilitating factor in the implementation of the schizophrenia guideline for both formats (living and print), followed by notifications in case of updates (living: 85%; print: 83%) and a firm implementation of the specific guideline in the curriculum (living: 85%; print: 83%); see Table.
Group comparisons—print versus living: Wilcoxon tests for dependent samples indicated a greater need for knowledge-related and external facilitating factors among mental healthcare professionals in the implementation of the living guideline compared with the print format (p < 0.001); see Table. In terms of the print version, there was a higher reported need for attitude-related facilitators (p < 0.001).
Group comparisons—age: Younger healthcare professionals reported a higher need for attitude-related facilitating factors than older (print: p = 0.024; living: p = 0.001) and middle-aged healthcare professionals (living: p = 0.010). Regarding external facilitating factors, younger professionals expressed a higher confirmation rate than older (print: p ≤ 0.001; living: p ≤ 0.001) and middle-aged professionals (print: p = 0.008). For complete test statistics, see Table and Supplementary Table 5 for post hoc tests.
Group comparisons—profession: Kruskal–Wallis tests found no significant differences between professions concerning facilitating factors for the print and living guideline formats. Overall, the results indicated agreement (all Ms > 3) across all professions on requiring more knowledge-related, attitude-related, and external facilitating factors in guideline utilization for both formats, print and living (see Table).

Preferences in the use of living guidelines
Concerning preferences in using an upcoming living guideline, 97% of the participants would prefer an at least annual update of the recommendations in the living guideline (see Supplementary Table 8). Moreover, about 38% of the respondents would like to be notified immediately of new and relevant research findings, whereas only 3% do not want to receive notifications. Less than 10% of the participants reported that an annual update of recommendations or references to new research findings would create pressure to constantly adjust treatment. In contrast, about 74% of the participants considered this update a relief, because there would be more confidence that the current treatment of patients is according to the 'state of the art'.
Approximately 17% of the respondents reported using formats other than guidelines (e.g., textbooks) to learn about evidence-based treatment (agreed and strongly agreed), whereas about 33% did not prefer other formats to guidelines (disagreed and strongly disagreed). To learn about appropriate treatment options, about 15% stated that they use guidelines, scientific journals, or exchange with colleagues, while 34% reported using professional literature. Most of the surveyed healthcare professionals (63%) stated that they at least occasionally use digital tools/apps, whereas only 16% reported never having used digital tools in everyday clinical practice. For an overview of the descriptive characteristics, see Supplementary Table 7. An overview of our results regarding barriers, facilitators, and preferences, as well as differences between professions and age groups, is shown in Table.
This study presents barriers and facilitators in guideline implementation for both the current schizophrenia guideline in print and the concept of a living format. To our knowledge, this is the first study drawing attention to obstacles and facilitators in implementing a living guideline. The most frequently mentioned barrier regarding the print version was lack of time resources, followed by insufficient training in guideline use and too long or complex versions. Regarding the living guideline, the most frequently cited barriers were knowledge-related, which could be explained by the new format and the fact that no living guideline for mental disorders is available yet. In contrast, as the print version was found to be more vulnerable to attitude-related and external barriers, one possible solution to overcome this situation could be the development of living guidelines. The notion that living guidelines could be a worthwhile tool to improve guideline adherence is supported by the frequently reported facilitating factors, such as notifications in case of updates, more guideline trainings, treatment checklists, and shorter versions, as these factors can be addressed more easily with a living guideline, usually embedded in a flexible digital system such as MAGICapp. Moreover, lack of time is one of the most frequently reported barriers to guideline adherence in general, and regarding the print version in our survey. Living guidelines may resolve this barrier by making digitalized learning easier (e.g., by directly linking guidelines to other sources of evidence) while saving time. This consideration is underlined by a recent study on the dissemination of psychiatric practice guidelines, which found web-based courses on guideline knowledge to be more satisfying than, and as effective as, face-to-face courses. When examining possible age differences, younger professionals reported significantly fewer attitude-related obstacles than older professionals in the context of a living guideline. This may be because younger participants are more experienced and thus more confident in using technical devices and apps in their everyday lives. With respect to facilitating factors, younger participants expressed a higher need for attitude-related and external facilitators than older and middle-aged participants for both formats. One explanation could be that younger people will be more affected in their further professional lives by the increasing prevalence of living guidelines. Consequently, they might have a greater interest in possible solutions for guideline implementation, especially as younger professionals are more inclined to use guidelines. Several studies show profession-specific differences in guideline implementation. Therefore, a closer look at profession-specific obstacles and facilitators to guideline adherence is essential. Regarding the print version of the schizophrenia guideline, there was consensus among professions with respect to external barriers. However, caregivers and psychosocial therapists stated that they were more influenced by knowledge- and attitude-related barriers than medical doctors and psychologists, which could explain the lower implementation rate of the schizophrenia guideline in these professions. In detail, about 67% of psychosocial therapists and 35% of caregivers stated that they lacked experience with the print version of the guideline, while this was the case for only 8% of physicians.
Moreover, 13% of psychosocial therapists and only 6% of medical doctors reported a lack of benefit for their clinical work (see Supplementary Table 6). The German schizophrenia guideline has relatively more recommendations concerning the everyday clinical work of medical doctors than of psychosocial therapists or caregivers; thus, the impression may prevail that the recommendations are less relevant for these professional groups in everyday clinical practice. This explanation could also account for the concept of a living guideline, for which psychosocial therapists reported more attitude-related obstacles, and caregivers additionally more external obstacles, to guideline adherence. The depicted profession-specific barriers to guideline adherence for both formats accentuate the need for target-specific implementation strategies. In general, evidence regarding effective implementation strategies is heterogeneous and insufficient. However, there is agreement that the passive introduction of guidelines alone does not improve implementation. Rather, a structured implementation is required, considering the barriers and facilitators across all stages of behavior change. Our results show high agreement on the need for facilitating factors among mental healthcare professionals for both formats. Knowledge-related facilitators such as notifications in case of updates may be well addressed with web-based living guidelines. This corresponds to our finding that most of the surveyed participants wished to be updated immediately in case of new research findings and would be relieved, as they could be sure not to overlook what is state of the art (76%). Regarding attitude-related facilitators, healthcare professionals regarded an increased consideration of clinical conditions with multimorbid patients as helpful, while guidelines often do not consider this comprehensively. This could be improved with the concept of living guidelines when they are incorporated in a web-based environment (e.g., MAGICapp), as they can be directly linked to other specific guidelines. Web-based tools can further provide descriptive illustrations for shared decision-making as well as shorter and more profession-specific, tailored versions (external facilitators). Overall, our results show that many of the helpful strategies expressed for guideline implementation can be addressed more easily with the concept of living guidelines than with classic print versions. As more than half of the surveyed healthcare professionals (63%) already apply digital tools/apps in their everyday clinical work, living guidelines seem to be a promising tool to improve guideline adherence. There are some limitations concerning the results of our study. First, we cannot exclude that participants took part in the study several times, as we did not apply tracking of IP addresses; this would not have been compatible with the applicable regulations on data protection. However, participants were explicitly asked to answer the questionnaire only once. Second, as a living guideline for schizophrenia is not available yet, the participants' answers were based on presented screenshots of the schizophrenia guideline in the online environment of MAGICapp. This can possibly lead to bias, as it is difficult to represent the holistic concept of a living guideline with screenshots. Moreover, the depicted screenshots were taken from the evidence ecosystem MAGICapp.
However, other digital tools for living guidelines exist and could result in a different evaluation. Third, we detected significant demographic differences between professions (age, gender, work setting, and working experience) and between included and excluded participants (gender, profession, setting, age) (see Supplementary Tables 2, 3, 4). As a large proportion of the excluded participants did not indicate which professional group they belonged to ("Other", see Supplementary Table 2; p < 0.001), there is probably a significant effect on the proportion of included compared with excluded medical doctors (p < 0.001). Moreover, the drop-out group was significantly younger than the included group. Nevertheless, the differences between included participants and drop-outs were subtle, without a clear pattern of systematic bias. Finally, about one-third of the participants started the survey but did not complete it, possibly biasing the results; this could be explained by a lack of time to answer the comprehensive survey.
Various barriers exist for both guideline formats, and a high need for facilitators was expressed across all professions. Many of the mentioned obstacles and facilitators may be addressed more easily with living guidelines embedded in online environments such as the evidence ecosystem MAGICapp. However, living guidelines themselves are fraught with many, predominantly knowledge-related, barriers. Thus, the introduction of these new formats alone cannot lead to sustainable behavior change regarding guideline adherence; in fact, all stages of behavior change must be considered, including the identification of knowledge-, attitude-, and behavior-related barriers as well as facilitating factors. As living guidelines are becoming increasingly widespread in medicine, our findings represent first insights into barriers, facilitators, and preferences that can support the successful implementation of a (living) guideline.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 637 KB)
Evaluation of prediction errors in nine intraocular lens calculation formulas using an explainable machine learning model

The primary goal of modern cataract surgery is to achieve optimal vision and the predicted refraction. The prediction accuracy of postoperative refraction has increased significantly in recent years: the development of advanced optical technologies and intraocular lens (IOL) power calculation formulas has improved refractive outcomes. Conventional IOL formulas, such as the SRK/T, Haigis, and Barrett Universal II (BUII) formulas, are based on theoretical lens calculations. More complex optics and effective lens position calculations have been incorporated into recently developed formulas, such as the EVO 2.0 and Kane formulas. Several studies have attempted to develop optimal IOL formulas for specific eye groups by classifying eyes according to axial length (AL) and comparing the refractive outcomes among the subgroups. Other studies have revealed remarkable variation among the refractive outcomes of subgroups defined by anterior chamber depth (ACD, measured from the corneal epithelium to the lens). New-generation formulas demonstrate higher overall accuracy by incorporating more ocular biometric variables. Despite this advancement, there remains a need to further enhance IOL formulas for improved precision. Ocular biometric variables show significant correlations with each other; thus, multiple ocular biometric factors affect the refractive outcomes of cataract surgery. Multivariable analysis must be conducted considering these parameters simultaneously when calculating prediction errors (PEs), rather than attributing the PEs to individual biometric parameters. In this study, we used a machine learning model to predict the refractive outcomes of nine IOL power formulas from ocular biometric variables, in order to investigate the overall influence of ocular biometric variables on PE and to compare the prediction outcomes of the formulas.
This study was approved by the Institutional Review Board of the Seoul National University Hospital (SNUH; IRB No. 2112-132-1284) and adhered to the principles of the Declaration of Helsinki. The Institutional Review Board waived the requirement for obtaining written informed consent owing to the retrospective study design and anonymization of patient information.

Study population

We retrospectively reviewed the medical records of patients who had undergone standard cataract surgery between August 1, 2018, and December 31, 2021. The inclusion criteria were as follows: (1) cataract surgery with in-the-bag implantation of a Tecnis ZCB00 (Johnson & Johnson Vision Care, Inc., Santa Ana, CA, USA) IOL, and (2) age of at least 19 years. The exclusion criteria were as follows: (1) history of previous vitrectomy, corneal refractive surgery, or other corneal operation; (2) incidence of a severe intraoperative or postoperative complication, such as zonular dialysis, posterior capsular rupture, or the use of a capsular tension ring or iris retractor; (3) combined operation for the correction of glaucoma, pterygium, or vitrectomy; (4) cataract operation with a large corneal or limbal incision or a limbal relaxing incision; (5) absence of postoperative manifest refraction data; (6) postoperative best-corrected visual acuity (BCVA) worse than 20/40; (7) failure of ocular biometric examination; (8) inability to calculate IOL power owing to an extreme refractive target. The first eye that underwent surgery was included if both eyes were eligible for inclusion.

Preoperative ophthalmic evaluation

In accordance with the SNUH preoperative cataract examination protocol, all patients underwent a comprehensive preoperative ophthalmologic examination, which included BCVA assessment, slit-lamp biomicroscopy, dilated funduscopic examination, ocular biometric measurement (IOLMaster 700; Carl Zeiss, Germany), autokeratometry (KR-7100; Topcon, Japan), anterior segment topographic measurements (Orbscan II; Bausch and Lomb, USA), specular microscopy (NSP-9900; Konan Medical, Japan), optical coherence tomography (Heidelberg Spectralis; Heidelberg Engineering, Germany), and ultra-widefield fundus photography (Optos California; Optos, USA).

Surgical procedures

All cataract surgeries were performed by experienced surgeons at SNUH, as described below. The procedures were as follows: a 2.2-mm or 2.75-mm small, clear corneal incision, continuous curvilinear capsulorrhexis, phacoemulsification of the crystalline lens, and implantation of the IOL in the capsular bag. The corneal incision was made at a temporal or superior location at the discretion of the surgeon.

Prediction of postoperative refractive errors

We predicted the postoperative refractive errors using nine IOL power calculation formulas: Barrett Universal II (BUII), Cooke K6, EVO V2.0, Haigis, Hoffer QST, Holladay 1, Kane, SRK/T, and PEARL-DGS. We used the manufacturers' and IOL calculators' recommended constants available online ( https://www.iolcon.org/lensesTable.php ; accessed April 5, 2024) for each IOL formula. The A constant of the IOL used was 119.3. The Haigis constants a0, a1, and a2 were −1.302, 0.210, and 0.251, respectively. The SRK/T formula requires the A constant for the IOL, corneal power, and AL.
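To illustrate how these inputs enter a power calculation, below is a minimal sketch of the classic SRK regression formula (P = A − 2.5·AL − 0.9·K), the regression precursor of SRK/T. It is shown for orientation only; the study used the full vergence-based SRK/T formula, whose optics (corneal height and effective lens position modeling) are more involved than this regression.

```python
def srk_iol_power(a_constant: float, axial_length_mm: float, mean_k_d: float) -> float:
    """Classic SRK regression for emmetropic IOL power (D): P = A - 2.5*AL - 0.9*K.

    Illustrative only: the study used the vergence-based SRK/T formula,
    not this simple regression.
    """
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_d

# Hypothetical example using the Tecnis ZCB00 A constant from the study:
print(srk_iol_power(119.3, 23.5, 43.5))  # -> 21.4 D
```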
An investigator (RO) manually entered the data into online calculators for the BUII, Cooke K6, EVO V2.0, Hoffer-QST, Kane, and PEARL-DGS formulas ( https://calc.apacrs.org/barrett_universal2105/ , https://cookeformula.com/ , https://www.evoiolcalculator.com/calculator.aspx , https://hofferqst.com/ , https://www.iolformula.com/ , and https://iolsolver.com/main , respectively). Another investigator (CHY) evaluated the results to determine their plausibility.

Postoperative refractive outcome analysis

All patients underwent routine postoperative examinations. We assessed the manifest refraction 1 month postoperatively. We adhered to the previously proposed protocols for postoperative refractive outcome analysis and defined the PE as the difference between the spherical equivalent of the postoperative manifest refraction and the refraction predicted by each formula for the implanted IOL power. To eliminate systematic error, we zeroed out the mean PE by adjusting the PE for each eye up or down by an amount equal to the mean PE. Negative and positive PEs indicate myopic and hyperopic outcomes, respectively. We defined the absolute prediction error (APE) as the absolute value of the PE and calculated the mean PE (ME), median APE (MedAE), mean APE (MAE), and percentages of eyes within ±0.25 diopter (D), ±0.50 D, ±0.75 D, and ±1.00 D of the target refraction.

Development of machine learning models to estimate PEs after cataract surgery

We used LightGBM, which was developed by Microsoft and Peking University, to evaluate the relative influence of ocular biometric parameters on the PEs of the nine IOL formulas. LightGBM is one of the most popular machine learning models owing to its superior accuracy, computational speed, and memory efficiency compared with other machine learning models. LightGBM was trained using the lightgbm library version 3.3.2 with the following hyperparameters: n_estimators = 10,000, learning_rate = 0.01, max_depth = 8, and all others at their default values. We divided the dataset into development and test sets at an 8:2 ratio and used five-fold cross-validation in the development process. For each fold, we divided the development set into training and validation sets of 80% and 20%, respectively. As a result, the training, validation, and test sets were constructed exclusively at the patient level. We fitted and validated the LightGBM model using the training and validation sets and estimated the PE using the following variables: age, gender, and ocular biometric measurements (including AL, ACD, mean keratometry [K], LT, central corneal thickness [CCT], and horizontal corneal diameter [CD]). This yielded five models per IOL formula for estimating the PE on the test set; the predicted values of the five models were averaged for each test sample to determine the final estimated PE for that formula.

Model performance evaluation and SHAP

To measure the predictive performance of the models, we calculated the R-squared value (R², coefficient of determination), defined as the proportion of the variation in the dependent variable that can be predicted from the independent variables: R² = 1 − RSS/TSS, where RSS is the sum of squares of residuals and TSS is the total sum of squares. This value measures how well the observed outcomes are replicated by the model, based on the proportion of the total variation in outcomes explained by the model. A value of 1 indicates that the model predicts 100% of the relationship, whereas a value of 0.5 indicates that the model predicts 50% of the relationship. We used the bootstrap method to calculate the 95% confidence intervals (CIs) of R². From the test set, a sample of the same size as the test set was drawn with replacement for the MAE and R² evaluations; this process was repeated 10,000 times to calculate the CIs. We used the SHAP method, a game-theoretic technique used to explain the output of machine learning models, to interpret the model. SHAP values yield quantified contributions, thereby intuitively demonstrating the effect of each feature in terms of the shift of the model output from the base value. SHAP values quantify the effect of individual parameters on the model output and the estimated PE. Further details on the SHAP method have been described in the article by Lundberg and Lee. We calculated the SHAP value by determining the average change relative to the presence or absence of individual features after constructing a model with several features. The SHAP value of each feature is an indicator of its strength in terms of positive or negative prediction by the model: a larger absolute SHAP value indicates a greater effect of the feature on the prediction of the model. We calculated the SHAP values to determine the contribution of each variable and its correlation with the PEs of the formulas. Features with positive signs indicate a positive effect on PEs, whereas those with negative signs indicate a negative effect on PEs. The partial dependence plot (PDP) presents the marginal effect of features on the predicted outcome of a machine learning model.

Development process and analysis

We used Python ver. 3.7.11 ( https://www.python.org ), scikit-learn library ver. 1.0.2, and shap library ver. 0.41.0 to develop and analyze the performance of the model. An investigator (RO) performed all development and inference on a private server equipped with a central processing unit (CPU) with 32 GB of RAM and an NVIDIA GeForce GTX 3090 graphics processing unit (GPU) with 24 GB of memory (Nvidia). We used the scipy library ver. 1.7.3 and scikit_posthocs library ver. 0.7.0 for statistical analysis. For comparisons, we used Student's t-test, the Friedman test, post hoc pairwise Wilcoxon tests with Holm's adjustment, Cochran's Q test, and post hoc pairwise Dunn's tests with Holm's adjustment.
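The outcome metrics defined in the Postoperative refractive outcome analysis section reduce to simple arithmetic. The following is a minimal sketch (not the authors' code; the helper name refraction_metrics is hypothetical) of the zeroing adjustment and summary statistics, assuming arrays of measured and formula-predicted spherical equivalents.

```python
import numpy as np

def refraction_metrics(measured_se, predicted_se, thresholds=(0.25, 0.50, 0.75, 1.00)):
    """Zeroed prediction-error summary for one IOL formula.

    measured_se / predicted_se: postoperative and formula-predicted
    spherical equivalents in diopters, one entry per eye.
    """
    pe = np.asarray(measured_se) - np.asarray(predicted_se)  # prediction error per eye
    pe = pe - pe.mean()                                      # zero out systematic (mean) error
    ape = np.abs(pe)                                         # absolute prediction error
    metrics = {"ME": pe.mean(), "MAE": ape.mean(), "MedAE": np.median(ape)}
    for t in thresholds:
        metrics[f"% within ±{t:.2f} D"] = 100.0 * np.mean(ape <= t)
    return metrics
```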
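A hedged sketch of the described training pipeline follows. The variable names (features, pe) are hypothetical, and the early-stopping rule is an assumption, since the paper specifies only the 8:2 split, five-fold cross-validation, and the listed hyperparameters.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold, train_test_split

# features: DataFrame with age, gender, AL, ACD, K, LT, CCT, CD; pe: zeroed PE per eye
X_dev, X_test, y_dev, y_test = train_test_split(features, pe, test_size=0.2, random_state=42)

fold_preds = []
for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X_dev):
    model = lgb.LGBMRegressor(n_estimators=10_000, learning_rate=0.01, max_depth=8)
    model.fit(
        X_dev.iloc[tr_idx], y_dev.iloc[tr_idx],
        eval_set=[(X_dev.iloc[va_idx], y_dev.iloc[va_idx])],
        callbacks=[lgb.early_stopping(100)],  # assumption: the paper states no stopping rule
    )
    fold_preds.append(model.predict(X_test))

final_pred = np.mean(fold_preds, axis=0)  # average the five fold models per test eye
```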
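The bootstrap procedure for the R² confidence interval can be sketched as follows, assuming arrays of observed and model-estimated PEs on the test set; this mirrors the resampling-with-replacement scheme described above, with 10,000 replicates.

```python
import numpy as np
from sklearn.metrics import r2_score

def bootstrap_r2_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for R^2 = 1 - RSS/TSS on the test set."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    r2s = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample test eyes with replacement
        r2s[b] = r2_score(y_true[idx], y_pred[idx])
    low, high = np.quantile(r2s, [alpha / 2, 1 - alpha / 2])
    return r2_score(y_true, y_pred), (low, high)
```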
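With the shap library version cited in the paper, the beeswarm summary, per-feature importance (mean |SHAP|), and PDP-style dependence views can be produced roughly as follows; the feature name "AL" is an assumed column label, and `model`/`X_test` refer to the objects from the sketch above.

```python
import numpy as np
import shap

explainer = shap.TreeExplainer(model)        # one fitted LightGBM fold model
shap_values = explainer.shap_values(X_test)  # shape (n_eyes, n_features); sign: +hyperopic / -myopic shift

shap.summary_plot(shap_values, X_test)       # beeswarm of per-eye contributions
shap.dependence_plot("AL", shap_values, X_test, interaction_index=None)  # marginal effect of AL

mean_abs = np.abs(shap_values).mean(axis=0)  # variable-importance ranking (mean |SHAP| per feature)
```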
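The formula comparison reported in the Results can be reproduced with scipy and scikit_posthocs along the following lines, assuming a DataFrame `ape` holding one column of absolute PEs per formula (eyes as rows); this is a sketch, not the authors' analysis code.

```python
import scikit_posthocs as sp
from scipy.stats import friedmanchisquare

# ape: DataFrame, rows = eyes, columns = the nine formulas (absolute PEs)
stat, p = friedmanchisquare(*(ape[c] for c in ape.columns))  # omnibus test across formulas

# post hoc pairwise Wilcoxon signed-rank tests with Holm's adjustment
long = ape.melt(var_name="formula", value_name="ape")  # long format, eye order preserved per column
pairwise_p = sp.posthoc_wilcoxon(long, val_col="ape", group_col="formula", p_adjust="holm")
```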
Among the 3,188 eyes of 2,269 patients with cataracts, 1,430 eyes of 1,430 patients (mean age: 69.71 ± 9.14 years; 898 [62.8%] females) were included in this study. Table summarizes the demographic and clinical characteristics of the patients. Table summarizes the predictive performance of the IOL power formulas for postoperative refractive outcomes after the zeroing adjustment. The difference in the APE values among the formulas was statistically significant (P < 0.05, Friedman test). Supplementary Table presents the results of the pairwise comparisons. The Cooke K6 formula exhibited the lowest MAE (0.325) and the BUII formula the lowest MedAE (0.254), whereas the SRK/T and Holladay 1 formulas exhibited the highest MAE (0.378) and MedAE (0.307), respectively. However, no statistical differences were observed between the Kane, BUII, and Cooke K6 formulas in terms of the MAE. The Cochran Q test revealed significant differences between the formulas in terms of the percentages of eyes within the given error ranges (P < 0.001 for all ranges). The Kane formula exhibited the highest percentage of eyes in the given error ranges. Supplementary Table presents the statistical results for the percentage of eyes within the given error range using post hoc Dunn's test with Holm's adjustment. LightGBM models were constructed to estimate the PEs for each IOL formula. Table summarizes the R² values of the ensemble models for the estimation of PEs using the IOL formulas in the test set. The R² values ranged from 0.021 to 0.231. Notably, the SRK/T and Kane formulas exhibited the highest and lowest R² values in the test set, respectively (0.231 and 0.021). For the BUII, Cooke K6, EVO V2.0, and Kane formulas, the R² values of the models were not significantly different from zero. The SHAP values are described in Fig. . The colored dots represent the SHAP value of each eye. The x-axis represents the SHAP values: negative values correspond to eyes contributing to a negative PE, and positive values correspond to eyes contributing to a positive PE. SHAP values of < 0.0 and > 0.0 indicate myopic and hyperopic shifts, respectively. The colors ranging from blue to red represent the values of the ocular biometric variables: red dots represent eyes with greater values, whereas blue dots represent eyes with smaller values. The average absolute SHAP values for each variable indicate the contribution of the variation in that variable within the formula. The variable at the top is the most important variable in the estimation of the PE for each formula, with the importance of the variables decreasing in descending order. A lower R² value of an IOL formula corresponded to smaller absolute SHAP values. Figure presents the PDPs between the SHAP values and the ocular biometric variables. The x- and y-axes represent the value of the variable and the SHAP value, respectively. The SHAP values in the figures represent the marginal impact of the variables on the PEs after accounting for the other variables. A longer AL was independently associated with a more myopic PE in the SRK/T formula; the remaining formulas showed irregular trends. A steeper K resulted in myopic PEs in the SRK/T formula, whereas it led to hyperopic PEs in the Haigis formula. All variables exhibited a nearly flat curve in the PDP plots for the Kane formula, indicating that the performance of the formula was less affected by these variables.
We developed machine learning models to predict PEs in cataract surgery in this study. The R² values were generally low, ranging from 0.021 to 0.231. The SRK/T formula exhibited the highest R² value for estimating the PE, whereas the BUII, Cooke K6, EVO V2.0, and Kane formulas exhibited the lowest R² values, which did not differ statistically from zero. These findings indicate that the ocular biometric parameters, including AL, ACD, K, LT, CCT, and CD, have significant effects on the PE of the SRK/T formula but no significant effect on the PEs of the BUII, Cooke K6, EVO V2.0, and Kane formulas. To the best of our knowledge, this is the first study to use ocular biometric variables to estimate PEs using machine learning. Previous studies have evaluated the accuracy of IOL power formulas for specific subgroups of patients according to ocular biometric variables and provided valuable insights into the strengths and weaknesses of various formulas. The Kane formula exhibits a significantly lower MAE and higher accuracy in eyes with long AL. Moreover, it outperforms other formulas, including the Hill-RBF 2.0 and BUII formulas, in eyes with short AL. Several studies have used other ocular biometric parameters in combination with AL. Kim et al. categorized eyes based on the AL, K, and ACD values and compared the accuracies of the Haigis, Hoffer Q, Holladay 1, SRK/T, and BUII formulas in each subgroup. Hipólito-Fernandes et al. revealed that the Kane, PEARL-DGS, and EVO V2.0 formulas yielded reliable and stable results in subgroups with extreme combinations of ACD and LT. Although previous studies have provided valuable insights into ocular biometric combinations, an optimal formula for each subgroup of eyes remains to be established. The ocular biometric parameters AL, K, ACD, LT, WTW, and CCT show significant associations with each other. Kim et al. conducted a large-scale ocular biometric analysis and revealed that AL was positively correlated with ACD, WTW, and CCT and negatively correlated with K and LT. Another larger study revealed similar tendencies among AL, ACD, K, and LT. Furthermore, these parameters also show associations with age. Thus, subgroup analyses using individual variables introduce significant confounding, given the inherent correlations among the ocular biometric parameters, and the effects of these confounding factors cannot be adequately addressed in studies using such designs. Multivariate analysis must therefore be conducted to predict PEs and identify the best-fitting formula, owing to the interdependence of these variables. We incorporated nine ocular biometric parameters to estimate PEs using machine learning models. Traditional regression models consider all relationships as linear, thereby limiting their ability to evaluate complex interactions between variables. In contrast, machine learning models incorporate nonlinear relationships, thereby enhancing their predictive capabilities. However, the mechanisms underlying machine learning models are challenging to interpret owing to their complex structures. The SHAP method addresses this limitation by aiding in the interpretation of the outcomes of machine learning models, enabling exploration of the importance and dependence of variables. The machine learning models produced R² scores below 30% for all formulas, suggesting that at least 70% of the variability cannot be explained by the ocular biometric variables included in this study.
The machine learning model exhibited the lowest R² score, 0.021, for the Kane formula. The machine learning models for the BUII, Cooke K6, EVO V2.0, and Kane formulas failed to explain a significant share of the variability in the PE, indicating the stability of these formulas across the ranges of the ocular biometric parameters. In contrast, the R² scores of the third-generation formulas SRK/T, Haigis, and Holladay 1 were relatively high, indicating a stronger association between the predicted PEs and the ocular biometric variables. The impact of each variable on the prediction of the PE can be discerned (Fig. ). Notably, the absolute SHAP values of the BUII, Cooke K6, EVO V2.0, and Kane formulas were lower than those of the Haigis, Holladay 1, and SRK/T formulas. The PDP plots illustrate the adjusted influence of each variable while accounting for the influence of the other variables; thus, these graphs depict how the different formulas respond to changes in ocular biometry. An increase in AL led the SRK/T formula to predict a more negative PE, whereas smaller K and larger ACD and LT values led it to predict greater PEs. In contrast, LT and K, rather than AL, exhibited significant effects on the Haigis formula. Our findings highlight different aspects of the objectives and designs of numerous previous studies that attempted to differentiate subgroups based on ranges of ocular biometric parameters and compared various formulas within these subgroups to identify the best formula in terms of PE or APE. However, the outcomes varied among research groups, and the differences often failed to reach statistical significance in subgroup analyses. The findings of our study can help explain these divergent results. New-generation formulas, such as the BUII, Cooke K6, EVO V2.0, and Kane, consistently yielded stable results across a wide range of parameters. In contrast, the SRK/T, Holladay 1, and Haigis formulas exhibited less consistent performance in subgroup analyses, which may be attributed to their PEs being explainable by known ocular biometric variables. Our study demonstrated that the new-generation formulas are generally stable across a wide range of known ocular biometric parameters. We therefore suggest that further research focus not on the previously known ocular biometric parameters but on new variables that can affect the PEs, such as the lens vault, angle kappa, and the accuracy of the ocular biometric measurements. New variables derived from mathematical combinations of the known variables, such as the AL/corneal radius ratio, might also play a role in the PEs. This study has some limitations. First, our sample size was relatively small for a machine learning model; however, a sample size exceeding 1,000 is sufficient to demonstrate the observed tendencies, and further large-scale studies could reveal a higher proportion of potentially predictable variance. Second, we included only a single type of IOL in this study; thus, our findings must be validated before being applied to other IOLs. Lastly, the machine learning model for each formula could be overfitted to our dataset, which could have introduced bias; moreover, comparisons were made between these possibly overfitted models, not against an ideal prediction algorithm. In conclusion, machine learning algorithms can predict a portion of PEs; however, they have limitations in explaining the variability of PEs using ocular biometric parameters.
Newer IOL formulas, such as the BUII, Cooke K6, EVO V2.0, and Kane formulas, demonstrated relatively stable outcomes across a wide range of ocular biometric variables, indicating that these formulas are already well optimized with respect to the ocular biometric variables used in this study. Introducing additional ocular biometric variables beyond those currently used in IOL calculation formulas may improve the accuracy of future surgical results.
Prevention of device-related infections in patients with cancer: Current practice and future horizons

Cancer is one of the leading causes of death in the industrialized world. The International Agency for Research on Cancer estimated that about 20 million new cases of cancer would be diagnosed and 10 million cancer deaths would occur worldwide in 2020. The past few decades have seen several advances in the management of patients with cancer. Increased research funding through public and private entities and the involvement of numerous scientists and academic institutions in combination with pharmaceutical companies have led to a growing pipeline of life-saving products. New chemotherapeutic regimens, targeted therapies, and checkpoint inhibitors; radiation and proton therapy; and refinements in stem cell transplantation and immune effector cell therapies have all recently entered the oncological armamentarium. In addition, advancements in and contributions of diverse specialized consulting services and numerous supportive care measures, as well as expanded specialty societal guidelines and standardized institutional procedures, have all converged to improve the overall survival rate of patients with cancer.

Along with the advances described above, numerous devices have been introduced for the administration of intravenous and intrathecal medications and the management of diverse comorbidities and complications related to cancer therapy. These include diverse central venous access devices (CVADs), cardiac-implantable electronic devices (CIEDs), Ommaya reservoirs, external ventricular drains (EVDs), breast implants plus tissue expanders (TEs), percutaneous nephrostomy tubes (PCNTs), and ureteral stents, as well as esophageal stents, pleural drains, percutaneous endoscopic gastrostomy (PEG) tubes, percutaneous cholecystostomy tubes, biliary stents, and peritoneal drains.

Unfortunately, infections associated with these devices are common, increasing health care costs and complicating patients' oncological management over both the short and long term, usually leading to delays in further cancer therapy until the infection has resolved. Treating these infections with systemic antimicrobials and removal or replacement of the device is often necessary because of the formation of a three-dimensional biofilm on implants that contains a complex community of sessile bacteria plus host and microbial products. However, removal of an implant may be difficult and at times even prohibitive because of the patient's underlying comorbidities, thrombocytopenia, immunosuppression, lack of vascular access, and prior surgical interventions. Also, governmental regulations have increased and financial reimbursements have decreased for the treatment of these infections, which can be reasonably prevented through the application of evidence-based guidelines. In 2011, the Centers for Medicare and Medicaid Services began requiring acute care hospitals to report specific types of health care-associated infection data to it through the Centers for Disease Control and Prevention's National Healthcare Safety Network so that the hospitals can receive their full annual reimbursements.
Soon thereafter, the Inpatient Prospective Payment System and Fiscal Year 2013 Rates—Final Rule published by the Centers for Medicare and Medicaid Services listed specific conditions that would not be financially reimbursed, including catheter-associated urinary tract infections, vascular catheter-associated infections, and surgical site infections (SSIs) after certain orthopedic or CIED procedures. Over the following years, more infections deemed preventable will likely be added to this list. Therefore, because of the burden on patients, families, physicians, health care institutions, and governments, further reducing the rate of these infections is imperative. Herein, we review the main indications for placement of the above-mentioned devices, their infection rates, and the epidemiology of and risk factors for infection. We also provide several general and device-specific, evidence-based recommendations for providers who care for patients with cancer, along with best practices, expert opinions, and novel measures for the prevention and reduction of device-related infections.
Several basic principles for the prevention of infections have been implemented over the past few decades. These primary preventive measures, which involve patients, health care workers, and the environment, are commonly used with all surgical interventions, including those that involve the placement of foreign medical devices. Below, we describe these simple and innovative interventions, which should be implemented at all institutions, as they have been demonstrated to significantly reduce the historically high rate of preventable infections.

Hand Hygiene

In 1847, Dr Ignaz Semmelweis was the first to describe the basic hygienic practice of hand washing as a way to stop the spread of infection and infection-related death in Vienna, Austria. Since then, hand hygiene has become the cornerstone of all surgical and infection-control policies. Although both alcohol-based products and soap and water are effective in the inpatient and outpatient settings, the former has been demonstrated to be superior to the latter, with a compliance rate that is approximately 25% higher. In 2006, the World Health Organization added the My Five Moments for Hand Hygiene campaign, which emphasizes key moments for performing hand hygiene based on known mechanisms of microbial cross-transmission among patients, health care workers, and the environment. These five moments are: (1) before touching a patient, (2) before cleaning/aseptic procedures, (3) after body fluid exposure/risk of such exposure, (4) after touching a patient, and (5) after touching a patient's surroundings. This has proven to be the simplest and most cost-effective intervention for infection prevention. Furthermore, several clinical trials have demonstrated that an increase in hand hygiene compliance significantly decreases the rates of health care-associated infections. However, rates of hand hygiene compliance in high-income countries rarely exceed 70%, and the rates are much lower in low-income countries. The long-term challenge in health care settings is to achieve and sustain high hand hygiene compliance among personnel in all disciplines who interact with patients who have cancer and their environments, to decrease the rate of SSIs and device-related infections, especially in the early postoperative period and at subsequent follow-up visits.

Perioperative Antisepsis Protocols

Many interventions for the prevention of perioperative infections, with differing degrees of evidence, have been implemented. Recommended interventions with the highest degree of evidence are: (1) administration of antimicrobial prophylaxis according to evidence-based standards and guidelines (see Perioperative Antibacterial Prophylaxis below); (2) use of alcohol-containing skin-preparatory agents over povidone iodine if no contraindication exists; (3) maintenance of normothermia during the perioperative period; (4) optimization of tissue oxygenation by administering supplemental oxygen during and immediately after surgical procedures involving mechanical ventilation; and (5) use of a checklist based on the World Health Organization recommendations to ensure compliance with best practices and thus improve surgical patient safety.

Recommended interventions with a moderate degree of evidence are: (1) avoidance of hair removal or use of razors at the operative site unless the presence of hair will interfere with the operation; (2) control of blood glucose levels during the immediate postoperative period; (3) sterilization of all surgical equipment according to published guidelines; (4) surveillance for SSIs through the use of automated data with ongoing feedback to health care providers and leadership; and (5) implementation of policies and practices aimed at reducing the risk of SSIs that align with evidence-based standards (such as those from the Centers for Disease Control and Prevention, the Society for Healthcare Epidemiology of America, and the Infectious Diseases Society of America).

Recommended interventions with the lowest degree of evidence, but that have also been found to be successful, are: (1) educating both surgeons and perioperative personnel plus patients and their families about SSI prevention; (2) observing and reviewing personnel, practices, and the environment of care in the operating room, postanesthesia care unit, surgical intensive care unit, and surgical wards; (3) measuring and providing feedback to providers regarding rates of compliance with process measures; and (4) using an Environmental Protection Agency-approved hospital disinfectant to clean contaminated surfaces, following the American Institute of Architects' recommendations for proper air handling, and minimizing operating room traffic.

Perioperative Antibacterial Prophylaxis

Patients who undergo surgery should receive systemic perioperative antimicrobials according to evidence-based standards and guidelines. As recommended by the Surgical Care Improvement Project, the choice of prophylactic antimicrobials should be based on the surgical procedure and the most common pathogens encountered at the surgical site and responsible for causing postoperative SSIs. Every effort to confirm a patient's reported penicillin allergy should be made as part of routine perioperative care, specifically because the odds of developing an SSI increase by 50% when a patient receives a second-line perioperative antibiotic. Also, if a patient is known to be colonized with methicillin-resistant Staphylococcus aureus (MRSA), administering a single dose of vancomycin is reasonable. However, vancomycin is less effective than cefazolin at preventing infections caused by methicillin-susceptible S. aureus or streptococci; for this reason, vancomycin is used in combination with cefazolin at some institutions when the risk of infections with these organisms is high. Furthermore, patients who have cancer usually are already receiving prophylactic antimicrobials because of their underlying immunosuppression and are known to be colonized with or have had prior infections with MRSA, vancomycin-resistant enterococci, and multidrug-resistant gram-negative rods; the decision to use perioperative antimicrobials in such cases should be individualized for each patient. Of note, although patients with one of the implanted devices described above have a theoretical risk of becoming secondarily infected during an invasive clean or clean–contaminated procedure, especially if the device was recently placed, evidence that antimicrobial prophylaxis prevents infections of these nonvalvular intravascular medical devices is lacking.

Prophylactic antimicrobials should be infused within 60 minutes of the incision, whereas vancomycin, aminoglycosides, and quinolones should be infused within 120 minutes. The dosing of these prophylactic antimicrobials should be adjusted on the basis of the patient's weight and re-dosed at intervals of every two half-lives or when excessive blood loss occurs during the procedure. For surgeries defined as clean or clean–contaminated, the use of all perioperative antimicrobials should be discontinued within 24 hours after the procedure. In addition to the evidence-based recommendations described above, patients undergoing interventions that involve the placement of an implantable device usually have the device submersed in, or the surgical pocket irrigated with, an antimicrobial and/or antiseptic solution with the intention of decreasing the probability of contaminating the newly placed foreign medical device. Also, surgeons commonly provide postoperative oral antimicrobials beyond 24 hours after surgery to patients who have surgical drains placed near an implantable device, with the hope of further decreasing the rate of infection. Postoperative antimicrobials are prolonged in this scenario because these drains are known to allow microbial translocation from the skin to the deeper surgical site where the implant is located. These interventions, with a low degree of evidence, have produced mixed results. Furthermore, extending perioperative use of antimicrobials beyond 24 hours can lead to several unintended side effects, including hypersensitivity reactions, renal failure, antimicrobial resistance, and Clostridium difficile-associated diarrhea.

MRSA Screening and Decolonization

The likelihood of MRSA colonization increases with: (1) a prior history of MRSA infection, (2) hospitalization and exposure to health care facilities within the preceding year, (3) receipt of antibiotics within 3 months before admission, or (4) the presence of select comorbid conditions, such as immunosuppression, diabetes, chronic obstructive pulmonary disease, congestive heart failure, and use of hemodialysis, all of which are commonly encountered in the cancer population. Surgical patients identified as colonized with MRSA by a positive nasal polymerase chain reaction screen have been found to have 2-fold to 14-fold greater odds of a subsequent MRSA SSI than patients with negative nasal MRSA polymerase chain reaction screens. Several studies have shown that a bundled approach that includes decolonization protocols plus intravenous vancomycin prophylaxis can decrease the rate of postoperative gram-positive infections, especially in the orthopedic and cardiac surgical populations, whereas other studies have not shown this benefit. Decolonization protocols that include topical mupirocin and chlorhexidine gluconate (CHG) versus a placebo have been effective in reducing the rate of postoperative infections, with a relative risk of infection of 0.42 (95% CI, 0.23–0.75). Development of resistance to mupirocin is unlikely in the perioperative setting, especially when it is not used as an ointment for prolonged periods. CHG resistance is also uncommon, mainly because the topical concentrations of CHG used for decolonization are 200-fold higher than the highest recorded minimum inhibitory and bactericidal concentrations of CHG for staphylococci. Therefore, most researchers have concluded that the use of preoperative intranasal mupirocin and/or topical CHG in MRSA-colonized patients is safe and potentially beneficial as an adjuvant to intravenous antimicrobial prophylaxis to decrease the occurrence of SSIs. Screening and targeted decolonization should specifically be considered for all patients at high risk for negative outcomes, including the immunocompromised cancer population undergoing device implantation.

Infection Control and Prevention Programs

To further decrease the risk of cross-contamination, nosocomial transmission, and SSIs, mainly caused by MRSA, in the acute health care setting, a robust infection control department should be established, ensuring the following: (1) implementation of a MRSA monitoring program along with a laboratory-based alert system that notifies health care workers of new MRSA-colonized or MRSA-infected patients in a timely manner; (2) use of contact precautions for MRSA-colonized and MRSA-infected patients; (3) cleaning and disinfection of equipment and the environment; (4) provision of MRSA data and outcome measures to senior leadership, physicians, and nursing staff; and (5) education of health care workers as well as patients and their families about MRSA.
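The weight-based dosing and two-half-life re-dosing rule described under Perioperative Antibacterial Prophylaxis reduces to simple arithmetic. The sketch below illustrates it with hypothetical agents and half-life values (not a dosing recommendation; actual intervals must follow institutional protocols and published guidelines).

```python
# Minimal illustration of the "re-dose every two half-lives" rule described
# above. The half-life values are hypothetical placeholders, not clinical
# guidance; institutional protocols govern actual re-dosing intervals.
HALF_LIFE_HOURS = {"agent_a": 2.0, "agent_b": 4.5}  # hypothetical agents

def redose_interval_hours(agent: str) -> float:
    """Intraoperative re-dosing interval = two elimination half-lives."""
    return 2 * HALF_LIFE_HOURS[agent]

print(redose_interval_hours("agent_a"))  # -> 4.0 hours
```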
Many patients will likely have one or more devices implanted at any given time during their cancer journey. These devices are placed either during active oncological therapy or after the patient has been cured, to mitigate the unintended side effects of cancer therapy or of the cancer itself. These devices may become infected, increasing patient morbidity and mortality and further increasing the complexity of oncological care. Therefore, key stakeholders and health care providers should be knowledgeable about, and serve as advocates for, patients in providing specific interventions for the prevention of device-related infections such as those described below.

Central Venous Access Devices

These devices include nontunneled and tunneled centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. These central venous devices, which are used in at least 4 million patients in the United States and are left in place for several months, are essential lifelines for patients living with cancer. However, CVADs are associated with a wide array of infectious complications, including localized exit-site infections, tunnel-related or pocket-related infections, and life-threatening catheter-related bloodstream infections (CRBSIs). The infection rates of the latter vary significantly among different clinical settings, but the rate in the oncological population has been estimated at approximately 2.5 per 1000 catheter-days. Femorally inserted central catheters have the highest risk of infection, followed by centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. In addition, patients receiving chemotherapy or total parenteral nutrition, or who are neutropenic for a prolonged period of time, are at increased risk for infection. The pathogens most frequently responsible for CRBSIs are gram-positive bacteria, in particular coagulase-negative staphylococci, S. aureus, and Enterococcus species, whereas gram-negative microorganisms account for approximately 20%. The average cost per episode of CRBSI is $45,814 (95% CI, $30,919–$65,245), making CRBSI one of the costliest health care-associated infections. CVADs have four main routes of contamination that are the targets of infection-preventive measures: (1) migration of skin organisms at the insertion site, resulting in bacterial adhesion to the external or intraluminal surface of the device; (2) direct contamination by contact with hands or contaminated fluids or devices; (3) less commonly, hematogenous seeding of the catheter from another focus of infection; and (4) rarely, infusate contamination leading to a CRBSI. Therefore, several well established, evidence-based recommendations, delivered as a bundle, have been designed to mitigate the risk for infection.
Therefore, several well-established, evidence-based bundle recommendations have been designed to mitigate the risk for infection. This bundle intervention includes the implementation of specific steps during both the insertion and the maintenance of central lines: (1) educating and designating only trained health care personnel; (2) hand hygiene and the use of sterile gloves before catheter insertion; (3) the use of alcohol-containing CHG for skin antisepsis before insertion and during dressing changes; (4) maximal sterile barrier precautions, including the use of a cap, mask, gown, and sterile full-body drape; (5) avoiding the use of systemic antimicrobial prophylaxis; (6) preferring an infraclavicular rather than a supraclavicular or groin exit site; (7) selecting a CVAD with the minimum number of lumens, to be used for the fewest days necessary for management of the patient; (8) implementation of ultrasound guidance to reduce the number of catheter placement attempts; (9) choosing a suture-less securement device with needle-less connectors; (10) placing a sterile, transparent dressing over the insertion site and replacing it no more than once a week (unless the dressing is soiled or loose); (11) avoiding submerging the catheter in water, applying topical antimicrobial ointments at insertion sites, or routinely replacing the CVAD as a means of preventing CRBSI, while replacing the administration set and needle-less connectors at least every 7 days, or within 24 hours after an infusion of blood, blood products, or fat emulsions; and (12) most importantly, engaging in collaborative performance-improvement initiatives. These interventions require a designated physician and nursing team leader, a checklist to assess compliance with the elements of the bundle, and empowerment to stop the procedure if protocols are not followed. When compliance with all components is high, the bundle approach has been reported to decrease the rate of CRBSI by a statistically significant 66% (p < .002). The American Society of Clinical Oncology has highlighted the importance of CRBSIs and emphasized the need for more research targeting patients with cancer, mainly because the majority of studies have focused on patients who have indwelling CVADs for a short term, such as in intensive care units. However, based on the available literature, several additional CRBSI-preventive measures can be instituted. For settings in which CRBSI rates remain elevated despite maximal compliance with the aforementioned measures, simple and inexpensive interventions (<$10 per unit) include the use of 70% isopropyl alcohol caps for needle-less connectors and the placement of a chlorhexidine-impregnated dressing around the catheter insertion site, exchanged every 7 days. These two interventions have been effective in reducing the incidence of intraluminal and extraluminal infections, respectively. Furthermore, the introduction of US Food and Drug Administration (FDA)-approved antimicrobial-impregnated catheters (AICs) has added an extra layer of CRBSI prevention. The use of these AICs is associated with a markedly lower rate of catheter colonization and CRBSI compared with non-AICs. Cost-effectiveness assessments of these relatively inexpensive devices have justified their integration into clinical practice. Of the most commonly used AICs, minocycline/rifampin-impregnated catheters have been associated with lower rates of CRBSI than chlorhexidine/silver sulfadiazine-impregnated catheters (0.3% vs.
3.4%; p < .002) without an increased incidence of antibacterial resistance of Staphylococcus species. Moreover, AICs ensure protection for a limited time, ranging from 28 to 50 days for a minocycline/rifampin-impregnated catheter, in contrast with an average of 7 days for a chlorhexidine/silver sulfadiazine-impregnated catheter. Therefore, the use of antimicrobial lock solutions has been proposed as a method of preventing intraluminal CRBSI of CVADs that are projected to remain in place for an extended duration, especially in patients with a history of multiple CRBSIs. A meta-analysis of randomized controlled trials comparing antimicrobial lock solutions with heparin revealed a 69% reduction in the incidence of CRBSIs. These antimicrobial lock solutions can be created with numerous drugs and drug combinations. The simplest lock solutions are those formulated with ethanol, which another meta-analysis of randomized controlled trials found to significantly decrease CRBSI compared with heparin alone (odds ratio, 0.53; p = .004). However, ethanol concentrations and antimicrobial lock solution dwell times are not standardized. Also, ethanol concentrations >28% should be avoided because they lead to plasma protein precipitation and structural changes in CVADs, mainly polyurethane catheters. Other antimicrobial lock solutions, such as the chelators citrate and EDTA, have gained attention because they have excellent anticoagulant activity, prevent biofilm formation, have antimicrobial characteristics, and inhibit bacterial proliferation, whereas heparin may anecdotally enhance biofilm growth. The use of a combined antimicrobial chelator lock solution, such as minocycline–EDTA and taurolidine–citrate, has led to remarkable progress in preventing CRBSIs in patients who have cancer. Another promising antimicrobial lock solution is nitroglycerin–citrate–ethanol, a nonantibiotic chelator combination. This lock solution is safe and has the unique features of active anticoagulation, no risk of triggering bacterial resistance, and the ability to disrupt biofilm. These findings were validated in a clinical study that evaluated patients with hematological malignancies and showed a considerable reduction in the incidence of CRBSIs. Although these lock solutions are well studied, no FDA-approved lock formulations are currently available commercially, so they are prepared locally in hospital pharmacies. The components of the antimicrobial lock solutions are usually generic, economical, and effective in preventing thrombosis and CRBSIs. However, their beneficial use in preventing infections must be balanced against potential breaches in catheter integrity, bacterial resistance, systemic toxicity, frequent antimicrobial lock solution exchanges (depending on the stability of each component of the solution), and inability to use the CVAD while the lock solution is dwelling.
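For context, the pooled 69% relative reduction can be applied to the baseline oncological CRBSI rate cited earlier. The sketch below is a simple extrapolation under the assumption that the pooled effect transfers to that baseline; it is arithmetic, not a trial result.

```python
# Illustrative extrapolation: applying the pooled 69% relative risk
# reduction for antimicrobial lock solutions (vs. heparin) to the baseline
# oncological CRBSI rate cited earlier in the text.

baseline_rate = 2.5        # CRBSIs per 1000 catheter-days (oncological estimate)
relative_reduction = 0.69  # pooled estimate from the cited meta-analysis

rate_with_locks = baseline_rate * (1 - relative_reduction)
prevented = baseline_rate - rate_with_locks

print(f"Projected rate with locks: {rate_with_locks:.2f} per 1000 catheter-days")  # ~0.78
print(f"Projected episodes averted: {prevented:.2f} per 1000 catheter-days")       # ~1.7
```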
Cardiac-Implantable Electronic Devices

The indications for permanent pacemakers, implantable cardiac defibrillators, and cardiac resynchronization therapy devices, collectively known as CIEDs, are extensive. The cardiotoxicity of some cancer therapies and the rising average age of the oncological population have increased the need for these devices. In the United States, more than 100,000 implantable cardiac defibrillators and 300,000 permanent pacemakers are inserted every year. Unfortunately, the rates of CIED infections have been reported to be approximately 4%, with a disproportionate increase in these rates compared with the increase in CIED implantation. The most common microorganisms causing CIED infections are expected skin flora, such as coagulase-negative staphylococci (38%) and S. aureus (31%), and other pathogens, including gram-negative bacteria (9%). Infections of these devices necessitate the extraction of all CIED components (generator and leads), increasing the mean hospitalization charges in the United States to $173,211, with overall in-hospital mortality rates ranging from 3.7% to 11.3%. Several modifiable and nonmodifiable patient-related, procedure-related, and device-related risk factors for CIED infections have been identified. These risk factors are common in the oncological population and have been compiled in various stratification scores. On the basis of these scoring systems, patients who have cancer are usually at intermediate to high risk for developing a CIED infection. The Prevention of Arrhythmia Device Infection Trial (ClinicalTrials.gov identifier NCT01628666) score is one of the most commonly used scoring systems because it is simple and has been independently validated to identify high-risk patients who may benefit from tailored strategies to reduce the risk of CIED infection. For patients with several nonmodifiable risks, alternative approaches may be used to lower the overall risk of infection, including confirming the indication for CIED use and consideration of a leadless CIED. In addition to the general surgical recommendations described above, the identification of modifiable risk factors is important because it may allow for further preventive measures to reduce the risk of CIED infection. These include preventive preprocedural measures supported by scientific consensus, such as: (1) provision of perioperative systemic antimicrobials; (2) use of a preoperative checklist; (3) delay of CIED implantation for at least 24 hours in patients with infection or fever; (4) avoidance of CVADs when introducing a CIED, when feasible; and (5) measures to decrease the risk of pocket hematoma (increasing the platelet count to >50,000/μl, discontinuation of antiplatelet medications 5–10 days before the procedure, avoidance of therapeutic low-molecular-weight heparin and of a bridging approach with heparin, and holding anticoagulation therapy until the risk of bleeding has diminished in patients with a history of deep venous thrombosis or a CHA2DS2-VASc score <4). The latter three measures are commonly encountered in the cancer population and should be closely addressed. Perioperative recommendations for the prevention of CIED infections include: (1) consideration of adding an acellular dermal matrix within the surgical pocket to reinforce the incision site, (2) avoidance of antimicrobial irrigation within the pocket, and (3) use of an antimicrobial envelope (such as TYRX; Medtronic) that locally releases a high concentration of minocycline and rifampin within the surgical pocket for a minimum of 7 days in patients at high risk for developing CIED infection.
The World-wide Randomized Antibiotic Envelope Infection Prevention Trial (ClinicalTrials.gov identifier NCT02277990) demonstrated that the use of these envelopes significantly reduced the primary end point (infection resulting in CIED extraction or revision, long-term antibiotic therapy, or death within 12 months of device placement) from 1.2% (control) to 0.7% (envelope; hazard ratio, 0.6; p = .04). The number needed to treat was 100 for high-risk patients undergoing implantable cardiac defibrillator/cardiac resynchronization therapy defibrillator replacement or upgrade. However, this trial excluded patients at increased risk for infection, such as those with prior CIED infection, those receiving immunosuppressive therapy, those with long-term vascular access, and patients undergoing hemodialysis. Therefore, selecting a population at high risk for infection, such as an oncological population with several risk factors, would likely decrease the number needed to treat and improve the cost effectiveness of the envelope, which is priced slightly below $1000. At our institution, all patients who have cancer receive the TYRX envelope as part of a comprehensive prophylactic bundle, which has been demonstrated to be both safe and effective in maintaining a low rate of CIED infection (1.3%), well within published averages for the broader population of all CIED recipients. Of note, a few studies have evaluated novel techniques for decreasing microbial adherence to CIEDs. Polyurethane has been shown to have a higher affinity for biofilm-producing pathogens than titanium in vitro. Therefore, increasing the titanium:polyurethane surface ratio of these cardiac devices may decrease the rate of CIED infection. Furthermore, the use of silver ion-based antimicrobial surface technology for the reduction of bacterial growth on CIEDs was shown to be safe in an ovine model. However, CIED surface modification techniques are unlikely to progress because of the complexity of the regulatory approval pathways, the diversity of CIED models and manufacturing companies worldwide, and the availability of more cost-effective preventive measures already approved by the FDA, such as antimicrobial envelopes. Furthermore, postprocedural prophylactic measures in CIED recipients include: (1) the use of pressure dressings to decrease hematoma occurrence and of hemostatic gelatin sponges in patients receiving anticoagulation or dual antiplatelet therapy; (2) refraining from early reintervention, which dramatically increases the risk of CIED infection; and (3) avoidance of postoperative antimicrobials. The last measure was confirmed in the Prevention of Arrhythmia Device Infection Trial, which included 19,603 patients and revealed no benefit from an incremental approach (preoperative intravenous vancomycin or cefazolin plus intraoperative bacitracin wash and postoperative oral cephalosporin) over the conventional approach (single dose of preoperative cefazolin or vancomycin; odds ratio, 0.77; p = .1).
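These trial figures also permit a rough cost framing. The sketch below combines the number needed to treat and envelope price from this section with the mean hospitalization charges for an established CIED infection cited above; it is back-of-the-envelope arithmetic, not a formal cost-effectiveness analysis.

```python
# Simple cost framing of the antibacterial envelope, using figures cited
# in the text. Not a formal cost-effectiveness analysis.

envelope_price = 1_000              # approximate unit price (slightly below $1000)
nnt = 100                           # WRAP-IT number needed to treat (high-risk subgroup)
infection_treatment_cost = 173_211  # mean US hospitalization charges per CIED infection

cost_per_infection_prevented = envelope_price * nnt  # $100,000
net_saving = infection_treatment_cost - cost_per_infection_prevented

print(f"Acquisition cost per infection prevented: ${cost_per_infection_prevented:,}")
print(f"Net saving per infection prevented: ${net_saving:,}")  # ~$73,211

# A higher-risk population with a lower NNT (e.g., 50) roughly halves the
# cost per infection prevented, which is the argument made in the text.
```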
Ommaya Reservoirs and External Ventricular Drains

An Ommaya reservoir, a small, dome-shaped, subgaleal reservoir connected to an intraventricular catheter, is the preferred device for intrathecal infusion of chemotherapy in patients with leptomeningeal cancer, whereas EVDs are used for temporary diversion of cerebrospinal fluid (CSF) from an obstructed ventricular system in cases of acute hydrocephalus, for monitoring of intracranial pressure, and as part of the treatment approach for infected CSF shunts. These devices can become infected, manifesting as a local skin and soft tissue inflammatory infectious process or as meningitis and ventriculitis, at a rate of 6% for Ommaya reservoirs and 8% for EVDs. Concomitant bloodstream infections have been identified in 7.5%–12% of Ommaya reservoir infections. The overall incidence of infection in previous studies was 0.74 per 10,000 Ommaya reservoir-days and 11.4 per 10,000 EVD-days. These infections usually occur soon after the time of placement, or later through retrograde spread by exit-site colonization or direct inoculation through device manipulation. The main risk factor for Ommaya reservoir infections is the frequency of CSF sampling, whereas the main risk factors for EVD infections include prolonged catheterization, subarachnoid hemorrhage, drain blockage, and CSF leakage at the EVD entry site. The most common organisms causing Ommaya reservoir infections are predominantly normal skin flora, including Staphylococcus spp. and Cutibacterium acnes, whereas EVD infections are increasingly caused by gram-negative rods, such as Escherichia coli, Pseudomonas aeruginosa, and Enterobacter, Acinetobacter, and Klebsiella species. Preprocedural use of antimicrobials such as cefazolin is necessary to reduce the rate of SSIs and central nervous system infections in patients with Ommaya reservoirs and EVDs. Perioperative chlorhexidine shampoo and hair clipping, with special care to avoid causing skin abrasions, also should be implemented. In addition, an Ommaya reservoir should be placed under a skin flap that allows for implantation at a safe distance from the incision site. Furthermore, although the few available studies show mixed results, EVDs tunneled subcutaneously over a long distance to the chest wall can be considered at institutions with high rates of infection. Moreover, silver-coated and, more recently, minocycline- and rifampin-impregnated catheters have proven to be cost-effective in significantly reducing the rate of infection in EVDs (risk ratio, 0.31; 95% CI, 0.15–0.64; p = .0002). However, another study did not show an additional benefit of using AICs, likely because of a small sample size. Similar to other devices, studies have shown an advantage with the use of postprocedural antibiotics for as long as an EVD remains in place compared with no postoperative antimicrobial use (3% vs. 11%; p = .01). Other preventive interventions, including the use of a daily prophylactic bundle plus intraventricular amikacin, also had encouraging results. However, because these were relatively small studies with the potential for drug-related toxic effects and development of multidrug-resistant pathogens, these findings should be verified in large, multicenter, randomized controlled studies. Other interventions, such as routine EVD exchange, should not be performed because they have not been shown to reduce the rate of infection. Also, frequent CSF analysis with cultures at each use may detect preclinical infections with C. acnes or staphylococci. However, these results must be interpreted with caution because these pathogens may also be contaminants. Once an Ommaya reservoir or an EVD has been placed, the risk of infection can be minimized through institutional protocols established to ensure safe, sterile access of the device by only highly qualified personnel.
Minimal manipulation of the device, minimizing the number of days the device remains in situ, and implementing an infection control protocol have all been shown to decrease the incidence of these infections. The introduction of an EVD care bundle that includes a standardized hand-washing technique for aseptic CSF sampling, the use of surgical theater-standard scrubs and preparations, and cleaning of the EVD access ports while wearing a mask and gloves significantly decreased the rate of infection from 21 to 9 cases per 1000 EVD-days (p = .003). In a meta-analysis, the addition of a chlorhexidine-impregnated dressing to the catheter exit site significantly reduced the incidence of EVD infections (7.9% vs. 1.7%; risk difference, 0.07; 95% CI, 0.0–0.13; p = .04). Similar bundled approaches for the prevention of Ommaya reservoir infections have been successful. Hence, given the difficulty of assessing the effectiveness of each individual component and the relatively low cost, we recommend the continued use of these preventive bundles to further reduce the rate of these infections.
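Two of the figures just cited translate directly into the effect sizes clinicians tend to quote. The sketch below simply recomputes them from the numbers above; no additional data are assumed.

```python
# Arithmetic on the cited EVD prevention figures: relative rate reduction
# from the care bundle, and an approximate number needed to treat for the
# chlorhexidine-impregnated dressing. Illustrative only.

# Care bundle: infection rate fell from 21 to 9 per 1000 EVD-days.
bundle_before, bundle_after = 21, 9
relative_reduction = (bundle_before - bundle_after) / bundle_before
print(f"Bundle relative reduction: {relative_reduction:.0%}")  # ~57%

# Chlorhexidine dressing meta-analysis: 7.9% vs. 1.7% infection incidence.
risk_control, risk_dressing = 0.079, 0.017
nnt = 1 / (risk_control - risk_dressing)
print(f"Approximate NNT for the dressing: {nnt:.0f}")          # ~16 patients
```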
Breast Tissue Expanders and Permanent Implants

Breast cancer is the most common cancer worldwide, with a 5-year survival rate >90%. In 2021, the American Society of Plastic Surgeons reported that 103,485 postmastectomy implant-based reconstructive procedures were performed in the United States. Some of the patients who underwent these procedures had direct-to-implant reconstruction (one-step approach), whereas >80% had implantation of a temporary TE; once a sufficiently large soft tissue envelope was created, the TE was replaced by a permanent breast implant (two-step approach). Unfortunately, the average TE infection rate is high at 13%. These infections occur mostly in the early postoperative period, with one third occurring within the first 30 days after surgery (median, 48 days). The most common bacteria causing TE infections are methicillin-resistant staphylococci (44%) and gram-negative pathogens (26%), including Pseudomonas (13%) and Klebsiella (5%) spp. In addition to the traditional risk factors for infection, patients with breast TEs have several unique risk factors, including a body mass index >25 kg/m², breast cup size >C, prior breast implant infection, bilateral or immediate breast reconstruction, axillary lymph node resection, use of an acellular dermal matrix, extended duration of surgical drains, mastectomy skin flap necrosis, breaks in the sterility process of TE implant infusions, and use of adjuvant chemotherapy and radiation therapy. Patients at high risk for infection should consider proceeding with an autologous flap reconstruction instead of an implant-based reconstruction because of the lower rate of infection (approximately 7%) with the former procedure. Similar to other methods of prevention, the use of preprocedural systemic antimicrobials has been proven to significantly reduce the rate of infection. In addition, following a detailed best-practice standardized protocol has helped reduce the incidence of these complications. Furthermore, periprocedural measures, including antimicrobial irrigation of the pocket and implant immersion, were shown in a meta-analysis to decrease infection rates (risk ratio, 0.52; 95% CI, 0.38–0.81; p = .004), although with a relatively low degree of evidence. These antimicrobial solutions are promptly absorbed, rapidly decreasing their effectiveness. Therefore, similar to antibiotic beads used in orthopedics, we developed a completely bioabsorbable film that allows for full expansion of the temporary breast implant and elutes a high concentration of antibiotic locally for an extended period. This promising film has been shown in vitro to prevent biofilm formation by diverse microorganisms on silicone surfaces with minimal cytotoxicity. Of note, acellular dermal matrices have been increasingly used for surgical reconstruction to allow for lower pole support of the breast implant, enhancing aesthetic outcomes while decreasing operative time. These biologic meshes are available in aseptic or sterile form, with no significant difference in the rate of infection between the two forms. However, they have been associated with an increased incidence of seroma and hematoma and extended durations of surgical drains. These drains likely serve as microbial conduits for pathogens to migrate from the skin to the implant, with an overall risk ratio for infection of 2.47 (95% CI, 1.71–3.57; p = .01). Also, a seroma located between an acellular dermal matrix and an implant is relatively isolated from the host's immune system, likely further increasing the probability of infection. Therefore, the goal is to place these drains through a subcutaneous tunnel and then remove them as soon as possible once daily output is <30 ml, or even earlier, not surpassing 7–14 days of use. Further infection-preventive measures during the early postoperative period include: (1) avoidance of extending postoperative antimicrobial use beyond 24 hours; although extended use is common practice, it does not reduce the rate of infection and leads to the development of multidrug-resistant pathogens; (2) allowing adequate incisional healing before initiating adjuvant bevacizumab use or radiation therapy; (3) proceeding with early expansion of the TE to decrease the size of the seroma pocket, but without significantly increasing the surface tension and causing skin flap necrosis; (4) keeping the surgical bulb at gravity at all times to keep the drained fluid from re-entering the surgical pocket; and (5) consideration of additional techniques, such as using a chlorhexidine-impregnated dressing at the drain exit site and exchanging it weekly, along with a daily antiseptic solution within the surgical bulb, to further decrease bacterial colonization (p = .03) and the likelihood of a secondary infection within 30 days (p = .13) and 1 year (p = .45).

Percutaneous Nephrostomy Tubes and Ureteral Stents

These devices are mainly indicated for temporary or permanent decompression of the urinary tract because of intrinsic or extrinsic malignant obstructions, mainly cervical or colorectal cancers. Ureteral stents are also used temporarily after urinary diversion or ureteral reimplantation surgeries to prevent strictures at the anastomotic site. The definition of these infections is not standardized, but rates are reported to be 1%–19% for PCNTs and 11% for ureteral stents. Using a stringent clinical and microbiologic definition, we found at our institution that the infection rate in patients with newly placed PCNTs was 14%, with an infection incidence of 2.65 per 1000 patient-days. These infections occur early, with a median time from PCNT placement to infection of 44 days (interquartile range, 25–61 days).
These devices can be readily colonized and infected by lower urinary tract pathogens acquired during or after their placement, including Pseudomonas, Escherichia, Stenotrophomonas, Klebsiella, and Enterococcus spp., with up to 50% of infections being polymicrobial, or by normal skin flora at the PCNT exit site. Similar to Foley catheter-related infections, the main risk factor for these infections is the length of time the device remains in place. Therefore, periodically reassessing the need for these devices to determine whether their removal is possible is the best approach to preventing these infections. The use of preprocedural antimicrobials with these clean–contaminated procedures is indicated for elective PCNT and ureteral stent placement and exchange. Prophylaxis with cefazolin, which focuses mainly on skin flora, was not beneficial for patients receiving PCNTs. However, when ceftriaxone or ampicillin/sulbactam was used to cover expected uropathogens, the rate of serious postprocedural sepsis-related complications in high-risk patients decreased from 50% to 9%. For patients receiving ureteral stents who are considered to be at high risk for infection (those who are immunocompromised, have had recurrent urinary tract infections, have uncontrolled diabetes, or have a history of infected renal stones), we usually administer ciprofloxacin or trimethoprim-sulfamethoxazole prophylaxis, or intravenous antimicrobials for patients undergoing complex surgery that requires a high level of instrumentation under general anesthesia. A targeted prophylactic approach based on the growth of colonizing organisms in a urine culture obtained a few days before a scheduled exchange appeared to have a more protective effect than standard-of-care prophylactic antimicrobials, but larger studies with supporting evidence are needed. Several approaches to coating these urinary devices to inhibit bacterial adhesion and growth have been evolving. For example, they have been coated with diverse antibiotics as well as chitosan, gendine, hyaluronic acid, hydrogel, silver, triclosan, and many other substances. One of the main concerns associated with antibiotic-based coatings, as mentioned above, is a lack of long-term effectiveness and the development of resistance. Therefore, combination regimens that reduce the probability of resistance, including minocycline-, rifampin-, and chlorhexidine-impregnated catheters, have been developed. Unfortunately, because of their high cost of production, potential toxicity, and a lack of adequate clinical studies, these catheters have yet to be introduced into practice. Postprocedural preventive strategies, including maintaining a clean exit site with antiseptic use, regular dressing exchange, and placement of a closed urinary drainage collection bag below the PCNT insertion site to keep urine from recirculating back into the urinary collection system, may help decrease the rate of infection. Also, concomitant use of Foley catheters with PCNTs and ureteral stents should be avoided when feasible. Furthermore, in patients with frequent exit-site infections, using a chlorhexidine-impregnated dressing and exchanging it weekly should be considered. Moreover, to avoid the development of infections with multidrug-resistant organisms and inappropriate use of antimicrobials, surveillance urinary cultures and treatment of asymptomatic patients should be discouraged.
Finally, bacterial colonization occurs soon after placement of these urinary devices, with subsequent encrustation of debris and solutes and formation of a complex intraluminal biofilm over time. This eventually leads to obstruction of the device, resulting in progressive hydronephrosis, renal failure, and an increased likelihood of pyelonephritis, renal abscess, or even bacteremia. Therefore, routine replacement of the device every 3 months, or even more frequently in patients at high risk for intraluminal obstruction, should be performed, and definitive removal should be attempted when clinically possible. The average cost of $3000 per procedure is considerably lower than the approximately $40,000 cost of treating each episode of these almost inexorable infectious events.
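That cost comparison can be framed per patient-year. The sketch below is arithmetic on the cited $3000 and $40,000 figures; the quarterly exchange schedule follows the 3-month recommendation, and the break-even framing is illustrative rather than a published analysis.

```python
# Annualized framing of routine PCNT/stent exchange versus treating
# device-related infections, using the per-procedure and per-episode costs
# cited in the text. The one-year horizon is an illustrative assumption.

exchange_cost = 3_000    # cited average cost per exchange procedure
infection_cost = 40_000  # cited approximate cost per infection episode

exchanges_per_year = 4   # exchange every 3 months
annual_exchange_cost = exchanges_per_year * exchange_cost  # $12,000

# Routine exchange pays for itself if it averts at least this many
# infection episodes per patient-year:
break_even_episodes = annual_exchange_cost / infection_cost
print(f"Annual exchange cost: ${annual_exchange_cost:,}")
print(f"Break-even: {break_even_episodes:.2f} infections averted per year")  # 0.30
```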
Other Relevant Devices

Many additional implantable devices have been used to support and improve the quality of life of patients living with advanced cancers, including pleural and peritoneal drains, esophageal and biliary stents, and PEG and percutaneous cholecystostomy tubes. Unfortunately, data on preventing infections of these devices are limited, mainly because of the relatively low infection rates and short life spans of patients receiving these implants, usually for palliative purposes. However, below we describe several agreed-upon recommendations for the prevention of infections of these devices. Preprocedural prophylactic antimicrobials are not needed for routine procedures classified as clean, such as esophageal stent and pleural or peritoneal drain placement, or for biliary stent insertion with resolution of an obstruction. However, PEG tube placement, which is considered a clean–contaminated procedure, has been associated with a significant reduction in the incidence of peristomal infection when prophylactic cefazolin was administered (odds ratio, 0.36; 95% CI, 0.26–0.50). Also, percutaneous cholecystostomy tubes are usually placed in patients with cholecystitis; hence, placement of these tubes is considered a contaminated procedure, for which antimicrobials with enteric coverage, such as ampicillin/sulbactam, are warranted if the patient is not already receiving another antibiotic. Similar to other procedures, expert consensus statements agree that physicians should use an exclusive operating or procedural room during insertion of these devices, an adequate local antiseptic, and full sterile body draping and sterile gloves, as well as continuously educate health care personnel and follow standardized institutional protocols. In addition, authors have described three noteworthy measures for the prevention of biliary stent-related infections. (1) Use of a disposable single-use duodenoscope for placement of biliary stents: Because of the complicated design of the reusable duodenoscopes used for biliary stent placement, cleaning them under standard sterilization protocols is challenging, which has led to several outbreaks of multidrug-resistant bacterial infections. Until better processes for duodenoscope cleaning are developed, the clinician must rely on personal judgment and infection control reports to detect outbreaks. Therefore, for patients at high risk for infectious complications, or during ongoing outbreaks, the use of disposable single-use duodenoscopes should be considered. (2) Plastic stents versus covered and uncovered biliary self-expandable metal stents (SEMSs): The choice of stent should be individualized for each patient. Plastic stents are less expensive than SEMSs, but they have a smaller diameter (about one third that of SEMSs). This can result in more rapid biliary sludge accumulation and bacterial biofilm proliferation, leading to occlusion and eventually increasing the rate of recurrent infections. Hence, plastic stents require routine exchange every 3 months and are therefore indicated for patients with a life expectancy of ≤3 months. SEMSs, conversely, integrate into the biliary tract and become very difficult to remove. To circumvent this complication, fully and partially silicone-covered and polytetrafluoroethylene-covered SEMSs have been developed; these maintain a large luminal patency, decrease tissue embedding, and can be easily removed, and removal in the setting of infection has been shown to significantly decrease the rate of recurrent cholangitis. Nonetheless, the main limitation of covered SEMSs remains migration, which occurs in about 10% of cases. Taking all this into account, no differences in the rate of infection have been observed between covered and uncovered SEMSs, whereas a series of meta-analyses demonstrated substantially lower sepsis and cholangitis rates with SEMSs than with plastic stents (odds ratio, 0.53; 95% CI, 0.37–0.77). (3) Surface modification of biliary stents with silver ions: This promising technology has been shown both in vitro and in animal models to significantly decrease biofilm formation and increase stent patency. Hopefully, the use of these antimicrobial surface modification technologies, which have been used successfully with intravenous catheters, will continue to grow and expand to other devices and eventually be introduced into clinical practice in the near future. Postprocedural infection-preventive recommendations mainly consist of maintaining a clean external drain with the use of soap and water or hydrogen peroxide and covering the drain exit site with a sterile dressing. Also, a PEG tube requires daily rotation of 360 degrees, both clockwise and counterclockwise, to prevent pressure ulcers from forming between the abdominal and gastric walls, which lead to tissue necrosis and infection. Furthermore, patients receiving biliary stents should not receive long-term postprocedural ciprofloxacin for the prevention of biliary stent blockage because this intervention has not been proven to improve stent patency or infection rates. Most importantly, all patients should have an instruction booklet, access to an institutional hotline, and regular clinical follow-up, according to institutional guidelines, with a provider experienced in the long-term use and management of infectious complications of these devices.
These devices include nontunneled and tunneled centrally inserted central catheters, peripherally inserted central catheters, as well as totally implantable venous access devices. , These central venous devices, which are used in at least 4 million patients in the United States and are left in place for several months, are essential lifelines for patients living with cancer. However, CVADs are associated with a wide array of infectious complications, including localized exit-site infections, tunnel-related or pocket-related infections, and life-threating catheter-related bloodstream infections (CRBSIs). The infection rates of the latter vary significantly among different clinical settings, but it has been estimated that, in the oncological population, it is approximately 2.5 per 1000 catheter-days. Femorally inserted central catheters have the highest risk of infection, followed by centrally inserted central catheters, peripherally inserted central catheters, and totally implantable venous access devices. In addition, patients receiving chemotherapy, total parenteral nutrition, or who are neutropenic for a prolonged period of time will be at increased risk for infection. The pathogens that most frequently are responsible for CRBSIs are gram-positive bacteria, in particular, coagulase-negative staphylococci, S. aureus , and Enterococcus species, whereas gram-negative microorganisms account for approximately 20%. , The average cost per episode of CRBSI is $45,814 (95% CI, $30,919–$65,245), making CRBSI one of the costliest health care-associated infections. CVADs have four main routes of contamination that are the targets of infection-preventive measures: (1) migration of skin organisms at the insertion site, resulting in bacterial adhesion to the external or intraluminal surface of the device; (2) direct contamination by contact with hands or contaminated fluids or devices; (3) less commonly, catheters may become hematogenously seeded from another focus of infection; and (4) rarely, infusate contamination may lead to a CRBSI. Therefore, several well established, evidence-based recommendations of a bundle approach have been designed to mitigate the risk for infection. 
This bundle intervention includes the implementation of specific steps during both the insertion and the maintenance of central lines : (1) educating and designating only trained health care personnel; (2) hand hygiene and the use of sterile gloves before catheter insertion; (3) the use of alcohol-containing CHG for skin antisepsis before insertion and during dressing change; (4) maximal sterile barrier precautions, including the use of a cap, mask, gown, and sterile full-body drape; (5) avoiding the use of systemic antimicrobial prophylaxis; (6) preferring an infraclavicular rather than a supraclavicular or groin exit site; (7) selecting a CVAD with the minimum number of lumens and to be used for the fewest days necessary for management of the patient; (8) implementation of ultrasound guidance to reduce the number of catheter placement attempts; (9) choosing a suture-less securement device with needle-less connectors; (10) placing a sterile, transparent dressing over the insertion site and replacing it no more than once a week (unless the dressing is soiled or loose); (11) avoiding submerging the catheter in water or using topical antimicrobial ointments at insertion sites as well as not replacing the CVAD to prevent CRBSI, but replacing the administration set and needle-less connectors at least every 7 days assuming the patient has not received blood, blood products, or fat emulsions, in which case they must be replaced within 24 hours after the infusion; and (12) most importantly, it is encouraged to have collaborative-based performance-improvement initiatives. These interventions require a designated physician and nursing team leader along with a checklist to assess compliance with the elements of the bundle and empowerment to stop the procedure if protocols are not followed. If compliance with all components is high, the bundle approach has reported a statistically significant decrease in the rate of CRBSI of 66% ( p < .002). The American Society of Clinical Oncology has high-lighted the importance of CRBSIs and emphasized the need for more research targeting patients with cancer, mainly because the majority of studies have focused on patients who have indwelling CVADs for a short term, such as in intensive care units. However, based on the available literature, several additional CRBSI-preventive measures can be instituted. Simple and inexpensive interventions (<$10 per unit) in which CRBSI remains elevated despite maximum compliance with the aforementioned measures are the use of 70% isopropyl alcohol caps for needle-less connectors and the placement of a chlorhexidine-impregnated dressing , around the catheter insertion site and exchanging it every 7 days. These two interventions have been effective in reducing the incidence of intraluminal and extraluminal infections, respectively. Furthermore, the introduction of US Food and Drug Administration (FDA)-approved antimicrobial-impregnated catheters (AICs) has added an extra layer of CRBSI prevention. The use of these AICs is associated with a markedly lower rate of catheter colonization and CRBSI compared with non-AICs. , Cost-effectiveness assessments of these relatively inexpensive devices have justified their integration into clinical practice. Of the most commonly used AICs, minocycline/rifampin-impregnated catheters have been associated with lower rates of CRBSI than chlorhexidine/silver sulfadiazine-impregnated catheters (0.3% vs. 
3.4%; p < .002) , without an increased incidence of antibacterial resistance of Staphylococcus species. Moreover, AICs ensure protection for a limited time, ranging from 28 to 50 days in the setting of a minocycline/rifampin-impregnated catheter, which contrasts with an average of 7 days in the setting of a chlorhexidine/silver sulfadiazine-impregnated catheter. – Therefore, the use of antimicrobial lock solutions has been proposed as a method of preventing intraluminal CRBSI of CVADs that are projected to remain in place for an extended duration, especially in patients with a history of multiple CRBSIs. A meta-analysis of randomized controlled trials comparing antimicrobial lock solutions with heparin revealed a 69% reduction in the incidence of CRBSIs. These antimicrobial lock solutions can be created with numerous drugs and drug combinations. The simplest lock solutions are those formulated with ethanol, which was revealed in another meta-analysis of randomized controlled trials to significantly decrease CRBSI compared with heparin alone (odds ratio, 0.53; p = .004). However, ethanol concentrations and antimicrobial lock solution dwell times are not standardized. Also, ethanol concentrations >28% should be avoided because they lead to plasma protein precipitation and structural changes in CVADs, mainly polyurethane catheters. Other antimicrobial lock solutions, such as the chelators citrate and EDTA, have gained attention because they have excellent anticoagulant activity, prevent biofilm formation, have antimicrobial characteristics, and inhibit bacterial proliferation, whereas heparin may anecdotally enhance biofilm growth. The use of a combined antimicrobial chelator lock solution, such as minocycline–EDTA and taurolidine–citrate, has led to remarkable progress in preventing CRBSIs in patients who have cancer. , Another promising antimicrobial lock solution is nitroglycerin–citrate–ethanol, a nonantibiotic chelator combination. This lock solution is safe and has unique features of an active anticoagulant, no risk of triggering bacterial resistance, and the ability to disrupt biofilm. These findings were validated in a clinical study that evaluated patients with hematological malignancies and showed a considerable reduction in the incidence of CRBSIs. Although these lock solutions are well studied, currently, there are no FDA-approved lock formulations commercially available for which they are prepared locally in hospital pharmacies. The components of the antimicrobial lock solutions are usually generic, economical, and effective in preventing thrombosis and CRBSIs. However, their beneficial use in preventing infections must be balanced with potential breaches in catheter integrity, bacterial resistance, systemic toxicity, frequent antimicrobial lock solution exchanges (depending on the stability of each component of the solution), and inability to use the CVAD while the lock solution is dwelling.
The indications for permanent pacemakers, implantable cardiac defibrillators, and cardiac resynchronization therapy, collectively known as CIEDs, are extensive. The cardiotoxicity of some cancer therapies and the rising average age of the oncological population have increased the need for these devices. In the United States, more than 100,000 implantable cardiac defibrillators and 300,000 permanent pacemakers are inserted every year. Unfortunately, the rates of CIED infections have been reported to be approximately 4%, with a disproportionate increase in these rates compared with the increase in CIED implantation. The most common microorganisms causing CIED infections are expected skin flora, such as coagulase-negative staphylococci (38%), S. aureus (31%), and other pathogens, including gram-negative bacteria (9%). , Infections of these devices necessitate the extraction of all CIED components (generator and leads), increasing the mean hospitalization charges in the United States to $173,211, with overall in-hospital mortality rates ranging from 3.7% to 11.3%. Several modifiable and nonmodifiable patient-related, procedure-related, and device-related risk factors for CIED infections have been identified. These risk factors are common in the oncological population and have been compiled in various stratification scores. On the basis of these scoring systems, patients who have cancer are usually at intermediate to high risk for developing a CIED infection. The Prevention of Arrhythmia Device Infection Trial ( ClinicalTrials.gov identifier NCT01628666 ) score , is one of the most commonly used scoring systems because it is simple and has been independently validated to identify high-risk patients who may benefit from tailored strategies to reduce the risk of CIED infection. For patients with several nonmodifiable risks, alternative approaches may be used to lower the overall risk of infection, including confirming the indication for CIED use and consideration of a leadless CIED. , In addition to the general surgical recommendations described above, the identification of modifiable risk factors is important because it may allow for further preventive measures to reduce the risk of CIED infection. These include preventive preprocedural measures supported by scientific consensus, such as: (1) provision of perioperative systemic antimicrobials ; (2) use of a preoperative checklist , ; (3) delay of CIED implantation in patients with infection or fever for at least 24 hours; (4) avoidance of CVADs when introducing a CIED, when feasible ; and (5) measures to decrease the risk of pocket hematoma (increasing platelet count to >50,000/μl, discontinuation of antiplatelet medications within 5–10 days before the procedure, avoidance of therapeutic low-molecular-weight heparin and a bridging approach with heparin, and holding of anticoagulation therapy until the risk of bleeding has diminished in patients with a history of deep venous thrombosis or CHA 2 DS 2 -VASc score <4). The latter three measures are commonly encountered in the cancer population and should be closely addressed. 
Perioperative recommendations for the prevention of CIED infections include: (1) consideration of adding an acellular dermal matrix within the surgical pocket to reinforce the incision site, (2) avoidance of antimicrobial irrigation within the pocket, and (3) use of an antimicrobial envelope (such as TYRX; Medtronic) that locally releases a high concentration of minocycline and rifampin within the surgical pocket for a minimum of 7 days in patients at high-risk for developing CIED infection. The World-wide Randomized Antibiotic Envelope Infection Prevention Trial ( ClinicalTrials.gov identifier NCT02277990 ) demonstrated that the use of these envelopes significantly reduced the primary end point (infection resulting in CIED extraction or revision, long-term antibiotic therapy, or death within 12 months of device placement) from 1.2% (control) to 0.7% (envelope; hazard ratio, 0.6; p = .04). The number needed to treat was 100 for high-risk patients undergoing implantable cardiac defibrillator/cardiac resynchronization therapy defibrillator replacement or upgrade. However, this trial excluded patients at increased risk for infection, such as those with prior CIED infection, those receiving immunosuppressive therapy, those with long-term vascular access, or patients undergoing hemodialysis. Therefore, selecting a high-risk population for infection, such as an oncological population with several risk factors, would likely decrease the number needed to treat and improve the cost effectiveness of the envelope, which is priced slightly below $1000. , At our institution, all patients who have cancer receive the TYRX envelope as part of a comprehensive prophylactic bundle, which has been demonstrated to be both safe and effective in maintaining a low rate of CIED infection (1.3%) and is well within published averages in the broader population of all CIED recipients. Of note, few studies have evaluated novel techniques for decreasing microbial adherence to CIEDs. Polyurethane has been shown to have a higher affinity for biofilm-producing pathogens than titanium in vitro. Therefore, increasing the titanium:polyurethane surface ratio of these cardiac devices may decrease the rate of CIED infection. Furthermore, the use of silver ion-based antimicrobial surface technology for the reduction of bacterial growth on CIEDs was shown to be safe in an ovine model. However, CIED surface modification techniques are unlikely to progress because of the complexity of the regulatory approval pathways, the diversity of CIED models and manufacturing companies worldwide, and the availability of more cost-effective preventive measures already approved by the FDA, such as antimicrobial envelopes. Furthermore, postprocedural prophylactic measures in CIED recipients include: (1) the use of pressure dressings to decrease hematoma occurrence and hemostatic gelatin sponges in patients receiving anticoagulation or dual antiplatelet therapy ; (2) refraining from early reintervention, which dramatically increases the risk of CIED infection ; and (3) avoidance of postoperative antimicrobials. The last measure was confirmed in the Prevention of Arrhythmia Device Infection Trial, which included 19,603 patients and revealed no benefit from an incremental approach (preoperative intravenous vancomycin or cefazolin plus intraoperative bacitracin wash and postoperative oral cephalosporin) over the conventional approach (single dose of preoperative cefazolin or vancomycin; odds ratio, 0.77; p = .1).
An Ommaya reservoir, a small, dome-shaped, subgaleal reservoir connected to an intraventricular catheter, is the preferred device for intrathecal infusion of chemotherapy in patients with leptomeningeal cancer ; whereas EVDs are used for temporary diversion of cerebrospinal fluid (CSF) from an obstructed ventricular system in cases of acute hydrocephalus, monitoring of intracranial pressure, and as part of the treatment approach for infected CSF shunts. These devices can become infected, manifesting as a local skin soft tissue inflammatory infectious process or with meningitis and ventriculitis at a rate of 6% for Ommaya reservoirs and 8% for EVDs. , Concomitant bloodstream infections have been identified in 7.5%–12% of Ommaya reservoir infections. , The overall incidence of infection in previous studies was 0.74 per 10,000 Ommaya reservoir-days and 11.4 per 10,000 EVD-days. These infections usually occur soon after the time of placement or later through retrograde spread by exit-site colonization or direct inoculation through device manipulation. , The main risk factor for Ommaya reservoir infections is the frequency of CSF sampling, whereas the main risk factors for EVD infections include prolonged catheterization, subarachnoid hemorrhage, drain blockage, and CSF leakage at the EVD entry site. , – The most common organisms causing Ommaya reservoir infections are predominantly normal skin flora, including Staphylococcus spp. and Cutibacterium acnes ; whereas EVD infections are increasingly caused by gram-negative rods, such as Escherichia coli , Pseudomonas aeruginosa , and Enterobacter , Acinetobacter , and Klebsiella species. , , Preprocedural use of antimicrobials such as cefazolin is necessary to reduce the rate of SSIs and central nervous system infections in patients with Ommaya reservoirs and EVDs. Perioperative chlorhexidine shampoo and hair clipping, with special care to avoid causing skin abrasions, also should be implemented. In addition, an Ommaya reservoir should be placed under a skin flap that allows for implantation at a safe distance from the incision site. Furthermore, despite few studies with mixed results, at institutions with high rates of infections, the use of subcutaneous long-tunneling EVDs to the chest wall can be considered. Moreover, silver-coated and, more recently, minocycline- and rifampin-impregnated catheters have proven to be cost-effective in significantly reducing the rate of infection in EVDs (risk ratio, 0.31; 95% CI, 0.15–0.64; p = .0002). However, another study did not show an additional benefit of using AICs, likely because of a small sample size. Similar to other devices, studies have shown an advantage with the prolonged use of postprocedural antibiotics as long as an EVD remains in place compared with no postoperative antimicrobial use (3% vs. 11%; p = .01). Other preventative interventions, including the use of a daily prophylactic bundle plus intraventricular amikacin, also had encouraging results. However, because these were relatively small studies with the potential for drug-related toxic effects and development of multidrug-resistant pathogens, these findings should be verified in large, multicenter, randomized controlled studies. Other interventions, such as routine EVD exchange, should not be performed because they have not been shown to reduce the rate of infection. , Also, frequent CSF analysis with cultures at each use may detect preclinical infections with C. acnes or staphylococci. 
However, these results must be interpreted with caution because these pathogens may also be contaminants. Once an Ommaya reservoir or an EVD has been placed, the risk of infection can be minimized through the use of institutional protocols established for ensuring safe, sterile access of the device by only highly qualified personnel. Minimal manipulation of the device, minimizing the number of days the device remains in situ, and implementing an infection control protocol have all been shown to decrease the incidence of these infections. – The introduction of an EVD care bundle that includes a standardized technique of hand washing for aseptic CSF sampling, the use of surgical theater-standard scrubs and preparations, and cleaning the EVD access ports while wearing a mask and gloves significantly decreased the rate of infection from 21 to 9 cases per 1000 EVD-days ( p = .003). , In a meta-analysis, the addition of a chlorhexidine-impregnated dressing to the catheter exit site significantly reduced the incidence of EVD infections (7.9% vs. 1.7%; risk difference, 0.07; 95% CI, 0.0–0.13; p = .04). , Similar bundled approaches for the prevention of Ommaya reservoir infections have been successful. Hence, because of the difficulty in assessing the effectiveness of each individual component and based on the relatively low cost, to further reduce the rate of these infections, we recommend the continued use of these preventive bundles.
Breast cancer is the most common cancer worldwide, with a 5-year survival rate >90%. In 2021, the American Society of Plastic Surgeons reported that 103,485 postmastectomy implant-based reconstructive procedures were performed in the United States. Some of the patients who underwent these procedures had direct-to-implant reconstruction (one-step approach), whereas >80% had implantation of a temporary TE; once a sufficiently large soft tissue envelope was created, the TE was replaced by a permanent breast implant (two-step approach). Unfortunately, the average TE infection rate is high at 13%. These infections occur mostly in the early postoperative period, with one third occurring within the first 30 days after surgery (median, 48 days). The most common bacteria causing TE infections are methicillin-resistant staphylococci (44%) and gram-negative pathogens (26%), including Pseudomonas (13%) and Klebsiella (5%) spp. In addition to the traditional risk factors for infection, patients with breast TEs have several unique risk factors, including a body mass index >25 kg/m 2 , breast cup size >C, prior breast implant infection, bilateral or immediate breast reconstruction, axillary lymph node resection, use of an acellular dermal matrix, extended duration of surgical drains, mastectomy skin flap necrosis, breaks in the sterility process of TE implant infusions, and use of adjuvant chemotherapy and radiation therapy. , Patients at high risk for infection should consider proceeding with an autologous flap reconstruction instead of an implant-based reconstruction because of the lower rate of infection (approximately 7%) with the former procedure. Similar to other methods of prevention, the use of preprocedural systemic antimicrobials has proven to significantly reduce the rate of infection. In addition, following a detailed best-practice standardized protocol has helped reduce the incidence of these complications. , Furthermore, periprocedural measures, including antimicrobial irrigation of the pocket and implant immersion, were shown in a meta-analysis to decrease infection rates (risk ratio, 0.52; 95% CI, 0.38–0.81; p = .004), although with a relatively low degree of evidence. These antimicrobial solutions are promptly absorbed, rapidly decreasing their effectiveness. Therefore, similar to antibiotic beads used in orthopedics, we developed a completely bioabsorbable film that allows for full expansion of the temporary breast implant and elutes a high concentration of antibiotic locally for an extended period. This promising film has been shown in vitro to prevent biofilm formation by diverse microorganisms on silicone surfaces with minimal cytotoxicity. Of note, acellular dermal matrices have been increasingly used for surgical reconstruction to allow for lower pole support of the breast implant, enhancing aesthetic outcomes while decreasing operative time. These biologic meshes are available in aseptic or sterile form, with no significant difference in the rate of infection between the two forms. However, they have been associated with increased incidence of seroma and hematoma and extended durations of surgical drains. These drains likely serve as microbial conduits for pathogens to migrate from the skin to the implant, with an overall risk ratio for infection of 2.47 (95% CI, 1.71–3.57; p = .01). Also, a seroma located between an acellular dermal matrix and an implant is relatively isolated from the host’s immune system, likely further increasing the probability of infection. 
Therefore, the goal is to place these drains through a subcutaneous tunnel and then remove them as soon as possible once daily output falls below 30 ml, or even earlier, not surpassing 7–14 days of use. Further infection preventive measures during the early postoperative period include (1) avoidance of extending postoperative antimicrobial use beyond 24 hours; although extended use is common practice, it does not reduce the rate of infection and leads to the development of multidrug-resistant pathogens; (2) allowing adequate incisional healing before initiating adjuvant bevacizumab use or radiation therapy; (3) proceeding with early expansion of the TE to decrease the size of the seroma pocket but without significantly increasing the surface tension and causing skin flap necrosis; (4) keeping the surgical bulb at gravity at all times to keep the drained fluid from re-entering the surgical pocket; and (5) consideration of additional techniques, such as using a chlorhexidine-impregnated dressing at the drain exit site and exchanging it weekly along with a daily antiseptic solution within the surgical bulb, which further decreased bacterial colonization (p = .03) and showed nonsignificant trends toward fewer secondary infections within 30 days (p = .13) and 1 year (p = .45).
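As context for the meta-analytic statistics quoted above, the following is a minimal sketch of how a risk ratio and its 95% confidence interval are computed from 2×2 counts using the standard log-normal (Katz) approximation; the counts here are hypothetical and are not taken from any cited study.

```python
import math

def risk_ratio_ci(events_exp, n_exp, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a 95% CI via the log-normal (Katz) approximation."""
    rr = (events_exp / n_exp) / (events_ctrl / n_ctrl)
    # Standard error of log(RR)
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts: 18/300 infections with antimicrobial irrigation vs 35/300 without
rr, (lo, hi) = risk_ratio_ci(18, 300, 35, 300)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR = 0.51 (95% CI, 0.30-0.89)
```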
These devices are mainly indicated for temporary or permanent decompression of the urinary tract because of intrinsic or extrinsic malignant obstructions, mainly cervical or colorectal cancers. Ureteral stents are also used temporarily after urinary diversion or ureteral reimplantation surgeries to prevent strictures at the anastomotic site. The definition of these infections is not standardized, and their reported incidence ranges from 1% to 19% for percutaneous nephrostomy tubes (PCNTs) and is approximately 11% for ureteral stents. Using a stringent clinical and microbiologic definition at our institution, we found that the infection rate in patients with newly placed PCNTs was 14%, with an infection incidence of 2.65 per 1000 patient-days. These infections occur early, with a median time from PCNT placement to infection of 44 days (interquartile range, 25–61 days). These devices can be readily colonized and infected by lower urinary tract pathogens acquired during or after their placement, including Pseudomonas, Escherichia, Stenotrophomonas, Klebsiella, and Enterococcus spp., with up to 50% of infections being polymicrobial or caused by normal skin flora at the PCNT exit site. Similar to Foley catheter-related infections, the main risk factor for these infections is the length of time the device remains in place. Therefore, periodically reassessing the need for these devices to determine whether their removal is possible is the best approach to prevent these infections. The use of preprocedural antimicrobials with these clean–contaminated procedures is indicated for elective PCNT and ureteral stent placement and exchange. Prophylaxis with cefazolin, which focuses mainly on skin flora, was not beneficial for patients receiving PCNTs. However, when ceftriaxone or ampicillin/sulbactam was used to cover expected uropathogens, the rate of serious postprocedural sepsis-related complications decreased in high-risk patients from 50% to 9%. For patients receiving ureteral stents who are considered to be at high risk for infection (those who are immunocompromised, have had recurrent urinary tract infections, have uncontrolled diabetes, or have a history of infected renal stones), we usually administer ciprofloxacin or trimethoprim-sulfamethoxazole prophylaxis, or intravenous antimicrobials for patients undergoing complex surgery that requires a high level of instrumentation under general anesthesia. A targeted prophylactic approach based on the growth of colonizing organisms in a urine culture obtained a few days before a scheduled exchange appeared to have a more protective effect than standard-of-care prophylactic antimicrobials, but larger supporting studies are needed. Several approaches to coating these urinary devices to inhibit bacterial adhesion and growth have been evolving. For example, they have been coated with diverse antibiotics as well as chitosan, gendine, hyaluronic acid, hydrogel, silver, triclosan, and many other substances. One of the main concerns associated with antibiotic-based coatings, as mentioned above, is a lack of long-term effectiveness and the development of resistance. Therefore, combination regimens that reduce the probability of resistance, including minocycline-, rifampin-, and chlorhexidine-impregnated catheters, have been developed. Unfortunately, because of their high production cost, potential toxicity, and a lack of adequate clinical studies, these catheters have yet to be introduced into practice.
Postprocedural preventive strategies, including maintaining a clean exit site area with antiseptic use, regular dressing exchange, and placement of a closed urinary drainage collection bag below the PCNT insertion site to keep urine from recirculating back into the urinary collection system, may help decrease the rate of infection. Also, concomitant use of Foley catheters with PCNTs and ureteral stents should be avoided when feasible. Furthermore, in patients with frequent exit site infections, using a chlorhexidine-impregnated dressing and exchanging it weekly should be considered. Moreover, to avoid the development of infections with multidrug-resistant organisms and inappropriate use of antimicrobials, surveillance urinary cultures and treatment of asymptomatic patients should be discouraged. Finally, bacterial colonization occurs soon after placement of these urinary devices, with subsequent encrustation of debris and solutes and formation of an intraluminal complex biofilm over time. This eventually leads to obstruction of the device, resulting in progressive hydronephrosis, renal failure, and an increased likelihood of pyelonephritis, renal abscess, or even bacteremia. Therefore, routine replacement of the device every 3 months (or more frequently in patients at high risk for intraluminal obstruction) should be performed, and definitive removal should be attempted when clinically possible. The average cost of $3000 per procedure is considerably lower than the approximately $40,000 cost of treating each episode of these otherwise almost inevitable infectious events.
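As a rough consistency check on the figures above (our arithmetic, assuming a constant infection hazard), the reported rate of 2.65 infections per 1000 patient-days and the 14% cumulative infection rate are mutually consistent for an average observation period of roughly two months:

\[
P(t) = 1 - e^{-\lambda t}, \qquad \lambda = 2.65 \times 10^{-3}\ \text{per patient-day},
\]
\[
P(56\ \text{days}) = 1 - e^{-0.00265 \times 56} \approx 0.14.
\]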
Many additional implantable devices have been used to support and improve the quality of life of patients living with advanced cancers, including pleural and peritoneal drains, esophageal and biliary stents, and percutaneous endoscopic gastrostomy (PEG) and percutaneous cholecystostomy tubes. Unfortunately, data on preventing infections of these devices are limited, mainly because of the relatively low infection rates and short life spans of patients receiving these implants, usually for palliative purposes. However, below, we describe several agreed-upon recommendations for the prevention of infections of these devices. Preprocedural prophylactic antimicrobials are not needed for routine procedures classified as clean, such as esophageal stent and pleural or peritoneal drain placement, or for biliary stent insertion with resolution of an obstruction. However, PEG tube placement, which is considered a clean–contaminated procedure, has been associated with a significant reduction in the incidence of peristomal infection when prophylactic cefazolin was administered (odds ratio, 0.36; 95% CI, 0.26–0.50). Also, percutaneous cholecystostomy tubes are usually placed in patients with cholecystitis; hence, placement of these tubes is considered a contaminated procedure, for which antimicrobials with enteric coverage, such as ampicillin/sulbactam, are warranted if the patient is not already receiving another antibiotic. Similar to other procedures, consensus statements by experts agree that physicians should use an exclusive operating or procedural room during insertion of these devices, an adequate local antiseptic, and fully sterile body draping and sterile gloves, that health care personnel should be continuously educated, and that standardized institutional protocols should be followed. In addition, authors have described three noteworthy measures for the prevention of biliary stent-related infections. (1) Use of a disposable single-use duodenoscope for placement of biliary stents: Because of the complicated design of the reusable duodenoscopes used for biliary stent placement, cleaning them under standard sterilization protocols is challenging, which has led to several outbreaks of multidrug-resistant bacterial infections. Until better processes for duodenoscope cleaning are developed, the clinician must rely on personal judgment and infection control reports to detect outbreaks. Therefore, for patients at high risk for infectious complications or during ongoing outbreaks, the use of disposable single-use duodenoscopes should be considered. (2) Plastic stents versus covered and uncovered biliary self-expandable metal stents (SEMSs): The use of these stents should be individualized for each patient. Plastic stents are less expensive than SEMSs, but they have a smaller diameter (about one third that of SEMSs). This can result in more rapid biliary sludge accumulation and bacterial biofilm proliferation, leading to occlusion and eventually increasing the rate of recurrent infections. Hence, plastic stents require routine exchange every 3 months and are therefore indicated for patients with a life expectancy of ≤3 months. SEMSs, conversely, integrate into the biliary tract and become very difficult to remove.
To circumvent this complication, fully and partially silicone-covered and polytetrafluoroethylene-covered SEMSs have been developed; these maintain a large luminal patency, decrease tissue embedding, and can be easily removed, which is particularly valuable if the patient develops an infection because removal has been shown to significantly decrease the rate of recurrent cholangitis. Nonetheless, the main limitation of covered SEMSs remains stent migration, which occurs in about 10% of cases. Taking all this into account, no differences in the rate of infection between covered and uncovered SEMSs have been demonstrated, whereas a series of meta-analyses demonstrated substantially lower sepsis and cholangitis rates with SEMSs than with plastic stents (odds ratio, 0.53; 95% CI, 0.37–0.77). (3) Surface modification techniques for biliary stents with silver ions: This promising technology has been shown both in vitro and in animal models to significantly decrease biofilm formation and increase stent patency. Hopefully, the use of these antimicrobial surface modification technologies, which have been successfully used with intravenous catheters, will continue to grow and expand to other devices and eventually be introduced into clinical practice in the near future. Postprocedural infection preventive recommendations mainly consist of maintaining a clean external drain with the use of soap and water or hydrogen peroxide and covering the drain exit site with a sterile dressing. Also, a PEG tube requires daily 360-degree rotation, both clockwise and counterclockwise, to prevent pressure ulcers from forming between the abdominal and gastric walls, which can lead to tissue necrosis and infection. Furthermore, patients receiving biliary stents should avoid long-term postprocedural ciprofloxacin for the prevention of biliary stent blockage because this intervention has not been proven to improve stent patency or infection rates. Most importantly, all patients should have an instruction booklet, access to an institutional hotline, and regular clinical follow-up according to institutional guidelines with a provider experienced in the long-term use and management of infectious complications of these devices.
Continued progress in implementation science research has led to several improvements in effective health care-associated infection prevention strategies. However, persistent gaps between recommendations and practices remain. The involvement of several key stakeholders, including governmental policy makers, the research and development industry, specialty medical societies, hospital infection control programs, surgeons, oncologists, and consulting health care providers, is paramount for continued reduction in the incidence of preventable foreign medical device-related infections. Advancement in this intricate preventive arena will lead to further progress in cancer outcomes and physicians' professional fulfillment, as well as a significant decrease in the economic burden on the health care system.
Investigating the Role of Primary Cilia and Bone Morphogenetic Protein Signaling in Periodontal Ligament Response to Orthodontic Strain In Vivo and In Vitro: A Pilot Study
The variety of receptors present also enables the cilium to mediate multiple signaling pathways simultaneously. Thus, primary cilia act as sensory antennae to detect a wide range of extracellular signals, including signaling molecules and physical perturbations. In the context of mechanotransduction, the primary cilium is believed to bend and open stretch-activated ion channels located at its base, where tension is maximal. It is well established that primary cilia are pivotal in regulating cellular proliferation, differentiation, and signaling, and they exert a profound impact on tissue formation and homeostasis, including that of craniofacial structures. However, the occurrence and functional role of cilia within the PDL remain inadequately explored to date, with the existing literature limited to a small number of studies. Current evidence indicates that primary cilia are present on approximately 70% of PDL cells, exhibit stage- and region-specific morphologies, are implicated in tooth development, and may play a role in mechanotransduction. The random orientation of primary cilia within the ligament provides additional evidence for their mechanosensory function, as the axonemes are strategically positioned to detect physical movement from all directions. While existing research has delineated the critical role of bone morphogenetic protein (BMP) signaling in the formation, maintenance, and repair of periodontal tissues, its precise involvement in the processes of tooth movement and mechanical loading of PDL cells remains unexplored. An expanding body of evidence underscores the complex interplay between ciliary mechanotransduction and BMPs. In this context, signaling pathways associated with BMPs are anticipated to modulate ciliary function in a force-dependent manner, with distinct effects observed under high versus low mechanical forces. To date, the roles of primary cilia and BMP signaling in the PDL cell response to orthodontic movement remain unexamined, whether in vivo or in vitro. This study aimed to fill this gap by exploring the role of primary cilia and BMP signaling in the mechanotransductive response of human PDL cells to OTM. The research hypothesis is that primary cilia act as mechanosensors, modulating BMP signaling to regulate cellular processes during OTM. The investigation examines the presence, structural dynamics, and functional significance of primary cilia in PDL cells exposed to varying mechanical strains, alongside the transcriptional regulation of key BMP signaling molecules. A mechanotransduction pathway is proposed in which primary cilia enhance BMP receptor activation, thereby initiating intracellular signaling cascades that influence PDL cell behavior. This investigation aims to elucidate the mechanistic link between primary cilia and BMP signaling, offering insights into the cellular response to mechanical forces and providing potential therapeutic targets to mitigate complications such as ERR in orthodontics.
Previous studies have indicated the presence of primary cilia in PDL cells; therefore, this investigation first aimed to confirm their existence in primary cultures established from human samples. Analysis revealed that primary cilia were present in PDL cells as early as the first day of cell culture. A temporal increase in the prevalence of primary cilia was noted, and the incidence and length of primary cilia appeared to increase in positive correlation with cell density. This observation aligns with findings in other cell types, where primary cilia exhibit increased prominence as cell density rises. More importantly, this finding substantiated the existence of dynamic primary cilia within the PDL cell preparations, thereby validating these isolations as viable models for investigating mechanotransduction in vitro. Subsequently, the expression of components of the BMP signaling pathway and of the key regulator of primary cilia formation and function, Ift88, was assessed in isolated PDL cells. These analyses confirmed the expression of Ift88 and all core components of the BMP pathway, except for the type I receptor Alk4. To further analyze the cellular response, primary PDL cells were subjected to tensile strain mimicking OTM, and changes in mRNA expression were evaluated after 24 h of stimulation. Multiple strain intensities were tested, as previous studies indicate that cellular responses may vary with strain magnitude. Expression of Cox2, a marker for the cellular response to physical stimulation, was increased significantly only at 10% and 20% strain magnitudes (A,C). Conversely, the primary cilia marker Ift88 showed elevated expression only under low strains of 2.5% and 5%. Overall, changes in BMP signaling components varied greatly with strain magnitude and donor; however, observable trends emerged, which are summarized below. Some components, such as Alk1 and Alk5, exhibited statistically significant magnitude-dependent trends similar to those observed with Cox2 and Ift88 (B,C). Bmp9 expression generally increased at low strain magnitudes. Expression of Smad5, Bmp4, Alk7, Acvr2a, Acvr2b, and Tgfbr2 increased exclusively when cells were subjected to the highest strain of 20%. Smad1 expression was not altered at low strain but significantly decreased at 10% and 20% strain. Interestingly, expression of the ligand Bmp7, the type I receptor Alk2, and the type II receptor Bmpr2 was consistently increased across strain magnitudes (B), suggesting that Bmp7, Alk2, and Bmpr2 are candidate components in the cellular response to mechanical loading of the PDL. The table summarizes trends in mRNA expression of BMP signaling superfamily components (including signaling transcription factors, ligands, and receptors) and of known markers for the cellular response to mechanical stimulation (Cox2) and primary cilia (Ift88). Components that are not expressed in PDL cells are highlighted in pink. Gray shading indicates no observed change between stretched and static control samples. Three genes that were consistently upregulated at all levels of tensile strain (Bmp7, Alk2, and Bmpr2) are highlighted in yellow. Sample size: n = 3–4 per group. The in vitro results of this study support the involvement of primary cilia and BMP signaling in the PDL cell response to strain. To determine whether these components are involved in OTM, primary cilia and BMP signaling were visualized in the PDL of human teeth under static or movement conditions in vivo.
In control teeth without movement, minimal BMP signaling activity was observed in the PDL. However, BMP signaling was highly activated in response to OTM compared to controls. Analysis of primary cilia presence revealed that not all cells within the PDL exhibited a primary cilium, irrespective of whether the tooth was subjected to movement or remained stationary. Primary cilium incidence drastically increased in the regions where BMP signaling activity was detected compared to the PDL in static teeth and to regions of the PDL in moved teeth where BMP signaling was absent. These findings suggest that BMP signaling and primary cilia play a role in OTM and may also participate in PDL mechanotransduction in vivo. Additionally, the co-localization of BMP signaling activity with enhanced ciliogenesis implies a potential interaction between these mechanisms.
This study was conducted in adherence to the ethical principles outlined in the World Medical Association Declaration of Helsinki. Informed consent was secured from all individual human donors contributing experimental material. Additionally, the study underwent independent review and received approval from the Ethical Committee of the University of Bonn (reference number 029/08).
3.1. Primary PDL Cell Isolation and Culture
PDL tissues were obtained from the middle third of the root surface of caries-free teeth, extracted during routine orthodontic procedures from periodontally healthy donors. The tissues were cultured in T75 cell culture flasks (CELLSTAR®, Greiner Bio-One, Kremsmünster, Austria) using N2B27-PDLsf medium at 37 °C in a humidified atmosphere with 5% CO₂. Cells were passaged upon reaching confluence, with the medium supplemented with 1% Penicillin/Streptomycin (Gibco, Carlsbad, CA, USA) and 1% Plasmocin prophylactic (Invivogen, Toulouse, France) until passage 2. From passage 3 onward, the media were used without Penicillin/Streptomycin and Plasmocin prophylactic. Cells expanded to passages 3 and 4 were employed for subsequent analyses. Passaging was performed using StemPro Accutase (Gibco) for 5–10 min at 37 °C, with enzyme activity neutralized by diluting with culture medium.
3.2. Mechanical Loading of PDL Cells
Static tensile strain was applied to PDL cells seeded at a density of 20,000 cells per well in 2 mL of culture medium. The cells were grown to 80% confluence on BioFlex® culture plates, which were coated with type I collagen and featured flexible silicone membrane-bottom wells designed to facilitate mechanical stimulation (BF-3001C; Flexcell International, Hillsborough, NC, USA). The plates were placed into a strain device (FX-6000T™ Tension System, Flexcell International), which includes a BioFlex baseplate with a cylindrical post as the loading platform, matching the dimensions of the flexible-bottom wells. This system employs a computer-regulated bioreactor that utilizes vacuum and positive air pressure to deliver precisely controlled, static deformation to cells growing in a monolayer. Following cell seeding and a 24 h growth period, continuous stretching of the cells was performed at 2.5% (1.3 cN/mm²), 5% (2.6 cN/mm²), 10% (5.2 cN/mm²), and 20% strain (10.4 cN/mm²). The BioFlex baseplate, along with the loading stations and posts, was then placed in an incubator to maintain a humidified atmosphere with 5% CO₂ at 37 °C. Cells were subjected to mechanical loading for 24 h. After this period, the plates were removed, and the flexible membranes with the stretched cells were prepared for further experimentation. To elucidate the mechanisms triggered by tension-induced mechanical loading, unstretched cells were used as controls in each experiment.
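The paired strain and stress values above scale linearly, which implies a single effective membrane stiffness; as a brief check (our inference from the reported numbers, not a manufacturer specification):

\[
\sigma = E_{\text{eff}}\,\varepsilon, \qquad E_{\text{eff}} = \frac{1.3\ \text{cN/mm}^2}{0.025} = 52\ \text{cN/mm}^2 = 0.52\ \text{MPa},
\]

so that, for example, 20% strain corresponds to \(52 \times 0.20 = 10.4\ \text{cN/mm}^2\), matching the value reported above.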
3.3. RNA Extraction, Quality Control and cDNA Synthesis
Total messenger ribonucleic acid (mRNA) was isolated and purified from cell lysates using the RNeasy Mini Kit (Qiagen, Hilden, Germany) in accordance with the manufacturer's instructions. The concentration of the isolated mRNA was determined spectrophotometrically using a Nanodrop (Thermo-Fisher Scientific, Waltham, MA, USA), and its purity was assessed by the 260/280 absorbance ratio. Subsequently, mRNA was reverse-transcribed into complementary deoxyribonucleic acid (cDNA) using the iScript Select cDNA Synthesis Kit (Bio-Rad Laboratories, Hercules, CA, USA). The 20 µL cDNA synthesis reaction was prepared with oligo(dT) primer mix following the manufacturer's protocol. Each reaction utilized the maximum available amount of RNA from each sample, with a maximum of 1 µg total RNA per reaction. The synthesis process involved an initial cDNA synthesis step at 42 °C for 90 min, followed by reverse transcriptase inactivation at 85 °C for 5 min, both conducted using an iCycler (Bio-Rad Laboratories).
3.4. RT-qPCR to Detect Transcriptional Changes in PDL Cells Exposed to Strain
qPCR was performed using FastStart Universal SYBR Green (Sigma-Aldrich, St. Louis, MO, USA) and a StepOnePlus Real-Time PCR System (Thermo-Fisher Scientific). mRNA values were normalized to the GAPDH housekeeping gene, which is constitutively expressed at high levels, to account for general variability in mRNA expression between samples. Genes that were within 12 cycles of the cycle at which GAPDH reached the threshold for expression were considered expressed in the PDL cells. Primer sequences, including catalogue numbers from Origene, are presented in the accompanying table.
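The normalization described above is consistent with the widely used 2^(−ΔΔCt) (Livak) method, although the exact calculation is not stated; the following is a minimal sketch under that assumption, with purely illustrative Ct values.

```python
def relative_expression(ct_gene_strained, ct_gapdh_strained,
                        ct_gene_control, ct_gapdh_control):
    """Fold change of a target gene via the 2^(-ddCt) (Livak) method."""
    d_ct_strained = ct_gene_strained - ct_gapdh_strained  # normalize to GAPDH
    d_ct_control = ct_gene_control - ct_gapdh_control
    dd_ct = d_ct_strained - d_ct_control                  # relative to static control
    return 2 ** (-dd_ct)

def is_expressed(ct_gene, ct_gapdh, max_delta=12):
    """Expression cutoff described above: within 12 cycles of the GAPDH threshold."""
    return (ct_gene - ct_gapdh) <= max_delta

# Illustrative Ct values (hypothetical), strained sample vs static control:
fold = relative_expression(24.1, 18.0, 25.6, 18.2)
print(f"fold change = {fold:.2f}")  # ~2.5-fold upregulation
```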
3.5. In Vivo Preparation of Histological Specimens of Teeth With and Without OTM Exposure
For in vivo analyses, teeth with adjacent PDL were obtained from adolescent patients initiating orthodontic treatment with fixed multibracket appliances and who required symmetric premolar extractions in one jaw. The selection criteria for participants included overall good health, absence of medications influencing bone or soft tissue metabolism, no prosthetic restorations on the teeth to be moved, absence of premature occlusal contacts, no radiographic evidence of horizontal bone loss or vertical bony defects, and no signs of root resorption. Due to the inherent tendency of brackets to act as reservoirs for plaque accumulation, it was essential that all participants strictly adhere to comprehensive oral hygiene protocols. These included brushing after every meal, using interdental brushes, and following detailed instructions on the proper technique for brushing bracketed teeth, which were provided prior to bracket placement. Furthermore, adherence to proper oral hygiene was regularly monitored and reinforced at each follow-up appointment to ensure consistent and effective oral care throughout the duration of the treatment. This precaution was necessary to prevent plaque-induced bacterial inflammation and ensure the reliability of the study results. In the context of the multibracket appliance insertion, one of the two premolars designated for extraction was fitted with a bracket for biomechanical purposes, thereby facilitating optimal movement of the adjacent teeth, whereas the contralateral premolar was left unbracketed. Consequently, the bracketed premolar experienced orthodontic loading during the leveling phase, while the unbracketed premolar served as a reference for physiological loading without OTM. Following a period of 3–8 weeks, determined by the individual therapeutic needs, both premolars were extracted to create space for therapeutic purposes and were immediately processed for subsequent analyses. Post-extraction, the teeth were preserved by immersion in 4% buffered formaldehyde (Sörensen buffer) at room temperature for a minimum of 24 h, followed by decalcification in 4.1% disodium ethylenediaminetetraacetic acid (EDTA) solution for at least one month, with the solution being refreshed every 24 h. After hydration, the specimens were dehydrated through an ascending series of ethanol concentrations, embedded in paraffin, and serial sagittal sections of 2–3 µm were prepared for further examination.
3.6. Immunostaining
To visualize the presence of primary cilia, primary human PDL cells were seeded at a density of 10,000 cells per coverslip in 1 mL of culture medium and maintained on glass-bottom dishes (Marienfeld Laboratory Glassware, Lauda-Königshofen, Germany) for a period of five consecutive days. On each day, cells were fixed with paraformaldehyde (PFA, Sigma-Aldrich, St. Louis, MO, USA) prior to incubation with the primary antibody against acetylated α-tubulin at 4 °C overnight. Bound antibodies were detected using the fluorescent secondary antibody Alexa Fluor 568, applied at room temperature for 1 h. Nuclei were counterstained with DAPI for 10 min. Images were acquired using a fluorescence microscope (Keyence BZ-X710, Keyence, Ōsaka, Japan). Laser lines at 405 nm and 561 nm were employed for sample excitation, with settings held constant across all analyzed images. To examine primary cilia and BMP signaling with tooth movement in vivo, histology sections of human teeth were deparaffinized, rehydrated, and rinsed for 10 min in tris-buffered saline (TBS). Following this, the sections were permeabilized with 0.1% Tween 20 (Sigma-Aldrich) for 5 min at room temperature, blocked with 10% goat serum (Sigma-Aldrich) for 1 h at room temperature, and incubated in primary antibodies at 4 °C overnight. Antibodies against pSMAD1/5/8 (rabbit polyclonal, 1:250, Fisher Scientific, NH, USA) and acetylated α-tubulin (mouse monoclonal, 1:10, Sigma-Aldrich) were used to visualize BMP signaling and primary cilia, respectively. Sections were then incubated in the secondary antibodies anti-mouse Alexa Fluor 488 (1:500, Life Technologies, Carlsbad, CA, USA) and anti-rabbit Alexa Fluor 568 (1:500, Life Technologies) at room temperature for 1 h. Stained slides were then mounted with media containing the nuclear stain DAPI and sealed with nail polish (Electron Microscopy Sciences, Hatfield, PA, USA). Images were captured using a Discover ECHO confocal microscope (ECHO, San Diego, CA, USA). The validity of the assays was ensured by routinely performing negative controls, obtained by replacing the primary antibody with TBS/BSA, to exclude artifacts. All antibodies were utilized at previously optimized concentrations. The quality of the original microphotographs was routinely verified for high resolution and clarity, with particular attention to the coloring, to ensure no reduction in image quality.
3.7. Statistical Analysis
Differences between control and experimental groups were determined using a two-tailed Student's t-test assuming equal variance. Values are reported as mean ± SEM, with p < 0.05 considered statistically significant. The sample size was selected to achieve a power of at least 80%. Experimental groups in transcriptional analyses were expressed as a fold change in relation to static controls normalized to a value of "1". Statistical analysis was conducted using GraphPad Prism (San Diego, CA, USA).
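A minimal sketch of the comparison described above (two-tailed Student's t-test with equal variance, mean ± SEM, fold change relative to static controls normalized to 1); the replicate values are illustrative only.

```python
import numpy as np
from scipy import stats

control = np.array([1.00, 0.92, 1.08, 1.01])   # static controls (hypothetical)
strained = np.array([1.55, 1.78, 1.40, 1.62])  # stretched samples (hypothetical)

# Fold change relative to the mean of the static controls (control mean -> 1)
fold = strained / control.mean()

# Two-tailed Student's t-test assuming equal variance
t_stat, p_value = stats.ttest_ind(strained, control, equal_var=True)

print(f"fold change = {fold.mean():.2f} +/- {stats.sem(fold):.2f} (SEM), p = {p_value:.4f}")
```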
This study is the first to investigate the role of primary cilia in human PDL cells under orthodontic force, specifically exploring their involvement in mechanotransduction and the modulation of BMP signaling, a pivotal pathway in craniofacial tissue development and maintenance. While primary cilia are well documented for their involvement in mechanotransduction and cellular homeostasis across various tissues, their role in PDL cells during OTM has remained largely unexplored. The research presented here fills this knowledge gap by examining the presence, structure, and function of primary cilia in PDL cells and their potential interaction with BMP signaling pathways. The findings of this investigation confirmed the presence of primary cilia in human PDL cells both in vitro and in vivo and revealed that their structural characteristics, such as length and incidence, are dynamically regulated in response to mechanical strain and culture conditions such as cellular density. Previous studies have demonstrated that the length of primary cilia is a critical determinant of their ability to detect mechanical stimuli, with elongation enhancing their sensitivity to such forces. It is well established that inflammatory cytokines and other signaling molecules influence cilia formation and elongation, which is particularly relevant in the context of orthodontic treatment, where mechanical forces induce sterile inflammation within periodontal tissues, potentially heightening the mechanosensitivity of PDL cells. Transcriptional analyses of PDL cells exposed to orthodontic forces revealed the expression of key components of the BMP signaling pathway, with several candidates emerging as potential facilitators of mechanosensation. Notably, the in vivo data provide compelling evidence for the involvement of primary cilia in mediating mechanotransduction during OTM. These results suggest that primary cilia may enhance BMP signaling, which is consistent with similar mechanisms observed in other cell types under mechanical stimulation, though this represents a novel discovery within the PDL. Activation of BMP signaling in response to OTM coincided with a significant increase in primary cilium incidence and elongation in this study, particularly in regions where BMP signaling was activated. This supports the hypothesis that primary cilia may serve as critical mediators in the transduction of mechanical signals through the BMP pathway. While primary cilia were detected in the PDL of both moved and static teeth, not all PDL cells exhibited a primary cilium. This finding is consistent with the dynamic nature of primary cilia, which can shorten or undergo disassembly in the absence of external stimuli. This could explain the lower incidence of primary cilia in the PDL of static teeth, where mechanical forces are not applied. Additionally, the random orientation of primary cilia in the PDL may have contributed to some projections not being captured in the tissue sections. In contrast, increased primary cilium incidence and elongation were observed in the PDL of moved teeth, particularly in regions where BMP signaling was activated. Further analysis of Ift88, a key ciliary marker, revealed that expression levels varied with mechanical strain. Specifically, Ift88 expression was elevated at low strain levels (2.5% and 5%), suggesting a threshold for axoneme elongation and cilium adaptation.
These results align with previous research indicating that primary cilia are more prevalent on cells subjected to low mechanical forces and possess heightened sensitivity to these subtle strains. Conversely, exposure to higher mechanical strains has been shown to disrupt ciliary architecture, with many cells subjected to these forces lacking primary cilia altogether. The transcriptional analysis of PDL cells subjected to tensile strain further elucidated the role of BMP signaling in the mechanotransduction process. Previous studies have demonstrated that endothelial cell primary cilia enhance BMP9-Smad1/5/8 signaling exclusively under low strain intensities. The BMP–Alk–Smad signaling axis is a conserved mechanism for mechanosensing, with its activity strongly dependent on the magnitude of applied force. In our study, we observed a variable transcriptional response of BMP signaling components across different strain conditions, with a magnitude-dependent trend in Bmp9 expression. Analysis of key BMP signaling molecules, including Smad1, Smad5, Smad8, and Smad9, revealed a heterogeneous response to mechanical strain, characterized by both upregulation and downregulation. Notably, Smad1 showed a pronounced downregulation in response to higher strain intensities, suggesting a potential inhibitory or compensatory role aimed at preventing excessive inflammatory responses or immune reactions. Type I BMP receptors exhibited consistent upregulation across all strain conditions, though inter-donor variability suggests a complex and cell-specific regulatory response. Type II BMP receptors, with the exception of Bmpr2, were upregulated exclusively under the highest strain condition (20%), indicating a strain-specific regulatory pattern. These observations emphasize the need to consider the cellular context when interpreting BMP signaling responses to mechanical stimuli. The sustained upregulation of Bmpr2, Bmp7, and Alk2 across all strain conditions suggests that these components may play a crucial role in the PDL cell response to mechanical loading, possibly contributing to the adaptive cellular mechanisms that preserve homeostasis and modulate host responses to prevent excessive immune reactions. Despite these novel insights, several limitations of the study design must be acknowledged. First, while this investigation identified key interactions between primary cilia and BMP signaling in mechanotransduction, further research is needed to determine whether primary cilia directly interact with BMP signaling components in PDL cells. Second, the observed transcriptional patterns suggest that BMP signaling and ciliary regulation are highly context-dependent, complicating the ability to draw generalized conclusions across different strain conditions. Although a 24 h strain protocol was used to simulate OTM, it is possible that shorter durations of mechanical loading could yield more precise insights into cilia dynamics and BMP signaling. For example, studies in other cell types have shown that primary cilia undergo negative feedback mechanisms within hours of stimulation, and BMP signaling activation often plateaus after initial induction. Additionally, while the in vitro model focuses on acute responses to mechanical strain, it does not address the long-term adaptations of PDL cells to sustained orthodontic forces, which are crucial for understanding chronic immune responses and potential pathological conditions during orthodontic treatment.
These limitations highlight the need for future research to explore the temporal dynamics of primary cilia and BMP signaling and to investigate the long-term mechanotransductive responses of PDL cells to orthodontic forces. In conclusion, this study underscores the complex interplay between mechanical forces, cellular signaling, and structural adaptations in PDL cells. The differential gene expression patterns observed in response to varying strain intensities emphasize the intricate nature of mechanotransduction in the PDL, with primary cilia likely serving as central mediators of this process. Future research should focus on elucidating the temporal dynamics of primary cilia and BMP signaling activation, as well as exploring their interactions with other mechanosensitive pathways, such as Wnt/β-catenin signaling. The clinical implications of this study are particularly relevant to the prevention of ERR, a common complication of prolonged mechanical loading during OTM. ERR can occur as a result of the inflammatory processes triggered by orthodontic forces, and it is often linked to excessive or improper loading that disrupts the normal remodeling of the PDL. Given that primary cilia are implicated in mechanotransduction and cellular signaling, particularly in response to mechanical strain and BMP signaling pathways, understanding their role in PDL cells provides a potential avenue for mitigating this complication. Given the dynamic regulation of primary cilia length and incidence in response to mechanical strain observed in this study, therapeutic strategies aimed at optimizing ciliary function could potentially prevent the excessive inflammatory responses that contribute to root resorption. For instance, modulating the activity of BMP receptors or promoting primary cilium elongation under low strain conditions could enhance the mechanosensitive capacity of PDL cells, thus promoting a more controlled and beneficial remodeling process. Additionally, targeting the pathways that influence ciliary formation, such as inflammatory cytokine signaling, might provide a means of reducing the risk of unwanted resorptive activity on the root surface. Incorporating cilia-based therapies or interventions into orthodontic treatment could therefore provide clinicians with a more refined approach to managing mechanical forces. By ensuring that PDL cells respond appropriately to strain via enhanced mechanotransduction and BMP signaling, it may be possible to prevent or limit the extent of ERR, ultimately improving patient outcomes and the long-term success of orthodontic treatment. Future studies should focus on validating these potential therapeutic strategies, exploring the effects of modulating primary cilia in clinical settings, and determining how these interventions can be applied to prevent ERR during orthodontic treatment.
Comparative physiology and transcriptome response patterns in cold-tolerant and cold-sensitive varieties of Solanum melongena

Eggplant ( Solanum melongena L. ) holds a prominent position among vegetable crops cultivated in both the southern and northern regions of China. It has evolved into a vital industry ensuring a consistent supply of vegetables throughout the year, thereby contributing significantly to the augmentation of farmers' income and supporting rural revitalization. The predominant method of eggplant cultivation in China relies on early spring planting. However, the vulnerability of early spring cultivation to low temperatures results in sluggish plant growth, flower and fruit drop, fruit deformities, and delayed fruit expansion. These issues detrimentally impact both yield and economic returns. Enhancing the cold resistance of eggplants has thus emerged as a key objective in the breeding of early maturing eggplants. Low-temperature stress manifests in two distinct forms: chilling injury (>0 ℃) and freezing injury (≤0 ℃). These challenges are recognized as among the most destructive threats, exerting adverse effects on the plant life cycle, geographical distribution, and crop yield. The response of plants to low-temperature stress constitutes a complex regulatory system. When exposed to cold stress, plants initiate a series of physiological responses, including modifications to cell membrane lipid composition, clearance of reactive oxygen species (ROS), and the maintenance of a steady-state balance in the cell membrane system. Some studies have revealed that the exogenous addition of various chemicals, such as H₂O₂, abscisic acid (ABA), and methyl jasmonate (MeJA), can safeguard plants from cold damage. For instance, H₂O₂ stimulates the accumulation of plant hormones (ABA, MeJA, etc.), consequently enhancing the cold stress tolerance of tomatoes. The application of exogenous ABA increases the activity of wheat antioxidant enzymes, including catalase (CAT), superoxide dismutase (SOD), and peroxidase (POD), thereby mitigating cold damage. Similarly, MeJA reinforces tomato cold resistance by elevating the activity of antioxidant enzymes, including CAT and POD, along with the expression of related genes. Plant adaptation to cold stress also encompasses molecular-level changes, such as alterations in transcription, translation, and metabolic processes involving specific proteins, metabolites, and plant hormone levels. The alterations in gene expression levels associated with the response to cold stress represent pivotal adaptive molecular mechanisms in plants combating adverse cold conditions. Notably, cold response genes, including C-repeat binding factors (CBFs), inducers of CBF expression genes (ICEs), cold-regulated (COR) genes, and other key regulators, are swiftly upregulated, contributing to heightened cold resistance. Transcription factors linked to cold response, such as bHLHs, WRKYs, and MYBs, have been validated as crucial regulatory elements for cold response genes in model plant species like Arabidopsis, enhancing plant adaptability to cold. Moreover, plants' adaptation to cold stress encompasses multiple signal transduction pathways governing cold response gene expression, such as the abscisic acid signaling pathway, the inducer of CBF expression (ICE)–C-repeat binding factor (CBF)–cold-responsive (COR) pathway, and the mitogen-activated protein kinase signaling pathway.
These pathways influence the expression levels of genes regulating the rearrangement of plant secondary metabolites, particularly antioxidant metabolites like flavonoids and terpenoids. The cold-induced accumulation of polyphenolic compounds, for instance, enhances free radical scavenging activity and antioxidant capacity in Brassica rapa L. ssp. pekinensis, contributing to its cold tolerance. However, such changes involve the expression of numerous related genes and the regulation of regulatory factors, making it challenging to unravel the intricate network of plant adaptation to cold solely by studying individual genes and metabolic pathways. The advent of high-throughput sequencing has facilitated the comprehensive exploration of the entire genome's expression at the transcriptional level. Transcriptome sequencing has been conducted on various plants under low-temperature stress, encompassing model plants, horticultural plants, vegetable crops, and trees. These studies offer insights into how transcriptional changes respond to cold stress, leading to the identification of numerous cold response genes across different crops. However, it is essential to recognize that distinct species may exhibit varying cold response mechanisms, necessitating ongoing exploration of regulatory pathways and genes involved in cold response across diverse plant species. Eggplant, native to tropical Asia, has now gained global popularity as a commercially significant economic crop. However, due to incomplete genomic information and a lack of expression profile data, research on low-temperature tolerance in eggplants has traditionally focused on enhancing cultivation management measures and basic physiological and biochemical aspects. Unfortunately, this approach did not fundamentally address the urgent need for low-temperature-tolerant varieties in production. Advances in technology, coupled with the widespread use of modern molecular biology methods, high-throughput sequencing, and genetic engineering techniques, have allowed for the preliminary identification of several genes associated with regulating low-temperature responses in eggplants. While existing research has made progress, the depth of exploration varies, and there is a scarcity of reports concerning gene function analysis and regulatory mechanisms. A prior study by Yang et al. analyzed the transcriptome of eggplants under cold stress, identifying some differentially expressed genes (DEGs) related to cold stress, such as cold-inducible proteins, genes associated with hormone signal transduction, and osmoregulation proteins. However, the metabolic pathways associated with these DEGs were not thoroughly examined. In this study, we conducted a comprehensive investigation into the phenotypic and physiological differences between the cold-tolerant "E7135" and cold-sensitive "E7142" eggplant varieties. Employing phenotype identification and transcriptomics methods, we delved into potential mechanisms underlying eggplant cold responses at physiological, biochemical, transcriptional, and metabolic levels, constructing a cold stress transcriptional regulatory network. The findings of this research serve as a foundation for a deeper understanding of the molecular mechanisms governing eggplant adaptation to cold stress and for identifying genes with the potential to enhance the low-temperature tolerance of early maturing eggplants.
Physiological response of eggplant under cold stress treatment

This study compared the cold resistance of the cold-tolerant "E7134" ("A") eggplant variety and the cold-sensitive "E7145" ("B") variety. The two varieties differed markedly in morphology under 5 °C cold stress (Fig. a). Specifically, leaves of "B" began dehydrating at 4 d, with dehydration intensifying by 7 d, while "A" showed dehydration only after 7 d. "B" displayed leaf curling after 4 d, contrasting with "A", which showed no notable changes, confirming "A" as cold-resistant and "B" as cold-sensitive.

To investigate cellular responses to oxidative and osmotic damage under cold stress, we measured the activity of POD and the content of osmoregulation-related components, including malondialdehyde (MDA), γ-aminobutyric acid (GABA), free proline, soluble protein, and soluble sugar, in eggplant leaves (Fig. b). The POD activity in sample "A" increased significantly after 1 d of cold stress at 5 °C, rising 1.17-fold (1 d) and 2.01-fold (2 d) relative to the control group (0 d). Conversely, the POD activity in sample "B" showed no significant change on the first day but increased 1.77-fold after 2 d of cold stress, followed by a notable decrease at 4 d. More importantly, the POD activity of sample "A" surpassed that of sample "B" under cold stress. The MDA content in sample "A" remained largely unchanged through 2 d of cold stress and peaked at 4 d. In contrast, the MDA content in sample "B" decreased over the early period, rose significantly after the 4th d, but remained consistently lower than that of sample "A". After 2 d of cold stress, the GABA content in both samples "A" and "B" peaked, with sample "B" exhibiting slightly higher levels during the stress period. The free proline content in samples "A" and "B" gradually increased during the first 2 d of cold stress, rising nearly 1.35- and 1.16-fold relative to the control group (0 d), respectively; on the 4th d, a significant decrease was observed, followed by an increase. The soluble protein content in sample "A" peaked on the 4th d of cold stress, reaching nearly 1.21 times that of sample "B". After 1 d of cold stress, the soluble sugar content in sample "B" increased significantly, whereas the soluble sugar content in sample "A" remained significantly lower than that of sample "B" throughout the cold stress period. These findings underscore significant differences in cold resistance between the two eggplant varieties, indicating distinct molecular regulatory mechanisms.

Chemometric analysis based on physiological index data

First, the physiological data were standardized for chemometric analysis, and the cold-stressed samples were clustered (Fig. a). The 30 samples fell into three groups: "A" 0d, "A" 1d, and "A" 2d (up to 2 d of cold stress) forming the first category; "B" 0d, "B" 1d, "B" 2d, and "B" 7d forming the second category; and "A" 4d, "A" 7d, and "B" 4d forming the third category. Next, principal component analysis (PCA) was employed to discern the grouping characteristics and physiological indicators of the samples. PC1, which accounted for 57.1% of the total variance, distinctly separated the samples: "A" 0d-7d (orange) on the left and "B" 0d-7d (blue) on the right (Fig. b). Sample "A" was further divided into two groups ("A" 0d-2d, "A" 4d-7d) along PC2 (20.8%), while sample "B" was divided into two groups ("B" 0d-7d, "B" 4d).
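The standardization, clustering, and PCA steps described above can be reproduced with base R. The sketch below is illustrative only: the object `phys` (a 30-samples × 6-indices matrix) and its row and column names are assumptions, not the authors' actual data layout.

```r
# Minimal sketch of the chemometric workflow, assuming 'phys' is a 30 x 6
# numeric matrix (rows = samples such as "A_0d_1", columns = POD, MDA, GABA,
# proline, soluble_protein, soluble_sugar).
phys_scaled <- scale(phys)                     # z-score standardization

# Hierarchical clustering of the cold-stressed samples (cf. Fig. a)
hc <- hclust(dist(phys_scaled), method = "ward.D2")
plot(hc, main = "Clustering of cold-stressed eggplant samples")

# PCA on the standardized data (cf. Fig. b)
pca <- prcomp(phys_scaled)
summary(pca)$importance[2, 1:2]  # variance explained by PC1/PC2 (57.1%/20.8% reported here)
pca$rotation[, 1:2]              # loadings of each physiological index on PC1/PC2
plot(pca$x[, 1:2], xlab = "PC1", ylab = "PC2")
```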
Thus, variety differences were distinguishable along PC1, and differences induced by the cold stress treatment were discernible along PC2. Moreover, the positive loadings on PC1 comprised GABA, MDA, free proline, and soluble sugar, while the negative loadings comprised soluble protein and POD. All physiological indicators loaded positively on PC2. Finally, orthogonal partial least squares discriminant analysis (OPLS-DA) was employed to evaluate the crucial physiological indicators for the two varieties under low-temperature stress. The Variable Importance in Projection (VIP) score served as a measure of the impact intensity of each physiological indicator. POD (VIP = 1.09) and soluble protein (VIP = 1.12) emerged as important physiological indicators for both varieties under low-temperature stress (Fig. c).

Gene expression profiling of eggplant under cold stress treatment

To analyze transcriptional changes between the two eggplant cultivars under cold stress, leaves of "A" and "B" were sampled at five time points, before treatment (CK, 0 d) and after 1, 2, 4, and 7 d of exposure to 5 °C, with three biological replicates per sample, and subjected to RNA-seq. A total of 434.65 Gb of clean data were obtained from the 30 samples, with each sample yielding at least 6.17 Gb. All Q30 values surpassed 93.02%, attesting to the reliability of the sequencing results (Table S1). Detailed annotation information for unigenes against the Nonredundant (NR), Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and Non-supervised Orthologous Groups (NOG) databases is given in Table S2. The sample heatmap analysis indicated correlation coefficients exceeding 0.8 for the treatment groups in each variety (Fig. S1).

Differentially expressed genes (DEGs) were identified based on transcript abundance, using a false discovery rate (FDR) < 0.01 and fold change ≥ 2 as thresholds. Both "A" and "B" exhibited similar and significant changes in their response to cold stress at the different time points, with 7024 and 6209 DEGs identified, respectively (Fig. S2). Prior to cold treatment (CK, 0 d), "A" and "B" displayed 1741 and 1482 highly expressed genes, respectively, indicating genotype-related gene expression differences (Fig. a, b). "A" exhibited a larger number of DEGs from 1 d to 2 d, while "B" showed fewer upregulated and downregulated genes during this period. Specifically, "A" had 2646, 1724, and 2027 upregulated genes and 3143, 3038, and 4173 downregulated genes at the respective time points, whereas "B" showed 1583, 1322, and 1785 upregulated genes and 2130, 2602, and 3814 downregulated genes (Fig. a). Comparatively, "A" demonstrated a higher number of activated genes than "B", indicating distinct responses between the tolerant and sensitive varieties, with "A" exhibiting greater adaptability to cold stress. The number of shared genes increased significantly at 7 d in both "A" and "B". Intriguingly, the shared genes in "A" and "B" exhibited different expression patterns during the 1-4 d stage of cold stress response activation.
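The pairwise DEG calls (FDR < 0.01, fold change ≥ 2) can be sketched with edgeR, the package named in the Methods. The count matrix `counts`, the factor `group`, and the specific contrast below are placeholders; the authors' exact edgeR workflow (e.g., exact test vs. quasi-likelihood fit) is not stated, so this is one plausible implementation.

```r
# Hedged sketch of the DEG analysis; 'counts' (genes x 30 samples) and
# 'group' (a factor with levels A_0d ... B_7d) are hypothetical inputs.
library(edgeR)  # loading edgeR also attaches limma (for makeContrasts)

y <- DGEList(counts = counts, group = group)
y <- y[filterByExpr(y), , keep.lib.sizes = FALSE]  # drop weakly expressed genes
y <- calcNormFactors(y)                            # TMM normalization

design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)
y   <- estimateDisp(y, design)
fit <- glmQLFit(y, design)

# Example contrast: variety A, 1 d of cold vs. untreated control
qlf <- glmQLFTest(fit, contrast = makeContrasts(A_1d - A_0d, levels = design))
tab <- topTags(qlf, n = Inf)$table
degs <- tab[tab$FDR < 0.01 & abs(tab$logFC) >= 1, ]  # |log2FC| >= 1, i.e. FC >= 2
```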
Transcriptome differences between genes with known cold response mechanisms and core metabolic pathways

All samples underwent GO term analysis, revealing significant enrichment in catalytic activity, membrane components, transport activity, and responses to stress. Notably, the number of DEGs enriched in membrane components and transport activity was higher in variety "A" than in "B". Variety "A" specifically exhibited enrichment in processes related to the cell wall, non-protoplast, and secondary metabolism, whereas variety "B" showed specific enrichment in oxidoreductase activity, protein serine/threonine kinase activity, and terpene synthase activity (Fig. c, d). These results suggest that "A" enhances cold tolerance by fortifying cell structure (membrane components, cell walls) and secondary metabolism, facilitating plant growth under adverse conditions. In contrast, "B" primarily adapts to cold stress by enhancing kinase activity, showcasing a comparatively more focused response than "A".

To delve deeper into the function of eggplant DEGs in response to low-temperature stress, KEGG pathway enrichment analysis was employed to identify biological pathways involved in cold stress. A Q value of ≤ 0.05 was used as the cutoff criterion, revealing that variety "A" exhibited a greater number of enriched metabolic pathways than variety "B" (Fig. e, f). Both varieties showed significant enrichment in metabolic pathways, biosynthesis of secondary metabolites, plant hormone signal transduction, and the mitogen-activated protein kinase (MAPK) signaling pathway under cold stress. Notably, starch and sucrose metabolism, diterpenoid and tetraterpenoid biosynthesis, and the glutathione biosynthesis pathway were specifically enriched in variety "A", while the DEGs of variety "B" were specifically enriched in brassinosteroid biosynthesis and plant circadian rhythm. This implies that, compared to "B", variety "A" deploys a more diverse set of pathways in response to cold stress.

To investigate the effects of cold stress on conserved responses within and between varieties, a Venn analysis was conducted (Fig. a). The results indicated that 263 DEGs were shared among the varieties, with nearly half of them highly expressed in "A" (Fig. b). GO enrichment analysis of these 263 DEGs revealed a higher proportion of genes related to membrane, transmembrane transporter activity, response to stress, iron ion binding, and catalytic activity in the molecular function categories. Additionally, biological processes such as photosynthesis and transmembrane transport were enriched (Fig. c). KEGG enrichment analysis showed involvement in photosynthesis, biosynthesis of secondary metabolites, the MAPK signaling pathway, and energy metabolism (Fig. d). These results indicate that ion transport, photosynthesis, and energy metabolism contribute to the conserved cold stress response of eggplant seedlings. While there are similarities in the expression patterns of the shared genes, the differences suggest that the cold stress response pathways have become more diverse.
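Both the GO and KEGG over-representation tests above reduce to a hypergeometric test followed by multiple-testing correction. The sketch below shows the Q ≤ 0.05 filtering logic; `deg`, `universe`, and `pathway2gene` are hypothetical inputs, and the authors' actual enrichment tool is not specified.

```r
# Hedged sketch of a pathway over-representation test (Q <= 0.05 cutoff).
# 'deg': character vector of DEG IDs; 'universe': all annotated genes;
# 'pathway2gene': named list mapping pathway IDs to member gene IDs.
enrich_pathways <- function(deg, universe, pathway2gene) {
  res <- do.call(rbind, lapply(names(pathway2gene), function(p) {
    members <- intersect(pathway2gene[[p]], universe)
    hits    <- length(intersect(deg, members))
    data.frame(
      pathway = p, hits = hits, size = length(members),
      # P(X >= hits) under the hypergeometric null
      p = phyper(hits - 1, length(members),
                 length(universe) - length(members),
                 length(deg), lower.tail = FALSE)
    )
  }))
  res$Q <- p.adjust(res$p, method = "BH")  # Benjamini-Hochberg Q value
  res[order(res$Q), ]
}

# Usage: keep pathways with Q <= 0.05
# sig <- subset(enrich_pathways(degs_A, all_genes, kegg_map), Q <= 0.05)
```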
Response of core metabolic pathways to cold stress

To delve into the nuanced effects of cold stress on the core metabolism of eggplant, the transcriptional abundance of key genes involved in starch and sucrose metabolism, diterpenoid and tetraterpenoid biosynthesis, glutathione metabolism, photosynthesis, and chlorophyll degradation was compared between the two varieties. Glutathione, a vital antioxidant component, serves as a crucial indicator of plant responses to stress. In this study, 21 DEGs participating in the glutathione metabolism pathway were identified in the transcriptome, 19 of which were significantly upregulated in "A" under cold stress. As shown in Fig. , most genes encoding enzymes of the glutathione biosynthesis pathway exhibited a similar expression pattern, with the highest expression in "A" and the lowest in "B". The expression levels of the upstream glutathione synthesis genes (OPLAH, gshA) in "A" increased significantly at 1 d of cold stress, while in "B" they were significantly downregulated. The expression level of GSS in "A" was significantly upregulated after 1 d and returned to baseline at 7 d, while its transcriptional abundance in "B" remained lower than in "A". The expression patterns of the DEGs downstream of glutathione metabolism (GPX, GSR, and GST) were consistent between "A" and "B" across the time points, yet the transcriptional abundance of these DEGs in variety "B" was consistently lower than in variety "A" (Fig. ). These results indicate that glutathione metabolism is an important pathway underlying the cold resistance of "A", with the transcriptional abundance of its DEGs significantly higher than in "B" under cold stress.

Terpene compounds exhibit antioxidant effects, effectively neutralizing free radicals and mitigating cellular damage caused by oxidative stress. KEGG enrichment analysis highlighted the specific enrichment of triterpene synthesis pathways in variety "A". To delve into this, we analyzed the expression patterns of triterpene biosynthetic genes in both "A" and "B" under cold conditions. Cycloartenol synthase (CAS), lanosterol synthase (LAS), β-amyrin synthase (BAS), and the cytochrome P450 family members CYP72A, CYP90C, and CYP94D in the backbone and branch pathways showed different expression patterns in the two varieties (Fig. ). The DEGs of "A" were highly expressed in the early stage (0-2 d) of cold stress and significantly downregulated by the 7th d. Notably, the transcriptional abundance of the DEGs involved in the triterpenoid metabolic pathway in "A" under cold stress was markedly higher than in "B". In contrast, our transcriptome data revealed that most triterpenoid biosynthetic genes in "B" were repressed with prolonged exposure to cold stress.

Soluble sugar content serves as a pivotal physiological indicator, often reflecting a plant's resistance under adverse conditions. To understand the impact of cold stress on the synthesis of soluble sugars in "A" and "B", we analyzed the expression patterns of genes in the core pathway of sugar synthesis under cold stress. Most of the transcripts related to the starch and sucrose pathway, including genes encoding isoamylase (ISA), (1->4)-alpha-D-glucan 1-alpha-D-glucosylmutase (TreY), maltooligosyltrehalose trehalohydrolase (TreZ), and alpha-amylase (AMY), were more highly expressed in "B" than in "A" (Fig. ). Notably, most key starch and sucrose biosynthesis structural genes displayed lower expression levels on the 7th day. However, the hexokinase (HK) gene showed a consistently higher expression level during the 4-7 d cold stress period in "B" (Fig. ). HK, considered a key rate-limiting enzyme of glycolysis (the Embden-Meyerhof-Parnas (EMP) pathway), displayed stable, higher expression levels in cold-stressed "B", indicating that cold stress amplified EMP metabolism in "B", with its metabolic intensity surpassing that of "A".

When subjected to adverse stress, the wilting of plant leaves is often accompanied by the rapid breakdown of chlorophyll. The transcriptome data revealed 10 chlorophyll degradation DEGs, including three stay-green genes (SGR1, SGR2, SGR3), one chlorophyllase gene (CLH), three pheophytinase genes (PPH1, PPH2, PPH3), two pheophorbide a oxygenase genes (PAO1, PAO2), and one red chlorophyll catabolite reductase gene (RCCR).
Among them, SGR3, PPH2, PPH3, and PAO1 were downregulated under the cold stress treatment, whereas the other chlorophyll degradation genes, notably CLH, SGR1, SGR2, PPH1, PAO2, and RCCR, were upregulated (Fig. a). The upregulation in variety "B" during cold treatment was more pronounced than in "A". These results suggest that cold stress triggers the degradation of chlorophyll and activates the corresponding biochemical processes. The degradation of chlorophyll led to a weakening of photosynthesis, which inhibited plant growth and development.

By KEGG enrichment analysis, we identified DEGs associated with photosynthesis to unravel the molecular mechanisms governing the varied photosynthetic responses of the two varieties during cold stress treatment. The KEGG analysis pinpointed a total of 79 DEGs involved in the photosynthesis pathway, with 55 DEGs linked to Photosystem I and 24 DEGs related to Photosystem II (Fig. b, Table S3). Notably, 14 genes exhibited upregulation in the leaves of "A". DEGs associated with various facets of photosynthesis, including light harvesting and the electron transfer chain, were significantly enriched. Under cold stress, the expression of Photosystem II (PS II) genes (PsbA, PsbC, PsbB, PsbK, PsbQ, PsbR, PsbY, and PsbW) and Photosystem I (PS I) genes (PsaA, PsaB, PsaC, and PsaD) was downregulated in both varieties. However, compared to the expression levels in "A", the majority of DEGs in "B" exhibited lower expression, indicating a significant decrease in photosynthetic activity in "B". This decrease in photosynthesis is closely linked to the varieties' tolerance to low temperatures.

Transcription factor analysis and weighted gene co-expression network analysis (WGCNA)

Fluctuations in gene expression levels are pivotal in regulating the eggplant response to cold stress, with transcription factors (TFs) playing a crucial role in both biotic and abiotic stress responses. A total of 823 putative TFs belonging to 24 different families were identified, with the top 10 TF families illustrated in Fig. a. WGCNA was employed to further explore the relationships among key genes, stress duration, and physiological characteristics (genes with FPKM < 1 were filtered out). WGCNA delineated highly correlated gene clusters termed modules, where genes within the same cluster exhibited strong correlations; the identified genes were optimized and merged into 12 modules (Fig. b). Correlation analysis between the module eigengenes and the cold-related physiological indicators showed that MEgrey60 (r = 0.87, p < 0.05), MEdarkred (r = 0.72, p < 0.05), and MEsaddlebrown (r = 0.66, p < 0.05) were significantly positively correlated with the physiological indicators, while MElightyellow (r = -0.70, p < 0.05), MEgreen (r = -0.66, p < 0.05), MEorange (r = -0.76, p < 0.05), and MEmidnightblue (r = -0.82, p < 0.05) were significantly negatively correlated (Fig. c). Using the same method and parameter settings, MEgrey60 (r = 0.94, p < 0.05) also emerged as robustly correlated with cold stress duration (Fig. c, d).
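The module detection, module-trait correlation, and kME-based hub screening described here (and parameterized in the Methods: mean FPKM ≥ 1, merge threshold 0.25, ≥ 30 genes per module, kME ≥ 0.9) can be sketched with the WGCNA package. The objects `fpkm` and `traits` are hypothetical stand-ins for the expression matrix and the physiological indicator table.

```r
# Hedged WGCNA sketch; 'fpkm' is samples x genes, 'traits' is samples x
# physiological indicators (POD, MDA, GABA, proline, protein, sugar).
library(WGCNA)

datExpr <- fpkm[, colMeans(fpkm) >= 1]         # keep genes with mean FPKM >= 1
sft <- pickSoftThreshold(datExpr)              # choose a soft-threshold power
net <- blockwiseModules(datExpr,
                        power          = sft$powerEstimate,
                        minModuleSize  = 30,    # >= 30 genes per module
                        mergeCutHeight = 0.25,  # module merge threshold
                        numericLabels  = FALSE) # label modules by color

# Module-trait relationships (cf. Fig. c): eigengene vs. indicator correlations
MEs <- orderMEs(moduleEigengenes(datExpr, net$colors)$eigengenes)
moduleTraitCor <- cor(MEs, traits, use = "p")
moduleTraitP   <- corPvalueStudent(moduleTraitCor, nSamples = nrow(datExpr))

# kME-based hub screening in the grey60 module (kME >= 0.9)
kME  <- signedKME(datExpr, MEs)
hubs <- colnames(datExpr)[net$colors == "grey60" & kME[, "kMEgrey60"] >= 0.9]

# Edges with weight >= 0.5 can then be exported for Cytoscape with
# exportNetworkToCytoscape() on the adjacency/TOM restricted to the hub genes.
```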
This finding suggests that the co-expressed genes within the MEgrey60 module are intricately linked to the cold-related physiological indicators and the duration of cold stress, signifying their pivotal role in the eggplant response to cold stress. Consequently, the co-expressed genes within the MEgrey60 module were selected for further in-depth analysis. The regulatory network comprising the top 150 hub genes, characterized by high connectivity, was visualized using Cytoscape software v3.9.1 (https://cytoscape.org/download.html) (Fig. e). In the MEgrey60 module, 10 central genes were identified based on specific criteria (kME ≥ 0.9 and edge weights ≥ 0.5). These genes were predominantly associated with transcriptional activation, plant hormone regulation, and redox homeostasis. Notably, POD_EGP13161, PP2C_EGP11066, SnRK2_EGP05474, SnRK2_EGP18530, MDR1_EGP04516, DELLA_EGP22417, PRR7_EGP18322, and the transcription factors C2H2_EGP19213, AP2/ERF_EGP27974, MYB_EGP33116, bHLH_EGP07396, bZIP_EGP13341, and ERF_EGP29469 were identified as central genes that may play crucial roles in the cold response of eggplants. Subsequently, a functional annotation analysis of the other genes within the co-expression network revealed enrichment in eight main aspects: signal transduction, plant hormone regulation, biosynthesis and metabolism, transcription factors, cell structure, ion binding, catalytic and transport activities, and abiotic stress (Fig. S3). Specifically, the transcription factors C2H2_EGP19213, AP2/ERF_EGP27974, and MYB_EGP33116 were co-expressed with hormone-related genes, including auxin (AUX1_EGP18001, IAA_EGP14445, ARF_EGP31412) and ABA (PP2C_EGP11066, SnRK2_EGP05474, SnRK2_EGP18530) genes. MYB_EGP33116, bZIP_EGP13341, and ERF_EGP29469 were co-expressed with redox homeostasis genes, including POD_EGP13161, GST_EGP00487, GST_EGP18109, GR_EGP00582, and APX_EGP30903. Finally, MYB_EGP33116, C2H2_EGP19213, and bHLH_EGP07396 were co-expressed with soluble sugar biosynthesis genes (SPS_EGP10783, SUS_EGP10783, GBE1_EGP00175, and UDPGA_EGP04238) and cold stress signaling genes (MPK_EGP00891, CAMAL_EGP04153, and CLR_EGP10353). In conclusion, MYB was the core hub of the network; these results suggest that MYB_EGP33116 may be a central gene coordinating plant hormones and TFs, which cooperatively regulate redox homeostasis, the MAPK signaling pathway, and soluble sugar biosynthesis, thus protecting eggplant seedlings from cold stress.

qRT-PCR validation of RNA-seq data

To verify the accuracy of the RNA-seq results, a total of 16 genes were randomly selected for qRT-PCR experiments, including EGP06774, EGP09992, EGP06499, EGP14379, EGP00263, EGP25367, EGP09657, EGP20989, EGP01302, EGP14445, EGP18001, EGP11066, and EGP18530. The gene expression levels of the two varieties at the five time points were measured. Pearson correlation analysis showed that the fold changes from qRT-PCR and RNA-seq were largely consistent (Fig. S4), indicating the reliability of the transcriptome sequencing.
Discussion

When plants face low-temperature stress, a cascade of cellular physiological activities is triggered, initiating a series of cold stress response processes to cope with environmental challenges. To elucidate the mechanism of the eggplant response to cold stress, the physiological and transcriptome response patterns of cold-tolerant and cold-sensitive eggplant varieties were compared and analyzed. First, the analysis of the physiological data showed that alterations in GABA, MDA, free proline, soluble protein, and soluble sugar content may be pivotal factors influencing the tolerance of eggplant varieties to low-temperature stress. Comparative transcriptome analysis of "A" and "B" indicated that cold stress had a significant impact on transcript abundance in both varieties, and notable differences in the cold-responsive gene profiles between the two varieties are evident (Figs. and ).
KEGG enrichment analysis showed that plant hormone signal transduction, starch and sucrose metabolism, and diterpenoid and tetraterpenoid synthesis may play important roles in the cold stress response of these eggplant varieties. In the subsequent sections, we delve into the interpretation of these crucial findings.

Abiotic stresses, such as cold stress, inflict damage on the cell membrane by inducing oxidative and lipid peroxidation processes. In this study, a significant increase in the MDA content in both "A" and "B" under low temperatures indicates heightened lipid peroxidation and membrane injury. Under oxidative conditions, the accumulation of uncontrolled free radicals prompts plants to employ enzymatic and non-enzymatic antioxidants to mitigate oxidative stress and maintain cellular homeostasis. POD, a key enzyme in the plant's enzymatic defense system during stress, collaborates with SOD and CAT to eliminate excess free radicals and enhance the plant's stress resistance. Previous research has shown increased POD activity in cold-tolerant banana varieties under cold stress, contrasting with a significant decrease in cold-sensitive varieties. Similarly, Zanthoxylum bungeanum exhibited enhanced POD activity after cold stress, crucial in reducing ROS accumulation. In our study, POD activity increased with prolonged cold stress, and the POD activity in "A" was notably higher than in "B" from 0 to 4 days. These results suggest that a robust antioxidant system enhances the ROS clearance efficiency of "A". Moreover, our research revealed a positive correlation between the expression levels of POD-coding genes and POD activity (p < 0.05), indicating that the upregulation of these genes contributes to increased POD enzyme activity. In summary, these findings demonstrate that reduced lipid peroxidation in "A" mitigates cell damage. Compared to "B", "A" exhibits a more robust antioxidant system, effectively eliminating ROS and adapting to cold environments.

Under low-temperature stress, plants synthesize substantial amounts of osmotically active substances to reduce the osmotic potential of cells and bolster their water retention capacity. In our study, the content of soluble proteins, soluble sugars, free proline, and GABA increased with the prolonged duration of cold stress. Throughout most of the cold stress period, "A" exhibited higher soluble protein content than "B", whereas "B" demonstrated higher levels of soluble sugars, free proline, and GABA. This discrepancy helps maintain a lower cell osmotic potential in "B", facilitating water absorption by plant roots and retaining water in cells. Proline is considered the primary osmotic agent in higher plants, preventing water loss, and its accumulation is a common physiological response to cold stress. Enhanced cold resistance in winter rapeseed under cold stress is associated with proline accumulation. Cold stress promotes proline accumulation in Brassica rapa L. ssp. pekinensis seedlings, with proline content positively correlated with the degree of cold stress. Conversely, research on pepper under cold stress indicates that the proline content in the cold-tolerant variety "FG" is higher during the early and late stress stages compared to the cold-sensitive variety "FX". Further investigation is needed to explore the relationship between cold stress and proline changes in plants.
Based on the results of this study, we posit that eggplants employ similar strategies to cope with low temperatures, with the antioxidant system and osmotic regulatory substances playing pivotal roles in managing low-temperature stress.

After prolonged exposure to cold stress, plants develop a strategy that involves coordinating cold and hormone signaling pathways, along with the MAPK cascade, to effectively cope with the challenges posed by low temperatures. The RNA-seq analysis unveiled the regulation of numerous genes associated with abscisic acid, auxin, and jasmonic acid under cold stress in eggplants. Notably, three hormone-related GO terms ("response to abscisic acid stimulus", "response to auxin stimulus", and "response to jasmonic acid stimulus") were significantly enriched among the differentially expressed genes in both "A" and "B" under cold stress (Fig. c, d). The significance of ABA core components in responding to low-temperature stress has been validated across diverse plant species. For instance, heterologous overexpression of the SnRK2 protein kinase gene from Agropyron cristatum (AcSnRK2.11) has demonstrated the ability to stimulate the growth of transgenic plants under normal conditions and enhance the tolerance of transgenic yeast and tobacco to cold stress. Heterologous overexpression of a common wheat gene, TaSnRK2.4, has been shown to improve cold resistance by fostering the accumulation of ABA and augmenting ABA signaling in Arabidopsis. The co-expression network analysis in this study identified the core components of the ABA signaling pathway, including PP2C and SnRK2, along with the downstream transcription factor ABF. These identified genes collectively form a comprehensive ABA pathway. Notably, the expression levels of three SnRK2-coding genes and one ABF gene exhibited a significant increase after 4 days in "A" compared to "B". This overall upregulation suggests that higher ABA accumulation may contribute to the stronger cold resistance observed in variety "A".

Auxin, principally indole-3-acetic acid (IAA), plays a crucial role in regulating plant cell division, differentiation, and responses to both abiotic and biotic stresses. In recent years, there has been growing research interest in understanding the impact of auxin on cold tolerance. In our study, we identified the auxin influx carrier gene AUX1, crucial for auxin signal transduction, along with the IAA gene and the downstream transcriptional activator ARF in the co-expression network. The expression levels of AUX1 (EGP18001) and ARF (EGP02058) increased in "A", promoting the accumulation of auxin and enhancing cold resistance. These findings underscore the significant role of plant hormones in eggplant cold stress responses.

The MAPK (mitogen-activated protein kinase) signaling pathway is a crucial intracellular signaling cascade that regulates various cellular responses, including responses to environmental stresses such as cold. MAPKs are enzymes that function by phosphorylating other proteins, thereby activating or inhibiting their activities. We identified two MAPK genes (EGP00891, EGP30996) with high connectivity in the co-expression network. However, the role of MAPK in the cold stress response is not yet well understood. In the cold-tolerant eggplant, the upregulation of the two genes encoding MAPKs suggests that these genes are actively involved in the plant's response to cold stress.
Upregulation typically indicates that the genes are being expressed at higher levels, which may lead to increased production of the corresponding MAPK proteins. These proteins, in turn, may phosphorylate and activate other proteins involved in cold tolerance mechanisms, such as transcription factors or other kinases. Moreover, these two MAPK genes in variety "A" were upregulated after 4 d of cold treatment, suggesting a higher efficiency of MAPK signal transduction in "A" than in "B". This may lead to the activation of multiple cold response pathways in "A". On the other hand, the downregulation of these two genes in cold-sensitive eggplants suggests that the MAPK signaling pathway may not be fully functional or effective in these varieties. This downregulation may reduce production of the MAPK proteins, which could impair the plant's ability to mount an effective response to cold stress. In summary, under low-temperature stress, differential gene expression may impact the biosynthesis of ABA and IAA, leading to an increase in the content of these hormones in eggplant seedlings. This overall hormonal modulation may induce cell signaling pathways, particularly MAPK, associated with stress tolerance, ultimately mitigating the adverse effects of stress on growth by elevating the levels of plant hormones such as ABA and auxin.

Transcription factors play a crucial role in regulating gene expression by specifically binding to the upstream promoter sequences of functional genes involved in secondary metabolism, defense responses, and growth and development. By orchestrating these interactions with specific genes, transcription factors contribute significantly to the overall phenotype and adaptability of plants in response to various internal and external stimuli. Several cold stress-responsive TF families have been identified and analyzed in many plant species. For instance, heterologous overexpression of soybean GmWRKY21 has been demonstrated to enhance Arabidopsis resistance to cold stress. Another study showed that heterologous overexpression of the WRKY transcription factor PmWRKY57 from plum blossom significantly enhanced cold resistance in Arabidopsis. CaNAC2 was strongly induced by low temperature, and virus-induced gene silencing of CaNAC2 in pepper (Capsicum annuum L.) seedlings increased their sensitivity to low temperatures. Additionally, GmNAC20 was identified as a positive regulator of salt and freezing resistance in transgenic Arabidopsis plants. In recent studies, multiple cold-responsive transcription factors have been identified in eggplants. The correlation of five key transcription factor families (MYB, AP2/ERF, bZIP, bHLH, C2H2) with cold stress in eggplants highlights their potential importance in mediating plant responses to environmental challenges. While previous studies have identified these transcription factors in relation to cold stress in various plant species, the specific regulatory mechanisms in eggplants require further elucidation. In particular, the role of MYB transcription factors in eggplant cold stress response mechanisms remains unexplored in the current literature. Investigating the involvement of MYB transcription factors in regulating cold stress responses in eggplants could offer valuable insights into novel regulatory pathways and contribute to a more comprehensive understanding of plant stress responses.
Some unique metabolites in plants are often associated with defense responses because they activate defense-related genes. The synthesis and accumulation of secondary metabolites in response to stress play a crucial role in enhancing a plant's resistance mechanisms. Studies have shown that under stress conditions such as low temperatures, plants can trigger the accumulation of compounds like flavonoids. Flavonoids are known for their antioxidant properties and have been associated with improving plant stress resistance by scavenging free radicals and protecting plant cells from oxidative damage. Sucrose can elevate metabolite levels and enhance specific enzyme activities through various pathways, thereby bolstering plant stress resistance. For instance, Yuan et al. observed a significant upregulation of genes encoding trehalose 6-phosphate synthase/trehalose 6-phosphate phosphatase (TPS/TPP) and trehalase (TREH) in the drought-tolerant millet variety DT43 under drought and melatonin treatment, leading to increased trehalose accumulation and improved drought resistance. Yang et al. conducted KEGG analysis on cold-tolerant and cold-sensitive coconut varieties subjected to low-temperature stress, revealing a substantial enrichment of differentially expressed proteins in starch and sucrose metabolism pathways. In the upland cotton genotype KN27-3, genes related to GDP-mannose, trehalose, and raffinose were highly expressed under low-temperature stress, promoting soluble sugar accumulation and protecting cotton from oxidative damage. In the cold-tolerant eggplant plants of our study, genes encoding sucrose phosphate synthase (SPS), glycolytic enzymes, and UDP-glucosyltransferases related to sucrose metabolism were expressed at significantly higher levels than in the cold-sensitive plants. This heightened expression likely maintained a more robust gene expression profile in the cold-tolerant plants, enhancing resistance to the osmotic imbalance induced by cold stress.

Terpenoids, a subclass of terpenes, are a diverse group of organic compounds found in plants that exhibit a wide range of biological activities, including antioxidant activity, and influence plant growth, development, and stress responses. Research by Li et al. has demonstrated that low temperatures can induce the accumulation of volatile terpene alcohols and their glycoside precursors in tea plants, enhancing their cold resistance. Similarly, Li et al. observed an increase in the content of terpenoids in sandalwood leaves under cold stress, contributing to enhanced stress resistance. In this study, the backbone genes related to terpenoid synthesis (SQS, CAS, CYP72A, and CYP94D) were identified in the co-expression network, and their expression levels were higher in "A" than in "B". By identifying and characterizing the genes involved in terpenoid biosynthesis and understanding how their expression is modulated in response to cold stress, researchers can gain insights into the molecular mechanisms underlying cold tolerance in plants. This knowledge can be leveraged to develop new strategies for enhancing cold tolerance in crop plants, improving their performance under adverse environmental conditions. These findings present a more comprehensive research approach to investigating plant stress resistance, spanning the physiological, transcriptional, and protein levels. This integrated approach better elucidates the cold response mechanism in plants.
Using the identified cold response factors, we established a comprehensive model illustrating the eggplant cold response mechanism, represented by a complex co-expression network (Fig. ). This model holds the potential to uncover the intricate response network pathways in eggplants under cold stress, offering valuable insights for studying the cold tolerance of other plant species.

Conclusion

The physiological and molecular response mechanisms to cold stress in eggplants are intricate, involving various factors. Physiological index detection and chemometric analysis revealed distinct regulatory factors for the cold-tolerant variety "A" and the cold-sensitive variety "B". In "A", POD and soluble proteins were pivotal for a more responsive reaction to cold stress, while osmotic regulators in "B" played a crucial role in the later stages of cold stress treatment. Genes associated with the glutathione, terpenoid, and starch and sucrose metabolism pathways were identified as key players in regulating the cold response. The construction of a co-expression network integrating physiological and transcriptome data illustrated that eggplants employ signal transduction, plant hormones, transcription factors, membrane transporters, and cell structure in response to cold stress. Core genes such as POD, PP2C, SnRK2, MYB, ERD7, DELLA, and PRR7, together with the transcription factors within the co-expression network, played central roles in the eggplant cold response. The findings from this study serve as a valuable reference for understanding the cold response mechanisms of plants, including eggplants, under low-temperature stress.

Materials and methods

Plant materials and cold treatment

Two S. melongena varieties, "E7134" ("A", a cold-tolerant variety) and "E7145" ("B", a cold-sensitive variety), were selected for cold treatment. Seeds of the two varieties were manually harvested at natural maturity. Before the experiment, all seedlings were grown in a greenhouse at 26 ± 1 °C under a 16/8 h (light/dark) photoperiod at an agricultural science facility in Sichuan Province (Chengdu, China). For cold treatment, eggplant seedlings at the 4- to 5-leaf stage were transferred to a growth chamber with a 16/8 h (light/dark) photoperiod and a temperature of 5 °C/10 °C (day/night). A total of 30 leaf samples from seedlings of the two S. melongena varieties were collected after 0, 1, 2, 4, and 7 d of treatment, with three biological replicates at each time point, immediately frozen in liquid nitrogen, and stored at -80 °C for further studies.

Determination of leaf physiological indices under 5 °C cold stress

To ensure sample integrity and prevent enzyme inactivation, 0.1 g of leaf tissue, previously frozen in liquid nitrogen, was weighed. Subsequently, 1 mL of extraction solution was added, and homogenization was carried out in an ice bath. After centrifugation at 8000 rpm for 10 min at 4 °C, the supernatant was carefully collected and kept on ice for further measurements. The activity of POD and the content of various metabolites (MDA, free proline, soluble protein, GABA, and soluble sugar) were quantified in 1 mL of leaf sample supernatant using specific kits, following the manufacturer's instructions (Solarbio, Beijing, China). Results were expressed as mean ± standard deviation, and statistical analysis was conducted using GraphPad Prism 7. Differential analysis of metabolites between the two varieties was performed using PCA and OPLS-DA.
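The OPLS-DA and VIP screening can be sketched with the Bioconductor package ropls; the input objects (`phys_scaled`, `variety`) are the hypothetical standardized matrix and class labels from the chemometric step, and the exact software the authors used for OPLS-DA is not stated.

```r
# Hedged sketch of the OPLS-DA / VIP analysis with ropls (Bioconductor).
# 'phys_scaled': standardized samples x indices matrix; 'variety': factor
# with levels "A" and "B".
library(ropls)

model <- opls(phys_scaled, variety,
              predI  = 1,   # one predictive component
              orthoI = NA)  # orthogonal components chosen automatically

vip <- getVipVn(model)      # VIP score per physiological indicator
vip[vip > 1]                # indicators with VIP > 1 are considered influential
                            # (POD = 1.09 and soluble protein = 1.12 here)
```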
RNA extraction, library preparation, RNA-Seq, and sequence assembly

For RNA sample preparation, 1 µg of RNA per sample served as input material. The concentration and purity of RNA were determined using a NanoDrop 2000 (Thermo Fisher Scientific, Wilmington, DE, USA), while RNA integrity was assessed with the RNA Nano 6000 Assay Kit on the Agilent Bioanalyzer 2100 system (Agilent Technologies, CA, USA). Sequencing libraries were constructed using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, USA), following the manufacturer’s guidelines. Index codes were added to attribute sequences to each sample. In brief, mRNA was isolated from total RNA using poly-T oligo-attached magnetic beads. Fragmentation was achieved using divalent cations at an elevated temperature in NEBNext First Strand Synthesis Reaction Buffer (5X). First-strand cDNA synthesis was performed using a random hexamer primer and M-MuLV Reverse Transcriptase. Subsequent second-strand cDNA synthesis was carried out using DNA Polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease/polymerase activities. After adenylation of the 3’ ends of DNA fragments, the NEBNext Adaptor with a hairpin loop structure was ligated for hybridization. To select cDNA fragments of approximately 240 bp in length, library fragments were purified using the AMPure XP system (Beckman Coulter, Beverly, USA). Subsequently, 3 µl USER Enzyme (NEB, USA) was applied to size-selected, adaptor-ligated cDNA at 37 °C for 15 min, followed by 5 min at 95 °C before PCR. PCR was executed with Phusion High-Fidelity DNA polymerase, Universal PCR primers, and Index (X) Primer. Finally, PCR products were purified (AMPure XP system), and library quality was assessed using the Agilent Bioanalyzer 2100 system. Clean data (clean reads) were obtained by eliminating reads containing adapters, reads containing poly-N, and low-quality reads from the raw data. Differential expression analysis between pairs of samples was carried out using edgeR, with the threshold for significant differential expression set at FDR < 0.01 and fold change ≥ 2.

Co-expression network analysis and hub gene identification

Gene co-expression network analysis was performed using the Weighted Gene Co-Expression Network Analysis (WGCNA) package (v1.72-1) in R. The analysis utilized RNA-Seq data from 30 samples (two varieties × five time points × three replicates). The key parameters included mean FPKM ≥ 1, a module-merging similarity threshold of 0.25, and a minimum of 30 genes per module. WGCNA divided genes with different expression levels into 12 modules, with distinct colors representing each module. The correlation between each module and cold stress duration, as well as the correlation with the physiological indicator data, was calculated. DEGs were assigned to different modules using the Dynamic Tree Cut algorithm. Node size was adjusted based on the number of genes linked to a specific gene. Within each module, node genes with kME > 0.9, ranked by kME up to the top 150, were selected as hub genes representing the overall expression trend of the respective module. Connectivity, defined as the sum of weights from all edges of a node, was used to assess the node’s importance.
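A minimal R sketch of the WGCNA step, mapping the parameters stated above to function arguments. The fpkm object and the soft-threshold power are assumptions (the power would normally be chosen with pickSoftThreshold), so this is illustrative rather than the authors' script.

library(WGCNA)

expr <- t(fpkm[rowMeans(fpkm) >= 1, ])    # samples x genes, mean FPKM >= 1 filter

net <- blockwiseModules(expr,
                        power          = 9,     # placeholder soft threshold
                        minModuleSize  = 30,    # >= 30 genes per module
                        mergeCutHeight = 0.25,  # module-merge threshold of 0.25
                        numericLabels  = FALSE) # label modules by color

kme  <- signedKME(expr, net$MEs)          # module membership (kME) per gene
blue <- net$colors == "blue"              # one example module
k    <- kme[blue, grep("blue", colnames(kme))]
names(k) <- colnames(expr)[blue]
hubs <- head(names(sort(k[k > 0.9], decreasing = TRUE)), 150)  # top 150 by kME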
Real-time quantitative PCR validation of DEG results

Total RNA was extracted from leaves treated with cold stress using the TRIzol reagent (Invitrogen, USA). Reverse transcription, using RNA as a template, was performed with the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa, Japan). The specific primers used in this study were designed with Primer 5 v5.5.0 using RefSeq sequences and are listed in Table S4; the β-tubulin gene was used as an internal reference. Quantitative real-time PCR (qRT-PCR) was carried out on a CFX96 Touch Real-Time PCR System (Bio-Rad, USA) using the SYBR-based TB Green Premix Ex Taq II (TaKaRa). Three independent biological and technical replicates of each sample were subjected to qRT-PCR analysis. The 2^(−ΔΔCT) method was used to calculate relative expression levels. Subsequently, Pearson’s correlation analysis between the data obtained by RNA-seq and qRT-PCR was performed following Guo et al., and the results were imported into TBtools to visualize a heat map of expression levels.
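For illustration, a base-R sketch of the 2^(−ΔΔCT) calculation and the cross-platform correlation; the ct data frame, its column names, and the rnaseq_l2fc vector are hypothetical placeholders, not the study's data.

# Mean Ct values per gene: target/tubulin in cold-treated vs. control samples
dct_treat <- ct$target_treat - ct$tubulin_treat   # dCt, cold-treated
dct_ctrl  <- ct$target_ctrl  - ct$tubulin_ctrl    # dCt, control (0 d)
rel <- 2^(-(dct_treat - dct_ctrl))                # relative expression, 2^(-ddCt)

# Pearson correlation between qRT-PCR and RNA-seq log2 fold changes
cor.test(log2(rel), rnaseq_l2fc[ct$gene], method = "pearson")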
Supplementary Material 1 is available as electronic supplementary material.
Integrated Case-Based Learning Session for Breast and Upper Limb Anatomy

By the end of this activity, learners will be able to:
1. Apply basic anatomy of the breast, pectoral, and axillary regions to a clinical case.
2. Describe the anatomical relationships of the breast, pectoral, and axillary regions.
3. Interpret histological images and relate these to a clinically relevant scenario.
4. Describe the development of the muscles and nervous system structures.
5. Practice strategies for learning anatomy.

The topics of anatomy (i.e., gross anatomy, histology, and embryology) are often taught early in the first-year medical curriculum. Traditional modes of gross anatomy instruction include dissection, prosection, and traditional lectures; more recently, computer-based learning, such as 3D models and online image banks, has emerged. However, these teaching methods do not offer many opportunities for integrating clinical knowledge, particularly with gross anatomy. Recent calls advocate for increased integration between the basic and clinical sciences and more opportunities for students to engage in critical thinking and innovation. This has resulted in a significant reduction in the number of course hours dedicated to gross anatomy and histology and an increased presence of gross anatomy, histology, and embryology as components of integrated curricula rather than as stand-alone courses. While increased integration of basic and clinical sciences content has been shown to improve retention, the decrease in time dedicated to these foundational topics puts increased pressure on students to learn the already high-volume, complex information. According to the AAMC, students who are members of racial and ethnic groups that are “underrepresented in the medical profession relative to their numbers in the general population” are considered to be underrepresented in medicine (URiM). These students are particularly susceptible to such pressures, as they are more likely to enter medical school with a lower MCAT score and GPA than their White and Asian colleagues. These differences in academic achievement have been shown to persist beyond the initial stages of medical school, with an NBME review demonstrating that White students perform better on USMLE Step 1, Step 2 CK (Clinical Knowledge), and Step 3 exams compared to URiM students, even when controlling for factors such as undergraduate GPA and MCAT score. Prematriculation programs are one approach that medical schools have increasingly been using to ease this transition and encourage the success of students. These programs target incoming medical students, some of whom are considered URiM, and typically provide an introduction to the first-year curriculum. While further research is needed to more robustly assess the efficacy of these programs, there is evidence to suggest that they help students retain material, increase confidence, make connections among peers, and improve academic performance; they may even be useful for identifying students truly at risk of academic difficulties in medical school. Preparing students for professional curricula is arguably the primary goal of prematriculation programs, and many medical education curricula utilize an active learning method called case-based learning (CBL), which is an effective means of integrating the basic and clinical sciences through clinical cases and presentations.
CBL sessions encourage application of knowledge while engaging critical thinking skills and are well received by students due to their real-life application, interactive nature, and utility in identifying knowledge gaps. Because of these characteristics, CBL is ideally suited for teaching students basic science and clinical knowledge as well as how to effectively use and adapt study techniques. While some undergraduate courses are incorporating CBL into their curricula, CBL is used more frequently with upper-level students such as those in medical, nursing, and graduate school. Many students entering medical school may be unfamiliar with this teaching method, which can create difficulties in learning the content, as adjusting to CBL can be challenging for students with less prerequisite knowledge and/or experience with this teaching method, especially if the case questions are more open-ended. Numerous publications in MedEdPORTAL focus on one or more of the concepts of anatomy, embryology, and histology. While some are designed to be accessible to first-year medical students, few integrate all three content areas, and none do so for basic, foundational concepts of histology and embryology. We identified an opportunity to implement integrated anatomy CBLs in our prematriculation program, known as the Leadership, Engagement, Achievement, Development (LEAD) Scholars program, at the Indiana University School of Medicine (IUSM). The program was developed to support the transition of students, especially those considered URiM based on the AAMC definition, into their first year of medical school. The first of three CBLs utilized in the program is described here. Our CBL session fills a gap by integrating three anatomical content areas at a level appropriate to the knowledge base of first-year medical students at the start of their medical training. The case presents a clinical scenario based around the upper limb and challenges students to apply knowledge of anatomy and the foundational concepts of embryology and histology. An additional novelty of our case session is that we designed it to provide opportunities for students to practice and adapt study techniques within the exercises of the session, which is an important skill for students to master early to facilitate success throughout medical school.

Curricular Context

The CBL was completed as a component of the LEAD Scholars prematriculation program, which took place over one 4-week period during the summer of 2022 and again in 2023 immediately before the first year of medical school. Participating students were either matriculating first-year medical students ( n = 43) or medical students repeating their first year ( n = 15). During the first week of the program, students received instruction in personal and professional development as well as study strategies for success in medical education. During the following 2-week period, which was meant to simulate the pace and volume of the first of three blocks of the first-year medical anatomy course at IUSM (i.e., Human Structure), students received 21 prerecorded lectures, each an hour long, covering regional gross anatomy of the upper limb, early embryology, histology of basic tissues, and an introduction to radiology. Students also participated in seven 2-hour gross anatomy lab sessions where they alternated regional dissection and examining prosected (i.e., previously dissected) donors with the assistance of instructors and an online dissector.
In the final week of the program, students participated in three CBL small-group sessions (one per day) before taking final exams. A 40-item gross lab practical primarily assessed the identification of structures, with one-quarter of questions assessing the function or clinical relevance of a tagged structure. An 80-item, computer-based examination assessed lecture content in gross anatomy, histology, and embryology and included 15 items based on histology imaging. These examinations were comparable in structure and question types to those the students would encounter in the integrated Human Structure course. The three CBL sessions integrated all content areas covered in the 2-week instructional period. The first session, described here, covered the anatomy, embryology, and histology of the upper limb, breast, and associated structures.

Team Formation

Students were assigned alphabetically to groups of six to eight, which also included each member of their laboratory dissection group. Students had already had 2 weeks of experience with their lab group members, which helped provide a safe space for them to discuss the answers to questions. Small groups of this size (as supported by the literature) were large enough to promote interaction but not too large to risk the voices of some members not being heard.

Description of Advance Preparation Resources

Session objectives, a list of preparation resources, and prework activities were located on the Canvas learning management system at the start of the course. Assigned prework included reviewing specific lectures related to the case (breast, pectoral region, and axilla; brachial plexus lesions; etc.) and completion of laboratory sessions. Lectures and laboratory sessions were scheduled in the first 2 weeks of the program and were listed as a reminder to review. Also included in the prework were two tables for students to complete. The tables helped the students to synthesize information prior to the event and were an effective tool for compartmentalizing information for recall and retention.

Learning Activity

The sessions were primarily facilitated in 2022 by a third-year medical student who developed all elements of the session and in 2023 by a doctoral student in anatomy education. A faculty member in the IUSM Department of Anatomy, Cell Biology & Physiology who was an instructor for the Human Structure course oversaw development of the sessions and was present as an assistant facilitator. The sessions took place in a large team-based learning (TBL) classroom with U-shaped tables, each equipped with a computer and TV monitor. Facilitators utilized a central computer to project a PowerPoint onto the students’ screens and could release the monitors for the students to begin working on activities associated with the CBL at their own computer stations. One student was the lead for each group and entered responses to the questions from the computer at their station (other students could complete the assignments on their personal devices, such as laptops and tablets, if they desired). Between the activities, a facilitator debrief took place to answer the questions interactively with the groups. Students completed pre- and postsession quizzes and a postsession survey on their own devices. At the beginning of the session, students were given 15 minutes to individually complete the 10-item presession quiz, which included questions on gross anatomy, histology, and embryology. This quiz was closed book and closed note.
Following completion of the quiz (which was not reviewed), students worked as a group to answer the questions and address the tasks in the CBL; the facilitator presented the PowerPoint and followed the timing constraints given in its slides. The facilitator stopped to debrief, address the questions in the activity, and answer any other questions before moving to the next activity. Each case consisted of clinical scenarios and questions that required students to interpret images, complete matching exercises, and make diagrams, flowcharts, or tables; these were fully integrated, asking students to recall and apply information from all three disciplines of gross anatomy, embryology, and histology. After all case activities had been completed, students individually took the postsession quiz, which was identical to the presession quiz (except for mixing of question and response order), and filled out the voluntary postsession survey. In all, the session was completed in approximately 3 hours ( gives the timing of each activity).

Evaluation

To evaluate students’ preparedness and the sessions’ effectiveness, students completed individual pre- and postsession quizzes, as well as a postsession survey. The pre- and postsession quizzes featured multiple-choice knowledge questions to gauge whether the case and resultant discussion were effective in improving students’ knowledge. The postsession survey was administered directly after the session to elicit students’ feedback on whether the CBL successfully integrated the content areas and facilitated their learning. Students rated their level of agreement with five items using a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). The survey also included open-ended questions that encouraged students to reflect on their preparation and general perceptions of the session. Pre- and postsession quiz scores were compared using a Wilcoxon signed rank test. Response rate and average statement agreement were calculated for the Likert items. Free responses were analyzed using content analysis. This project was deemed exempt by the Indiana University Institutional Review Board.
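A minimal R sketch of the quantitative analyses named above; the pre, post, and likert objects are hypothetical per-student data, not the study's dataset.

# Paired comparison of pre- vs. postsession quiz scores
wilcox.test(post, pre, paired = TRUE)        # Wilcoxon signed rank test

# Survey summaries: average agreement per Likert item and response rate
round(colMeans(likert, na.rm = TRUE), 2)     # 5-point scale, one column per item
mean(complete.cases(likert))                 # proportion of students responding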
Fifty-one students participated in the CBL session (2022: n = 12, 2023: n = 39). Students performed significantly better on the postsession quiz, with average scores of 57% (presession) and 69% (postsession), respectively ( p < .001). Overall, 30 of the 51 students (2022: n = 10, 2023: n = 20) completed the postsession survey (59% response rate). The CBL was well received by students, who all agreed the session improved their understanding of the material, integrated anatomy content, and was well organized. Free responses indicated students enjoyed the small-group collaboration and being able to work in teams. Others commented on the effectiveness of the facilitator discussions, with one student stating that “the large group discussion helped to solidify the content.” While many students noted that they had reviewed the lectures and begun making notes, they could have improved their study techniques for the session by including self-testing and other active recall techniques. The fast-paced, high-volume nature of medical education curricula presents a uniquely challenging situation to incoming students to quickly adapt their study techniques to meet this demand. Medical educators have called for an increased emphasis on integrated content, providing an opportunity for more active learning sessions within medical education. The CBL session we describe addresses both of these areas by incorporating opportunities for students to practice study techniques within a session with fully integrated content.
This session offers the opportunity to ease students’ transition into the rigorous medical school curriculum and to introduce core content areas in a low-stakes environment. Our results demonstrate that CBL sessions are a viable means of providing opportunities to incoming first-year medical students to practice, adapt, and evaluate study techniques while delivering integrated anatomy, embryology, and histology content. The significant improvement in scores between the pre- and postsession quizzes suggests that the session activities and accompanying discussion helped to further students’ knowledge and understanding of the content. This is further supported by the postsession survey results, where all students agreed the session improved their understanding of the material through the real-life application of the cases and allowed them to evaluate the effectiveness of their study techniques. Together, our results indicate that the activities and content within the sessions encouraged students to evaluate their own study techniques and, perhaps more importantly, reflect on how they might change them to be more effective going forward. This engaged students in a form of self-regulated learning, a process involving cycles of preparation (based on factors such as past knowledge and experience), strategy implementation, and outcome evaluation (which gives information on the efficacy of a student's preparation and allows the opportunity for changes to occur for the next cycle). This process of self-reflection and evaluation is an important step in students’ development of self-regulated learning strategies, which are critical for success in higher education environments such as medicine. The generalizability of our results is limited by a small sample size of students in a formative, prematriculation program at a single institution. The prematriculation program invited a small, select group of students whose participation was entirely voluntary. Thus, scores on assessments were formative and did not impact students’ current standing in the program or in medical school. These factors may have influenced students’ study techniques and motivation to study outside the scheduled program time. Despite this, our results demonstrate the utility of CBL sessions for both integrating content and teaching study techniques to incoming medical students and support other research suggesting that CBL encourages students to engage in self-directed learning and identify learning gaps and strategies to fill them. Our work expands the already robust literature highlighting the utility of CBL for medical education and integrating content by demonstrating an efficient means with which to introduce students early in training to study strategies for approaching integrated anatomy content with a clinical focus. Future directions include conducting a follow-up to explore the degree to which students implement changes to their study strategies following the session, to explore the strategies they ultimately find useful within their medical school courses, and to determine whether the integrated CBL sessions assist them in making connections between anatomy, histology, and embryology content in the Human Structure course. We developed a CBL session accessible to incoming first-year medical students that integrated three core content areas of medical anatomy education as well as incorporating opportunities to practice and evaluate study strategies useful for succeeding in medical education.
Especially as medical knowledge continues to increase while the time allotted for teaching it remains stable, educational modalities such as these that efficiently teach a number of concepts and skills will continue to grow in value and importance.

Required Prework.docx
Pre- and Postsession Quiz.docx
CBL Questions and Activities Key.docx
Case 1 Facilitator Slides & Instructions.pptx
Postsession Survey.docx

All appendices are peer reviewed as integral parts of the Original Publication.
Pharmacogenomics of in vitro response of the NCI-60 cancer cell line panel to Indian natural products

History of Ayurveda

Ayurveda is a traditional system of medicine that originated around 3000–4000 BCE, which utilizes Indian natural products (INP) derived mainly from plants to treat “imbalances” in the body, aiming to cure a variety of diseases, including cancer. In the Ayurvedic system of herbal medicine, there are 3 main physiologic states called doshas, which are based on several phenotypic (body frame, weight, facial features) and mental (memory, emotional lability) factors. A fundamental belief in Ayurvedic medicine is that an imbalance in these doshas leads to disease and illness, which are purported to be corrected by a combination of these herbal remedies. Historical references in Ayurvedic texts contain some of the first descriptions of cancer (blood and soft tissue) and their successful treatment with a combination of INPs administered via oral and topical routes. However, results reported in these historical references are difficult to replicate due to the use of multiple herbal products in combination, a difference in basic disease terminology, and heterogeneity in preparation of the herbal compounds. Despite the uncertain efficacy of these INPs, Ayurvedic medications have been reported to be used by as many as 20–40% of patients with cancer in India, as they are believed to prevent chemotherapy-related toxicity, boost immunity, and slow tumor growth. Knowledge of the putative anticancer mechanisms of action of individual molecular compounds comprising the INPs is incomplete; however, some in vitro and in vivo data for several commonly used INPs exist and are discussed below.

Examples of Indian natural products

Curcumin is a bioactive polyphenol that is the most common curcuminoid, a group of compounds that impart a yellow color to Curcuma longa (turmeric). Curcumin has generated a lot of interest as an INP with possible chemo-preventative, anticancer, and anti-inflammatory properties, highlighting the difficulty of defining a specific indication due to its description as a panacea. Some reports have demonstrated the modest activity of curcumin to induce apoptosis in cancer cell lines, its role in enhancing response to cisplatin, and its anti-inflammatory properties. These findings have led to many trials, including active clinical trials in the US ( NCT02064673 , NCT02944578 , NCT02782949 ) exploring the role of curcumin as a chemo-preventative agent in preventing gastric cancer, cervical intraepithelial neoplasia, and the recurrence of prostate cancer. Neem ( Azadirachta indica ) is another commonly used herbal product that has several component INPs with reported anticancer properties, which highlights the difficulty in isolating active INP compounds. Nimbolide is a terpenoid lactone derived from Neem that induces apoptosis in pancreatic cancer cells through reactive oxygen species (ROS) generation and upregulation of pro-apoptotic proteins. Gedunin, a pentacyclic triterpenoid derived from Neem, has also demonstrated activity in pancreatic cancer through inhibition of the sonic hedgehog pathway. These mechanisms of action of multiple INPs from the same herbal product make it difficult to attribute the activity of INPs, which is further complicated as many patients taking INPs receive combinations of several herbal products. Amla ( Phyllanthus emblica ), a.k.a.
Indian gooseberry, is part of the genus Phyllanthus , which has been used in traditional herbal medicine to treat multiple ailments. The Phyllanthus genus includes several species (e.g., P. niruri , P. urinaria , P. fraternus , etc.) which have been used to treat a wide range of ailments, from diabetes to renal calculi. Although anecdotal reports of use of Amla to treat cancer are lacking, some active molecules in Amla have been studied more extensively, including quercetin. Quercetin, a polyphenolic flavonoid derived from P. emblica , has been shown to attenuate tumor growth in breast and pancreatic cancer models through multiple mechanisms, including growth signal inhibition of the PI3K pathway and tyrosine kinase inhibition. Cucurbitacins are a group of compounds characterized by a triterpene hydrocarbon skeleton, which are found in over 40 species, including Indian plants such as Brahmi ( Bacopa monnieri ) and bitter gourd ( Momordica charantia ). These plants, which are known for their bitter taste due to the cucurbitacins, are purported to prevent cancer and are administered orally as a liquid formulation. While cucurbitacin B is one of the more extensively studied cucurbitacins, its putative anticancer mechanism of action is not well defined; however, this product is thought to be involved in JAK/STAT pathway inhibition and F-actin cytoskeleton disruption. While putative anti-cancer mechanisms of action have been suggested for commonly used INPs as detailed above, these data are often limited to in vitro response in one or a few cell lines. Data regarding rarer INPs, including plumbagin ( Plumbago zeylanica ), alizarin ( Rubia cordifolia ), and Achilleol A ( Achillea odorata ), are limited or have not yet been reported. Analysis of data from a large database of cell line assay results, such as the NCI-60 cancer cell line panel data, for the purpose of determining a mechanism of action may improve our understanding of these INPs.

NCI-60 cell line panel

Our overall strategy to explore the possible mechanisms of action of INPs was to use publicly available data to compare patterns of cell line response to each INP with those for standard reference anticancer compounds, and to identify clusters (subtrees) of INPs with similar patterns of response across the NCI-60 cell lines. Next, we examined the association of gene expression levels and of clinically or biologically important single nucleotide variants (SNVs) with response to individual INPs. We also examined how the molecular features associated with tumor cell line responses to individual INPs were distributed among the INP subtrees that had similar patterns of response. Lastly, we investigated the biological pathways representing the gene expression patterns that were associated with different INP subtrees. These analyses provided new insights into potential mechanisms of action of the INPs. To examine the activity of INPs in tumor cells, we analyzed publicly available data from the NCI-60 cancer cell line panel. The NCI-60 initiative was started by the U.S. National Cancer Institute (NCI) in 1989 with the purpose of screening candidate anti-cancer compounds on 60 cancer cell lines representing 10 different tumor types. Over 100,000 compounds have been screened to date, including INPs and well-characterized reference compounds approved for clinical use (e.g., paclitaxel, methotrexate, and other agents).
The Developmental Therapeutics Program (DTP) of the NCI screens these compounds using a single high-dose test to meet pre-specified minimum inhibition criteria and subsequently screens each qualifying compound in a 5-dose screen using a 48 h endpoint measured by a Sulforhodamine B stain. The screen records GI50, IC50, LC50, and total growth inhibition (TGI) cell response values, which are used to generate unique patterns across cell lines. To interrogate this rich dataset, the COMPARE algorithm was developed to allow comparisons of response patterns (across cell lines) of synthetic and natural products of interest with standard reference compounds to help determine their putative mechanisms of action. Additionally, molecular features of the NCI-60 cell lines have been extensively characterized. Their gene expression, whole exome sequencing, and other molecular data have been made publicly available. These data were integrated into online databases and made available through the CellMiner and CellMinerCDB data portals, which allow access to gene expression, genetic variation, and drug sensitivity data. Measures of response of the cell lines to a large number of drugs and investigational compounds, including some natural products, are also publicly available from the NCI DTP NCI-60 Growth Inhibition data repository. Combined, these data provide an opportunity to assess gene-drug relationships. Thus, the NCI-60 resource offers a robust dataset that may be interrogated to increase our understanding of INPs and their mechanisms of action.
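For orientation, the growth-percent endpoint behind these measures can be written as follows; this is a standard formulation of the DTP methodology (T = treated-well signal at 48 h, T0 = time-zero signal, C = untreated control), included here as background rather than quoted from this article:

\[
\mathrm{Growth}\%(c) \;=\; 100 \times \frac{T(c) - T_{0}}{C - T_{0}} \quad (T \ge T_{0}),
\qquad \mathrm{GI50} = c \ \text{such that} \ \mathrm{Growth}\%(c) = 50 .
\]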
Figure summarizes the workflow of the steps of the analyses in this study.

Collection of Indian natural products and reference compounds with cell line response data

A biomedical literature search in PubMed at the National Center for Biotechnology Information (NCBI) using the keywords “Ayurveda” AND “cancer” AND “review” was conducted to identify Ayurvedic herbs of interest, with a total of 170 publications found. Each publication was manually reviewed. Among them, 25 publications contained a comprehensive description of one or more Ayurvedic herbs and their specific INPs that are commonly used by Ayurvedic practitioners in cancer treatment. These INPs were included in subsequent searches. All INPs identified in our manual curation were then searched in PubMed for evidence of any activity in cancer cell lines and were compiled, resulting in a total of 258 INPs. The NCI DTP screening program uses a special identifier, called an NSC number, for each compound screened in the NCI-60 cell line panel. Those INPs obtained from our literature search that did not have NSC numbers ( n = 66) were excluded from further analysis. The unique NSC numbers for the remaining INPs ( n = 192) identified from the biomedical literature were interrogated using the NCI PUBLIC COMPARE portal for available GI50 data ( https://dtp.cancer.gov/public_compare ). Each GI50 value represents the sensitivity of an NCI-60 cell line to a particular compound, calculated as the concentration producing 50% growth inhibition, derived from the 5-concentration screen of each compound at 48 h after incubation. Those INPs with only single-dose response data ( n = 117) were excluded. The remaining INPs ( n = 75) were used as input for separate queries in the NCI PUBLIC COMPARE portal. The public version of the NCI PUBLIC COMPARE database does not store the taxonomy and global locations of the original source products for the database compounds. The queries use Pearson correlation analysis to compare the vector of GI50 values across the NCI-60 cell line panel for each input INP to the vector of GI50 values for available COMPARE reference antitumor agents (including approved agents, e.g., methotrexate and vincristine, and experimental agents). We used a cutoff of the absolute value of the pairwise Pearson correlation coefficient, |r| > 0.5, to select the reference compounds with GI50 response profiles similar to each input INP. The NSC numbers of the 75 INPs and the 57 reference compounds that were correlated with at least one of those 75 INPs with |r| > 0.5 (Table ) were used to download publicly available -log10 GI50 data (negative log10 GI50, referred to as NLOGGI50 in the downloadable dataset) from the static public release at the DTP website NCI-60 Growth Inhibition data repository ( https://wiki.nci.nih.gov/display/NCIDTPdata/NCI-60+Growth+Inhibition+Data ). This dataset is currently available under previous releases (filename: NCI60_GI50_2016b.zip, June 2016 release downloaded on March 4, 2020). Details of the sample handling, preparation, and cell line testing methods followed to generate the data in this repository are described elsewhere. The NLOGGI50 values were multiplied by -1 in order to convert them to log10 GI50, a measure of cell line response to treatment. Here and below, we refer to these measures as logGI50. All logGI50 values that were not available were set to missing. The term “compound” is used to describe the INPs and reference compounds with available logGI50 data.
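The unit conversion and the COMPARE-style screen just described can be sketched in R as follows; the nloggi50 matrix and the inps/refs name vectors are placeholders, not objects from the study:

gi50 <- -nloggi50       # NLOGGI50 * (-1) = log10(GI50)

# Pearson correlation of each INP's GI50 profile with each reference
# compound's profile across the 60 cell lines (rows of 'gi50')
r <- cor(gi50[, inps], gi50[, refs],
         method = "pearson", use = "pairwise.complete.obs")

# Reference compounds correlated with at least one INP at |r| > 0.5
keep <- colnames(r)[apply(abs(r) > 0.5, 2, any, na.rm = TRUE)]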
As multiple experiments had been run for each compound, the median logGI50 across replicate experiments was calculated for each cell line-compound pair. These median logGI50 values for each NCI-60 cell line were computed for all 132 compounds using 15,199 experiment records. The majority of the data were screened in molar units, except for the product of Ricinus communis (NSC 15384), which had units of μg/ml and was not included in the clustering analysis for that reason. A more detailed description of the public COMPARE algorithm and the NCI-60 cell line panel can be found elsewhere.

Hierarchical clustering of the logGI50, logLC50 and TGI values of INPs and reference compounds

In order to identify groups of INPs with similar patterns of activity in the NCI-60 cell line panel, we employed hierarchical clustering of the INPs. The initial clustering to identify groups of compounds with similar response patterns was based on the logGI50 values (Fig. ). Reference compounds were also included in the clustering to provide information about possible mechanisms of action of each hierarchical cluster, or subtree, containing INPs with similar response. Clustering was based on pairwise Euclidean distances between each compound pair, which were calculated using the logGI50 values of the INPs and reference compounds in all 60 NCI-60 cell lines. A hierarchical tree based on these Euclidean distances was generated using the hclust function with the ‘average’ (UPGMA) linkage option and exported for further visualization using the ape package. Additionally, a 2-dimensional heatmap of the compounds and cell lines was generated from logGI50 values using heatmap.2 in the gplots package. We used RStudio v1.2.5033 for the clustering analysis. Further visualization and graphical representation of the hierarchical clustering of all compounds and of their individual subtrees was done using Dendroscope version 3.7.2. To augment the analysis of clusters of INPs and reference compounds using logGI50 values, we also performed separate clustering of compounds using logLC50 and TGI values, representing (on the log10 scale) the concentration producing 50% cell kill and the concentration for total inhibition of growth, respectively. Both logLC50 and TGI values were downloaded from the December 2021 release of the NCI-60 Growth Inhibition Data ( https://wiki.nci.nih.gov/NCIDTPdata/NCI-60+Growth+Inhibition+Data ). Values for all INPs and reference compounds were extracted, and median values were computed as detailed above. Pairwise Euclidean distances were calculated, and unrooted radial hierarchical trees were generated using the methodology described above. These trees were visualized and compared to the tree inferred using logGI50 values (Fig. ; Table ). Subsequent analyses of association of INP response with gene expression, gene enrichment, and single nucleotide variation data were performed using logGI50 values as the primary endpoint measure.
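A compact R sketch of this clustering step, with placeholder objects (gi50_long holding one row per experiment with columns compound, cell_line, and logGI50); it is illustrative rather than the authors' script:

med <- aggregate(logGI50 ~ compound + cell_line, data = gi50_long, median)
m   <- tapply(med$logGI50, list(med$compound, med$cell_line), mean)  # matrix

# Missing logGI50 values remain NA and are handled pairwise by dist()
d    <- dist(m, method = "euclidean")   # pairwise Euclidean distances
tree <- hclust(d, method = "average")   # 'average' = UPGMA linkage

library(ape)
plot(as.phylo(tree), type = "unrooted") # unrooted radial tree

library(gplots)
heatmap.2(m, trace = "none")            # 2-D heatmap, compounds x cell lines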
For expression analysis, we used log 2 transformed expression measures of 23,059 annotated transcripts, lncRNAs, and miRNAs which had been previously combined from five Affymetrix expression microarray platforms and normalized by the CellMiner development team . Cell lines for which there were no drug response data (MDA-MB-468) or no gene expression data (MDA-N) were excluded ( n = 2). For each gene-INP pair, Spearman correlation was computed to evaluate the association between pre-treatment gene expression and logGI50 in 58 cell lines. Benjamini–Hochberg procedure was applied to control the false discovery rate (FDR) across the 23,059 gene × 75 INP pairs. Gene-INP pairs with FDR-adjusted p < 0.05 were considered significant. A positive value of the Spearman correlation coefficient ρ indicated an association of higher gene expression with higher logGI50 values of an INP, i.e., with increased resistance to that INP. Similarly, negative values of ρ showed an association of higher gene expression with lower logGI50 values, i.e., with increased sensitivity to that INP. Here and below, the terms sensitivity and resistance were used to define the direction of the associations, as the analyses of logGI50 values were performed on the continuous scale. All genes with significant Spearman correlations were investigated to determine whether the gene involved in the gene-INP pair was associated with a known molecular mechanism of action of reference compounds that clustered in the same subtree with that INP. Gene set enrichment analysis Gene set enrichment analysis was performed using g:Profiler ( https://biit.cs.ut.ee/gprofiler/gost ), which is a regularly updated web-based utility that includes annotated pathway gene sets from KEGG, Reactome, and WikiPathways . Genes that were significantly associated with response to INPs (FDR adjusted p < 0.1) in each cluster were stratified to negatively and positively correlated groups (Supplementary Tables – ). GSEA analysis was performed on each gene group separately for each cluster, using the gene symbols as input for g:Profiler. A significance level for enriched pathways was set at p < 0.05 (FDR adjusted). Analysis of association of INP activity with single nucleotide variants To examine the association between NCI-60 cell line response to INPs and specific DNA alterations of cancer genes that may affect cytotoxicity response, whole exome sequencing (WES) data were downloaded from the CellMiner data download portal . One cell line (MDA-N) which did not have drug response data was excluded, leaving a total of 59 cell lines available for analysis. The data were filtered using a list of candidate genes and functionally relevant SNVs from OncoKB v. 1.17, a curated precision oncology knowledge base . As outlined in our earlier report , the list consisted of variants classified by OncoKB at levels 1–4 of potential therapeutic action, R1 and R2 levels of resistance, and variants classified as “oncogenic” and “likely oncogenic”. After applying this filter to the CellMiner WES data, 1,586 protein changing SNVs in 280 genes across 59 NCI-60 cell lines were identified. These SNVs, which included nonsynonymous changes, frameshift variants, and variants involving the stop codon or the loss of a translational initiation codon start site, were additionally filtered to include only variants present in at least 3 NCI-60 cell lines, resulting in 107 genes with 220 SNVs across 59 cell lines. 
A Student’s t -test was used to compare logGI50 values between groups of NCI-60 cell lines defined by variant status, for each SNV-INP pair. A positive value of the t -statistic indicated an association of higher gene expression with higher logGI50 values of an INP, i.e. increased resistance to that INP, whereas a negative value of the t -statistic showed an association of higher gene expression with lower logGI50 values, i.e. increased sensitivity to that INP. All analyses of associations between response to the natural products and sequence variants were performed using the RStudio v1.0.153. Biological interpretation of significant SNV-response associations was based on SNV annotation in OncoKB, using its updated annotation of levels of functional and oncogenic SNV effects as of 03/25/2021, and on published reports in biomedical literature. Visualization of associations of response to INPs with molecular features and with cellular pathways in the NCI-60 cell lines Visualization of significant associations (FDR adjusted p < 0.05) of logGI50 with gene expression and with single nucleotide variants, and of association of significantly upregulated and downregulated cellular pathways with INP subtrees was performed using Cytoscape v. 3.9.1 and Microsoft Excel.
Collection of Indian natural products and reference compounds with cell line response data

A biomedical literature search in PubMed at the National Center for Biotechnology Information (NCBI) using the keywords "Ayurveda" AND "cancer" AND "review" was conducted to identify Ayurvedic herbs of interest, returning a total of 170 publications. Each publication was manually reviewed. Among them, 25 publications contained a comprehensive description of one or more Ayurvedic herbs and their specific INPs that are commonly used by Ayurvedic practitioners in cancer treatment. These INPs were included in subsequent searches. All INPs identified in our manual curation were then searched in PubMed for evidence of any activity in cancer cell lines and were compiled, resulting in a total of 258 INPs. The NCI DTP screening program uses a special identifier, called an NSC number, for each compound screened in the NCI-60 cell line panel. INPs from our literature search that did not have NSC numbers (n = 66) were excluded from further analysis. The unique NSC numbers of the remaining INPs (n = 192) identified from the biomedical literature were queried in the NCI PUBLIC COMPARE portal ( https://dtp.cancer.gov/public_compare ) for available GI50 data. Each GI50 value represents the sensitivity of an NCI-60 cell line to a particular compound, calculated as the concentration producing 50% growth inhibition, derived from the five-concentration screen of each compound at 48 h after incubation. INPs with only single-dose response data (n = 117) were excluded. The remaining INPs (n = 75) were used as input for separate queries in the NCI PUBLIC COMPARE portal. The public version of the NCI PUBLIC COMPARE database does not store the taxonomy or global locations of the original source products for the database compounds. The queries use Pearson correlation analysis to compare the vector of GI50 values across the NCI-60 cell line panel for each input INP to the vector of GI50 values for available COMPARE reference antitumor agents (including approved agents, e.g., methotrexate and vincristine, and experimental agents). We used a cutoff of |r| > 0.5 for the absolute value of the pairwise Pearson correlation coefficient to select reference compounds with GI50 response profiles similar to each input INP. The NSC numbers of the 75 INPs and of the 57 reference compounds correlated with at least one of those 75 INPs at |r| > 0.5 (Table ) were used to download publicly available -log10 GI50 data (negative log10 GI50, referred to as NLOGGI50 in the downloadable dataset) from the static public release in the DTP NCI-60 Growth Inhibition data repository ( https://wiki.nci.nih.gov/display/NCIDTPdata/NCI-60+Growth+Inhibition+Data ). This dataset is currently available under previous releases (filename: NCI60_GI50_2016b.zip, June 2016 release, downloaded on March 4, 2020). Details of the sample handling, preparation and cell line testing methods used to generate the data in this repository are described elsewhere. The NLOGGI50 values were multiplied by -1 to convert them to log10 GI50, a measure of cell line response to treatment. Here and below, we refer to these measures as logGI50. All logGI50 values that were not available were set to missing. The term "compound" is used to describe the INPs and reference compounds with available logGI50 data. As multiple experiments had been run for each compound, the median logGI50 across replicate experiments was calculated for each cell line-compound pair.
These median logGI50 values were computed for each NCI-60 cell line for all 132 compounds, using 15,199 experiment records. Most compounds were screened in molar units; the product of Ricinus communis (NSC 15384) was screened in μg/ml and was therefore excluded from the clustering analysis. A more detailed description of the public COMPARE algorithm and the NCI-60 cell line panel can be found elsewhere.
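The data preparation described above can be summarized in a short R sketch. This is a minimal illustration rather than the authors' actual code; the file name and column names (NCI60_GI50.csv, NSC, CELL_NAME, NLOGGI50) are assumptions that would need to be matched to the layout of the actual June 2016 release.

```r
# Minimal sketch of the response-data preparation; column names are assumed.
gi50 <- read.csv("NCI60_GI50.csv", stringsAsFactors = FALSE)

# NLOGGI50 is the negative log10 GI50; multiply by -1 to obtain logGI50
gi50$logGI50 <- -1 * gi50$NLOGGI50

# Median logGI50 across replicate experiments for each cell line-compound pair
med <- aggregate(logGI50 ~ NSC + CELL_NAME, data = gi50, FUN = median)

# Reshape to a compounds x cell lines matrix; unscreened pairs remain NA
wide <- reshape(med, idvar = "NSC", timevar = "CELL_NAME", direction = "wide")
logGI50_mat <- as.matrix(wide[, -1])
rownames(logGI50_mat) <- wide$NSC
```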
Hierarchical clustering of the logGI50, logLC50 and TGI values of INPs and reference compounds

To identify groups of INPs with similar patterns of activity in the NCI-60 cell line panel, we employed hierarchical clustering of the INPs. The initial clustering to identify groups of compounds with similar response patterns was based on the logGI50 values (Fig. ). Reference compounds were also included in the clustering to provide information about possible mechanisms of action of each hierarchical cluster, or subtree, containing INPs with similar response. Clustering was based on pairwise Euclidean distances between compounds, calculated from the logGI50 values of the INPs and reference compounds in all 60 NCI-60 cell lines. A hierarchical tree based on these Euclidean distances was generated with the hclust function using the 'average' (UPGMA) linkage option and exported for further visualization using the ape package. Additionally, a two-dimensional heatmap of the compounds and cell lines was generated from the logGI50 values using heatmap.2 in the gplots package. We used RStudio v1.2.5033 for the clustering analysis. Further visualization and graphical representation of the hierarchical clustering of all compounds and of their individual subtrees was done using Dendroscope version 3.7.2. To augment the analysis of clusters of INPs and reference compounds based on logGI50 values, we also performed separate clustering of the compounds using logLC50 and TGI values, representing the lethal concentration producing 50% cell kill and the concentration (also on the log10 scale) producing total inhibition of growth, respectively. Both logLC50 and TGI values were downloaded from the December 2021 release of the NCI-60 Growth Inhibition Data ( https://wiki.nci.nih.gov/NCIDTPdata/NCI-60+Growth+Inhibition+Data ). Values for all INPs and reference compounds were extracted, and median values were computed as detailed above. Pairwise Euclidean distances were calculated, and unrooted radial hierarchical trees were generated using the methodology described above. These trees were visualized and compared to the tree inferred from the logGI50 values (Fig. ; Table ). Subsequent analyses of the association of INP response with gene expression, gene set enrichment, and single nucleotide variation data were performed using logGI50 values as the primary endpoint measure.
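The clustering step maps directly onto a few base-R and package calls. The sketch below assumes the logGI50_mat matrix from the previous sketch (rows = compounds, columns = cell lines); it is illustrative, not the exact published pipeline.

```r
library(ape)     # tree export and plotting (as.phylo, plot.phylo)
library(gplots)  # heatmap.2

# Pairwise Euclidean distances between compounds (NA pairs are dropped
# within each pairwise computation by dist)
d  <- dist(logGI50_mat, method = "euclidean")
hc <- hclust(d, method = "average")   # 'average' = UPGMA linkage

# Unrooted radial phylogram, as in the main figure
phy <- as.phylo(hc)
plot(phy, type = "unrooted", cex = 0.5)

# Two-dimensional heatmap of compounds vs cell lines
heatmap.2(logGI50_mat, trace = "none", na.color = "black",
          distfun   = function(x) dist(x, method = "euclidean"),
          hclustfun = function(x) hclust(x, method = "average"))
```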
Analysis of association of gene expression with INP activity

To examine how NCI-60 cell line response to INPs may be influenced by molecular genetic features, we analyzed the association of median logGI50 values with NCI-60 molecular data. Pre-treatment gene expression data for the NCI-60 cell lines were downloaded from the CellMinerCDB resource. A more detailed description of the collection of molecular measures can be found in our previous publication. For the expression analysis, we used log2-transformed expression measures of 23,059 annotated transcripts, lncRNAs, and miRNAs, which had previously been combined from five Affymetrix expression microarray platforms and normalized by the CellMiner development team. Cell lines with no drug response data (MDA-MB-468) or no gene expression data (MDA-N) were excluded (n = 2). For each gene-INP pair, the Spearman correlation was computed to evaluate the association between pre-treatment gene expression and logGI50 in the 58 remaining cell lines. The Benjamini-Hochberg procedure was applied to control the false discovery rate (FDR) across the 23,059 gene × 75 INP pairs. Gene-INP pairs with FDR-adjusted p < 0.05 were considered significant. A positive value of the Spearman correlation coefficient ρ indicated an association of higher gene expression with higher logGI50 values of an INP, i.e., with increased resistance to that INP. Similarly, negative values of ρ indicated an association of higher gene expression with lower logGI50 values, i.e., with increased sensitivity to that INP. Here and below, the terms sensitivity and resistance are used to define the direction of the associations, as the analyses of logGI50 values were performed on the continuous scale. All genes with significant Spearman correlations were investigated to determine whether the gene in the gene-INP pair was associated with a known molecular mechanism of action of reference compounds that clustered in the same subtree as that INP.
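A minimal sketch of this correlation screen is shown below, assuming two matrices with matching cell line columns: expr (genes x 58 cell lines, log2 expression) and resp (INPs x 58 cell lines, median logGI50). The plain loop over all 23,059 × 75 pairs is written for clarity, not speed.

```r
# All gene-INP combinations
res <- expand.grid(gene = rownames(expr), inp = rownames(resp),
                   stringsAsFactors = FALSE)
res$rho <- NA_real_
res$p   <- NA_real_

for (i in seq_len(nrow(res))) {
  # cor.test drops incomplete pairs; exact = FALSE avoids tie warnings
  ct <- cor.test(expr[res$gene[i], ], resp[res$inp[i], ],
                 method = "spearman", exact = FALSE)
  res$rho[i] <- unname(ct$estimate)
  res$p[i]   <- ct$p.value
}

# Benjamini-Hochberg FDR across all gene x INP pairs
res$fdr <- p.adjust(res$p, method = "BH")
significant <- res[res$fdr < 0.05, ]   # rho > 0: resistance; rho < 0: sensitivity
```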
Gene set enrichment analysis

Gene set enrichment analysis was performed using g:Profiler ( https://biit.cs.ut.ee/gprofiler/gost ), a regularly updated web-based utility that includes annotated pathway gene sets from KEGG, Reactome, and WikiPathways. Genes significantly associated with response to INPs (FDR-adjusted p < 0.1) in each cluster were stratified into negatively and positively correlated groups (Supplementary Tables – ). Enrichment analysis was performed on each gene group separately for each cluster, using the gene symbols as input for g:Profiler. The significance level for enriched pathways was set at p < 0.05 (FDR adjusted).
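The same query can be issued programmatically through gprofiler2, the R client for g:Profiler. The sketch below is a hedged illustration: the input vector stands in for one cluster/direction gene group, and the source codes restrict the query to KEGG, Reactome, and WikiPathways.

```r
library(gprofiler2)

# Illustrative input only; in practice this would be the significant genes
# of one subtree, split by correlation direction
genes_neg <- c("ATAD3A", "ATAD3B", "MYB")

gost_res <- gost(query             = genes_neg,
                 organism          = "hsapiens",
                 sources           = c("KEGG", "REAC", "WP"),
                 correction_method = "fdr",
                 user_threshold    = 0.05)

head(gost_res$result[, c("source", "term_name", "p_value")])
```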
Analysis of association of INP activity with single nucleotide variants

To examine the association between NCI-60 cell line response to INPs and specific DNA alterations of cancer genes that may affect cytotoxicity response, whole exome sequencing (WES) data were downloaded from the CellMiner data download portal. One cell line (MDA-N), which did not have drug response data, was excluded, leaving a total of 59 cell lines available for analysis. The data were filtered using a list of candidate genes and functionally relevant SNVs from OncoKB v. 1.17, a curated precision oncology knowledge base. As outlined in our earlier report, the list consisted of variants classified by OncoKB at levels 1-4 of potential therapeutic action, R1 and R2 levels of resistance, and variants classified as "oncogenic" and "likely oncogenic". After applying this filter to the CellMiner WES data, 1,586 protein-changing SNVs in 280 genes across 59 NCI-60 cell lines were identified. These SNVs, which included nonsynonymous changes, frameshift variants, and variants involving the stop codon or the loss of a translational initiation codon start site, were additionally filtered to include only variants present in at least 3 NCI-60 cell lines, resulting in 107 genes with 220 SNVs across 59 cell lines. A Student's t-test was used to compare logGI50 values between groups of NCI-60 cell lines defined by variant status, for each SNV-INP pair. A positive value of the t-statistic indicated an association of the variant with higher logGI50 values of an INP, i.e., increased resistance to that INP, whereas a negative value of the t-statistic indicated an association of the variant with lower logGI50 values, i.e., increased sensitivity to that INP. All analyses of associations between response to the natural products and sequence variants were performed using RStudio v1.0.153. Biological interpretation of significant SNV-response associations was based on SNV annotation in OncoKB, using its updated annotation of levels of functional and oncogenic SNV effects as of 03/25/2021, and on published reports in the biomedical literature.
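A minimal sketch of the variant-response comparison is shown below, assuming snv01 (SNVs x 59 cell lines, coded 0/1 for variant status) and resp (INPs x 59 cell lines, median logGI50) with matching columns; names and coding are assumptions.

```r
pairs <- expand.grid(snv = rownames(snv01), inp = rownames(resp),
                     stringsAsFactors = FALSE)
pairs$t <- NA_real_
pairs$p <- NA_real_

for (i in seq_len(nrow(pairs))) {
  mut <- snv01[pairs$snv[i], ] == 1
  # Student's t-test (equal variances); variant group first, so t > 0 means
  # higher logGI50 in variant lines (resistance), t < 0 means sensitivity
  tt <- t.test(resp[pairs$inp[i], mut], resp[pairs$inp[i], !mut],
               var.equal = TRUE)
  pairs$t[i] <- unname(tt$statistic)
  pairs$p[i] <- tt$p.value
}

pairs$fdr <- p.adjust(pairs$p, method = "BH")
```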
Visualization of associations of response to INPs with molecular features and with cellular pathways in the NCI-60 cell lines

Visualization of significant associations (FDR-adjusted p < 0.05) of logGI50 with gene expression and with single nucleotide variants, and of the association of significantly upregulated and downregulated cellular pathways with INP subtrees, was performed using Cytoscape v. 3.9.1 and Microsoft Excel.
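Cytoscape itself is interactive, but the network input can be prepared programmatically. A possible sketch, assuming the `significant` table from the correlation sketch above, writes an edge table that Cytoscape can import as a network.

```r
# Export significant gene-INP associations as an edge table for Cytoscape
edges <- data.frame(
  source      = significant$gene,
  target      = significant$inp,
  interaction = ifelse(significant$rho > 0, "resistance", "sensitivity"),
  rho         = significant$rho
)
write.csv(edges, "inp_gene_network.csv", row.names = FALSE)
```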
Hierarchical clustering of Indian natural products and reference compounds based on the logGI50 measures

Figure shows the hierarchical clustering of the Indian natural products and reference compounds based on their median logGI50 values, presenting the results as an unrooted radial phylogram. Clustering revealed 4 distinct subtrees. As Subtree 4 consisted of only reference products (NSC 326231, L-buthionine sulfoximine, and NSC 237020, largomycin), it was excluded from subsequent analysis. Supplementary Fig. provides a heatmap showing the two-dimensional clustering of the NCI-60 cell lines and the INPs and reference compounds, grouped according to similar patterns of cell line response based on logGI50 values. The similarity of logGI50 response patterns within each subtree may suggest similar potency of the INPs and their grouped reference products, and possibly similar mechanisms of action.
Subtree 1 (13 INPs and 18 reference products)

The reference compounds in this subtree mainly have anti-mitotic activity (vincristine sulfate, vinleurosine sulfate, vinblastine sulfate, paclitaxel); however, they also include agents that act as DNA intercalators (doxorubicin) and anti-metabolites (methotrexate). Some INPs of the cucurbitacin family and its derivatives (cucurbitacins A, B, D, E, L, datiscoside) affect mitotic spindles and delay mitosis, leading to a G2/M phase cell cycle arrest of cancer cells. Phyllanthoside has been demonstrated to function both in vivo and in vitro as an inhibitor of eukaryotic protein synthesis by interfering with translation elongation, similar to the reference compound actinomycin D. While a mechanism of action has not been clearly defined for tylophorin and its analog cryptoleurine, some experimental evidence points toward G1 arrest through cyclin A2 downregulation and VEGF2-mediated angiogenesis, which is not a known mechanism of any of the reference compounds correlated with its cytotoxicity.
Subtree 2 (34 INPs and 22 reference products)

The 22 reference compounds in this subtree had many different mechanisms of action; however, the majority fit into one of three classes: alkylators (piperazine, mitozolomide, BCNU, busulfan), ribonucleotide reductase inhibitors (pyrimidine-5-glycodialdehyde, caracemide, IMPY, hydroxyurea), or broad inhibitors of RNA synthesis (diglycoaldehyde, 3-deazauridine). The 34 INPs in this cluster consisted of a large group of cinnamon-based INPs and some Phyllanthus INPs.
Subtree 3 (25 INPs and 17 reference products)

The 17 reference compounds in this subtree consisted of a variety of alkylators (CCNU, methyl-CCNU, asaley), anti-metabolites (AT-125, 5-FU, DUP785, dichloroallyl lawsone), and DNA-crosslinking agents (carboxy-platinum). The 25 INPs included in Subtree 3 consisted of curcumin, curcuminoids, neem, and Calendula products.
Hierarchical clustering of Indian natural products and reference compounds based on the logLC50 and TGI measures

Supplementary Figs. and show the hierarchical clustering of INPs and reference compounds based on their median logLC50 or TGI values, respectively. The trees inferred using logLC50 and TGI were similar to each other, except for 12 compounds. Both the logLC50 and TGI trees comprised 5 distinct subtrees, as compared to 4 distinct subtrees in the logGI50 tree (Fig. , Supplementary Figs. – ). Table indicates, for each INP and reference compound, whether the compound clustered similarly with other compounds and was assigned to a subtree with the same number based on logLC50 and TGI as based on clustering of logGI50. A detailed comparison of the cluster assignment of the compounds based on the different response measures is provided in Supplementary Table . Clustering based on TGI was more similar to the logGI50-based clustering, whereas with the logLC50-based clustering more compounds differed from their logGI50-based cluster assignment (Supplementary Table ). These patterns of similarity and difference between the three trees may be explained by the fact that logGI50 and TGI both represent different degrees of growth inhibition and are both derived from the growth curve, whereas logLC50 is a different parameter representing the concentration needed to achieve 50% cell kill. Overall, the clustering was consistent for many INPs across the three response measures (Table and Supplementary Table ). It was less consistent for a number of reference compounds, possibly due to the higher potency of established anticancer drugs, which may result in lower concentrations being needed to achieve total growth inhibition (TGI) or the 50% lethal concentration (LC50) as compared to the INPs. Seven reference compounds from subtree 2 of the logGI50 tree formed a separate cluster (subtree 5) in both the TGI- and logLC50-based trees. Anti-mitotic reference compounds (e.g., vinblastine, vincristine) clustered closely together in logGI50 subtree 1; however, they were not tightly clustered in either the logLC50 or the TGI tree. The cluster assignment of many INPs (e.g., cinnamon and turmeric products) in both the logLC50 and TGI trees was similar to that in the logGI50 tree.
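Such cross-measure comparisons of cluster assignments can be tabulated directly from the trees. The sketch below is illustrative and assumes hc_gi50 and hc_tgi are hclust objects built as in the Methods (one per response measure), with k mirroring the number of subtrees observed in each tree.

```r
# Cut each tree into its observed number of subtrees
cl_gi50 <- cutree(hc_gi50, k = 4)
cl_tgi  <- cutree(hc_tgi,  k = 5)

# Cross-tabulate subtree membership for compounds present in both trees
common <- intersect(names(cl_gi50), names(cl_tgi))
table(logGI50 = cl_gi50[common], TGI = cl_tgi[common])
```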
Association of cell line response to INPs with gene expression

Using pre-treatment gene expression data for 23,059 transcripts and the median logGI50 values of the 75 INPs, we conducted a Spearman correlation analysis that identified 204 natural product-gene pairs (including 190 unique genes and 28 unique INPs) that were statistically significant after adjusting for multiple testing (FDR-adjusted p value < 0.05). All significant results are listed in Table and summarized in graphical format in Supplementary Fig. . Below we discuss some of the highly significant correlations involving biologically important protein-coding genes.
SLC7A11 and plumbagin (NSC 688284)

SLC7A11 (solute carrier family 7 member 11) has recently been suggested as a potential drug target in pancreatic adenocarcinoma. It plays a role in maintaining cellular glutathione levels via cystine uptake, protecting cells from oxidative stress-induced death, and is commonly overexpressed in cancer, which has been linked to chemoresistance to many anti-tumor agents. Deletion of the SLC7A11 gene in genetically engineered mice with pancreatic ductal adenocarcinoma induced tumor-selective ferroptosis and inhibited tumor growth. Targeting of the SLC7A11/glutathione axis with sulfasalazine has been shown to cause synthetic lethality via decreased cystine uptake and intracellular glutathione biosynthesis. Alternative strategies leveraging this metabolic addiction have also been demonstrated via inhibition of glucose uptake, preventing the conversion of potentially toxic cystine to cysteine. The strong positive correlation between SLC7A11 expression and logGI50 of plumbagin (Spearman correlation coefficient ρ = 0.79, unadjusted p value = 1.07 × 10^-13, FDR-adjusted p value = 8.47 × 10^-8) demonstrates increased resistance of tumor cell lines to plumbagin associated with increased SLC7A11 expression, which is consistent with previous findings by our group and other authors on the potential role of this transporter in resistance to multiple antitumor agents and natural products.
ATAD family and curcumin

ATAD3A and ATAD3B are mitochondrial ATPase proteins expressed in embryogenesis. ATAD3B has been shown to be over-expressed in head and neck cancer and hepatocellular carcinoma. Curcumin acts as a protonophoric uncoupler of oxidative phosphorylation, decreasing ATP biosynthesis, which alters the AMP:ATP ratio and ultimately decreases cell proliferation. The negative correlations for both the ATAD3A (Spearman correlation coefficient ρ = -0.57, unadjusted p value = 3.68 × 10^-6, FDR-adjusted p value = 0.04) and ATAD3B (Spearman ρ = -0.67, unadjusted p value = 1.29 × 10^-8, FDR-adjusted p value = 3.4 × 10^-3) genes demonstrate that increased sensitivity of cell lines to curcumin (i.e., lower logGI50 values) was associated with increased expression of the ATAD3A and ATAD3B genes.
MYB and phyllanthoside

MYB, a transcriptional activator, is a proto-oncogene that has been shown to be over-expressed in hematologic, colorectal, and breast cancer. The negative correlation (Spearman correlation coefficient ρ = -0.66, unadjusted p value = 1.69 × 10^-8, FDR-adjusted p value = 3.84 × 10^-3) demonstrates an association between increased sensitivity of cell lines to phyllanthoside and increased expression of the MYB gene. This suggests a potential role of MYB-mediated transcriptional regulation in response to this INP.
Biological pathway analysis

The results of the pathway analysis using g:Profiler are presented in Supplementary Tables – and summarized in graphical format in Supplementary Fig. . The analysis identified several biological pathways and molecular functions that may be associated with increased sensitivity or resistance to INPs. Below we discuss the pathways and molecular functions identified for Subtrees 1 and 3; Subtree 2 was not evaluable due to a paucity of significant genes. Among the INPs in Subtree 1, resistance to NSC number 328426 (phyllanthoside), 342443 (S3'-desacetyl-phyllanthoside), 94743 (cucurbitacin A), 143925 (pekilocerin A), and 112167 (elatericin B) was associated with pathways related to mineral homeostasis (Supplementary Table ). Due to an insufficient number of genes associated with sensitivity to INPs in Subtree 1, common biological processes for those genes and INPs could not be evaluated.
Subtree 3

Among the INPs in Subtree 3, response to NSC number 236613 (plumbagin), 643023 (alpha-phenyl-2,5-dimethoxy-alpha-cinnamonitrile), 365798 (piceatannol), and 112166 (cucurbitacin K), and sensitivity to 32982 (curcumin), 309909 (nimbolide), 87868 (phenethyl mustard oil), 742021 (curcumin tri adamantylaminoethylcarbonate), 742019 (ethoxycurcumin trithiadiazolaminomethylcarbonate), 705537 (daturaolone), 643769 (O-bromo-alpha-benzoyl cinnamonitrile), and 383468 (product of Andrographis paniculata), was associated with expression of genes involved in several molecular pathways (Supplementary Tables and ). Molecular functions associated with drug response in Subtree 3 include nucleic acid binding, heterocyclic compound binding, organic cyclic compound binding, and multiple aspects of protein synthesis, including various stages of translation and structural components of the ribosome.
Nuclear factor erythroid 2-related factor 2 (NRF2) pathway

NRF2 is a key transcription factor and modulator of the cellular antioxidant response with a role in preventing carcinogenesis. However, persistent activation of NRF2 has been demonstrated in some tumor types, raising the possibility of its role in cancer proliferation. As expression of the genes in this pathway was positively correlated with logGI50 of the INPs in Subtree 3, resistance mechanisms to these INPs may be related to the NRF2 pathway.
PI3K-Akt-mTOR pathway

Overactivation of the PI3K-Akt-mTOR signaling pathway has been demonstrated in many different cancer types as a mechanism for tumor growth and therapeutic resistance. As the pathway analysis found a positive correlation of expression of the genes in this pathway with logGI50 of the INPs in Subtree 3, resistance mechanisms to INPs such as NSC number 236613 (plumbagin), 643023 (alpha-phenyl-2,5-dimethoxy-alpha-cinnamonitrile), 365798 (piceatannol) and 112166 (cucurbitacin K) may be related to PI3K-Akt-mTOR signaling. Subtree 3 contained several curcumin INPs and gallocatechin, which have previously been demonstrated to be associated with this pathway.
Eukaryotic translation pathway

A crucial component of cancer progression is the translational control of protein synthesis, through increased rates of protein synthesis and through specific mRNAs that promote increased tumor cell growth and survival. As the pathway analysis found a negative correlation of expression of the genes in this pathway with logGI50 of the INPs in Subtree 3, sensitivity mechanisms to these INPs may be related to pathways associated with protein synthesis inhibition. Subtree 3 contained several curcumin-related INPs, which have previously been demonstrated to be associated with these pathways.
Slit/Robo pathway

While the Slit/Robo pathway mainly functions to promote axon branching and neuronal migration, it is also involved in other physiological processes, including angiogenesis and apoptosis. Promoter hypermethylation of Slit/Robo has been observed in many different cancers, leading to undetectable or low levels of Slit/Robo, and natural products that reactivate this pathway via demethylation or other mechanisms are actively being explored. Increased expression of genes in this pathway was negatively correlated with logGI50 of several INPs in Subtree 3, including NSC number 32982 (curcumin), 309909 (nimbolide), 87868 (phenethyl mustard oil), 742021 (curcumin tri adamantylaminoethylcarbonate), 742020 (ethoxycurcumin trithiadiazolaminomethylcarbonate), 705537 (daturaolone), 643769 (O-bromo-alpha-benzoyl cinnamonitrile), and 383468 (product of Andrographis paniculata), suggesting that overexpression of those genes may confer increased sensitivity to these products. This association indicates that such INPs could be explored to target this pathway. Curcumin and its related analogues have also been demonstrated to have a demethylating effect.
Association of cell line response to INPs with protein-changing single nucleotide variants

For each of the 75 INPs, using the filtered whole exome sequencing data for the cell lines from CellMiner, we used a Student's t-test to analyze the differences in logGI50 values between cell lines with and without individual protein-changing single nucleotide variants in each of the 107 genes listed in OncoKB. After FDR adjustment, 13 SNV-INP pairs satisfied the FDR-adjusted p value < 0.05, involving 4 unique genes and 10 unique natural products. Below we discuss examples of associations involving functionally important variants and likely oncogenic variants from OncoKB (Table and Supplementary Fig. ).
BRAF V600E and Cucurbitacin D (NSC 308606)

OncoKB lists BRAF V600E as a level 1 actionable variant, which was present in 9 cell lines (7 melanoma and 2 colorectal cell lines) in the NCI-60 dataset. Tumors with this variant are responsive to treatment with BRAF inhibitors (e.g., dabrafenib, vemurafenib), and in combination with MEK inhibitors this has been shown to be an effective treatment strategy for melanoma. Consistent with our earlier analysis of a separate large natural product dataset, the mean logGI50 response to cucurbitacin D was statistically significantly different when comparing cell lines without the BRAF V600E variant (mean = -6.69) to those with this variant (mean = -7.16; unadjusted p value = 5.71 × 10^-7; FDR-adjusted p value = 7.42 × 10^-5). This association suggests that cucurbitacin D may have a role in targeting cancers with BRAF mutations or may have an effect on BRAF. Alternatively, as BRAF V600E was present in most of the melanoma lines (8 out of 9 melanoma cell lines), this INP may have a more general growth-inhibitory effect in melanoma.
Likely oncogenic or likely gain of function variants

Multiple INPs were significantly associated with likely oncogenic individual variants listed in OncoKB in the KDR and KNSTRN genes (C482R and A40E, respectively) and with the likely gain of function variant T992I in MET. The receptor tyrosine kinase MET gene variant T992I was associated with sensitivity to multiple INPs, including products from the cucurbitacin family (cucurbitacin K, NSC 112166; elatericin B, NSC 112167) and the Tylophorine family (tylophorin, NSC 717335), and with resistance to other products (3-bromo-4-dimethylamino-alpha-benzoyl cinnamonitrile, NSC 643160; achilleol A, NSC 710351). The likely oncogenic, likely gain of function KDR gene variant C482R was associated with sensitivity to two INPs from the Calendula family (calendulaglycoside D2, NSC 731921; calendulaglycoside D-6'-O-methyl ester, NSC 731922) and one from the Phyllanthus family (phyllanthoside, NSC 328426), and with resistance to achilleol A (NSC 710351). The likely oncogenic, likely gain of function kinetochore KNSTRN gene variant A40E was associated with sensitivity to three INPs (tylophorin, NSC 717335; calendulaglycoside B-6'-O-butyl ester, NSC 731920; and calendulaglycoside D-6'-O-methyl ester, NSC 731922).
In this study, we used in vitro data to examine the associations of variation in gene expression and of deleterious mutations with tumor cell response to INPs. We also compared response patterns to those of reference compounds as a preliminary investigation of the possible mechanisms of action of these products at the cellular level. We reported the findings that remained highly significant after correction for multiple comparisons. We compared publicly available cancer cell line response data in the NCI-60 panel for 75 INPs to data for standard reference antitumor compounds. Our joint analysis of molecular data and measures of cell line response to INPs, together with the comparison of the cytotoxic effects of INPs to those of established antitumor reference compounds, allowed us to quantitatively assess the potential involvement of individual genes and molecular pathways in tumor cell response to INPs. In Supplementary Figs. – , we provide a summary of significant associations between the logGI50 measures of cancer cell line response to the 75 INPs and molecular features of the tumor cells, including gene expression, biological pathways, and single nucleotide variants in cancer-related genes.

Subtree 1 from the clustering of logGI50 values of INPs and reference compounds consisted of many products with anti-mitotic mechanisms of action, confirming previously reported anti-mitotic activity of some INPs, including phyllanthoside, S3'-desacetyl-phyllanthoside and the cucurbitacin family. Overall, the logGI50 response data were closely grouped among similar products, including cucurbitacins in Subtree 1, and curcumin and curcuminoids in Subtree 3.

Our analysis found multiple novel associations between gene expression and logGI50 values of INPs, including a highly significant association between increased levels of SLC7A11 expression and resistance to plumbagin. This resistance may involve increased SLC7A11 expression inhibiting ferroptosis, a distinct form of cell death due to excessive lipid peroxidation. To our knowledge, the association we observed between increased levels of ATAD3A/ATAD3B expression and sensitivity to curcumin has not been previously reported. The products of these genes, the ATPase family AAA domain containing 3A and 3B proteins, are involved in multi-protein complexes associated with mtDNA that are important for the regulation of mitochondrial biogenesis and lipogenesis. Curcumin has been reported to regulate expression of enzymes involved in mitochondrial biogenesis and mitochondrial oxidative stress, to increase apoptosis and autophagic cell death, and to reduce cellular proliferation. The association with ATAD3A and ATAD3B expression may be of interest since ATAD3 over-expression has been linked to the progression of head and neck cancer, lung adenocarcinoma, non-Hodgkin's lymphoma, uterine cancer, cervical cancer, prostate cancer, glioma, and hepatocellular carcinoma. Interestingly, prior reports have suggested roles of increased ATAD3 expression in chemoresistance.

Our analysis of SNVs demonstrated a statistically significant association of BRAF V600E with the logGI50 measure of response to cucurbitacin D. The triterpene compounds from the Cucurbitaceae family, which include cucurbitacin D, are found in many gourd species. While they have demonstrated cytotoxicity in many cell lines, our finding of increased sensitivity in BRAF V600E-mutated cell lines, which include almost all the melanoma cell lines in our dataset, may warrant further investigation.
The paucity of INPs available in the public domain, and consequently their underrepresentation in the NCI-60 cell line database, limited our ability to evaluate some of the more commonly used Ayurvedic concoctions and herbs of interest, including Triphala, Momordica charantia, and Withania somnifera. Additional open-source natural product databases contain more INPs; however, the available NCI-60 screening data for these additional products in the DTP dataset were limited to single-dose data and were not analyzed in our study.

We used logGI50 values as the primary response endpoint because many previous studies have shown these measures to be a relevant outcome for studying associations with molecular targets. Clusters of compounds derived from logGI50 values have been shown to correlate well both with potential mechanisms of cell line response and with similarities among compound structures. We used the median logGI50 derived from the five-concentration dose screen as our measure of cell line response for the analysis of associations with molecular features of tumor cell lines. While this single logGI50 measure is informative in characterizing the cytotoxic effect of individual products, it may not reflect the cytotoxicity of a compound whose activity fell outside the pre-defined concentration range, in which case this measure would not capture low levels of activity of some of the compounds we analyzed.

As we analyzed pre-treatment gene expression levels for each cancer cell line, our findings cannot characterize the association between cell line response and post-treatment gene expression changes in response to each INP or reference compound. Such analyses may be of potential benefit in the future if post-treatment response data for Indian natural products become available. As the NCI-60 panel does not include normal cell lines for comparison, we did not focus on toxicity of these compounds, and further studies will need to examine the side effects of these INPs. As a note of caution, our findings do not indicate clinical efficacy; rather, our study is an attempt to characterize available INPs and identify possible mechanisms of action for further study.

In this analysis, utilization of the in vitro molecular screening data from the NCI-60 allowed us to identify molecular features of tumor cells associated with response to INPs. As Ayurvedic products are often used in specific combinations, our analysis could not evaluate clinical and immunomodulatory features associated with response to combinations of such agents. Additionally, due to the limited representation of tumor types and mutational features in the NCI-60 panel, we could not examine response within individual cancer categories. Additional models, including mouse patient-derived xenografts or other clinically relevant approaches, may be needed to further investigate the physiological effects of Ayurvedic products in specific tumor types.
Our analysis of NCI-60 response patterns for 75 INPs and standard reference compounds, and of their similarities, allowed us to elucidate potential common mechanisms of action and molecular features associated with response to these INPs. We identified a number of genes and several biological pathways that were associated with sensitivity or resistance to specific INPs and/or entire INP clusters. Our findings provide a proof of principle that INPs may represent compounds of interest for cancer drug discovery, and further studies should increase our understanding of their possible mechanisms of action.
Additional file 1. Supplementary Figure 1. Heatmap of median logGI50 values of Indian natural products and reference compounds. Each row represents an Indian natural product or a standard reference compound and each column represents a cell line in the NCI-60 cancer cell line panel. The color key represents the logGI50 levels, with negative values (blue) representing sensitivity of a cell line to the product and positive values (red) representing resistance to a product. Missing data are represented as black. The range of logGI50 values was −12.5 to −0.25 log molar units.

Additional file 2. Supplementary Figure 2. Hierarchical clustering of INPs and reference compounds based on their median logLC50 values across NCI-60 cell lines. The tree was inferred using the UPGMA ('average') method and was based on Euclidean distances. The tree is presented as an unrooted radial phylogram. The scale in the top left corner is provided for the branch lengths, which were derived from Euclidean distances. Clustered products are displayed with sparse labeling, in which only a random subset of INP labels is displayed.

Additional file 3. Supplementary Figure 3. Hierarchical clustering of INPs and reference compounds based on their median total growth inhibition (TGI) values across NCI-60 cell lines. The tree was inferred using the UPGMA ('average') method and was based on Euclidean distances. The tree is presented as an unrooted radial phylogram. The scale in the top left corner is provided for the branch lengths, which were derived from Euclidean distances. Clustered products are displayed with sparse labeling, in which only a random subset of INP labels is displayed.

Additional file 4. Supplementary Figure 4. Graphical overview of significant associations of logGI50 of Indian natural products with gene expression. Shown are significant associations with FDR-adjusted p < 0.05, which are listed in Table . INPs are presented by colored circles, with colors corresponding to their subtree assignment based on clustering of their logGI50 values (orange for subtree 1, red for subtree 2, and purple for subtree 3). The subtree assignment of the INPs based on the logGI50 values is shown in Fig. , Supplementary Fig. , Table , and Supplementary Table . The direction of the arrows corresponds to the negative or positive values of the Spearman correlation coefficient ρ of the association between gene expression and logGI50. An arrow toward an INP indicates ρ > 0, meaning higher gene expression was associated with higher logGI50 values and increased cell line resistance to that INP, whereas an arrow toward a gene indicates ρ < 0, meaning higher gene expression was associated with lower logGI50 values and increased cell line sensitivity to that INP.

Additional file 5. Supplementary Figure 5. Graphical overview of significant associations of logGI50 of Indian natural product subtrees 1 and 3 with molecular pathways from Reactome, KEGG, and WikiPathways. Shown are significant associations identified by g:Profiler with FDR-adjusted p < 0.05. (A) Positive associations for Subtree 1. (B) Positive associations for Subtree 3. (C) Negative associations for Subtree 3. Additional information about each association shown in the figure is provided in Supplementary Tables - .

Additional file 6. Supplementary Figure 6. Graphical overview of significant associations of logGI50 of Indian natural products with protein-changing SNVs in cancer-related genes, which are listed in Table . Shown are significant associations with FDR-adjusted p < 0.05. INPs are presented by colored circles, with colors corresponding to their subtree assignment based on clustering of their logGI50 values shown in Fig. , Supplementary Fig. , and Table (orange for subtree 1, red for subtree 2, and purple for subtree 3). The direction of the arrows corresponds to the negative or positive values of the t-statistic in the Student's t-test. An arrow toward an INP indicates a positive t-statistic, suggesting increased cell line resistance to that INP in the presence of a variant, whereas an arrow toward a variant indicates a negative t-statistic, suggesting increased cell line sensitivity to that INP in the presence of a variant.

Additional file 7. Supplementary Table 1: Positively correlated pathways in Subtree 1.
Additional file 8. Supplementary Table 2: Positively correlated pathways in Subtree 3.
Additional file 9. Supplementary Table 3: Negatively correlated pathways in Subtree 3.
Additional file 10. Supplementary Table 4: All queried Ayurvedic INPs from the PUBLIC COMPARE portal.
Additional file 11. Supplementary Table 5: Concordance between the clustering of Indian natural products and reference compounds based on logGI50, logLC50, and TGI values.
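As a minimal illustration of the clustering procedure referenced in Supplementary Figures 2 and 3 (UPGMA/'average' linkage on Euclidean distances between response profiles), the sketch below clusters a hypothetical compound-by-cell-line matrix with SciPy; the matrix and the three-cluster cut are invented for demonstration, chosen only to mirror the three subtrees discussed in the text.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Hypothetical matrix: 10 compounds x 60 cell lines of median logGI50 values
profiles = rng.normal(-5.0, 1.0, size=(10, 60))

# Pairwise Euclidean distances between compound response profiles
dists = pdist(profiles, metric="euclidean")

# UPGMA corresponds to 'average' linkage in SciPy
tree = linkage(dists, method="average")

# Cut the dendrogram into three clusters, mirroring subtrees 1-3
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)
```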
Impact of the Novel Coronavirus 2019 (COVID-19) Pandemic on Head and Neck Cancer Care

This study used a prospective observational cohort design with a comparison to
historical data. Our study was submitted to the University of Maryland, Baltimore,
Institutional Review Board (IRB) and was granted IRB exemption. Patients over 18
years of age who presented for head and neck oncologic care at the University of
Maryland Medical Center were followed at their initial consultation and treatment.
Patients were identified during a multidisciplinary tumor board (MDTB) conference,
which includes representatives from otolaryngology–head and neck surgery, oral
maxillofacial surgery, radiation oncology, and medical oncology. Data collection
occurred during institutional and statewide restrictions on elective surgery and
outpatient clinic visits. Impacts of the COVID-19 pandemic were identified and categorized from a multi-item
flowchart that was drafted and approved by members of the MDTB. Treatment
modifications were classified as follows: elimination of systemic therapy, treatment
delay, change to nonsurgical management, or alteration in adjuvant therapy. The
rationales of any modifications were identified as 1 or more of the following
categories: operating room limitations, medical comorbidities, COVID-19 positive,
patient concerns, or social limitations. Operating room limitations included lack of
appropriate personal protective equipment or reductions in operating room
availability. Social limitations included patient-related factors such as travel
restrictions, lack of family support, decreased access to transportation services,
or reduced access to primary care providers.

Collection of Tumor Conference Information

Information regarding treatment modifications was collected prospectively during
weekly MDTB conferences from March 18, 2020, to May 20, 2020. The presence of a
modification, type of modification, and rationale for modification were
discussed and recorded for each patient presented. Nearly all patients who
present to clinic or who undergo a procedure for treatment or diagnosis are
presented at the MDTB. If a patient was presented during more than 1 week, the initial
presentation was counted toward the volume of cases presented. Distinction was
made between initial cancer consultations and presentations of patients under
cancer surveillance. Tumor and patient characteristics were obtained from a
combination of tumor conference review and chart review. As a historical
control, information regarding the number of new and total case presentations at
the tumor conference during the same 2-month time period in 2019 was obtained
from a Research Electronic Data Capture (REDCap) database. As a supplement to
tumor conference data, deidentified metrics of outpatient clinic volumes,
procedural data, and surgical cases were obtained from electronic medical
records during the study period and compared to 2019. Outpatient clinic volumes,
procedural data, and surgical cases included those under the care of the same 6
head and neck surgeons within the Department of Otorhinolaryngology and
Department of Oral Maxillofacial Surgery who practiced during 2019 and 2020. Statistical analysis was conducted with GraphPad Prism (GraphPad Software).
Observed and expected comparisons were made between the 2019 cohort of patients
and the 2020 cohort of patients. In addition, patient and tumor demographics
were compared between the annual cohorts as well as between those patients whose
treatment plans were modified and those whose treatment plans were unmodified.
Chi-square and Fisher exact tests were used where appropriate to make
comparisons between the groups with a level of significance of P < .05.
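For readers who want to reproduce this style of comparison, the sketch below applies the same two tests with SciPy to a hypothetical 2 × 2 contingency table; the counts are invented for demonstration and are not taken from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2 x 2 table: rows = cohort year, columns = outcome yes/no
table = np.array([[30, 70],
                  [15, 85]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square: p = {p_chi2:.3f} (dof = {dof})")

# Fisher exact test is preferred when expected cell counts are small
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher exact: p = {p_fisher:.3f}")
```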
In total, 117 patients were presented for oncologic care and case discussion at the
weekly tumor conference during the review period in 2020 via virtual tumor board
web-based meetings. During the same period of time in 2019, there were 69 patients
presented during in-person meetings. In 2020, 66% of patients were male, with the
most common site of malignancy being the oral cavity. In 2019, 74% of patients were
male, with the most common site being the oropharynx. Other reported primary sites
included cutaneous malignancies, laryngeal malignancies, and sinonasal malignancies.
In 2019 and 2020, there was a greater proportion of early tumor (T1 or T2) stage and
early nodal (N0 or N1) stage compared to more advanced disease ( ).
There were more total and new cancer MDTB case presentations in 2020 than in 2019.
While the volume of surgical cases presented decreased during the review period,
this was similar to the previous year ( ). The frequencies of modifications and the rationales for modifications were recorded
prospectively. Of the 117 patients presented at the MDTB, 10 (8.4%) had treatment
modifications attributed to the impact of COVID-19. There were no statistical
differences in baseline characteristics between the patients with modifications and
those without modifications ( ). The rationales for treatment modification and
types of modifications are shown in . The most common type
of modification was a treatment delay, while the second most common modification was
a change from primary surgical management to nonsurgical management. The most common
reason for modification was operating room limitations, which was reported in 4 of
10 patients. Treatment modifications tended to occur earlier in the course of this
institutional response to the pandemic, as seen in . The characteristics
of the 10 patients with treatment modifications are presented in . The outpatient clinic and operating room case volumes were retrospectively analyzed
during the restriction compared to historical comparisons from 2019. In 2020, there
were significantly fewer operating room cases, 224, compared to 307 in 2019
( P = .02). In addition, the outpatient setting observed a
significant reduction in office visits in 2020, 346 encounters, compared to 2019,
898 encounters ( P < .001). However, there was a greater
proportion of cancer surgeries (73% vs 64%) and initial patient visits (37% vs 27%)
in 2020 compared to 2019 ( ). The number of outpatient laryngoscopies
performed decreased by 63% from 2019 to 2020 ( ).
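The percentage changes above follow directly from the reported totals, as the short sketch below shows; note that the cancer-surgery counts in the final comparison are back-calculated approximations from the reported percentages (73% and 64%), not raw study data, so the resulting p-value is illustrative only.

```python
from scipy.stats import chi2_contingency

or_2020, or_2019 = 224, 307          # operating room cases
visits_2020, visits_2019 = 346, 898  # outpatient encounters

print(f"OR case reduction: {1 - or_2020 / or_2019:.0%}")                   # ~27%
print(f"Outpatient visit reduction: {1 - visits_2020 / visits_2019:.0%}")  # ~61%

# Approximate cancer-surgery counts back-calculated from 73% (2020) and 64% (2019)
cancer_2020 = round(0.73 * or_2020)
cancer_2019 = round(0.64 * or_2019)
table = [[cancer_2020, or_2020 - cancer_2020],
         [cancer_2019, or_2019 - cancer_2019]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"cancer-surgery proportion comparison: p = {p:.3f}")
```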
The purpose of this study was to assess the impact of the COVID-19 pandemic on care
for head and neck oncologic patients compared to historical controls. Virtual
meeting formats allowed for weekly meetings of the MDTB conference, which recorded
an increase in the number of patients reviewed compared to the prior year. Overall,
there were relatively few modifications made to treatment plans, which were most
commonly a treatment delay. The delays were not recommended by the MDTB but instead
resulted from unanticipated events related to COVID-19 testing and operating room
limitations. Treatment
modifications were also not associated with a particular tumor primary site, tumor
stage, or patient demographic. While outpatient and operative volumes decreased
during the pandemic compared to the prior year, the proportion of oncologic cases
and the proportion of new patient visits were significantly greater during the
pandemic. This reflected the prioritization and triage of oncologic patients at this
institution during the response to the pandemic. The ongoing COVID-19 pandemic resulted in restrictions and prioritization of medical
care in an effort to reduce patient and health care exposure. Statewide travel and
health care restrictions were first introduced by the state of Washington to
prioritize emergent and life-threatening health conditions. Similarly,
the state of Maryland and the University of Maryland Medical System implemented
policies to limit the spread of the virus, which included a hold on elective
procedures and outpatient visits on March 18, 2020. At the time of the restrictions,
the statewide positivity rate of respiratory specimen testing for SARS-CoV-2 was
11.3% and later peaked at 26.9% on April 17, 2020. Following the virus peak, there was a
gradual decline in SARS-CoV-2 testing positivity, which led to a lifting of
restrictions and resumption of elective procedures in June 2020 at UMMC. Quantifiable evidence of the pandemic’s impact on access to oncologic care and
treatment of these patients during government-implemented restrictions remains
limited. The University of Washington proposed continuing definitive oncologic care
for solid tumors despite infectious risks, but the authors acknowledged that
complications during therapy may arise and further stress clinical
resources. In addition, Weinstein et al published a consensus
recommendation suggesting changes in practice management for patients with
head and neck cancer, in which they recommended prioritizing standard-of-care
therapy. While adherence to preestablished treatment regimens was recommended,
unanticipated modifications related to personal protective equipment (PPE)
shortages and operating room limitations were nonetheless observed.
airborne transmission of SARS-CoV-2 during head and neck examinations and
interventions. Restrictions in aerosol-generating procedures in multiple practice
settings resulted in a significant reduction in outpatient clinic volume by 62%
compared to the prior year. Telemedicine evaluations have been the primary form of
oncologic surveillance and postoperative examinations, if possible. In the setting
of necessary in-person visits, N95 respirators or powered air-purifying respirators
were used to limit risk of transmission during aerosol-generating procedures.
Furthermore, in-office endoscopic examinations were limited to only necessary
diagnostic or surveillance procedures that would influence a decision on treatment
consistent with guideline recommendations for patients with head and neck
cancer. In the setting of these restrictions, the findings of the study
identified oncologic care continued with limited modifications. Prioritization of cancer care is in line with guidance from the American College of
Surgeons, which defined mucosal cancers of the upper aerodigestive tract (UAT) as
high-acuity cases in which treatment should not be delayed. Compared to
the previous year, there were a greater number of new cancer presentations and a
greater number of total cases presented during the tumor conference. While there
were overall reductions in the number of total cases performed and patients seen in
the outpatient clinic, there was a greater proportion of new cancer consultations
and oncologic surgeries compared to the prior year, suggesting a prioritization
of oncologic care. The types of consultations and procedures that
were eliminated included elective procedures for benign neoplasms and nonemergent
reconstructive surgeries. Treatment modifications were rare and limited to only 10 of 117 patients (8.4%).
There were no treatment recommendations that deviated from standard-of-care
guidelines. Modifications occurred early in the institutional and state response to
the pandemic, as there was greater uncertainty during this time period regarding
PPE, availability of virus testing, and levels of risk based on specific exposures.
As these factors became more predictable, there were fewer treatment modifications
related to delays in care. For example, there were 7 modifications in the first
month of the study period and 3 modifications over the remainder of the period. Although there
were few modifications overall, some general trends were noted. The most common
modification for surgical management was a delay related to operating room safety or
delays in COVID-19 testing. While most modifications occurred due to institutional
response or patient preferences, some modifications were recommended by the MDTB.
These modifications related primarily to some patients with human papillomavirus
(HPV)–associated oropharyngeal cancer when there was clinical equipoise between
surgical or nonsurgical management. In these instances, nonsurgical management was
recommended to avoid longer hospital stays and the need for aerosol-generating
procedures. Although many groups have predicted substantial treatment modifications and delays in
access to care, there remains limited evidence of the observed impact of oncologic
care access for patients with head and neck cancer. There has been literature
offering consensus-based recommendations, survey findings, or opinion regarding the
appropriate triage of patients with head and neck cancer. Bowman et al predicted a
surge in patients with head and neck cancer after COVID-19 recovery. They cited
concerns of contracting the virus, limitations of testing, and local and state
restrictions as reasons new cancer patients would delay seeking care. A
complementary study published by Brody et al reported survey results from
a large group of head and neck surgeons. There was a wide range of responses, but
respondents were more likely to consider nonsurgical management and to accept delays
in care in the setting of the pandemic. A recent publication by Kiong et al offered the first reported
changes in tumor conference and clinic volumes in the setting of the ongoing
pandemic. The study from the MD Anderson Cancer Center reported a 47% reduction in
outpatient visits and a 47% decline in operative volume compared to a 61% and 27%
reduction, respectively, in the current series. In contrast to their experience, we
saw no significant difference in the number of cases presented at the MDTB. However,
there was a similarly low rate of treatment modifications between the MD Anderson
Cancer Center experience and our study, 12.0% and 8.4%, respectively. The unique
institutional experience at the MD Anderson Cancer Center, an independent cancer
center, may not reflect national trends, as it serves as a primary oncologic
hospital. As a tertiary care center specializing in oncologic care, it may not
have had the opportunity to delay nonemergent surgeries to facilitate and
expedite oncologic care. In contrast, the suspension of elective
surgery and operating room block time at UMMC increased operating room availability
for urgent surgeries. While the prioritization of oncologic care at UMMC may have
led to a low rate of modifications, this is reflective of the institutional
experience. While our institution may be similar to others across the country, our
findings should be interpreted within the context of the pandemic experienced in our
region. While there are similarities in the institutional experiences, the
differences highlight the need for tailored approaches in each institution and
geographic setting.

Study Limitations

Our study has several limitations. There was a short follow-up period as well as
the lack of multiple years of historical data for comparison for our MDTB
patients. An unanticipated finding during the study period was the inverse
relationship between the rise in MDTB presentations and the concurrent
decline in clinical and surgical volume. This observation may be attributable to
the MDTB virtual format that allowed for remote access, resulting in more cases
being presented from faculty in various practice settings. In contrast, the
lower reported MDTB rates in 2019 are potentially related to distance barriers
and delays during in-person meetings. The ability of the virtual format to
increase participation in the conference may offer a more robust
multidisciplinary participation compared to prior in-person meetings.
Furthermore, a portion of the decrease in outpatient clinic visits may be
accounted for by telemedicine visits, but these primarily served to replace
routine follow-up visits rather than initial consultations. Our results reflect
the patterns of care at a single institution, and our data may reflect a
regional impact of the COVID-19 pandemic. Our ability to capture modifications
and delays in care is limited by the characteristics of patients who present for
care at our institution, and therefore our findings may underestimate the true
impact of the pandemic. Institutions in various regions may have different
state-mandated restrictions and institutional resources that make each
experience unique. Despite these limitations, the study emphasizes the
prioritization of care for patients with head and neck cancer as well as the
utility of reviewing the impacts of the pandemic.
The COVID-19 pandemic has resulted in changes in practice patterns for oncologic
care. The transition to a virtual tumor board format resulted in an increase in new
cancer presentations for head and neck cancer, while in-person clinical care,
including outpatient visits and operative procedures, was reduced compared to
historical data. Despite the overall reduction in clinical volume, the increased
proportion of oncologic consultations and cases demonstrates the prioritization of
head and neck cancer care in both settings. As the COVID-19 pandemic continues, with
possibilities of additional peaks in case volumes, institutions will need to
continue to use resources to streamline care for oncologic patients. They will need
to rely on technology, optimal use of personal protective equipment, and adaptation
while emphasizing standard of care to achieve the best outcomes for patients with
head and neck cancer.
Bacteriophage-Based Biosensors: A Platform for Detection of Foodborne Bacterial Pathogens from Food and Environment

Foodborne microorganisms are an important cause of human illnesses worldwide. Two-thirds of human foodborne diseases are caused by bacterial pathogens throughout the globe, especially in developing nations. The most commonly encountered foodborne bacterial pathogens are Staphylococcus aureus ( S. aureus ), Salmonella enterica serovar Typhimurium ( S. Typhimurium), Clostridium perfringens ( C. perfringens ), Campylobacter species, Escherichia coli ( E. coli ), and Listeria monocytogenes ( L. monocytogenes ). Most of these organisms have zoonotic importance, causing huge adverse effects to both public health and economic sectors. Of these bacterial foodborne pathogens, human-sourced pathogens such as E. coli and Salmonella Typhi can contaminate the food supply chain through the feces of infected individuals, while many others, such as non-typhoidal Salmonella , Campylobacter , Staphylococcus , Yersinia , Clostridium , and Listeria , are transmitted through food animals, poultry, milk, or eggs. Environmental transmission has been frequently reported for several of the pathogens, including Salmonella , E. coli O157:H7, and Campylobacter , during pre- and post-harvest food processing, storage, and transportation. The Centers for Disease Control and Prevention (CDC) routinely monitors the presence of these pathogens in food. The US FDA (Food and Drug Administration) and FSIS (Food Safety Inspection Service) agencies strictly regulate their presence in raw or ready-to-eat products; therefore, reliable detection methods that are capable of detecting live pathogens are critical.

Conventional foodborne pathogen detection methods mainly depend on specific biochemical, serological, and nucleic-acid-based techniques. These methods require skilled technicians and are time-consuming, expensive, and difficult to interpret. Most rapid detection methods cannot distinguish dead from live cells unless a growth-based enrichment step is used, making them inapplicable in many food processing facilities. Conversely, enzyme-linked immunosorbent assays (ELISA) and lateral flow immunochromatographic assays are simple and rapid biochemical immunoassays, but they have low sensitivity. Similarly, polymerase chain reaction (PCR), biochips, and microarrays are some, but not all, of the nucleic-acid-based techniques that have been used for the investigation of foodborne microbes. Nevertheless, various types of PCR techniques, such as reverse transcriptase and multiplex PCR, are ineffective at processing a large volume of samples without a pre-enrichment step and have high processing costs that make them impractical for day-to-day use.

Over the last few decades, bacteriophage-based biosensors have been recognized as a promising platform for detecting pathogens or sensing various biological analytes. Compared to other bio-receptors such as aptamers and antibodies, bacteriophages provide several advantages in the detection of pathogens. Firstly, phages have a unique structure, including tail fibers that aid their binding to bacterial hosts, are highly specific, and are harmless to human cells. Virulent phages take 1–2 h to complete the infection cycle, hastening the release of the cytoplasmic marker from the infected host to be used in numerous detection systems.
In addition, phages are the most abundant biological entities and are found in places where their host organism exists. They are relatively stable under various conditions, such as pH, temperature, and organic/inorganic solvents, and they resist proteases. They are also cheaper to produce than antibodies and have a relatively long shelf life. It is easier to distinguish dead from live bacterial cells using this platform, as phages replicate only inside living bacteria. The short shelf life of food products and the low infectious dose of most foodborne pathogens are the most critical driving forces that push researchers to design sensitive, specific, and reliable detection techniques. The development of phage-based biosensors as a tool for the direct detection of live pathogens in food is an important and attractive approach. Presently, several phage-based biosensors have been developed that incorporate various transducers, including electrochemical, quartz crystal microbalance (QCM), surface plasmon resonance (SPR), magnetoelastic (ME), and others. Most of these biosensors have been designed using the whole/intact phage or the phage proteins, as well as the cytoplasmic markers that are released following phage infection. The performance of these biosensors varies, as they employ different immobilization methodologies (physical, chemical, covalent, or oriented) and/or transducers. Efforts have been made in the last decade to optimize biosensor systems, including phage-based sensors, to enhance the reliability of the technique. As far as we know, phage-based biosensors for monitoring water and food samples have not yet been commercialized; however, the current trends show promise. This review provides an overview of the different types of phage-based biosensors and their application in the detection of foodborne bacterial pathogens, with a special emphasis on recently developed biosensor platforms.
According to the International Union of Pure and Applied Chemistry (IUPAC), a biosensor is defined as a self-contained integrated device that contains a bio-recognition component (bio-receptor/bio-probe) linked to a transducer (sensor) to convert the biological signal into a digital signal for interpretation. Phage-based biosensor platforms generally involve the immobilization of whole or partial phage particles, infection of the host bacterium, and finally the production of colorimetric, electrical, fluorescent, or luminescent signals. Lytic bacteriophages are primarily classified under the order Caudovirales and are the principal biorecognition entities used as probes for phage-based biosensors. Apart from lytic phages, temperate phages also play a comparable role in the development of phage-based biosensors. Both lytic and temperate phages, such as HK620, P22, and ΦV10, have been used to develop reporter (engineered) phages. Reporter phages are genetically modified by incorporating a reporter gene sequence into the phage genome to generate a measurable signal inside the intact host cell without killing (lysing) the host cell, allowing the detection of live pathogens. Moreover, proteins such as phage receptor-binding proteins (RBPs) have been recognized as efficient bio-probes that can replace antibodies or other biomolecules and have been used in the design of various types of biosensors. In comparison to whole phages, RBPs provide better stability across a broad range of pH values, temperatures, and gastrointestinal proteases. Remarkably, appropriate tags (amino acids, e.g., cysteine) can be added to the RBP sequence at a specific site without affecting the binding ability and can be employed for the oriented surface functionalization of the RBPs on biosensor platforms.

Bacteriophage-based biosensors offer several benefits for rapid bacterial detection. They are highly specific towards their host organism, resist high temperatures (90–97 °C), and are stable across a wide range of pH values (3–14) and organic solvents. In comparison to antibodies, phages can be produced in large quantities easily and cheaply. They are eco-friendly and safe to use since they do not infect humans. These characteristics make phages a novel bio-recognition tool for the development of biosensors for the detection of foodborne bacterial pathogens. Today, phage-mediated biosensors have been developed as novel diagnostic tools in which specific phages are fixed to the device's surface and used to detect the analyte in the sample. Bacteriophages can be immobilized on a solid material with the aid of chemical, physical, or other immobilization or tethering techniques. The capture of targeted bacterial cells by surface-immobilized virions results in specific detection. The detection of pathogens using phage-based sensors is not limited to clinical samples; it extends to a wide range of nonclinical applications, including foodborne pathogens in water and various food matrices, such as milk and other perishable and non-perishable foodstuffs.
3.1. Bacterial β-D-Galactosidase

Lytic phages have been used for the detection of bacteria relying on the cytoplasmic contents (cell markers) released from the lysed cells. Neufeld and co-workers developed an amperometric assay based on bacterial β-D-galactosidase activity to detect E. coli at a concentration of 1 CFU/100 mL within 6 to 8 h. In this assay, β-D-galactosidase was released from the phage-infected host cell following lysis, and an externally added substrate, p-aminophenyl-β-D-galactopyranoside, was converted into p-aminophenol, whose subsequent oxidation could be sensed by a potentiostat-based device. Sample filtration and pre-incubation before phage infection improved the sensitivity of the test. Yemini and co-workers reported two cytoplasmic markers for the detection of Bacillus cereus ( B. cereus ) and Mycobacterium smegmatis ( M. smegmatis ) with a detection limit of 10 CFU/mL using α- and β-glucosidase, respectively, within 8 h. Similarly, the presence of E. coli in water has been detected after phage lysis with a detection limit of 40 CFU/mL in 8 h.

3.2. Adenosine Triphosphate

Adenosine triphosphate (ATP) is one of the cytoplasmic markers most extensively used for estimating the number of bacterial cells in a sample. The concentration of ATP in a live, average-sized bacterium is nearly 10⁻¹⁵ g and near-constant across species, so quantifying the ATP released via a bioluminescent assay enables the determination of viable cell counts. ATP drives the catalytic reaction of the luciferase enzyme, which converts luciferin into oxyluciferin aerobically, together with adenosine monophosphate (AMP), carbon dioxide, and pyrophosphate, ultimately emitting light at a level corresponding to the specific concentration of ATP. The high amount of ATP found in many foodstuffs is one of the main drawbacks of this assay, resulting in high detection limits ranging from 10⁴ to 10⁵ CFU/mL. However, this problem can be addressed using a phage-based biosorbent (e.g., T4 phage) to concentrate the host organism on the filter surface, which has shown a significant improvement in assay sensitivity, with a detection limit as low as 6 × 10³ CFU/mL within 2 h (Disruptor™ filter). This assay is robust and highly accurate even when the background flora is present at a 60-fold higher concentration than the host pathogen.
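Because the ATP content of an average bacterium is roughly constant (~10⁻¹⁵ g, as noted above), a luminometry-derived ATP mass maps directly onto a viable-cell estimate. A back-of-the-envelope sketch, with an invented reading for illustration:

```python
ATP_PER_CELL_G = 1e-15  # approximate ATP mass in one average live bacterium (g)

def estimate_viable_cells(atp_mass_g: float) -> float:
    """Convert a luminometry-derived ATP mass into an approximate viable cell count."""
    return atp_mass_g / ATP_PER_CELL_G

# Hypothetical reading: 5 pg of ATP released after phage-mediated lysis
print(f"{estimate_viable_cells(5e-12):.1e} cells")  # ~5.0e+03, near the cited detection limits
```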
3.3. Adenylate Kinase

Adenylate kinase (AK) is a bacterial cytoplasmic marker released from phage-infected cells, and an assay developed around this marker can serve as an alternative approach to enhance the sensitivity of the bioluminescent ATP assay. Adenylate kinase is an enzyme that enhances ATP production in the presence of a high amount of adenosine diphosphate (ADP). Under optimal conditions, its sensitivity can be enhanced by the addition of ADP, whereby the detection limits of Salmonella and E. coli were lower than 10³ CFU/mL. This technique has been improved by incorporating an immunomagnetic separation (IMS) system, in which antibody-coated magnetic beads are used to capture the target organism, which is then purified and concentrated. Variations of this approach have been developed for the detection of Salmonella , Listeria , E. coli O157, and other bacterial pathogens.

3.4. Conductivity (Impedance)

The conductivity of a microbial growth medium changes as propagating microbes transform uncharged or weakly charged substrates into smaller, highly charged metabolites. Bacteriophages are appropriate tools for impedance-based bacterial detection (impedance being the resistance to current flow through the conducting medium), since the presence of phage in a sample retards the impedance change when the host organism is present. Chang and colleagues detected E. coli O157:H7 through the absence of the expected conductivity change in MacConkey-sorbitol medium when an anti- E. coli O157:H7 phage (AR1) was present. The obvious challenge of direct conductivity-based detection techniques is the necessity of a culture medium optimized for measuring impedance, whose development is usually labor-intensive and vulnerable to contamination with background flora. Besides, not all target bacteria release charged metabolites, which may adversely affect impedance and conductivity measurements. Some of these problems can be overcome by employing indirect impedimetric techniques, in which metabolites such as carbon dioxide released into the medium during the cultivation of the target bacterium are removed by the addition of potassium hydroxide to facilitate impedance measurements. This method is highly specific and sensitive, and has been utilized for the detection of many foodborne pathogens, such as L. monocytogenes , S. aureus , Salmonella enterica , Campylobacter species, E. coli , and Enterococcus faecalis .
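The detection logic described above can be simulated in a few lines: growth of the target organism drives a signal that eventually crosses a detection threshold, whereas phage-mediated lysis retards that change, so a suppressed signal in the phage-treated sample indicates the presence of the susceptible host. The growth curve, flat lysed-sample signal, and threshold below are illustrative assumptions, not measured parameters.

```python
import numpy as np

def detection_time(hours, signal, threshold):
    """Return the first time point at which the signal crosses the threshold, or None."""
    crossed = np.where(signal >= threshold)[0]
    return float(hours[crossed[0]]) if crossed.size else None

hours = np.linspace(0, 12, 121)
# Toy logistic growth-driven impedance signal for the untreated sample
no_phage = 1.0 / (1.0 + np.exp(-(hours - 6.0)))
# Phage-treated sample: lysis suppresses growth, so the signal stays near baseline
with_phage = np.full_like(hours, 0.05)

threshold = 0.5
print("no phage:", detection_time(hours, no_phage, threshold), "h")  # ~6 h
print("with phage:", detection_time(hours, with_phage, threshold))   # None -> host lysed
```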
3.5. Whole-Phage or Progeny Virion Detection

Lytic phages infect the host cell, and the number of progeny virions released from infected cells is directly proportional to the number of bacteria infected. This approach was first reported by Stewart et al. (1998), in which cells were infected with phages and then treated with a virucidal agent to eliminate the added phage, thus allowing only progeny phage to be detected. The developed assay was sensitive and could produce results in 4 h using plaque assays. Alternative assays, including molecular diagnostic tools such as quantitative PCR (qPCR), have also been used to determine the number of progeny virions released from infected cells. For instance, B. anthracis was detected by immunochromatography designed around a lateral-flow assay and the amplification of the gamma (γ) phage in bacterial cells. The virions released from the infected cells were detected via reporters made of polystyrene nanoparticles linked to anti-γ phage antibodies. The detection limit was 2.5 × 10⁴ CFU/mL with a 2–4 h assay time.

The plaque assay is one of the most straightforward methods for detecting foodborne pathogens by measuring an increase in phage titer. A rising phage titer indicates effective binding or adsorption of phages to the host bacteria, followed by lysis and the release of progeny virions, and thus indicates the presence of viable target pathogens in the food matrix, as initially described by Stewart et al. Recently, an assay was developed that coupled phages with qPCR for the detection of S. enterica ser. Enteritidis in spiked chicken meat samples. Approximately 0.22 fg/µL of pure phage (vB_SenS_PVP-SE2) DNA and nearly 10³ pfu/mL of virions were detected using the combined technique, with a detection limit of <10 CFU/25 g over 10 h of analysis, comprising 3 h of pre-enrichment, 6 h of co-incubation, and 1 h of DNA enrichment and qPCR.

Despite its benefits, the use of intact phages suffers from certain limitations that restrict their use in whole-phage sensor systems. The fast adsorption of phages onto the host cell and their subsequent lytic activity may destroy the target bacterium before the completion of downstream detection steps. The size of phages is another constraint that adversely affects whole-phage detection systems. Besides, some phages produce catalytic enzymes directed at receptors on the surface of the bacterial cell. For instance, the endorhamnosidase enzymes produced by the P22 phage can degrade the O-antigen of the outer membrane of Gram-negative bacteria, especially Salmonella enterica , which then affects the subsequent attachment process. The S. flexneri phage Sf6 shows similar endorhamnosidase-mediated cleavage. Such phage-encoded enzymes can interfere with biosensor performance, leading to poor signal output. Moreover, intact phages can dry out on the surface of the biosensor, which can ultimately collapse the virions and prevent tail fibers from attaching to the target bacterium.

3.6. Reporter Phages

Reporter bacteriophages are engineered by incorporating a specific reporter gene into the phage genome to facilitate the visualization and subsequent detection of the host bacterium. Both lytic and lysogenic bacteriophages have been used for this purpose. Currently, three types of phage engineering approaches have been reported: direct cloning, homologous recombination, and whole-genome activation. Reporter phages are designed to enable the detection of pathogens based on the enzymatic conversion of a chromogenic substrate. Several reporter phages have been developed for the detection of foodborne bacteria. For instance, T7-ALP, ΦV10 lux, ΦV10 NanoLuc luciferase (NLuc), T7-NRGp5, and T4-NRGp17 have been developed for the detection of different E. coli strains from various food matrices.

3.7. Phage-Associated Proteins

Phage receptor-binding proteins (RBPs) are the most variable structures of phages and are responsible for recognizing specific receptors on the host bacterium. Unlike antibodies, these proteins are relatively resistant to a wide range of pH values and heat treatments as well as protease activity, while showing comparable or even superior specificity. These intrinsic features make RBPs efficient and much-needed biorecognition elements for the specific and rapid detection of bacterial pathogens from different matrices. These specialized phage binding proteins have been used for the detection of pathogens such as Shigella , Salmonella , and P. aeruginosa from different food samples. Similarly, Poshtiban and co-workers designed magnetic beads by immobilizing the RBP Gp047, derived from the phage NCTC12673, and used them for the capture and detection of Campylobacter from chicken broth and milk samples. Cell wall-binding domains (CBDs) of bacteriophage-encoded peptidoglycan hydrolases, commonly called endolysins, are other phage-associated polypeptides with high affinity and specificity towards ligands on the Gram-positive cell wall. Currently, CBD-based magnetic separation (CBD-MS) has been effectively used for detecting several Gram-positive foodborne bacteria, such as B. cereus , Listeria , and Clostridium tyrobutyricum .
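Returning to the phage-amplification readouts of Section 3.5: when progeny virions are quantified by qPCR, copy numbers are read off a log-linear standard curve. The sketch below shows that standard arithmetic; the slope and intercept are hypothetical calibration values (a slope of about −3.32 corresponds to ~100% amplification efficiency), not parameters from the cited study.

```python
# Hypothetical standard curve: Cq = SLOPE * log10(copies) + INTERCEPT
SLOPE = -3.32     # ~100% amplification efficiency
INTERCEPT = 38.0

def copies_from_cq(cq: float) -> float:
    """Back-calculate phage genome copies from a measured Cq value."""
    return 10 ** ((cq - INTERCEPT) / SLOPE)

# A sample Cq of 25 implies ~8 x 10^3 genome copies under this calibration
print(f"{copies_from_cq(25.0):.2e} copies")
```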
Bacteriophage immobilization is the principal factor that determines the efficient detection of bacterial pathogens on a specific platform. Various strategies have been established for the immobilization of phages on the electrode surface. The major phage immobilization techniques on solid surfaces include physical adsorption, covalent bonding, chemical interaction, and many more. Physical adsorption is one of the easiest immobilization approaches for phages on a solid surface. This approach involves the minimal use of chemicals, wherein phages are arranged randomly unless a surface and/or phage modification is performed. In this technique, the adsorbed phage may detach from the surface of the substrate due to changes in temperature, pH, or ionic concentrations, thus affecting biosensing performance. Chemical-mediated immobilization approaches may cause the partial inactivation of the phage, most likely due to the alteration of domains involved in the interaction between the bacteriophage and the host cell's surface. This approach, moreover, cannot guarantee the proper orientation of the immobilized phages unless it is modified. The covalent interaction of phages with the surface of the substrate provides firm binding and a low risk of detachment of phages from the substrate. This technique produces a sufficient phage mass, which is required for phage applications in the development of biosensors.
5.1. Phage-Based Optical Biosensors

Optical biosensors are among the best diagnostic tools for detecting pathogenic bacteria because of their high compatibility and sensitivity. Optical biosensors are developed by taking advantage of different properties of light, such as wavelength, polarization, and the refractive index. The most commonly employed optical phage-based detection techniques are chemo/bioluminescence, fluorescence spectrometry, and SPR.

5.1.1. Surface Plasmon Resonance Sensors

Surface plasmon resonance (SPR) sensors are optical sensors that use surface plasmon electromagnetic waves to detect and quantify analytes based on molecular interactions at the biosensor surface. SPR biosensing, as a spectroscopic method, allows the real-time and quantitative detection of binding agents or molecules without any kind of labeling. The optical system of this type of biosensor consists of a light-emitting diode (LED), a photodiode array, a glass prism, and an optical surface. Molecular interactions at the sensor surface change the local refractive index, which drives angular changes in the reflected light. The photodiode array detects the shift in angle and reports the result as a response unit (RU), which is proportional to the total mass of the bound ligands. Foodborne microbes can be detected using binding proteins from bacteriophages or the phages themselves, which are incorporated into the SPR sensor system as biorecognition elements. For instance, Singh and colleagues utilized the tail spike protein of an engineered phage (P22) immobilized onto a gold surface for the accurate and fast detection of Salmonella with a sensitivity of 10³ CFU/mL. Choi et al. isolated a novel bacteriophage, KFS-SE2, from an eel farm for the detection of Salmonella Enteritidis using the SPR platform; however, detailed information about its application in food has not been demonstrated. Shin and Lim developed a novel 6HN-J-functionalized SPR biosensor comprising a segment of tail fiber protein derived from the lambda phage. This biosensor provided the fast, label-free detection of E. coli K-12 in the range of 2 × 10⁴–2 × 10⁹ CFU/mL and showed a lower detection limit of 2 × 10⁴ CFU/mL within 20 min; however, the researchers reported nonspecific binding with P. aeruginosa . The SPR sensor has also been shown to be efficient in the detection of methicillin-resistant S. aureus (MRSA), E. coli O157:H7, E. coli K12, S. aureus , and hepatitis B virus (HBV). S. Typhimurium has been detected by an SPR device prepared via the immobilization of full-length engineered Det7 phage tail proteins (Det7T) on gold-coated surfaces by amine-coupling. This platform was able to detect S. Typhimurium quickly (within ~20 min) with a detection limit of 5 × 10⁴⁻⁵ CFU/mL in 10% apple juice and water.
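Because the RU readout scales with the bound mass, SPR quantification typically proceeds through a calibration curve relating signal to known cell concentrations. A minimal sketch, with hypothetical RU readings assumed to be log-linear in concentration:

```python
import numpy as np

# Hypothetical calibration: RU responses at known cell concentrations (CFU/mL)
conc = np.array([1e4, 1e5, 1e6, 1e7, 1e8])
ru = np.array([12.0, 55.0, 101.0, 148.0, 197.0])

# Fit RU against log10(concentration)
slope, intercept = np.polyfit(np.log10(conc), ru, 1)

def concentration_from_ru(signal: float) -> float:
    """Invert the calibration to estimate CFU/mL from an RU reading."""
    return 10 ** ((signal - intercept) / slope)

print(f"{concentration_from_ru(120.0):.2e} CFU/mL")
```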
For instance, a NanoLuc luciferase (NLuc) reporter phage was designed by incorporating luciferase coding sequences derived from other organisms, such as cnidarians, bacteria, and crustaceans, into the genome of the Listeria phage A500 (A500::nluc ΔLCR); the NLuc signal was found to be 100-fold higher than those of the other reporters. Hence, the NLuc luciferase-based assay is sensitive and able to directly detect as few as 3 CFU/100 mL of L. monocytogenes in lettuce and milk samples, 72 h faster than culture-based approaches . In a related study, a set of T7-based phages encoding an NLuc carbohydrate-binding module fusion protein (NLuc-CBM) was used for the detection of E. coli in water with a detection limit of 1 CFU/100 mL in less than 10 h . In a study by Zhang et al. , a reporter phage was designed to detect E. coli O157:H7 in food samples. In this assay, the genome of the E. coli phage ΦV10 was modified by incorporating a specific bioluminescent reporter, NLuc, derived from Oplophorus gracilirostris (deep-sea shrimp), coupled with the commercial luciferin (Nano-Glo ® ). At a reporter phage concentration of 1.76 × 10² pfu/mL, the assay enabled the detection of 5 CFU of E. coli O157:H7 grown in Luria–Bertani broth within 7 h. Comparable detection was obtained using ΦV10 reporter phages in ground beef at 9.23 × 10³ pfu/mL within a 9 h turnaround time . Kim and colleagues developed a bioluminescence sensor using an engineered reporter phage, SPC32H-CDABE, with a minimum detection limit of 20 CFU/mL of Salmonella within 2 h, and the signal rose in parallel with the concentration of contaminating bacteria found in milk, lettuce, and sliced pork . The researchers described the sensor as a promising diagnostic tool for the detection of Salmonella contamination in food . In another study, a substrate-independent luminescent phage-based biosensor was developed using the HK620 and HK97 bacteriophages for the detection of enteric bacteria such as E. coli in water samples. The developed bioluminescent assay was specific and allowed the detection of 10⁴ bacteria/mL at 1.5 h post-infection without the need for an enrichment or concentration step . 5.1.3. Fluorescent Bioassay Phage-based fluorescent bioassays employ fluorescently labeled bacteriophages that bind to and thereby detect the host bacterium. An epifluorescent filter technique or flow cytometry has been used to detect phage–bacteria interactions. The reported sensitivity of this assay is about 10²–10³ CFU/mL for epifluorescence microscopy and 10⁴ CFU/mL for flow cytometry detection . Vinay and co-workers demonstrated the detection of enteric bacteria such as E. coli and S. Typhimurium in water using phage-based fluorescent biosensor prototypes developed with the intact temperate phages HK620 and P22, respectively. The method is robust, fast, and sensitive, enabling the detection of as few as 10 bacteria/mL without an enrichment or concentration step . summarizes the use of different phage-based biosensor techniques for foodborne bacterial pathogens. 5.2. Phage-Based Electrochemical Biosensors Phages are specific to their host organisms and can act as biorecognition elements for electrochemical sensors. In a phage-based electrochemical biosensor, an electric current applied from an external source is used to attach the phage in an appropriate orientation.
Richter et al. immobilized a T4 phage on a gold surface with the aid of a 10 V electric potential for 30 min and observed a four-fold rise in the sensitivity of the ordered phage sensor compared with the disordered one . They also suggested that the Debye length (L D ) between the sample solution and the sensor's surface is crucial for the successful alignment of bacteriophages. Phage density on the surface rose 33-fold compared to chemical modification of the surface with dithiobis(succinimidyl propionate) (DTSP), and the sensitivity of the sensor increased 64-fold in comparison with the physical adsorption immobilization method . A typical phage-based electrochemical sensor relies on potentiometric and amperometric measurements . summarizes the different foodborne bacteria that have been detected using different types of phage-based electrochemical biosensors. 5.2.1. Amperometric Biosensors Phage-based amperometric biosensors are among the electrochemical sensors that have received the most attention due to their simplicity, high sensitivity, specificity, and suitability for field testing. However, inhibitors can interfere with the assay and lower its specificity. In this platform, the phages are used either as a probe for the detection of a target bacterium or as a lysing agent for the indirect detection of pathogens using the metabolites released from the lysed cells . Amperometric biosensors have been developed to quantify the flow of current between electrodes when an oxidation–reduction reaction takes place. In this assay, enzymes such as horseradish peroxidase (HRP), glucose oxidase, and alkaline phosphatase (AP) are used as bio-receptors . Several phage-based amperometric biosensors have been introduced for the detection of foodborne bacterial pathogens from food surfaces. Neufeld et al. designed a phage-based amperometric technique (using β-D-galactosidase as the marker enzyme) for the detection of E. coli at concentrations as low as 1 CFU/100 mL within 6 to 8 h . Likewise, Yemini et al. used the same platform to detect M. smegmatis and B. cereus using β- and α-glucosidases, respectively, as markers, with a detection limit of 10 CFU/mL within 8 h . Xu et al. designed a T4 phage-based sensor with a micro-gold electrode for the detection of E. coli from unspecified food samples. The detection range of this amperometric biosensor is 1.9 × 10¹–1.9 × 10⁸ CFU/mL of bacterial cells . Quintela and Wu developed a portable sandwich-type phage-based amperometric biosensor using environmental phage isolates belonging to the Myoviridae and Siphoviridae families. The sensor was highly specific to various Shiga toxin-producing E. coli (STEC) serogroups, showing a detection limit of 10–10² CFU/mL for the STEC O26, O157, and O179 strains within 1 h . In another study, Nikkhoo et al. introduced a quick and inexpensive bacterial detection platform using T6 bacteriophages in combination with ion-selective field-effect transistors (ISFETs) and potassium-sensitive membranes (potassium ion detection). This platform was highly specific for the detection of E. coli in less than 10 min . 5.2.2. Electrochemical Impedance Spectroscopy (EIS) Biosensors Electrochemical impedance spectroscopy (EIS) is a sensing technique in which a small sinusoidal signal is applied and the analysis is carried out based on changes in the electrical properties (conductance, impedance, and capacitance) of the medium ( ). Microbial metabolism in the medium reduces the impedance .
Bacteriophages immobilized on an electrode are used as probes in this platform to detect bacterial strains at the electrode's surface . This technique is applicable for the detection of E. coli in inoculated samples or pure culture media in the range of 10⁴ to 10⁷ CFU/mL . Webster et al. designed a phage-based impedimetric microelectrode array biosensor. The results indicated that the sensitivity of the impedimetric biosensor was enhanced by reducing the gap and width of the electrodes and by using a lower relative dielectric permittivity . An impedimetric biosensor (a label-free system) was proposed by Tlili et al. for the analysis of E. coli B with T4 phage-based EIS by covalently immobilizing the phages on a cysteamine-modified gold surface, with a detection limit of 8 × 10² CFU/mL in less than 15 min . A screen-printed graphene sensor surface (electrode) was functionalized with highly specific lytic phages for the quick detection of Staphylococcus arlettae . summarizes some of the foodborne pathogens that have been detected using this technique. 5.3. Micromechanical Biosensors Phage-Based Quartz Crystal Microbalance Assays A phage-based quartz crystal microbalance (QCM) sensor is used to quantify the mass of analytes via phages immobilized on the surface of a sensor made from quartz crystal . The quartz crystal oscillates under an alternating current (AC) at a specific resonance frequency, and the resonance frequency depends on changes in the surface mass . In phage-based QCM assays, bacterial cells captured by various phage components are deposited on the sensor, ultimately changing the mass on the sensor surface. Guntupalli et al. used the phage 12600 as a sensor (probe) in a phage-based QCM assay . Olsen and co-workers developed a filamentous phage-based sensor in which ~3 × 10¹⁰ phages/cm² were physically adsorbed on a piezoelectric transducer surface, enabling the fast detection of S . Typhimurium. This phage-based QCM sensor exhibited a low limit of detection (LOD) of 10² CFU/mL with an assay time of <3 min . 5.4. Phage-Based Magnetoelastic Biosensor Phage-based magnetoelastic (ME) sensors use a wireless, mass-sensitive technique for the simple, specific, and rapid detection of biological analytes such as B. anthracis spores, Salmonella , and E. coli cells on food surfaces . This biosensor consists of a magnetoelastic resonator immobilized with phages that act as bio-probes to recognize the target organism . The sensor detects pathogens by measuring changes in the resonant frequency, which are proportional to changes in the sensor's mass ( ). An ME biosensor is a simple, time-effective, and cost-effective detection platform for foodborne pathogens in different food matrices and can be a substitute for the qPCR method . This biosensor has been used to detect S. Typhimurium directly on the shells of eggs and various fresh produce surfaces, including tomatoes, spinach leaves, and watermelons . Wang et al. fabricated an ME biosensor using filamentous E2 phages specific for the detection of S. Typhimurium on fresh spinach leaves. The bacterium was detected after a minimum incubation time of 7 h with a detection limit of 100 CFU/25 g . In another study, Chen et al. developed an ME biosensor for the detection of Salmonella on the surface of chicken breast fillets using the phage C4-22, with detection in 2–10 min and a detection limit of 7.86 × 10⁵ CFU/mm² . A ferromagnetoelastic biosensor was designed using a tailed B. cereus -specific phage as a novel biorecognition tool for the detection of B. cereus in food matrices; however, the application of this biosensor in food samples has not been explored yet .
In general, ME biosensors show excellent specificity and sensitivity in pathogen detection and can be used for the real-time detection of target pathogens . summarizes the different foodborne bacteria that have been detected using the different types of phage-based micromechanical biosensors.
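Both QCM and ME sensing rest on the same principle: captured mass shifts a resonance frequency. As an illustration of that mass–frequency relationship, the short Python sketch below evaluates the Sauerbrey equation commonly used for rigid films on QCM crystals; the crystal parameters and the per-cell mass are assumed typical values for a 5 MHz AT-cut crystal, not figures taken from the studies cited above.

```python
# Illustrative sketch of the Sauerbrey relation used in QCM mass sensing:
#   delta_f = -2 * f0^2 * delta_m / (A * sqrt(rho_q * mu_q))
# Assumed values for a typical 5 MHz AT-cut quartz crystal, not from the cited studies.
import math

f0 = 5.0e6       # fundamental resonance frequency of the crystal (Hz)
A = 1.0          # active electrode area (cm^2), assumed
rho_q = 2.648    # density of quartz (g/cm^3)
mu_q = 2.947e11  # shear modulus of AT-cut quartz (g/(cm*s^2))

def sauerbrey_shift(delta_m_g: float) -> float:
    """Frequency shift (Hz) for an added rigid mass delta_m_g (g) on area A."""
    return -2.0 * f0**2 * delta_m_g / (A * math.sqrt(rho_q * mu_q))

# Hypothetical example: 1e5 captured cells at ~1 pg (1e-12 g) per cell.
captured_mass = 1e5 * 1e-12
print(f"Expected resonance shift: {sauerbrey_shift(captured_mass):.2f} Hz")
```

With these assumed parameters the sensitivity works out to about 56.6 Hz per µg/cm², which is why even small numbers of captured cells produce a measurable downward shift in resonance frequency.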
Bacteriophages have very important characteristics that make them ideal biorecognition agents for incorporation into biosensors for the detection of foodborne bacterial pathogens in food samples. They are highly specific; therefore, phage-based sensors are unaffected by background flora. As phages infect only living host bacteria, a phage-based sensor can easily distinguish living from dead organisms. The resistance of phages and phage-associated proteins to a wide range of temperatures, pH values, and organic solvents makes phage-based biosensors superior to other conventional pathogen-detection techniques. Generally, bacteriophage-based biosensor systems are cost-effective, specific, and more stable than conventional foodborne pathogen detection techniques. Unlike antibodies, bacteriophages can readily be produced in large quantities; thus, the fabrication of a biosensor using whole phages or phage proteins could be an economical platform . Currently, new phages with multiple binding sites on their surface or with other desirable properties can be generated using advanced synthetic biology approaches, which enables phages to be used for a wide range of biosensor applications . With all the advantages mentioned above, there are certain challenges related to the development of phage-based biosensors that need attention. An obvious challenge is employing bacteriophages with a host range broad enough to avoid false-negative results. Bacteriophages typically recognize a specific receptor on the host cell's surface; therefore, phage-based sensors must be tested against target and nontarget bacteria to diminish the chance of false-negative results. In addition, bacterial contamination or the presence of lipids, carbohydrates, and proteins could profoundly affect the binding efficiency and phage immobilization on the sensor surface. Phage resistance is another emerging challenge, arising from the loss of the receptors on the surface of the host organism required for phage adsorption, or from host resistance triggered by intracellular defense mechanisms. This phenomenon can also affect the development of phage-based biosensors. However, such a problem can be overcome by using a "phage cocktail" containing a mixture of phages. The idea of a phage cocktail should be adopted in future phage-based biosensor platforms, especially for the simultaneous detection of multiple foodborne pathogens . Another challenge for the establishment of a stable phage-based biosensor is the formation of stable chemical bonds between the surface of the biosensor and the phage attachment domain. For this, the physical as well as chemical features of the phages have to be explored in depth to identify suitable reactions for generating a stable sensor platform . In addition, it has been recognized that when phage-based biosensors are exposed to a dry environment, the tail fibers lose their structural integrity, affecting bacterial capture on the sensor platform . Nevertheless, engineered phage-based biosensors can circumvent these limitations. Engineering bacteriophages is inherently challenging due to the compact nature of their genomes and the scarcity of noncoding sequences or restriction sites. However, with the development of numerous DNA synthesis methodologies and their application in synthetic biology, these drawbacks are likely to fall away rapidly.
Nevertheless, even with the development of synthetic biology, there is still a need for more insight into the genetic makeup of phage genomes that can be used for this purpose . The selection of bacteriophages of a desired size, especially for nano-biosensor platforms, and the optimization of the expression of binding domains on the surface of the phages remain major challenges. In addition, the immobilization of phages or their proteins on the surface of sensor platforms through chemical anchoring or physical adsorption is well developed; however, their stable attachment to other surfaces is a fertile topic for research exploration . While the specificity of bacteriophages towards a host/target bacterium is the basis for the development of phage-based biosensors, there is a need to broaden the detection range for multi-pathogen detection. Introducing polyvalency to receptor-binding proteins (RBPs) has become relevant to establishing a multiplexed platform for the rapid detection of foodborne pathogens, an area that has yet to be addressed . In this review, nearly 54 biosensors have been described and summarized. Though the detection limits and validation with samples for the majority of these sensors are known ( , and ), many researchers failed to provide such information. This might be due to lysis of the target bacterium by the bacteriophages or interference from the food samples, which consequently could have obscured the accuracy of bacterial counts. In addition, the drying of biorecognition molecules on the sensor platform may have resulted in the loss of the captured target bacterium, which could also have affected detection. To overcome such limitations, genetically modified phages and/or advanced functional surface chemistry can be employed for stable phage immobilization. Phage-based biosensors have been demonstrated to have great potential in the detection of pathogenic bacteria from food and the environment. However, the transition from the laboratory bench to commercial spaces has been very slow due to several constraints, including, but not limited to, a weak signal-to-noise ratio; the sensitivity and specificity of the bacteriophages; reproducibility; the short shelf-life of the sensor; instrument design; and cost. The future advancement of phage-based biosensing platforms should also consider the development of new recognition platforms, improvements to signal amplification, and the establishment of nanostructures for the precise geometry of the sensor design. To this end, genetically modified phages are relevant since they can display the desired peptides and proteins on their surface to generate an appropriate and multifunctional biorecognition platform. Furthermore, one of the most promising future directions of phage-based sensing is its compatibility with emerging biomolecules and nanostructures (quantum dots; metallic, magnetic, and polymer nanoparticles; etc.) to generate new and innovative phage-based nanodevices or bioinspired sensor tools. Such hybrid, versatile sensors are well-suited for the detection of a wide variety of foodborne pathogens from various sources. In conclusion, even though the progress made so far has been inspiring, the future of phage-based sensing still requires a strong collaborative effort between researchers working in diverse disciplines, such as molecular biology, microbiology, biochemistry, engineering, material science, physics, and chemistry, to enhance the overall detection efficiency of the sensors.
Moreover, care must be taken to avoid any potential public health hazards associated with the bacteriophages and the spread of the parental host (pathogenic bacteria) during bacteriophage production, purification, and storage.
Antimicrobial resistance pattern of
Escherichia coli (E. coli) , a member of the Enterobacteriaceae family, is a known foodborne pathogen in humans . It is able to spread along the food chain and into ecosystems and has been widely reported worldwide. E. coli strains may be pathogenic or non-pathogenic, and pathogenic strains cause disease both in the intestinal tract and in other areas of the body. The main conditions associated with E. coli infection include diarrhea, urinary tract infections, meningitis, peritonitis, septicemia, and gram-negative bacterial pneumonia . Strains of E. coli of animal origin can be opportunistic and pathogenic, leading to either less harmful infections ( e.g ., uncomplicated urinary tract infections) or lethal infections ( e.g ., bloodstream infections), despite the fact that the majority of E. coli strains are non-pathogenic or even part of the normal flora in humans . Seafood is consumed globally, and pathogenic microorganisms may be introduced under unsanitary conditions, either during handling and storage or through exposure to contaminated water sources . Escherichia coli is one of the pathogenic organisms considered emergent in aquaculture, the fastest-growing food-producing sector according to the United Nations Food and Agriculture Organization (FAO) . Moreover, consumption of raw or undercooked seafood increases the chance of E. coli outbreaks, emphasizing the importance of raising public awareness of these potential health risks and of the diseases that may follow contamination . Furthermore, the public should be informed that some E. coli strains are heat resistant and may spread infections even when precautions have been taken to thoroughly cook food . Antimicrobial resistance (AMR) is another key factor when discussing the impact of E. coli infection on humans, as AMR is considered a global threat to public health and the economy . Additionally, the use of antibiotics in food-producing animals is increasing despite the known implications for human health . Furthermore, it has been demonstrated that AMR can be transferred between commensal and zoonotically pathogenic members of the Enterobacteriaceae through the transmission of genetic material . Hydrolysis of β-lactam antibiotics via overproduction of β-lactamases is one of the most commonly reported resistance mechanisms of Enterobacteriaceae . Extended-spectrum β-lactamases (ESBLs), produced by microbes and in particular by E. coli , are variants of β-lactamases that confer resistance to multiple drugs, including aztreonam, cefotaxime, ceftazidime, and related oxyimino-β-lactams, as well as to other penicillins and cephalosporins, although they may be inhibited by β-lactamase inhibitors such as clavulanic acid and tazobactam . Given the possibility of transferring multidrug-resistance capacity between multidrug-resistant microbes, ESBL-producing E. coli are of particular importance when considering potential health issues worldwide . In 2019, an estimated 1.27 million deaths annually were attributed to AMR, with developing countries bearing the greatest impact of this burden .
AMR in E. coli is no longer confined to healthcare-associated infections (nosocomial outbreaks), as reports have shown that both humans and animals contribute to the contamination of aquatic systems, which are significant reservoirs of antibiotic resistance. Aquatic environments are emerging as significant reservoirs of antibiotic resistance, enabling the spread of antibiotic resistance genes (ARGs) through contaminated food sources . The inappropriate use of antibiotics in aquaculture significantly contributes to the global spread of AMR, with severe repercussions for human, animal, and environmental health . A concerning direct correlation between AMR and ARGs was shown when resistant bacterial strains were found in seafood sources, and more recent studies highlight that most of these strains were E. coli carrying β-lactam antibiotic resistance genes . With a growing global population increasingly dependent on aquaculture products for food security, the rise of AMR and related infections linked to seafood poses a significant threat to public health . The aquaculture supply chain therefore plays a crucial role and is recognized within the context of a "One Health" framework for controlling the spread of AMR in the global aquaculture sector . In 2015, during the 68th World Health Assembly held in Switzerland and subsequent to the adoption of the Global Action Plan (GAP) on Antimicrobial Resistance, most members of the United Nations (UN) and World Health Organization (WHO) adopted a resolution committing to the development of National Action Plans (NAPs) based on a "One Health" approach to reduce the use of antibiotics, in the hope of decreasing the spread of AMR . A significant portion of the food requirements of Saudi Arabia is met through imports from different countries worldwide . Imports of food products, including seafood, are regulated by the Saudi Food and Drug Authority (SFDA), which requires a health certificate for imported seafood of animal origin, among other conditions and requirements ensuring food safety . Despite strict local and international regulations to ensure food safety, the ability of microorganisms to adapt efficiently to new surroundings through mutation underscores the need for continuous development of regulations and monitoring strategies to aid in the prevention and management of food-related diseases . Contamination of seafood products with antibiotic-resistant bacteria is a growing concern and poses a serious public health issue. This study aimed to investigate antimicrobial resistance and to assess the extent of co-resistance patterns in E. coli isolated from imported frozen shrimp available for purchase in the Eastern Province of Saudi Arabia. Samples In this study, a total of 40 samples of frozen shrimp imported from China ( n = 25 samples) and from Vietnam ( n = 15 samples) were purchased from different supermarkets in Al Khobar in the Eastern Province of Saudi Arabia. Each purchased shrimp sample weighed one kilogram. All samples were examined for the presence of multidrug-resistant (MDR) E. coli . During sample processing, the storage temperature and production date of the products were noted. Most of the purchased frozen shrimp were peeled and headless, and some samples were also without tails. The purchased samples were transferred to the microbiology laboratory and stored at −20 °C.
Isolation and identification Escherichia coli enrichment broth (EC Broth, Oxoid, Hampshire, England) and CHROMagar™ E. coli (CHROMagar, Saint-Denis, France) were used to isolate E. coli from the frozen shrimp. Briefly, a 25 g portion of shrimp from each sample was weighed and placed in a sterile stomacher plastic bag containing 225 mL of EC Broth . The bag was placed into a Stomacher 400 lab blender (Stomacher 400 Circulator; Seward, West Sussex, UK) and blended for 2 min. The broth containing the sample was then incubated at 37 °C for 24 h. After incubation, 10 μL from each broth was taken with an inoculating loop and streaked onto CHROMagar™ E. coli plates, which were incubated at 37 °C. After 24 h of incubation, the plates were inspected for blue-colored colonies, which were identified using biochemical tests including oxidase, indole, and API 20E kit strips (BioMerieux, Marcy, France). Antibiotic susceptibility testing Antimicrobial susceptibility was assessed in accordance with the established protocol for the Kirby-Bauer disk diffusion susceptibility test. For each test, the suspension turbidity was adjusted to a 0.5 McFarland standard and inoculated onto Mueller-Hinton agar using sterile cotton-wool swabs. A total of twenty-one different Oxoid antibiotic discs (Oxoid, Hampshire, UK) from different classes were tested. The antibiotic discs were dispensed onto the agar using an automated disk dispenser (Oxoid, Hampshire, UK). The diameter of the inhibition zone was measured in millimeters with Vernier calipers, and the results were interpreted as sensitive (S), intermediate (I), or resistant (R) based on breakpoints for E. coli as per the protocols established by the Clinical and Laboratory Standards Institute (CLSI) . The tested antibiotic agents included: ampicillin (AM, 10 µg), amikacin (AK, 30 µg), augmentin (AUG, 30 µg), aztreonam (ATM, 30 µg), ciprofloxacin (CIP, 5 µg), cefotaxime (CTX, 30 µg), ceftazidime (CAZ, 30 µg), ceftriaxone (CRO, 30 µg), chloramphenicol (C, 30 µg), cephalexin (CFX, 30 µg), cefoxitin (FOX, 30 µg), nitrofurantoin (FM, 50 µg), gentamicin (GM, 10 µg), kanamycin (K, 30 µg), cephalothin (KF, 30 µg), nalidixic acid (NA, 30 µg), norfloxacin (NOR, 10 µg), tobramycin (TN, 30 µg), piperacillin (PIP, 30 µg), trimethoprim-sulphamethoxazole (SXT, 1.25 µg/23.75 µg), and tetracycline (TE, 30 µg). The Escherichia coli ATCC 25922 reference strain was included as the control in all tests. The MAR index of the E. coli isolates was determined following the Krumperman method . The MAR index was calculated using the equation MAR = a/b, with "a" denoting the number of antibiotics to which an isolate showed resistance and "b" representing the total number of antibiotics used in this study. A value above 0.2 suggests that the isolates were obtained from high-risk origins. MDR was defined as resistance to ≥3 different classes of antimicrobials .
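As a minimal illustration of the two classification rules defined above, the Python sketch below computes the MAR index (MAR = a/b, flagged as high-risk when above 0.2) and the MDR status (resistance to ≥3 antimicrobial classes) for a single isolate; the resistance profile and the partial antibiotic-class mapping shown are hypothetical examples, not data from this study.

```python
# Minimal sketch of the MAR index (Krumperman) and MDR classification used above.
# The isolate profile and the (partial) antibiotic-class mapping are hypothetical.

TOTAL_ANTIBIOTICS = 21  # "b": number of antibiotics tested in this study

# Partial class mapping for illustration only, not the full 21-disc panel.
ANTIBIOTIC_CLASS = {
    "AM": "penicillins", "PIP": "penicillins",
    "KF": "cephalosporins", "CFX": "cephalosporins", "CRO": "cephalosporins",
    "NA": "quinolones", "CIP": "quinolones",
    "TE": "tetracyclines",
    "SXT": "folate-pathway inhibitors",
}

def mar_index(resistant: list[str], total: int = TOTAL_ANTIBIOTICS) -> float:
    """MAR = a/b: a = antibiotics the isolate resists, b = antibiotics tested."""
    return len(resistant) / total

def is_mdr(resistant: list[str]) -> bool:
    """MDR: resistant to >= 3 different antimicrobial classes."""
    classes = {ANTIBIOTIC_CLASS[ab] for ab in resistant if ab in ANTIBIOTIC_CLASS}
    return len(classes) >= 3

# Hypothetical isolate resistant to eight antibiotics.
profile = ["AM", "PIP", "KF", "CFX", "CRO", "NA", "SXT", "TE"]
mar = mar_index(profile)
print(f"MAR index = {mar:.2f} ({'high-risk source' if mar > 0.2 else 'low-risk'})")
print(f"MDR: {is_mdr(profile)}")
```

For this hypothetical profile, MAR = 8/21 ≈ 0.38, above the 0.2 high-risk threshold, and resistance spans five classes, so the isolate would be counted as MDR.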
Isolation and prevalence rate Of the 40 imported frozen shrimp samples analyzed, 30 tested positive for E. coli , giving an overall prevalence rate of 75%, with a total of 180 isolates identified. The largest number of E. coli isolates ( n = 140) was found in the 22 (88%) positive samples out of the 25 samples of frozen shrimp imported from China. Additionally, eight (53.3%) out of 15 samples of frozen shrimp imported from Vietnam were positive for E. coli , leading to the recovery of 40 isolates .
Antibiotic susceptibility testing
All 180 E. coli strains isolated from imported frozen shrimp were subjected to antibiotic susceptibility testing against a battery of 21 distinct antibiotics. The antibiotic susceptibility results are shown in . Overall, the highest percentages of isolates exhibited resistance to cephalothin (174; 96.6%), ampicillin (167; 92.7%), cephalexin (163; 90.5%), piperacillin (156; 86.6%), ceftriaxone (123; 68.3%), nalidixic acid (95; 52.7%), trimethoprim-sulphamethoxazole (90; 50%), and tetracycline (88; 48.8%) . Relatively high susceptibilities were observed for augmentin (179; 99.4%), amikacin (179; 99.4%), kanamycin (179; 99.4%), cefoxitin (171; 95%), ceftazidime (167; 92.7%), and nitrofurantoin (167; 92.7%), as shown in . Notably, 108 isolates (60.0%) showed intermediate resistance to aztreonam . The lowest resistance rate (0.5%) was observed for amikacin, augmentin and kanamycin .
Antibiotic resistance and multiple drug resistance patterns
The multiple antibiotic resistance patterns and the multiple antibiotic resistance (MAR) index are presented in . Among the 180 E. coli isolates recovered from frozen shrimp imported from China and Vietnam, none was susceptible to all of the antibiotics tested . The largest group of isolates (40; 22.2%) exhibited resistance to eight antimicrobials, with 33 (18.3%) from frozen shrimp imported from China and seven (3.8%) from frozen shrimp imported from Vietnam, as shown in . The MDR rate (resistance to ≥3 different classes of antimicrobials) was 94.4% (170/180), as shown in . The broadest MDR pattern, covering 14 different antimicrobials (AM-ATM-CIP-CAZ-CRO-CFX-GM-KF-NA-NOR-TN-PIP-SXT-TE), was detected in seven isolates from frozen shrimp imported from China, whereas among E. coli isolated from frozen shrimp imported from Vietnam the broadest MDR pattern, covering 12 different antimicrobials (AM-CIP-CTX-CRO-C-CFX-KF-NA-NOR-PIP-SXT-TE), was found in a single isolate . The 180 E. coli isolates were segregated into 71 different antibiotic resistance groups, with MAR indices ranging from 0.04 to 0.66; 161 of the 180 (89.4%) E. coli isolates recorded MAR indices above 0.2, as shown in . The largest number of isolates fell into the 12a pattern "AM-CIP-CTX-CRO-C-CFX-KF-NA-NOR-PIP-SXT-TE", with 18 (94.7%) isolates from frozen shrimp imported from China and one (5.2%) from Vietnam . Interestingly, segregating the patterns by country of origin revealed a few resistance patterns shared between the two countries (4a, 6a, 8b, and 12a), together accounting for 44 (24.4%) overlapping isolates, as shown in and . Notably, the four shared patterns belong to different MAR groups .
Antimicrobial co-resistance patterns
Evaluation of the co-resistance results showed that the highest co-resistance, in 162 (90%) isolates, was observed between cephalothin and ampicillin, followed by 158 (87.7%) isolates between cephalothin and cephalexin, as shown in . The lowest co-resistance (0.5%) was found between amikacin and all other tested antibiotics, with similar results for augmentin, as shown in .
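Co-resistance between two agents, as reported in this section, is simply the share of isolates resistant to both. A minimal sketch with made-up resistance profiles (the data are hypothetical, not the study's):

```python
from itertools import combinations

# Each isolate is represented by the set of antibiotic codes it resists.
isolates = [
    {"KF", "AM", "CFX", "TE"},
    {"KF", "AM", "PIP"},
    {"KF", "CFX", "NA"},
    {"AM", "TE"},
]

def co_resistance(isolates, a, b):
    """Fraction of isolates resistant to both antibiotics a and b."""
    both = sum(1 for profile in isolates if a in profile and b in profile)
    return both / len(isolates)

for a, b in combinations(["KF", "AM", "CFX"], 2):
    print(f"{a}-{b}: {100 * co_resistance(isolates, a, b):.0f}%")
# KF-AM: 50%, KF-CFX: 50%, AM-CFX: 25% for the toy data above.
```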
Discussion
Seafood products are an important food source because of their high content of polyunsaturated fatty acids and essential trace elements . Consequently, numerous coastal nations have promoted seafood farming to fulfill both local and international demand . The contamination of seafood with antibiotics and antibiotic-resistant bacteria poses a potential threat to human health . A recent investigation revealed that seafood and seafood products from four Southeast Asian countries (Malaysia, Thailand, Vietnam, and Indonesia) were confiscated and rejected by 19 European countries between 1997 and 2020 due to the presence of pathogens and antibiotics . Seafood-borne pathogens and antibiotic resistance are recognized as global health concerns since they jeopardize food security . In Europe, antibiotic-resistant pathogens have been linked to over 30,000 deaths annually . It has been projected that by 2050, antibiotic-resistant pathogens could lead to 10 million deaths per year globally, potentially reducing gross domestic product by 2% to 5%, which amounts to approximately 100 trillion US dollars . A recently published review article from Singapore highlighted antimicrobial resistance among bacteria originating from 11 Southeast Asian countries; most antimicrobial resistance reports came from Vietnam, Malaysia, and Thailand, respectively . The antimicrobial resistance found in Southeast Asian aquaculture was classified into 17 drug classes , with resistance to aminoglycosides, beta-lactams, (fluoro)quinolones, tetracyclines, and sulpha-group drugs, as well as multidrug resistance, reported most often . The same study revealed that resistance to beta-lactams, tetracyclines, and sulpha-group drugs was reported in every country at frequencies higher than 40%, and that the most widely and frequently reported resistant bacteria in Southeast Asian aquaculture were strains of E. coli, Aeromonas, and Vibrio . Notably, E. coli isolates in our study exhibited high resistance to the antibiotic classes of cephalosporins, penicillins, quinolones, sulfonamides, and tetracyclines. Our findings are consistent with an earlier study from China on the prevalence of MDR E. coli in retail aquatic products such as shrimp, fish and shellfish, which revealed that 40% of the investigated samples were contaminated with MDR E. coli . That study also reported a high prevalence of resistance among E. coli isolates to tetracycline (93.7%), trimethoprim-sulfamethoxazole (78.9%), ampicillin (78.4%), chloramphenicol (72.1%), nalidixic acid (73.2%), cephalothin (65.3%), streptomycin (65.8%), kanamycin (42.1%), gentamicin (37.9%), ciprofloxacin (42.6%), and norfloxacin (45.8%) . In our study, E. coli isolates from frozen shrimp imported from China showed high resistance to cephalothin (97.8%), ampicillin (90.7%), cephalexin (90%), piperacillin (87.1%), ceftriaxone (69.2%), nalidixic acid (64.2%), norfloxacin (55%), trimethoprim/sulfamethoxazole (50.7%), and tetracycline (41.4%). Our study is also in concordance with recent research from the USA, which reported high antimicrobial resistance among E. coli isolated from imported shrimp.
That study revealed that the isolates showed resistance to eight different antibiotics, namely gentamicin, streptomycin, ampicillin, chloramphenicol, nalidixic acid, ciprofloxacin, tetracycline, and trimethoprim/sulfamethoxazole . Furthermore, another similar study from the USA highlighted quinolone-resistant E. coli isolated from imported shrimp, with 52.3% of isolates exhibiting resistance to nalidixic acid, ampicillin, tetracycline and chloramphenicol . Another study, from Vietnam, investigated antimicrobial resistance in a total of 88 E. coli strains isolated from wild and farmed fish and found a high prevalence of resistance to sulfonamides (94.3% of isolates) . β-lactams, quinolones, sulfonamides, tetracyclines, and nitrofurans are among the antibiotic classes most commonly used in aquaculture and shrimp farming worldwide . The use of antibiotics in aquaculture, for both therapy and prophylaxis, may drive bacteria in the water environment to develop antibiotic resistance, and these resistance genes can spread further through horizontal gene transfer to surrounding bacteria . Antibiotic usage in the aquaculture industry is therefore expected to rise, since the industry is growing rapidly worldwide, particularly in Southeast Asia, to provide safe food for a global population expected to reach about 10 billion people by 2030 . The rise and emergence of antimicrobial resistance is a globally acknowledged issue according to the Food and Agriculture Organization of the United Nations (FAO) Action Plan on Antimicrobial Resistance (AMR) 2016–2020, which aims to assist the agricultural and food sectors in addressing AMR on an international scale . The FAO has identified that the threat of increasing AMR is more pronounced in nations with inadequate legislation and regulatory frameworks governing the use of antimicrobial agents than in those with established action plans and monitoring systems for antimicrobial usage . Furthermore, the global food trade has likely contributed to the spread of AMR across borders, and well-regulated countries potentially face the risk of introducing new resistant bacterial pathogens harboring resistance genes on plasmids and transposons, thereby increasing the national AMR burden through imported food products . Due to the high levels of international trade and the direct links to aquatic ecosystems, shrimp aquaculture may facilitate the global spread of AMR. Most shrimp production takes place in developing countries, where antibiotic quality and usage are often poorly regulated; additionally, in shrimp farming regions, untreated waste is frequently discharged directly into local water bodies . These risks contrast sharply with those associated with other major aquaculture products, such as salmon, which are cultivated in higher-income countries with stricter regulations and established management practices. Several earlier studies investigating AMR in shrimp concluded that evaluating the true extent of AMR risk in the shrimp sector is a significant challenge, particularly because of the difficulty in obtaining accurate data on antibiotic use .
Consequently, a recent research article critically reviewed the potential risks of antimicrobial resistance in the global shrimp industry, concluding that assessing the risks associated with antimicrobial use in this rapidly expanding sector is currently quite challenging, because the sector comprises diverse production systems at the intersection of aquatic and terrestrial environments . The study added that addressing the risks linked to AMR is further complicated by the trend toward intensification and the accompanying disease pressures, as many farmers currently lack alternatives to antibiotics for preventing crop losses . In this study, E. coli isolates displayed resistance ranging from one to 14 antimicrobials. Studies have demonstrated that bacteria with a MAR index exceeding 0.2 are typically linked to high-risk sources of contamination, also indicating heavy use of antibiotic growth promoters. In this study, 89.4% of E. coli isolates exhibited MAR indices in the range of 0.2 to 0.66, indicating that the majority of isolates originated from sources with high antibiotic exposure. A large proportion of E. coli isolates displayed MDR, with an overall rate of 94.4%; overall, resistance ranged from one antimicrobial to 14 different classes of antimicrobials. Among isolates from frozen shrimp imported from China, the highest numbers of MDR isolates were the 8, 18 and 7 E. coli isolates resistant to 8, 12 and 14 different classes of antimicrobials, respectively. In comparison, among isolates from frozen shrimp imported from Vietnam, 6, 5, 3, 2 and one E. coli isolates exhibited resistance to 6, 8, 9, 11, and 12 different classes of antimicrobials, respectively. Isolation of MDR E. coli from shrimp, seafood and other seafood products has been reported by several studies in different countries . A study from India reported a high prevalence of multiple-antibiotic-resistant E. coli isolated from fresh seafood sold in retail markets of Mumbai ; more than 90% of those isolates were resistant to cephalosporins (cefotaxime, cefpodoxime, and ceftazidime), and the MAR index of 97.35% of the isolates was above 0.18 . The fast-expanding aquaculture industry depends significantly on antimicrobials to prevent infectious bacterial diseases that threaten production; researchers have therefore been investigating antimicrobial resistance in aquaculture for over five decades and have reported a notable rise in evidence of antimicrobial resistance within this sector . In Saudi Arabia, very few studies have investigated antimicrobial resistance in imported frozen aquaculture fishery products. Early studies from Saudi Arabia that investigated antimicrobial resistance in retail imported frozen freshwater fish revealed high rates of antimicrobial resistance among isolates of Salmonella spp. and E. coli . Antimicrobial resistance (AMR) in aquaculture can be transmitted to clinically significant strains in the natural environment via horizontal gene transfer, with consequences for the entire ecosystem . Moreover, many studies have confirmed that shrimp aquaculture harbors bacterial pathogens that demonstrate multiple antibiotic resistance .
Imported frozen shrimp and other aquaculture products can serve as potential carriers for the spread of clinically significant antimicrobial-resistant bacteria and resistance genes, such as extended-spectrum ß-lactamases (ESBLs), plasmid-mediated quinolone resistance determinants (PMQR), colistin resistance (mcr-1), and carbapenemases . The presence of multidrug-resistant E. coli in imported frozen shrimp available for purchase in the Eastern Province of Saudi Arabia indicates possible unsanitary practices, and this contamination may pose a risk of human infection. The results obtained in this study indicate the emergence of resistance and a decline in the efficacy of antimicrobial agents. Additionally, 94.4% of the examined isolates exhibited MDR, 90% of isolates showed co-resistance between cephalothin and ampicillin, and 87.7% showed co-resistance between cephalothin and cephalexin. Moreover, the MAR index values obtained in this study indicate that the E. coli isolates from imported frozen shrimp originated from sources with high antibiotic exposure. Such findings underline the need for scientists and food authorities in Saudi Arabia to work together to monitor the presence of antimicrobial-resistant bacteria in imported frozen shrimp and other aquaculture products. Implementing hygienic practices for imported frozen aquaculture products is recommended to decrease the transmission of antimicrobial-resistant E. coli and other bacterial species within the human food chain. This study addresses the need for further research in Saudi Arabia to aid the monitoring and investigation of AMR bacteria in imported frozen aquaculture products. However, our study has two limitations: first, the number of samples examined was not optimal, and second, screening for antibiotic-resistance genes was not performed.
Supplemental Information 1 (10.7717/peerj.18689/supp-1): Antimicrobial susceptibility testing of E. coli isolated from imported shrimp.
Congenital Hypothyroidism: A 2020–2021 Consensus Guidelines Update—An ENDO-European Reference Network Initiative Endorsed by the European Society for Pediatric Endocrinology and the European Society for Endocrinology
Congenital hypothyroidism (CH) can be defined as (variable) dysfunction of the hypothalamic–pituitary–thyroid (HPT) axis present at birth, resulting in insufficient thyroid hormone (TH) production and, with that, severe-to-mild TH deficiency. CH may be caused by abnormal development or function of the thyroid gland, or of the hypothalamus and pituitary, but also by impaired TH action. In 2014, an international consensus guideline on CH was published that encompassed the scientific literature up to 2013 . An ENDO-European Reference Network (ERN) initiative was launched, endorsed by the European Society for Pediatric Endocrinology and the European Society for Endocrinology, with the aim of updating the practice guidelines for the diagnosis and management of CH.
Twenty-two participants from the ENDO-ERN network, Main Thematic Group 8—thyroid, including an ENDO-ERN patient association representative, and from the two scientific societies, the European Society for Pediatric Endocrinology and the European Society for Endocrinology, participated. Preparation for the consensus took ∼24 months, starting in late 2017, and included email exchanges and two preparatory face-to-face meetings organized in 2019. All coauthors performed a comprehensive literature search using PubMed, including articles published from January 1, 2013 to present (late 2020) concerning the five different subthemes presented in the consensus. Publications before 2013 had already been considered in the previous CH consensus published in 2014. Only publications in English were considered. A comprehensive review of all selected articles formed the basis of discussion and writing for the five working groups (WGs): WG1: neonatal screening, WG2: diagnosis and criteria for treatment, WG3: treatment and monitoring, WG4: outcomes of neonatal screening and early treatment, and WG5: genetics of CH and antenatal management. A preliminary document summarizing the questions addressed in the preparatory meetings was prepared by each WG and shared for review with all the experts before the final meeting. At the final consensus meeting, propositions and recommendations were reconsidered by participants and discussed in plenary sessions, enabling reformulation of the recommendations where needed. Recommendations were based on the best available research evidence. Best practice statements were considered when necessary and, if evidence was mixed, based on expert opinion. A detailed description of the grading scheme, Grading of Recommendations Assessment, Development and Evaluation (GRADE), has been published elsewhere . Factors that influence the strength of the recommendation (strong vs. weak) include the quality of evidence, the balance between benefits and risks, the burden of interventions, and the cost. For each point, recommendations and evidence are described, with a modification in the grading of evidence, as follows: 1 = strong recommendation (applies to most patients in most circumstances; benefits clearly outweigh the risks); 2 = weak recommendation (suggested by us or should be considered; the best action may depend on circumstances or patient values; benefits and risks closely balanced or uncertain). Quality of evidence is indicated as follows: +00: low (case series or nonsystematic clinical observations, inconsistent and imprecise estimates, or indirect evidence); ++0: moderate (studies with methodological flaws, inconsistent or indirect evidence); +++: high quality (low risk of bias).
Summary of the CH consensus guidelines update
1. Neonatal screening
1.1. The benefits of CH screening
Early detection and treatment of CH through neonatal screening prevent irreversible neurodevelopmental delay and optimize the developmental outcome (1/+++).
Screening for CH should be introduced worldwide (1/+++).
1.2. Analytical methodology and effectiveness of CH screening strategies
The incidence of CH partly depends on the screening strategy; based on data from a number of screening programs, the incidence of primary CH lies between 1 in 3000 and 1 in 2000; the highest reported incidence of central CH is ∼1 in 16,000 (1/+++).
The initial priority of neonatal screening for CH should be the detection of all forms of primary CH—mild, moderate, and severe; the most sensitive test for detecting primary CH is measurement of thyrotropin (TSH) (1/+++).
When financial resources are available, we recommend adding measurement of total or free thyroxine (fT4) to TSH, to screen for central CH (2/++0).
1.3. Postscreening strategies in special categories of neonates at risk of CH
Some groups of children may have a false-negative neonatal screening result or a high risk of mild CH not detected by neonatal screening, for instance premature, low-birthweight, and sick babies; for these groups a postscreening strategy including collection of a second specimen at ∼10 to 14 days of age may be considered (1/+00).
In patients with Down's syndrome, we recommend measuring TSH at the end of the neonatal period (1/++0).
The initial screening in an affected twin may be normal; a second screening in same-sex twins should be considered. The nonaffected sibling of twins should be followed up for possible TSH elevation later in life (2/+00).
Clinical suspicion of hypothyroidism, despite normal TSH in TSH-based screening programs, should prompt further evaluation for primary (rare cases of false-negative neonatal screening results) and central CH, particularly in children with a family history of central CH (2/+00).
2. Diagnostics and criteria for treatment
2.1. Biochemical criteria used in the decision to start treatment for CH
A newborn with an abnormal neonatal screening result should be referred to an expert center (1/++0).
An abnormal screening result should be followed by confirmatory testing consisting of measurement of serum fT4 and TSH (1/++0).
If the serum fT4 concentration is below and TSH clearly above the age-specific reference interval, then levothyroxine (LT4) treatment should be started immediately (1/+++).
If the serum TSH concentration is >20 mU/L at confirmatory testing (approximately in the second week of life), treatment should be started, even if fT4 is normal (arbitrary threshold, expert opinion) (2/+00).
If the serum TSH concentration is 6–20 mU/L beyond the age of 21 days in a healthy neonate with an fT4 concentration within the age-specific reference interval, we suggest either starting LT4 treatment immediately and retesting, off treatment, at a later stage, or withholding treatment but retesting 1 to 2 weeks later and re-evaluating the need for treatment (lack of evidence in favor of or against treatment; this is an area of further investigation) (2/++0).
In countries or regions where thyroid function tests are not readily available, LT4 treatment should be started if the filter paper TSH concentration is >40 mU/L (at the moment of neonatal screening; arbitrary threshold, expert opinion) (2/+00).
If the serum fT4 is low, and TSH is low, normal or slightly elevated, the diagnosis of central CH should be considered (1/++0).
In neonates with central CH, we recommend starting LT4 treatment only after evidence of intact adrenal function; if coexistent central adrenal insufficiency cannot be ruled out, LT4 treatment must be preceded by glucocorticoid treatment to prevent possible induction of an adrenal crisis (2/+00).
2.2. Communication of abnormal screening and confirmatory results
An abnormal neonatal screening result should be communicated by an experienced professional (e.g., member of the pediatric endocrine team, pediatrician, or general physician), either by telephone or face to face, and supplemented with written information for the family (2/+00).
2.3. Imaging techniques in CH
In patients with a recent CH diagnosis, we strongly recommend starting LT4 treatment before conducting thyroid gland imaging studies (1/++0).
We recommend imaging of the thyroid gland using either radioisotope scanning (scintigraphy) with or without the perchlorate discharge test, or ultrasonography (US), or both (1/++0).
Knee X-ray may be performed to assess the severity of intrauterine hypothyroidism (2/+00).
2.4. Associated malformations and syndromes
All neonates with a high TSH concentration should be examined carefully for dysmorphic features suggestive of syndromic CH, and for congenital malformations (particularly cardiac) (1/+++).
3. Treatment and monitoring of CH
3.1. Starting treatment for primary CH
LT4 alone is recommended as the medication of choice for the treatment of CH (1/++0).
LT4 treatment should be started as soon as possible, not later than 2 weeks after birth or immediately after confirmatory (serum) thyroid function testing in neonates in whom CH is detected by a second routine screening test (1/++0).
The LT4 starting dose should be up to 15 μg/kg per day, taking into account the whole spectrum of CH, ranging from mild to severe (1/++0).
Infants with severe CH, defined by a very low pretreatment serum fT4 (<5 pmol/L) or total T4 concentration in combination with elevated TSH (above the normal range based on time since birth and gestational age (GA)), should be treated with the highest starting dose (10–15 μg/kg per day) (1/++0).
Infants with mild CH (fT4 > 10 pmol/L in combination with elevated TSH) should be treated with the lowest initial dose (∼10 μg/kg per day); in infants with pretreatment fT4 concentrations within the age-specific reference interval, an even lower starting dose may be considered (from 5 to 10 μg/kg) (1/++0).
LT4 should be administered orally, once a day (1/++0).
The evidence favoring brand versus generic LT4 is mixed, but based on personal experience/expert opinion we recommend brand rather than generic (2/++0).
3.2. Monitoring treatment in primary CH
We recommend measurement of serum fT4 and TSH concentrations before or at least 4 hours after the last (daily) LT4 administration (1/++0).
We recommend evaluation of fT4 and TSH according to age-specific reference intervals (1/++0).
The first treatment goal in neonates with primary CH is to rapidly increase the circulating amount of TH, reflected by normalization of serum TSH; thereafter, TSH should be kept within the reference interval. If TSH is in the age-specific reference interval, fT4 concentrations above the upper limit of the reference interval can be accepted, and we recommend maintaining the same LT4 dose (1/++0).
Subsequent (clinical and biochemical) evaluation should take place every 2 weeks until complete normalization of serum TSH is achieved; therafter, the evaluation frequency can be lowered to once every 1 to 3 months until the age of 12 months (1/+00). Between the ages of 12 months and 3 years, the evaluation frequency can be lowered to every 2 to 4 months; thereafter, evaluations should be carried out every 3 to 6 months until growth is completed (1/+00). If abnormal fT4 or TSH values are found, or if compliance is questioned, the evaluation frequency should be increased (2/+00). After a change of LT4 dose or formulation, an extra evaluation should be carried out after 4 to 6 weeks (2/+00). We recommend physicians to avoid long-term under- or overtreatment during childhood (1/++0). In contrast to adults, in neonates, infants, and children, LT4 can be administered together with food (but with avoidance of soy protein and vegetable fiber); more important, LT4 should be administered at the same time every day, also in relation to food intake; while this approach can improve compliance, it ensures as constant as possible LT4 absorption and, with that, as good as possible LT4 dose titration (2/+00). In case of an unexpected need for LT4 dose increase, reduced absorption, or increased metabolization of thyroxine (T4) by other disease (e.g., gastrointestinal), food or medication should be considered (2/+00); incompliance may be the most frequent cause, especially in teenagers and adolescents. 3.3. Treatment and monitoring of central CH In severe forms of central CH (fT4 < 5 pmol/L), we also recommend to start LT4 treatment as soon as possible after birth at doses like in primary CH (10–15 μg/kg per day, see section 3.1), to bring fT4 rapidly within the normal range (1/++0). In milder forms of central CH, we suggest starting treatment at a lower LT4 dose (5–10 μg/kg per day), to avoid the risk of overtreatment (1/++0). In newborns with central CH, we recommend monitoring treatment by measuring fT4 and TSH according to the same schedule as for primary CH; serum fT4 should be kept above the mean/median value of the age-specific reference interval; if TSH is low before treatment, subsequent TSH determinations can be omitted (1/+00). When under- or overtreatment is suspected in a patient with central CH, then TSH, or free triiodothyronine (fT3) or total triiodothyronine (T3) can be measured (1/+00). When fT4 is around the lower limit of the reference interval, then undertreatment should be considered, particularly if TSH >1.0 mU/L (1/+00). When serum fT4 is around or above the upper limit of the reference interval, then overtreatment should be considered (assuming that LT4 has not been administered just before blood withdrawal), particularly if associated with clinical signs of thyrotoxicosis, or a high fT3 concentration (1/+00). 3.4. Diagnostic re-evaluation of thyroid function beyond the first 6 months of life When no definitive diagnosis of permanent CH was made in the first weeks or months of life, then re-evaluation of the HPT axis after the age of 2 to 3 years is indicated, particularly in children with a gland in situ (GIS), and in those with presumed isolated central CH (1/++0). For a precise diagnosis, LT4 treatment should be phased out over a 4 to 6 weeks period or just stopped, and full re-evaluation should be carried out after 4 weeks, consisting of (at least) fT4 and TSH measurement. 
If primary hypothyroidism is confirmed (TSH ≥10 mU/L), consider thyroid imaging and, if possible, genetic testing; if central CH is likely (fT4 below the lower limit of the reference interval in combination with a low-normal or only mildly elevated TSH), consider evaluating the other anterior pituitary functions and genetic testing. If TSH is above the upper limit of the reference interval but <10 mU/L (primary CH), or fT4 is just above the lower limit of the reference interval (central CH), then continue withdrawal and retest in another 3 to 4 weeks (1/++0).
If a child with no permanent CH diagnosis and a GIS requires an LT4 dose of less than 3 μg/kg per day at the age of 6 months, then re-evaluation can be done already at that time (1/++0).
We recommend avoiding iodine as an antiseptic during the peri- and neonatal periods, as it can cause transient CH (1/++0).
3.5. Treatment and monitoring of pregnant women with CH
In women with CH who are planning pregnancy, we strongly recommend optimization of LT4 treatment; in addition, these women should be counseled regarding the higher need for LT4 during pregnancy (1/++0).
fT4 (or total T4) and TSH levels should be monitored every 4 to 6 weeks during pregnancy, aiming at TSH concentrations in accordance with current guidelines on treatment of hypothyroidism during pregnancy, that is, <2.5 mU/L throughout gestation in patients treated with LT4 (1/+00).
In pregnant women with central CH, the LT4 dose should be increased, aiming at an fT4 concentration above the mean/median value of the trimester-specific reference interval (1/+00).
After delivery, we recommend lowering the LT4 dose to the preconception dose; additional thyroid function testing should be performed at ∼6 weeks postpartum (1/++0).
All pregnant women should ingest ∼250 μg iodine per day (1/++0).
4. Outcomes of neonatal screening and early treatment
4.1. Neurodevelopmental outcomes
Psychomotor development and school progression should be periodically evaluated in all children with CH; speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation (1/++0).
In the small proportion of children with CH who do display significant psychomotor developmental delay and syndromic CH with brain abnormalities, it is crucial to rule out causes of intellectual impairment other than CH (1/+00).
Not just neonatal, but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0).
4.2. Development of goiter in thyroid dyshormonogenesis
Children and adolescents with primary CH due to dyshormonogenesis may develop goiter and nodules; in these cases, serum TSH should be carefully targeted in the lower part of the normal range, and periodic ultrasound investigation is recommended to monitor thyroid volume (2/++0).
Since a few cases of thyroid cancer have been reported, fine needle aspiration biopsy for cytology should be performed in case of suspicious nodules on ultrasound investigation (1/+00).
4.3. Growth, puberty, and fertility
Adequately treated children with nonsyndromic CH have normal growth and puberty, and their fertility does not differ from individuals who do not have CH (1/+++).
4.4. Bone, metabolic, and cardiovascular health
Adequately treated children with nonsyndromic CH also have normal bone, metabolic, and cardiovascular health (1/++0).
4.5. Patient and professional education, and health-related quality of life
Medical education about CH should be improved at all levels, with regular updates (1/+++).
Education of parents, starting at the time of diagnosis, and later of the patient, is essential; not only throughout childhood, but also during transition to adult care and in women during pregnancy (1/+++).
Since adherence to treatment may influence the outcomes, it should be promoted throughout life (1/++0).
4.6. Transition to adult care
When patients are transferred from pediatric to adult care, the main aims are continuity of care and, with that, optimal clinical outcomes and quality of life, and to increase understanding of CH and promote self-management (1/+++).
5. Genetics of CH, genetic counseling, and antenatal management
5.1. Criteria for genetic counseling
Genetic counseling should be targeted rather than general (to all CH patients) and done by an experienced professional (2/++0).
Counseling should include explaining inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0).
Parents of a child with CH, or families with an affected member, should have access to information about the two major forms of primary CH—thyroid dysgenesis (TD) and dyshormonogenesis—and, if included in the neonatal screening, about central CH (1/+++).
5.2. Genetics of CH
If genetic testing is performed, its aim should be improving diagnosis, treatment, or prognosis (1/++0).
Before doing so, the possibilities and limits of genetic testing should be discussed with parents or families (1/++0).
When available, genetic testing should be performed by means of new techniques, such as comparative genomic hybridization (CGH) array, next-generation sequencing (NGS) of gene panels (targeted NGS), or whole exome sequencing (WES) (1/++0).
Preferably, genetic testing or studies should be preceded by careful phenotypic description of the patient's CH, including morphology of the thyroid gland (2/++0).
Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0).
Any syndromic association should be studied genetically, not only to improve genetic counseling, but also to identify new candidate genes explaining the association (1/++0).
Further research is needed to better define patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0).
5.3. Antenatal diagnostics, evaluation of fetal thyroid function, and management of fetal hypothyroidism
We recommend antenatal diagnosis in cases of goiter fortuitously discovered during systematic ultrasound examination of the fetus, in relation to thyroid dyshormonogenesis (1/+++); familial recurrence of CH due to dyshormonogenesis (25% recurrence rate) (1/+++); and known defects of genes involved in thyroid function or development with potential germline transmission (1/++0).
Special issues should be considered for syndromic cases with potential mortality and possible germline mosaicism (as for NKX2-1 gene mutation/deletion and severe pulmonary dysfunction with possible transmission through germline mosaicism). In such circumstances, the discussion of the prenatal diagnosis should be open. The therapeutic management of affected fetuses should comply with the laws in force in the country concerned (1/++0).
The familial recurrence of CH due to dysgenesis (2% of familial occurrences) requires further study to determine the feasibility and clinical relevance of antenatal detection.
For the evaluation of fetal thyroid volume, we recommend ultrasound scans at 20 to 22 gestational weeks to detect fetal thyroid hypertrophy and potential thyroid dysfunction in the fetus. Goiter or an absence of thyroid tissue can also be documented by this technique. Measurements should be made as a function of GA, and thyroid perimeter and diameter should be measured to document goiter (1/+++).
If a (large) fetal goiter is diagnosed, prenatal care should be provided in a specialized center of prenatal care (1/+++).
We recommend cordocentesis, rather than amniocentesis, as the reference method for assessing fetal thyroid function. Norms have been established as a function of GA. This examination should be carried out only if prenatal intervention is considered (1/+++).
In most cases, fetal thyroid function can be inferred from context and ultrasound criteria, and fetal blood sampling is, therefore, only exceptionally required (2/++0).
We strongly recommend fetal treatment by intra-amniotic T4 injections in a euthyroid pregnant woman with a large fetal goiter associated with hydramnios and/or tracheal occlusion; in a hypothyroid pregnant woman, we recommend treating the woman (rather than the fetus) with T4 (1/++0).
For goitrous nonimmune fetal hypothyroidism leading to hydramnios, we recommend intra-amniotic injections of LT4 to decrease the size of the fetal thyroid gland. The injections should be performed by multidisciplinary specialist teams (1/+++).
The expert panel proposes a dose of 10 μg/kg estimated fetal weight every 15 days, in the form of intra-amniotic injections. The risks to the fetus and the psychological burden on the parents should be factored into the risk–benefit evaluation (2/+00).
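To make the dosing arithmetic in sections 3.1 and 5.3 concrete, the sketch below computes the corresponding dose ranges. It is a minimal illustrative reading aid, not clinical guidance; the dose bands are copied from the recommendations above, while the function and label names are ours.

```python
# Starting-dose bands (µg/kg per day) for primary CH, per section 3.1.
STARTING_DOSE_UG_PER_KG = {
    "severe (fT4 < 5 pmol/L)": (10.0, 15.0),
    "mild (fT4 > 10 pmol/L, elevated TSH)": (10.0, 10.0),
    "fT4 within age-specific reference interval": (5.0, 10.0),
}

def lt4_starting_dose_ug_per_day(weight_kg: float, severity: str):
    """Daily LT4 starting-dose range in µg for a neonate of the given weight."""
    low, high = STARTING_DOSE_UG_PER_KG[severity]
    return low * weight_kg, high * weight_kg

def intraamniotic_lt4_dose_ug(estimated_fetal_weight_kg: float) -> float:
    """Proposed intra-amniotic LT4 dose per injection, given every 15 days:
    10 µg per kg of estimated fetal weight (section 5.3)."""
    return 10.0 * estimated_fetal_weight_kg

# A 3.5 kg neonate with severe CH: 35.0–52.5 µg per day.
print(lt4_starting_dose_ug_per_day(3.5, "severe (fT4 < 5 pmol/L)"))
# A fetus with an estimated weight of 1.2 kg: 12.0 µg per intra-amniotic injection.
print(intraamniotic_lt4_dose_ug(1.2))
```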
1. Neonatal screening 1.1. The benefits of CH screening Early detection and treatment of CH through neonatal screening prevent irreversible neurodevelopmental delay and optimize its developmental outcome (1/+++). Screening for CH should be introduced worldwide (1/+++). 1.2. Analytical methodology and effectiveness of CH screening strategies The incidence of CH partly depends on the screening strategy; based on data from a number of screening programs, the incidence of primary CH lies between 1 in 3000 and 1 in 2000; the highest reported incidence of central CH is ∼1 in 16,000 (1/+++). The initial priority of neonatal screening for CH should be the detection of all forms of primary CH—mild, moderate, and severe; the most sensitive test for detecting primary CH is measurement of thyrotropin (TSH) (1/+++). When financial resources are available, we recommend adding measurement of total or free thyroxine (fT4) to TSH, to screen for central CH (2/++0). 1.3. Postscreening strategies in special categories of neonates at risk of CH Some groups of children may have a false-negative neonatal screening result or have a high risk of mild CH not detected by neonatal screening, for instance premature, low birthweight, and sick babies; for these groups a postscreening strategy including collection of a second specimen ∼10 to 14 days of age may be considered (1/+00). In patients with Down's syndrome, we recommend measuring TSH at the end of the neonatal period (1/++0). The initial screening in an affected twin may be normal; a second screening in same sex twins should be considered. The nonaffected sibling of twins should be followed up for possible TSH elevation later in life (2/+00). Clinical suspicion of hypothyroidism, despite normal TSH in TSH-based screening programs, should prompt further evaluation for primary (rare cases of false-negative neonatal screening results) and central CH, particularly in children with a family history of central CH (2/+00). 2. Diagnostics and criteria for treatment 2.1. Biochemical criteria used in the decision to start treatment for CH A newborn with an abnormal neonatal screening result should be referred to an expert center (1/++0). An abnormal screening result should be followed by confirmatory testing consisting of measurement of serum fT4 and TSH (1/++0). If the serum fT4 concentration is below and TSH clearly above the age-specific reference interval, then levothyroxine (LT4) treatment should be started immediately (1/+++). If the serum TSH concentration is >20 mU/L at confirmatory testing (approximately in the second week of life), treatment should be started, even if fT4 is normal (arbitrary threshold, expert opinion) (2/+00). If the serum TSH concentration is 6–20 mU/L beyond the age of 21 days in a healthy neonate with an fT4 concentration within the age-specific reference interval, we suggest to either start LT4 treatment immediately and retest, off-treatment, at a later stage, or to withhold treatment but retest 1 to 2 weeks later and to re-evaluate the need for treatment (lack of evidence in favor or against treatment, this is an area of further investigation) (2/++0). In countries or regions where thyroid function tests are not readily available, LT4 treatment should be started if filter paper TSH concentration is >40 mU/L (at the moment of neonatal screening; arbitrary threshold, expert opinion) (2/+00). If the serum fT4 is low, and TSH is low, normal or slightly elevated, the diagnosis central CH should be considered (1/++0). 
In neonates with central CH, we recommend to start LT4 treatment only after evidence of intact adrenal function; if coexistent central adrenal insufficiency cannot be ruled out, LT4 treatment must be preceded by glucocorticoid treatment to prevent possible induction of an adrenal crisis (2/+00). 2.2. Communication of abnormal screening and confirmatory results An abnormal neonatal screening result should be communicated by an experienced professional (e.g., member of pediatric endocrine team, pediatrician, or general physician) either by telephone or face to face, and supplemented with written information for the family (2/+00). 2.3. Imaging techniques in CH In patients with a recent CH diagnosis, we strongly recommend starting LT4 treatment before conducting thyroid gland imaging studies (1/++0). We recommend imaging of the thyroid gland using either radioisotope scanning (scintigraphy) with or without the perchlorate discharge test, or ultrasonography (US), or both (1/++0). Knee X-ray may be performed to assess the severity of intrauterine hypothyroidism (2/+00). 2.4. Associated malformations and syndromes All neonates with a high TSH concentration should be examined carefully for dysmorphic features suggestive for syndromic CH, and for congenital malformations (particularly cardiac) (1/+++). 3. Treatment and monitoring of CH 3.1. Starting treatment for primary CH LT4 alone is recommended as the medication of choice for the treatment of CH (1/++0). LT4 treatment should be started as soon as possible, not later than 2 weeks after birth or immediately after confirmatory (serum) thyroid function testing in neonates in whom CH is detected by a second routine screening test (1/++0). The LT4 starting dose should be up to 15 μg/kg per day, taking into account the whole spectrum of CH, ranging from mild to severe (1/++0). Infants with severe CH, defined by a very low pretreatment serum fT4 (<5 pmol/L) or total T4 concentration in combination with elevated TSH (above the normal range based on time since birth and gestational age (GA), should be treated with the highest starting dose (10–15 μg/kg per day) (1/++0). Infants with mild CH (fT4 > 10 pmol/L in combination with elevated TSH) should be treated with the lowest initial dose (∼10 μg/kg per day); in infants with pretreatment fT4 concentrations within the age-specific reference interval an even lower starting dose may be considered (from 5 to 10 μg/kg) (1/++0). LT4 should be administered orally, once a day (1/++0). The evidence favoring brand versus generic LT4 is mixed but based on personal experience/expert opinion we recommend brand rather than generic (2/++0). 3.2. Monitoring treatment in primary CH We recommend measurement of serum fT4 and TSH concentrations before or at least 4 hours after the last (daily) LT4 administration (1/++0). We recommend evaluation of fT4 and TSH according to age-specific reference intervals (1/++0). The first treatment goal in neonates with primary CH is to rapidly increase the circulating amount of TH, reflected by normalization of serum TSH; therafter, TSH should be kept within the reference interval. If TSH is in the age-specific reference interval, fT4 concentrations above the upper limit of the reference interval can be accepted and recommend maintaining the same LT4 dose (1/++0). 
Any reduction of the LT4 dose should not be based on a single higher than normal fT4 concentration, unless TSH is suppressed (i.e., below the lower limit of the reference interval) or there are signs of overtreatment (e.g., jitteriness or tachycardia) (1/++0). The first clinical and biochemical follow-up evaluation should take place 1 to 2 weeks after the start of LT4 treatment (1 week at the latest in case of a starting dose of 50 μg per day or an even higher dose) (1/+00). Subsequent (clinical and biochemical) evaluation should take place every 2 weeks until complete normalization of serum TSH is achieved; therafter, the evaluation frequency can be lowered to once every 1 to 3 months until the age of 12 months (1/+00). Between the ages of 12 months and 3 years, the evaluation frequency can be lowered to every 2 to 4 months; thereafter, evaluations should be carried out every 3 to 6 months until growth is completed (1/+00). If abnormal fT4 or TSH values are found, or if compliance is questioned, the evaluation frequency should be increased (2/+00). After a change of LT4 dose or formulation, an extra evaluation should be carried out after 4 to 6 weeks (2/+00). We recommend physicians to avoid long-term under- or overtreatment during childhood (1/++0). In contrast to adults, in neonates, infants, and children, LT4 can be administered together with food (but with avoidance of soy protein and vegetable fiber); more important, LT4 should be administered at the same time every day, also in relation to food intake; while this approach can improve compliance, it ensures as constant as possible LT4 absorption and, with that, as good as possible LT4 dose titration (2/+00). In case of an unexpected need for LT4 dose increase, reduced absorption, or increased metabolization of thyroxine (T4) by other disease (e.g., gastrointestinal), food or medication should be considered (2/+00); incompliance may be the most frequent cause, especially in teenagers and adolescents. 3.3. Treatment and monitoring of central CH In severe forms of central CH (fT4 < 5 pmol/L), we also recommend to start LT4 treatment as soon as possible after birth at doses like in primary CH (10–15 μg/kg per day, see section 3.1), to bring fT4 rapidly within the normal range (1/++0). In milder forms of central CH, we suggest starting treatment at a lower LT4 dose (5–10 μg/kg per day), to avoid the risk of overtreatment (1/++0). In newborns with central CH, we recommend monitoring treatment by measuring fT4 and TSH according to the same schedule as for primary CH; serum fT4 should be kept above the mean/median value of the age-specific reference interval; if TSH is low before treatment, subsequent TSH determinations can be omitted (1/+00). When under- or overtreatment is suspected in a patient with central CH, then TSH, or free triiodothyronine (fT3) or total triiodothyronine (T3) can be measured (1/+00). When fT4 is around the lower limit of the reference interval, then undertreatment should be considered, particularly if TSH >1.0 mU/L (1/+00). When serum fT4 is around or above the upper limit of the reference interval, then overtreatment should be considered (assuming that LT4 has not been administered just before blood withdrawal), particularly if associated with clinical signs of thyrotoxicosis, or a high fT3 concentration (1/+00). 3.4. 
Diagnostic re-evaluation of thyroid function beyond the first 6 months of life When no definitive diagnosis of permanent CH was made in the first weeks or months of life, then re-evaluation of the HPT axis after the age of 2 to 3 years is indicated, particularly in children with a gland in situ (GIS), and in those with presumed isolated central CH (1/++0). For a precise diagnosis, LT4 treatment should be phased out over a 4 to 6 weeks period or just stopped, and full re-evaluation should be carried out after 4 weeks, consisting of (at least) fT4 and TSH measurement. If primary hypothyroidism is confirmed (TSH ≥10 mU/L), consider thyroid imaging and, if possible, genetic testing; if central CH is likely (fT4 below the lower limit of the reference interval in combination with a low normal of only mildly elevated TSH), consider evaluating the other anterior pituitary functions and genetic testing. If TSH is above the upper limit of the reference interval but <10 mU/L (primary CH) or fT4 just above the lower limit of the reference interval (central CH), then continue withdrawal and retest in another 3 to 4 weeks (1/++0). If a child with no permanent CH diagnosis and a GIS requires a LT4 dose less than 3 μg/kg per day at the age of 6 months, then re-evaluation can be done already at that time (1/++0). We recommend avoiding iodine as an antiseptic during peri- and neonatal period, as it can cause transient CH (1/++0). 3.5. Treatment and monitoring of pregnant women with CH In women with CH who are planning pregnancy, we strongly recommend optimization of LT4 treatment; in addition, these women should be counseled regarding the higher need for LT4 during pregnancy (1/++0). fT4 (or total T4) and TSH levels should be monitored every 4 to 6 weeks during pregnancy, aiming at TSH concentrations in accordance with current guidelines on treatment of hypothyroidism during pregnancy, that is, <2.5 mU/L throughout gestation in patients treated with LT4 (1/+00). In pregnant women with central CH, the LT4 doses should be increased aiming at an fT4 concentration above the mean/median value of the trimester specific reference interval (1/+00). After delivery, we recommend lowering LT4 dose to preconception dose; additional thyroid function testing should be performed at ∼6 weeks postpartum (1/++0). All pregnant women should ingest ∼250 μg iodine per day (1/++0). 4. Outcomes of neonatal screening and early treatment 4.1. Neurodevelopmental outcomes Psychomotor development and school progression should be periodically evaluated in all children with CH; speech delay, attention, and memory problems, and behavioral problems are reasons for additional evaluation (1/++0). In the small proportion of children with CH who do display significant psychomotor developmental delay and syndromic CH with brain abnormalities, it is crucial to rule out other causes of intellectual impairment than CH (1/+00). Not just neonatal, but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0). 4.2. Development of goiter in thyroid dyshormonogenesis Children and adolescents with primary CH due to dyshomonogenesis may develop goiter and nodules; in these cases, serum TSH should be carefully targeted in the lower part of normal range and periodical ultrasound investigation is recommended to monitor thyroid volume (2/++0). 
Since a few cases of thyroid cancer have been reported, fine needle aspiration biopsy for cytology should be performed in case of suspicious nodules on ultrasound investigation (1/+00). 4.3. Growth, puberty, and fertility Adequately treated children with nonsyndromic CH have normal growth and puberty, and their fertility does not differ from individuals who do not have CH (1/+++). 4.4. Bone, metabolic, and cardiovascular health Adequately treated children with nonsyndromic CH also have normal bone, metabolic, and cardiovascular health (1/++0). 4.5. Patient and professional education, and health-related quality of life Medical education about CH should be improved at all levels, with regular updates (1/+++). Education of parents, starting at the time of diagnosis, and later on of the patient is essential; not only throughout childhood, but also during transition to adult care and in women during pregnancy (1/+++). Since adherence to treatment may influence the outcomes, it should be promoted throughout life (1/++0). 4.6. Transition to adult care When patients are transferred from pediatric to adult care, the main aims are continuity of care and, with that, optimal clinical outcomes and quality of life, and to increase understanding of CH and promote self-management (1/+++). 5. Genetics of CH, genetic counseling, and antenatal management 5.1. Criteria for genetic counseling Genetic counseling should be targeted rather than general (to all CH patients) and done by an experienced professional (2/++0). Counseling should include explaining inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0). Parents with a child, or families with a member with CH should have access to information about the two major forms of primary CH—thyroid dysgenesis (TD) and dyshormonogenesis—and, if included in the neonatal screening, about central CH (1/+++). 5.2. Genetics of CH If genetic testing is performed, its aim should be improving diagnosis, treatment, or prognosis (1/++0). Before doing so, possibilities and limits of genetic testing should be discussed with parents or families (1/++0). When available, genetic testing should be performed by means of new techniques, such as comparative genomic hybridization (CGH) array, next-generation sequencing (NGS) of gene panels (targeted NGS), or whole exome sequencing (WES) (1/++0). Preferably, genetic testing or studies should be preceded by careful phenotypic description of the patient's CH, including morphology of the thyroid gland (2/++0). Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0). Any syndromic association should be studied genetically, not only to improve genetic counseling, but also to identify new candidate genes explaining the association (1/++0). Further research is needed to better define patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0). 5.3. 
4. Outcomes of neonatal screening and early treatment

4.1. Neurodevelopmental outcomes

Psychomotor development and school progression should be evaluated periodically in all children with CH; speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation (1/++0). In the small proportion of children with CH who do display significant psychomotor developmental delay, and in syndromic CH with brain abnormalities, it is crucial to rule out causes of intellectual impairment other than CH (1/+00). Not just neonatal, but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0).

4.2. Development of goiter in thyroid dyshormonogenesis

Children and adolescents with primary CH due to dyshormonogenesis may develop goiter and nodules; in these cases, serum TSH should be carefully targeted in the lower part of the normal range, and periodic ultrasound investigation is recommended to monitor thyroid volume (2/++0). Since a few cases of thyroid cancer have been reported, fine-needle aspiration biopsy for cytology should be performed in case of suspicious nodules on ultrasound investigation (1/+00).

4.3. Growth, puberty, and fertility

Adequately treated children with nonsyndromic CH have normal growth and puberty, and their fertility does not differ from that of individuals who do not have CH (1/+++).

4.4. Bone, metabolic, and cardiovascular health

Adequately treated children with nonsyndromic CH also have normal bone, metabolic, and cardiovascular health (1/++0).

4.5. Patient and professional education, and health-related quality of life

Medical education about CH should be improved at all levels, with regular updates (1/+++). Education of parents, starting at the time of diagnosis, and later of the patient, is essential, not only throughout childhood but also during the transition to adult care and in women during pregnancy (1/+++). Since adherence to treatment may influence outcomes, it should be promoted throughout life (1/++0).

4.6. Transition to adult care

When patients are transferred from pediatric to adult care, the main aims are continuity of care and, with that, optimal clinical outcomes and quality of life, and to increase understanding of CH and promote self-management (1/+++).

5. Genetics of CH, genetic counseling, and antenatal management

5.1. Criteria for genetic counseling

Genetic counseling should be targeted rather than general (to all CH patients) and done by an experienced professional (2/++0). Counseling should include explaining the inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0). Parents with a child, or families with a member, with CH should have access to information about the two major forms of primary CH—thyroid dysgenesis (TD) and dyshormonogenesis—and, if included in the neonatal screening, about central CH (1/+++).

5.2. Genetics of CH

If genetic testing is performed, its aim should be to improve diagnosis, treatment, or prognosis (1/++0). Before doing so, the possibilities and limits of genetic testing should be discussed with parents or families (1/++0). When available, genetic testing should be performed by means of new techniques, such as comparative genomic hybridization (CGH) array, next-generation sequencing (NGS) of gene panels (targeted NGS), or whole exome sequencing (WES) (1/++0). Preferably, genetic testing or studies should be preceded by a careful phenotypic description of the patient's CH, including the morphology of the thyroid gland (2/++0). Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0). Any syndromic association should be studied genetically, not only to improve genetic counseling but also to identify new candidate genes explaining the association (1/++0). Further research is needed to better define the patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0).

5.3. Antenatal diagnostics, evaluation of fetal thyroid function, and management of fetal hypothyroidism

We recommend antenatal diagnosis in cases of goiter fortuitously discovered during systematic ultrasound examination of the fetus, in relation to thyroid dyshormonogenesis (1/+++); of familial recurrence of CH due to dyshormonogenesis (25% recurrence rate) (1/+++); and of known defects of genes involved in thyroid function or development with potential germline transmission (1/++0). Special issues should be considered in syndromic cases with potential mortality and possible germline mosaicism (as for NKX2-1 gene mutation/deletion with severe pulmonary dysfunction and possible transmission through germline mosaicism); in such circumstances, the discussion of prenatal diagnosis should be open. The therapeutic management of affected fetuses should comply with the laws in force in the country concerned (1/++0). The familial recurrence of CH due to dysgenesis (2% of familial occurrences) requires further study to determine the feasibility and clinical relevance of antenatal detection. For the evaluation of fetal thyroid volume, we recommend ultrasound scans at 20 to 22 gestational weeks to detect fetal thyroid hypertrophy and potential thyroid dysfunction in the fetus. Goiter or an absence of thyroid tissue can also be documented by this technique. Measurements should be made as a function of GA, and thyroid perimeter and diameter should be measured to document goiter (1/+++). If a (large) fetal goiter is diagnosed, prenatal care should be provided in a specialized prenatal care center (1/+++). We recommend cordocentesis, rather than amniocentesis, as the reference method for assessing fetal thyroid function; norms have been established as a function of GA. This examination should be carried out only if prenatal intervention is considered (1/+++). In most cases, fetal thyroid function can be inferred from the context and ultrasound criteria, and fetal blood sampling is, therefore, only exceptionally required (2/++0). We strongly recommend fetal treatment by intra-amniotic T4 injections in a euthyroid pregnant woman with a large fetal goiter associated with hydramnios and/or tracheal occlusion; in a hypothyroid pregnant woman, we recommend treating the woman (rather than the fetus) with T4 (1/++0). For goitrous nonimmune fetal hypothyroidism leading to hydramnios, we recommend intra-amniotic injections of LT4 to decrease the size of the fetal thyroid gland; the injections should be performed by multidisciplinary specialist teams (1/+++). The expert panel proposes a dose of 10 μg/kg estimated fetal weight every 15 days, given as intra-amniotic injections. The risks to the fetus and the psychological burden on the parents should be factored into the risk–benefit evaluation (2/+00).
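The proposed fetal dosing rule is simple arithmetic, shown here as a brief sketch; the 10 μg/kg figure and the 15-day interval come from the text, and the function name is hypothetical.

```python
def intraamniotic_lt4_dose_ug(estimated_fetal_weight_kg: float) -> float:
    """Section 5.3 proposal: 10 ug LT4 per kg estimated fetal weight,
    repeated every 15 days as an intra-amniotic injection performed by a
    multidisciplinary specialist team."""
    return 10.0 * estimated_fetal_weight_kg

# Example: an estimated fetal weight of 2.5 kg -> a 25 ug injection every 15 days.
print(intraamniotic_lt4_dose_ug(2.5))  # 25.0
```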
1.1. Benefits of CH screening

Summary

Early detection and treatment of CH through neonatal screening prevent irreversible neurodevelopmental delay and optimize its developmental outcome (1/+++). Screening for CH should be introduced worldwide (1/+++).

Evidence

Neonatal screening for CH has almost eliminated the profound negative effects of TH deficiency on growth and neurodevelopment (cretinism) in those countries where it has been established. Improved developmental outcomes were reported within a few years of the start of neonatal screening and justified its economic costs, which are clearly outweighed by the costs of providing health and educational care for individuals with neurodevelopmental damage due to CH. Despite the benefits of neonatal screening, 70% of infants worldwide are born in areas that do not have access to it. In addition, many of these infants are born in areas of endemic iodine deficiency, placing them at increased risk of TH deficiency.

1.2. Analytical methodology and effectiveness of CH screening strategies

Summary

The incidence of CH partly depends on the screening strategy; based on data from a number of screening programs, the incidence of primary CH lies between 1 in 3000 and 1 in 2000; the highest reported incidence of central CH is ∼1 in 16,000 (1/+++). The initial priority of neonatal screening for CH should be the detection of all forms of primary CH—mild, moderate, and severe; the most sensitive test for detecting primary CH is measurement of TSH (1/+++). When financial resources are available, we recommend adding measurement of total or fT4 to TSH, to screen for central CH (2/++0).

Evidence

Since the introduction of neonatal screening for CH in the late 1970s, with strategies evolving from total T4 plus, or followed by, TSH into TSH only, the apparent incidence and yield of CH have also changed. An initial estimated incidence was revised from 1 in 7000 to ∼1 in 4000 soon after the introduction of screening in the United Kingdom, probably reflecting more accurate data with detection of CH cases that would previously have gone undiagnosed. Since then, the CH incidence has increased to between 1 in 3000 and 1 in 2000. This can be partly explained by the lowering of neonatal screening TSH cut-off values, resulting in the detection of newborns who would otherwise have been missed (false negatives), but also in the identification of children with biochemically milder forms of CH (mostly with a thyroid GIS). However, the overall increase in the incidence of CH cannot be attributed solely to lower screening TSH cut-off values, so environmental, ethnic, and genetic factors should be considered, and all require further evaluation. For instance, the clinical expression of mutations in genes such as DUOX2/DUOXA2 varies widely between individuals and over time, with some patients requiring no treatment and some having transient CH; conversely, DUOX gene mutations can be associated with worsening of thyroid function in the first weeks of life. Justifying the screening and detection of biochemically less severe, and eventually transient, CH cases requires assessment of neurodevelopmental sequelae, but this has proved difficult. Long-term outcome studies of the effect of LT4 treatment on the prevention of neurodevelopmental delay in these patients will also be required. Neonatal screening programs were originally designed to detect primary CH by total T4 plus, or followed by, TSH measurement, and later by measurement of TSH only, with optimal timing of samples at least 48 hours after birth. However, also measuring T4 ± T4-binding globulin provides the potential to diagnose central CH. Although slightly more than 50% of neonates with central CH have moderate-to-severe CH, that is, a first diagnostic fT4 concentration of 5–10 pmol/L or lower, and central CH is likely to be associated with other pituitary abnormalities, this diagnosis is often delayed. Detection of central CH by neonatal screening therefore has the potential to prevent the neurodevelopmental sequelae of TH deficiency and associated morbidities. The reported incidence of central CH detected through neonatal screening lies between 1 in 30,000 and 1 in 16,000, depending on the screening strategy. Although additional data on the true clinical benefits and false-positive rates are required, central CH is a potential candidate for neonatal screening. Until 2019, only supportive therapy was available for patients with MCT8 deficiency. This changed when a clinical trial demonstrated that treatment with triiodothyroacetic acid (Triac) ameliorates key features of the peripheral thyrotoxicosis and might benefit brain development if treatment is commenced early in life. Early recognition of MCT8-affected children through T4 and TSH neonatal screening therefore becomes of utmost importance, even though the extent to which the fetal component of the disease can be alleviated by Triac treatment remains to be determined. Pitfalls in newborn screening do exist and can be due to abnormal TH-binding globulin, severe concomitant illnesses, and several drugs and autoantibodies.
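To illustrate how a combined TSH plus T4 first-tier strategy separates the two screening targets, consider the sketch below. It is not part of the guideline: all cut-off values and names are placeholder assumptions, since every program sets its own assay-specific cut-offs, and any referral is followed by confirmatory serum fT4 and TSH testing.

```python
def classify_screening_result(spot_tsh_mu_l: float,
                              spot_t4_nmol_l=None,
                              tsh_cutoff: float = 10.0,   # assumed program cut-off
                              t4_cutoff: float = 60.0):   # assumed program cut-off
    """Sketch of a first-tier TSH (plus optional total T4) screening rule
    (section 1.2), applied to a dried blood spot taken >=48 h after birth."""
    if spot_tsh_mu_l >= tsh_cutoff:
        return "refer: suspected primary CH"
    if spot_t4_nmol_l is not None and spot_t4_nmol_l < t4_cutoff:
        # Low T4 without TSH elevation: possible central CH, but also TBG
        # deficiency, prematurity, or illness; confirmatory testing decides.
        return "refer: possible central CH"
    return "screen negative"
```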
1.3. Postscreening strategies in special categories of neonates at risk of CH

Summary

Some groups of children, such as preterm, low birthweight, and sick babies, may pass their initial screening test but are at high risk of developing mild CH later; for these groups, a postscreening strategy may be considered (1/+00). In patients with Down's syndrome, we recommend measuring TSH at the end of the neonatal period (1/++0). The initial screening in an affected twin may be normal; a strategy of a second screening should be considered. The nonaffected sibling of twins should be followed up for possible TSH elevation later in life (2/+00). Clinical suspicion of hypothyroidism, despite normal TSH in TSH-based screening programs, should prompt further evaluation for primary (rare cases of false-negative neonatal screening results) and central CH, particularly in children with a family history of central CH (2/+00).

Evidence

Babies with primary CH who are born premature or with low birthweight, or who are sick in the neonatal period, may not be able to generate an adequate TSH response in the first weeks of life. In TSH-based neonatal screening programs, their screening result may therefore be false negative. Maturation or recovery of the HPT axis, with an increase in TSH, occurs between 2 and 6 weeks of life, and many neonatal screening programs have revised their recommendations for this group of infants. In preterm newborns, the TSH surge and the blood levels of T4 and T3 are lower than those in term neonates.
The immature HPT axis in extremely preterm neonates is characterized by (i) a markedly attenuated TSH surge, (ii) a T4 decrease instead of an increase, and (iii) a clearly lower and shorter T3 increase within the first 24 hours of life. Interestingly, the T3 surge is observed as early as 1 hour postnatally, whereas the T4 surge appears only at 7 hours after birth in infants born at 28 to 30 and at 31 to 34 gestational weeks. This observation might be explained by three factors: decreased T3 metabolism in the placenta, increased outer-ring deiodination of T4, and increased thyroidal T3 release in response to the TSH surge. However, because the T3 increase at 1 hour after birth was independent of the TSH surge, and T4 peak values were reached only at 7 hours after birth in more mature infants, an abrupt loss of placental D3 activity is the most probable physiological explanation for the observed rapid T3 increase followed by a slightly delayed T4 increase. Transient hypothyroxinemia of the preterm neonate is a frequent finding, caused by immature HPT axis function and often aggravated by general illness. So far, LT4 therapy of preterm hypothyroxinemia remains controversial, and large-scale randomized trials are necessary to clarify its potential impact or the absence thereof. Even after a diagnosis of CH in preterm infants, one needs to be aware of the high incidence of postnatal transient forms of CH, emphasizing the need for diagnostic re-evaluation beyond infancy. The Wolff–Chaikoff effect matures only at the end of the third trimester, so premature neonates cannot protect themselves from iodine excess. The use of iodine-containing disinfectants is therefore contraindicated in preterm babies, since exposure to topical iodine may cause transient neonatal hypo- or hyperthyroidism, as summarized in a systematic review. Although the concordance rate for CH in twins is low, twins are overrepresented in the CH population. Because of fetal blood mixing, the TSH concentration of an affected twin may be lower than expected and may escape detection in TSH-based screening. Therefore, a low threshold for repeat TSH measurement is suggested, or a second screening should be considered in same-sex twins. In addition, the nonaffected twin should be followed up for possible TSH elevation later in life. Down's syndrome is associated with a 14- to 21-fold higher than expected incidence of CH, and with highly prevalent mild TSH elevation/subclinical hypothyroidism, especially in the first months to years of life. The probable cause of both phenomena is TD, probably related to the extra chromosome 21 and possibly to overexpression of the DYRK1A gene. Because many neonates with Down's syndrome have nonthyroidal illness due to (surgery for) cardiac or intestinal disease, TSH generation may be impaired, resulting in a false-negative neonatal screening result in TSH-based screening programs. Therefore, additional measurement of TSH and fT4 around the age of 3 to 4 weeks should be considered. In babies born into families affected by primary or central CH, fT4 and TSH measurements are advised, even if TSH was normal in TSH-based screening programs. A delayed rise of TSH has been reported in newborns affected by defects in the DUOX system. In central CH, TSH is usually normal, but it can be lower than normal or mildly elevated; only fT4 will contribute to the diagnosis.
In the case of a known genetic cause, (even prenatal) genetic testing can prevent diagnostic delay. Central CH should be considered in neonates with clinical manifestations of CH or congenital hypopituitarism but a low, normal, or slightly elevated TSH concentration. In addition, we recommend endocrine testing in all neonates with a family history of central CH, or with signs or symptoms of congenital hypopituitarism, for example, micropenis with undescended testes, hypoglycemia, prolonged jaundice, or unexplained failure to thrive.
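The risk groups and retesting points discussed in this section can be summarized in a short sketch. The groupings and timings below paraphrase the guideline text (the 10 to 14 day window is the one given in the summary of recommendations); the function and parameter names are invented for illustration, and local programs will differ.

```python
def postscreening_plan(preterm=False, low_birthweight=False, sick_neonate=False,
                       same_sex_twin=False, down_syndrome=False,
                       family_history_ch=False):
    """Illustrative summary of the section 1.3 postscreening strategy for
    neonates whose first TSH-based screen may be falsely reassuring."""
    actions = []
    if preterm or low_birthweight or sick_neonate:
        actions.append("collect a second specimen at ~10-14 days of age")
    if same_sex_twin:
        actions.append("consider a second screening; follow the co-twin for later TSH elevation")
    if down_syndrome:
        actions.append("measure TSH (and fT4) at the end of the neonatal period (~3-4 weeks)")
    if family_history_ch:
        actions.append("measure serum fT4 and TSH even if the screening TSH was normal")
    return actions or ["no routine second specimen"]
```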
Summary Early detection and treatment of CH through neonatal screening prevent irreversible neurodevelopmental delay and optimize its developmental outcome (1/+++). Screening for CH should be introduced worldwide (1/+++). Evidence Neonatal screening for CH has almost eliminated the profound negative effects of TH deficiency on growth and neurodevelopment (cretinism) in those countries where it has been established. Improved developmental outcomes were already reported a few years after the start of neonatal screening , and justified its economic costs by clearly outweighing the costs of providing health and educational care for individuals with neurodevelopmental damage due to CH . Despite the benefits of neonatal screening, 70% of infants worldwide are born in areas that do not have access to neonatal screening . In addition, many of these infants are born in areas of endemic iodine deficiency, placing them at increased risk of TH deficiency.
Early detection and treatment of CH through neonatal screening prevent irreversible neurodevelopmental delay and optimize its developmental outcome (1/+++). Screening for CH should be introduced worldwide (1/+++).
Neonatal screening for CH has almost eliminated the profound negative effects of TH deficiency on growth and neurodevelopment (cretinism) in those countries where it has been established. Improved developmental outcomes were already reported a few years after the start of neonatal screening , and justified its economic costs by clearly outweighing the costs of providing health and educational care for individuals with neurodevelopmental damage due to CH . Despite the benefits of neonatal screening, 70% of infants worldwide are born in areas that do not have access to neonatal screening . In addition, many of these infants are born in areas of endemic iodine deficiency, placing them at increased risk of TH deficiency.
Summary The incidence of CH partly depends on the screening strategy; based on data from a number of screening programs, the incidence of primary CH lies between 1 in 3000 and 1 in 2000; the highest reported incidence of central CH is ∼1 in 16,000 (1/+++). The initial priority of neonatal screening for CH should be the detection of all forms of primary CH—mild, moderate, and severe; the most sensitive test for detecting primary CH is measurement of TSH (1/+++). When financial resources are available, we recommend adding measurement of total or fT4 to TSH, to screen for central CH (2/++0). Evidence Since the introduction of neonatal screening for CH in the late 1970s, using total T4 plus, or followed by TSH, gradually evolving into TSH only, its incidence and yield have also changed. An initial estimated incidence was revised from 1 in 7000 to ∼1 in 4000 soon after the introduction of screening in the United Kingdom , probably reflecting more accurate data with detection of CH cases who were previously undiagnosed. Since then, the CH incidence has increased to between 1 in 3000 and 1 in 2000. This can be partly explained by the lowering of neonatal screening TSH cut-off values, resulting in the detection of newborns who would have been missed otherwise (false negatives) , but also in finding children with biochemically milder forms of CH (mostly with thyroid GIS) . However, the overall increase in the incidence of CH cannot be attributed solely to lower screening TSH cut-off values , and thus environmental, ethnic, and genetic factors should be considered, and all require further evaluation . For instance, the clinical expression of mutations in genes such as DUOX2/DUOXA2 varies widely between individuals and over time, with some patients requiring no treatment, and some having transient CH. In contrast, DUOX gene mutations can be associated with worsening of thyroid fucntions in the first weeks of life . However, justification for screening and detecting biochemically less severe eventually transient CH cases require assessment of neurodevelopmental sequelae, but this has been proved difficult . Long-term outcome studies of the effect of LT4 treatment on prevention of neurodevelopmental delay in these patients will also be required. Neonatal screening programmes were originally designed to detect primary CH by total T4 plus, or followed by TSH measurement, and later by measurement of only TSH, with optimal timing of samples at least 48 hours after birth. However, also measuring T4 ± T4-binding globulin provides the potential to diagnose central CH. Although slightly >50% of neonates with central CH have moderate-to-severe CH, that is, a first diagnostic fT4 concentration of 5–10 pmol/L or lower, and central CH is likely to be associated with other pituitary abnormalities, this diagnosis is often delayed . Therefore, detection of central CH by neonatal screening has the potential to prevent the neurodevelopmental sequelae of TH deficiency and associated morbidities. The reported incidence of central CH detected through neonatal screening lies between 1 in 30,000 and 1 in 16,000, depending on the screening strategy . Although additional data on the true clinical benefits and false-positive rates are required, central CH is a potential candidate for neonatal screening. Until 2019, only supportive therapy was available for patients with MCT8 deficiency. 
This changed when a clinical trial demonstrated that treatment with triiodothyroacetic acid (Triac) ameliorates key features of the peripheral thyrotoxicosis and might benefit brain development once treatment is commenced early in life . Therefore, early recognition of MCT8-affected children becomes of utmost importance through T4 and TSH neonatal screening eventhough the part of the fetal component of the disease that can be alleviated by Triac treatment remains to be determined. Pitfalls in the newborn screening do exist and can be due to abnormal TH binding globulin, severe concomitant illnesses, as well as several drugs and autoantibodies .
Summary

Some groups of children, such as preterm or low-birthweight and sick babies, pass their initial screening test but are at high risk for later development of mild CH. For these groups, a postscreening strategy may be considered (1/+00). In patients with Down's syndrome, we recommend measuring TSH at the end of the neonatal period (1/++0). The initial screening in an affected twin may be normal; a strategy of a second screening should be considered. The nonaffected sibling of twins should be followed up for possible TSH elevation later in life (2/+00). Clinical suspicion of hypothyroidism, despite normal TSH in TSH-based screening programs, should prompt further evaluation for primary (rare cases of false-negative neonatal screening results) and central CH, particularly in children with a family history of central CH (2/+00).

Evidence

Babies with primary CH who are born premature or with low birthweight, or who are sick in the neonatal period, may not be able to generate an adequate TSH response in the first weeks of life. Therefore, in TSH-based neonatal screening programs, their screening result may be false negative. Maturation or recovery of the HPT axis with an increase in TSH occurs between the ages of 2 and 6 weeks of life, and many neonatal screening programs have revised recommendations for this group of infants. In preterm newborns, the TSH surge and the blood levels of T4 and T3 are lower than those in term neonates. The immature HPT axis in extremely preterm neonates is characterized by (i) a markedly attenuated TSH surge, (ii) a T4 decrease instead of an increase, and (iii) a clearly lower and shorter T3 increase within the first 24 hours of life. Interestingly, the T3 surge is observed as early as 1 hour postnatally, while the T4 surge appeared only at 7 hours after birth in infants born at 28 to 30 and 31 to 34 gestational weeks. This observation may be explained by three factors: decreased T3 metabolism in the placenta, increased outer-ring deiodination of T4, and increased thyroidal T3 release in response to the TSH surge. However, because the T3 increase at 1 hour after birth was independent of the TSH surge, and T4 peak values were reached only at 7 hours after birth in more mature infants, an abrupt loss of placental D3 activity is the most probable physiologic explanation for the observed rapid T3 increase followed by a slightly delayed T4 increase. Transient hypothyroxinemia of the preterm neonate is a frequent finding, often aggravated by general illness of the preterm neonate, and is due to immature HPT axis function. So far, LT4 therapy of preterm hypothyroxinemia remains controversial, and large-scale randomized trials are necessary to provide more clarity on its potential impact or absence thereof. Even after diagnosis of CH in preterm infants, one needs to be aware of the high incidence of postnatal transient forms of CH, emphasizing the need for diagnostic re-evaluation beyond infancy. The Wolff–Chaikoff effect matures only at the end of the third trimester, so premature neonates cannot protect themselves from excess iodine exposure. Thus, the use of iodine-containing disinfectants is contraindicated in preterm babies, since exposure to topical iodine may cause transient neonatal hypo- or hyperthyroidism, as summarized in a systematic review.

Although the concordance rate for CH in twins is low, twins are overrepresented in the CH population. Because of fetal blood mixing, the TSH concentration of an affected twin may be lower than expected and may escape detection in TSH-based screening. Therefore, a low threshold for repeat TSH measurement is suggested, or a second screening should be considered in same-sex twins. In addition, the nonaffected twin should be followed up for possible TSH elevation later in life.

Down's syndrome is associated with a 14 to 21 times higher than expected incidence of CH, and with highly prevalent mild TSH elevation/subclinical hypothyroidism, especially in the first months to years of life. The probable cause of both phenomena is TD, probably related to the extra chromosome 21 and possibly to overexpression of the DYRK1A gene. Because many neonates with Down's syndrome have nonthyroidal illness due to (surgery for) cardiac or intestinal disease, TSH generation may be impaired, resulting in a false-negative neonatal screening result (in TSH-based screening programs). Therefore, additional measurement of TSH and fT4 around the age of 3 to 4 weeks should be considered.

In babies born into families affected by primary or central CH, fT4 and TSH measurements are advised, even if TSH was normal in TSH-based screening programs. A delayed rise of TSH has been reported in newborns affected by defects in the DUOX system. In central CH, TSH is usually normal, but can be lower than normal or mildly elevated; only fT4 will contribute to the diagnosis. In case of a known genetic cause, (even prenatal) genetic testing can prevent diagnostic delay. Central CH should be considered in neonates with clinical manifestations of CH or congenital hypopituitarism but a low, normal, or slightly elevated TSH concentration. In addition, we recommend endocrine testing in all neonates with a family history of central CH, or with signs or symptoms of congenital hypopituitarism, for example, micropenis with undescended testes, hypoglycemia, prolonged jaundice, or unexplained failure to thrive.
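Taken together, the risk groups above amount to a small set of re-testing triggers. The sketch below is not part of the guideline; it merely illustrates, with hypothetical field and function names, how a screening programme might encode these triggers.

# Hypothetical sketch of the re-testing triggers discussed in this section.
# All field names are invented for illustration; the timing of the repeat
# test differs per trigger (e.g., 2-6 weeks for preterm infants, 3-4 weeks
# for Down's syndrome), which a real programme would also have to encode.
def needs_repeat_thyroid_testing(neonate: dict) -> bool:
    """Return True for neonates who passed the initial TSH-based screen
    but belong to a group at risk of a false-negative result."""
    triggers = (
        neonate.get("preterm", False),               # immature HPT axis
        neonate.get("low_birthweight", False),       # attenuated TSH surge
        neonate.get("sick_in_neonatal_period", False),
        neonate.get("same_sex_twin", False),         # fetal blood mixing
        neonate.get("down_syndrome", False),         # retest TSH/fT4 at 3-4 weeks
        neonate.get("family_history_of_ch", False),  # primary or central CH
    )
    return any(triggers)

# Example: a same-sex twin with a normal first screen is still flagged.
print(needs_repeat_thyroid_testing({"same_sex_twin": True}))  # True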
2.1. Biochemical criteria used in the decision to start treatment for CH

Summary

A newborn with an abnormal neonatal screening result should be referred to an expert center (1/++0). An abnormal screening result should be followed by confirmatory testing consisting of measurement of serum fT4 and TSH (1/++0). If the serum fT4 concentration is below and TSH clearly above the age-specific reference interval, then LT4 treatment should be started immediately (1/+++). If the serum TSH concentration is >20 mU/L at confirmatory testing (approximately in the second week of life), treatment should be started, even if fT4 is normal (arbitrary threshold, expert opinion) (2/+00). If the serum TSH concentration is 6–20 mU/L beyond the age of 21 days in a healthy neonate with an fT4 concentration within the age-specific reference interval, we suggest either starting LT4 treatment immediately and retesting, off treatment, at a later stage, or withholding treatment but retesting 1 to 2 weeks later and re-evaluating the need for treatment (lack of evidence in favor of or against treatment; this is an area for further investigation) (2/++0). In countries or regions where thyroid function tests are not readily available, LT4 treatment should be started if the filter paper TSH concentration is >40 mU/L (at the moment of neonatal screening; arbitrary threshold, expert opinion) (2/+00). If the serum fT4 is low, and TSH is low, normal, or slightly elevated, the diagnosis of central CH should be considered (1/++0). In neonates with central CH, we recommend starting LT4 treatment only after evidence of intact adrenal function; if coexistent central adrenal insufficiency cannot be ruled out, LT4 treatment must be preceded by glucocorticoid treatment to prevent possible induction of an adrenal crisis (2/+00).

Evidence

Early detection and prompt treatment of CH (within the first 2 weeks of life) are essential to optimize the neurocognitive outcome, linear growth, the onset and progression of puberty, pubertal growth, and final height of affected neonates. All newborns with an abnormal neonatal screening result must be referred to an expert center for immediate thyroid function testing (TSH and fT4) to confirm the diagnosis of CH. Treatment is indicated if the serum TSH concentration is >20 mU/L or fT4 is below the age-specific reference interval. In the latter case, severe, moderate, and mild forms can be classified according to fT4 concentrations of <5, 5–10, and 10–15 pmol/L, respectively. Whether neonates with mild hypothyroidism/hyperthyrotropinemia (i.e., diagnostic TSH concentrations between 6 and 20 mU/L, but a normal fT4 concentration) benefit from LT4 treatment is still unclear. Randomized controlled trials addressing this question have not been performed. The evolution and trend of the TSH and fT4 concentrations are instrumental in deciding whether or not to treat; the family history, thyroid imaging, and, if available, genetic analysis may be helpful in predicting the course of the thyroid function. In a large cohort study, Lain et al. found a worse neurocognitive outcome in children of school age with neonatal screening TSH concentrations between the 75th and 99.9th percentiles, while those with neonatal TSH values above the 99.9th percentile (12–14 mU/L) had better cognitive development, possibly due to LT4 treatment. In contrast, in a Belgian cohort of children, there was no relationship between mild neonatal TSH elevation and neurodevelopment at preschool age. In healthy neonates, it is generally suggested to evaluate thyroid function (TSH and fT4 measurement) every 1 to 2 weeks, and to consider LT4 treatment when TSH is above, or fT4 is below, the age-specific reference interval. Mild CH can be a permanent or transient condition. The family history, thyroid imaging, and genetic testing may be helpful to clarify the etiology and the need for (long-term) treatment. In some countries or regions, confirmatory thyroid function testing may not be readily available. In this scenario, LT4 treatment can be started when the neonatal screening TSH concentration is ≥40 mU/L, without awaiting the confirmatory thyroid function test result; such a value is highly suggestive of moderate-to-severe primary CH. Central hypothyroidism is characterized by a low serum fT4 in combination with a low, normal, or slightly elevated TSH concentration. Other causes of this fT4–TSH combination are nonthyroidal illness, premature birth (with a correlation between severity and GA/birthweight), and certain forms of reduced sensitivity to TH. Central CH can be isolated or part of multiple pituitary hormone deficiency (MPHD). In case of untreated adrenal insufficiency, LT4 treatment may cause an adrenal crisis. Therefore, LT4 treatment should be started only after a normal adrenal function test result or after glucocorticoid treatment has been started.
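The numeric thresholds in this section translate naturally into a coarse decision rule. The Python sketch below restates them directly from the text; the function names are hypothetical, the age-specific reference limits must be supplied by the laboratory, and the sketch deliberately omits the clinical judgement (TSH/fT4 trend, family history, imaging, genetics) that the guideline expects for borderline cases.

# Illustrative restatement of the confirmatory-testing thresholds above.
# Units: TSH in mU/L, fT4 in pmol/L. Not a substitute for clinical judgement.

def classify_severity(ft4: float) -> str:
    """Severity of primary CH by diagnostic fT4 concentration (pmol/L)."""
    if ft4 < 5:
        return "severe"
    if ft4 <= 10:
        return "moderate"
    if ft4 <= 15:
        return "mild"
    return "fT4 above the mild band"

def confirmatory_action(tsh: float, ft4: float,
                        ft4_ref_low: float, tsh_ref_high: float,
                        age_days: int) -> str:
    """Map confirmatory serum values to the coarse actions described above.
    ft4_ref_low / tsh_ref_high are age-specific reference limits supplied
    by the laboratory."""
    if ft4 < ft4_ref_low and tsh > tsh_ref_high:
        return "primary CH: start LT4 immediately"
    if tsh > 20:
        return "start LT4 even if fT4 is normal"
    if ft4 < ft4_ref_low:
        # low fT4 with a low, normal, or slightly elevated TSH
        return "consider central CH; verify adrenal function before LT4"
    if 6 <= tsh <= 20 and age_days > 21:
        return ("either start LT4 and retest off treatment later, "
                "or withhold and retest in 1-2 weeks")
    return "no treatment indicated on these values"

# Example: TSH 25 mU/L with normal fT4 on day 12 -> start LT4.
print(confirmatory_action(tsh=25, ft4=14, ft4_ref_low=12,
                          tsh_ref_high=10, age_days=12))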
2.2. Communication of abnormal neonatal screening and confirmatory results

Summary

An abnormal neonatal screening result should be communicated by an experienced professional (e.g., a member of the pediatric endocrine team, a pediatrician, or a general physician), either by telephone or face to face, and supplemented with written information for the family (2/+00). A confirmed CH diagnosis should be communicated face to face by a medical specialist (2/+00).

Evidence

In the organization of a (neonatal) screening program, both in industrialized and developing countries, communicating abnormal results is a key responsibility that should be carefully managed by trained personnel. Accurate prescreening information for families about the screening test and possible outcomes (e.g., false positives) improves participation and reduces possible parental anxiety. An abnormal neonatal screening result should be communicated quickly, but the way this should be done may differ depending on biochemical severity and local circumstances (phone call directly to the family, web-based tool if available, etc.). The communication of a confirmed CH diagnosis should be carried out face to face by a medical specialist with sufficient knowledge of CH; in case of language or cultural differences, deployment of a translator or (cultural) mediator is recommended. Taking time and using simple language to explain the implications and management of the diagnosis, and the importance of early detection and adequate LT4 treatment, are essential. Written materials can be helpful but should not replace this face-to-face discussion.
2.3. Imaging techniques in CH

Summary

In patients with a recent CH diagnosis, we strongly recommend starting LT4 treatment before conducting thyroid gland imaging studies (1/++0). We recommend imaging of the thyroid gland using either radioisotope scanning (scintigraphy) with or without the perchlorate discharge test, or US, or both (1/++0). X-ray of the knee may be performed to assess the severity of intrauterine hypothyroidism (2/+00).

Evidence

Although it does not change initial treatment, it is recommended to determine the etiology of CH at the time of diagnosis. However, this approach should never delay the start of treatment in newborns with CH. Early determination of the cause of CH provides the family with a precise diagnosis (including visual evidence) and, with that, strong arguments that their child has a congenital disorder necessitating lifelong daily treatment. Furthermore, an early accurate diagnosis—in most cases achievable by dual imaging—abolishes the need for further diagnostic testing and re-evaluation of the cause later on. Finally, (dual) imaging can give direction to genetic counseling and testing, providing information about the risk of recurrence and a possible early diagnosis in future siblings.

Thyroid US

US is an important diagnostic tool for determining the presence of the thyroid gland and, when present, its location, size, and echotexture. US, however, is less accurate than radionuclide scanning for detection of an ectopic thyroid gland. It is a noninvasive, nonirradiating, cost-effective imaging technique, but highly observer dependent. Thyroid volume in newborns varies from 0.84 ± 0.38 to 1.62 ± 0.41 mL, without significant changes during the first 3 weeks of life. Thyroid size can be influenced by (long-term) TSH suppression during LT4 treatment. In that case, TSH should be measured at the time of the US so that thyroid size can be correctly interpreted. Thyroid US should be performed by an expert.

Thyroid scintigraphy

Scintigraphy is the most accurate diagnostic test for determining the etiology of CH, especially in case of TD. Technetium-99m (99mTc) and iodine-123 (123I) are both captured by the sodium (Na)-iodide symporter (NIS) at the basal side of thyrocytes, and both are suitable for imaging. 99mTc is more widely available, less expensive, faster in use (image acquisition 15 minutes after administration), and has a shorter half-life than 123I. Because 99mTc is not organified, quantification of radionuclide uptake with it is difficult, and images are of lower quality than with 123I. The latter isotope needs later image acquisitions (at 2–3 and 24 hours), but provides more contrast and adds information about the organification process, allowing perchlorate discharge testing when the thyroid is eutopic. Furthermore, it exposes infants to a lower dose of whole-body irradiation than 99mTc (3–10 μCi/kg vs. 50–250 μCi/kg body weight). When the thyroid is present and normally located, and if sodium perchlorate is available, perchlorate discharge testing can be performed to study the iodine retention capacity of the thyroid gland. Sodium perchlorate is administered, and thyroid activity is measured before and 1 hour afterward. The perchlorate discharge test is considered positive when discharge of 123I is more than 10% of the administered dose. Together with serum thyroglobulin measurement, the perchlorate discharge test provides useful information for targeted genetic testing to diagnose the various forms of CH caused by dyshormonogenesis.
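The positivity criterion for the perchlorate discharge test is a single percentage computation, spelled out in the sketch below; the variable and function names are illustrative, and the 10% cut-off is the one quoted above.

# Perchlorate discharge test, as described above: positive when the 123-I
# activity lost between the pre-perchlorate and 1-hour measurements exceeds
# 10% of the administered dose. Names are illustrative only.
def perchlorate_discharge_positive(uptake_before: float,
                                   uptake_one_hour_after: float,
                                   administered_dose: float) -> bool:
    discharged = uptake_before - uptake_one_hour_after
    return discharged / administered_dose > 0.10

# Example: uptake falling from 30% to 15% of the dose -> 15% discharged
# -> positive test.
print(perchlorate_discharge_positive(0.30, 0.15, 1.0))  # True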
One pitfall of scintigraphy is lack of isotope uptake despite the presence of thyroid tissue. This can be due to TSH suppression at the time of the scintigraphy (when performed more than 5 to 7 days after the start of LT4 treatment), previous iodine exposure, maternal blocking TSH receptor antibodies, and mutations in genes affecting iodine uptake (NIS) or the TSH receptor (TSHR). In these cases, thyroid US should be performed to demonstrate the presence or absence of thyroid tissue. When treatment-related TSH suppression is the cause, and treatment cannot be interrupted, thyroid scintigraphy and perchlorate discharge testing can also be performed after recombinant human TSH administration.

Dual imaging

The combination of thyroid US and scintigraphy provides high-resolution anatomical (US) and functional (scintigraphy) information, making it possible to distinguish between permanent and possibly transient CH. Each technique compensates for the limitations and pitfalls of the other. Dual imaging is particularly effective in confirming athyreosis (when scintigraphy shows absence of isotope uptake) and detecting thyroid ectopy.

X-ray of the knee

At birth, bone maturation is delayed in the majority of patients with severe CH and is considered a disease severity parameter. It has been shown to correlate with neurodevelopmental outcome, educational level, and hearing impairment, and can be assessed by performing an X-ray of the knee (presence or absence of the femoral and tibial epiphyses). LT4 treatment normalizes bone maturation within the first year of life. Although disease severity can be derived from the first diagnostic fT4 and TSH concentrations, a knee X-ray can be performed as an additional parameter reflecting the severity of intrauterine hypothyroidism.

2.4. Associated malformations and syndromes

Summary

All neonates with CH should be examined carefully for dysmorphic features suggestive of syndromic CH, and for congenital malformations (particularly cardiac) (1/+++).

Evidence

Permanent CH can be isolated or syndromic. Careful clinical examination during the first days of life is, therefore, necessary to detect dysmorphic features suggestive of a syndrome. Syndromic CH is mostly caused by mutations in genes encoding transcription factors or genes involved in early thyroid development. Bamforth–Lazarus syndrome (OMIM No. 241850) is characterized by TD (mainly athyreosis or severe hypoplasia), cleft palate, and spiky hair, with or without bilateral choanal atresia or bifid epiglottis, and is due to biallelic mutations in the FOXE1 gene. Another example of syndromic CH that can be recognized during the neonatal period or early infancy is the brain–lung–thyroid (BLT) syndrome (OMIM No. 610978) due to NKX2-1 haploinsufficiency, characterized by various types of CH, infant respiratory distress syndrome, and benign hereditary chorea. Other examples of syndromic CH are Alagille syndrome type 1 (OMIM No. 118450), with thyroid in situ, liver (bile duct hypoplasia) and cardiac malformations; Williams–Beuren (OMIM No. 194050) and DiGeorge (OMIM No. 188400) syndromes, with a high prevalence of thyroid hypoplasia (50–70%) and subclinical hypothyroidism (25–30%); and Kabuki and Johanson–Blizzard syndromes, with a eutopic thyroid gland. Pendred syndrome, due to mutations in the SLC26A4 gene (OMIM No. 274600), with or without goiter, should be considered in case of congenital sensorineural hearing loss.
Finally, the prevalence of congenital malformations, particularly cardiac defects (including septal defects) and renal abnormalities, is higher in individuals with CH than in the general population, with differences in prevalence between studies; indeed, the reported frequency of cardiac defects in CH is between 3% and 11%, compared with 0.5% to 0.8% in all live births. For Down's syndrome, see Section 1.3.
Summary A newborn with an abnormal neonatal screening result should be referred to an expert center (1/++0). An abnormal screening result should be followed by confirmatory testing consisting of measurement of serum fT4 and TSH (1/++0). If the serum fT4 concentration is below and TSH clearly above the age-specific reference interval, then LT4 treatment should be started immediately (1/+++). If the serum TSH concentration is >20 mU/L at confirmatory testing (approximately in the second week of life), treatment should be started, even if fT4 is normal (arbitrary threshold, expert opinion) (2/+00). If the serum TSH concentration 6–20 mU/L beyond the age of 21 days in a healthy neonate with an fT4 concentration within the age-specific reference interval, we suggest to either start LT4 treatment immediately and retest, off-treatment, at a later stage, or to withhold treatment but retest 1 to 2 weeks later and to re-evaluate the need for treatment (lack of evidence in favor or against treatment, this is an area of further investigation) (2/++0). In countries or regions where thyroid function tests are not readily available, LT4 treatment should be started if filter paper TSH concentration is >40 mU/L (at the moment of neonatal screening; arbitrary threshold, expert opinion) (2/+00). If the serum fT4 is low, and TSH is low, normal or slightly elevated, the diagnosis central CH should be considered (1/++0). In neonates with central CH, we recommend to start LT4 treatment only after evidence of intact adrenal function; if coexistent central adrenal insufficiency cannot be ruled out, LT4 treatment must be preceded by glucocorticoid treatment to prevent possible induction of an adrenal crisis (2/+00). Evidence Early detection and prompt treatment of CH (within the first 2 weeks of life) are essential to optimize the neurocognitive outcome, linear growth, the onset and progression of puberty, pubertal growth, and final height of affected neonates . All newborns with an abnormal neonatal screening result must be referred to an expert center for immediate thyroid function testing (TSH and fT4) to confirm the diagnosis of CH. Treatment is indicated if the serum TSH concentration is >20 mU/L or fT4 is below the age-specific reference interval . In the latter case, severe, moderate, and mild forms can be classified according to fT4 concentrations, <5, 5–10, and 10–15 pmol/L, respectively . Whether neonates with mild hypothyroidism/hyperthyrotropinemia (i.e., diagnostic TSH concentrations between 6 and 20 mU/L, but a normal fT4 concentration) benefit from LT4 treatment is still unclear . Randomized controlled trials addressing this question have not been performed. The evolution of the TSH and fT4 concentrations and trend is instrumental in deciding whether to treat or not; the family history, thyroid imaging, and, if available, genetic analysis may be helpful in predicting the course of the thyroid function. In a large cohort study, Lain et al. found a worse neurocognitive outcome in children of school age with neonatal screening TSH concentrations between the 75th and 99.9th percentiles , while those with neonatal TSH values above the 99.9th percentile (12–14 mU/L) had better cognitive development, possibly due to LT4 treatment. In contrast, in a Belgian cohort of children, there was no relationship between mild neonatal TSH elevation and neurodevelopment at the preschool age . 
In healthy neonates, it is generally suggested to evaluate thyroid function (TSH and fT4 measurement) every 1 to 2 weeks, and consider LT4 treatment when TSH is above, or fT4 is below the age-specific reference interval . Mild CH can be a permanent or transient condition. The family history, thyroid imaging, and genetic testing may be helpful to clarify the etiology and the need of (long-term) treatment . In some coutries or regions, confirmatory thyroid function testing may not be readily available. In this scenario, LT4 treatment can be started when the neonatal screening TSH concentration is ≥40 mU/L, without awaiting the confirmatory thyroid function test result. Such a value is highly suggestive of moderate-to-severe primary CH . Central hypothyroidism is characterized by a low serum fT4 on combination with a low, normal, or slightly elevated TSH concentration. Other causes of this fT4–TSH combination are nonthyroidal illness, premature birth (with a correlation between severity and GA/birthweight), and certain forms of reduced sensitivity to TH . Central CH can be isolated or part of multiple pituitary hormone deficiency (MPHD) . In case of untreated adrenal insufficiency, LT4 treatment may cause an adrenal crisis. Therefore, LT4 treatment should be started only after a normal adrenal function test result or after glucocorticoid treatment has been started .
A newborn with an abnormal neonatal screening result should be referred to an expert center (1/++0). An abnormal screening result should be followed by confirmatory testing consisting of measurement of serum fT4 and TSH (1/++0). If the serum fT4 concentration is below and TSH clearly above the age-specific reference interval, then LT4 treatment should be started immediately (1/+++). If the serum TSH concentration is >20 mU/L at confirmatory testing (approximately in the second week of life), treatment should be started, even if fT4 is normal (arbitrary threshold, expert opinion) (2/+00). If the serum TSH concentration 6–20 mU/L beyond the age of 21 days in a healthy neonate with an fT4 concentration within the age-specific reference interval, we suggest to either start LT4 treatment immediately and retest, off-treatment, at a later stage, or to withhold treatment but retest 1 to 2 weeks later and to re-evaluate the need for treatment (lack of evidence in favor or against treatment, this is an area of further investigation) (2/++0). In countries or regions where thyroid function tests are not readily available, LT4 treatment should be started if filter paper TSH concentration is >40 mU/L (at the moment of neonatal screening; arbitrary threshold, expert opinion) (2/+00). If the serum fT4 is low, and TSH is low, normal or slightly elevated, the diagnosis central CH should be considered (1/++0). In neonates with central CH, we recommend to start LT4 treatment only after evidence of intact adrenal function; if coexistent central adrenal insufficiency cannot be ruled out, LT4 treatment must be preceded by glucocorticoid treatment to prevent possible induction of an adrenal crisis (2/+00).
Early detection and prompt treatment of CH (within the first 2 weeks of life) are essential to optimize the neurocognitive outcome, linear growth, the onset and progression of puberty, pubertal growth, and final height of affected neonates . All newborns with an abnormal neonatal screening result must be referred to an expert center for immediate thyroid function testing (TSH and fT4) to confirm the diagnosis of CH. Treatment is indicated if the serum TSH concentration is >20 mU/L or fT4 is below the age-specific reference interval . In the latter case, severe, moderate, and mild forms can be classified according to fT4 concentrations, <5, 5–10, and 10–15 pmol/L, respectively . Whether neonates with mild hypothyroidism/hyperthyrotropinemia (i.e., diagnostic TSH concentrations between 6 and 20 mU/L, but a normal fT4 concentration) benefit from LT4 treatment is still unclear . Randomized controlled trials addressing this question have not been performed. The evolution of the TSH and fT4 concentrations and trend is instrumental in deciding whether to treat or not; the family history, thyroid imaging, and, if available, genetic analysis may be helpful in predicting the course of the thyroid function. In a large cohort study, Lain et al. found a worse neurocognitive outcome in children of school age with neonatal screening TSH concentrations between the 75th and 99.9th percentiles , while those with neonatal TSH values above the 99.9th percentile (12–14 mU/L) had better cognitive development, possibly due to LT4 treatment. In contrast, in a Belgian cohort of children, there was no relationship between mild neonatal TSH elevation and neurodevelopment at the preschool age . In healthy neonates, it is generally suggested to evaluate thyroid function (TSH and fT4 measurement) every 1 to 2 weeks, and consider LT4 treatment when TSH is above, or fT4 is below the age-specific reference interval . Mild CH can be a permanent or transient condition. The family history, thyroid imaging, and genetic testing may be helpful to clarify the etiology and the need of (long-term) treatment . In some coutries or regions, confirmatory thyroid function testing may not be readily available. In this scenario, LT4 treatment can be started when the neonatal screening TSH concentration is ≥40 mU/L, without awaiting the confirmatory thyroid function test result. Such a value is highly suggestive of moderate-to-severe primary CH . Central hypothyroidism is characterized by a low serum fT4 on combination with a low, normal, or slightly elevated TSH concentration. Other causes of this fT4–TSH combination are nonthyroidal illness, premature birth (with a correlation between severity and GA/birthweight), and certain forms of reduced sensitivity to TH . Central CH can be isolated or part of multiple pituitary hormone deficiency (MPHD) . In case of untreated adrenal insufficiency, LT4 treatment may cause an adrenal crisis. Therefore, LT4 treatment should be started only after a normal adrenal function test result or after glucocorticoid treatment has been started .
Summary An abnormal neonatal screening result should be communicated by an experienced professional (e.g., member of pediatric endocrine team, pediatrician, or general physician) either by telephone or face to face, and supplemented with written information for the family (2/+00). A confirmed CH diagnosis should be communicated face to face by a medical specialist (2/+00). Evidence In the organization of a (neonatal) screening program, both in industrialized and developing countries, communicating abnormal results is a key responsibility that should be carefully managed by trained personnel. Accurate prescreening information for families about the screening test and possible outcomes (e.g., false positives) improves participation and reduces possible parental anxiety. An abnormal neonatal screening result should be communicated quickly, but the way this should be done may differ, depending on biochemical severity and local circumstances (phone call directly to the family, web-based tool if available, etc.). The communication of a confirmed CH diagnosis should be carried out face to face by a medical specialist with sufficient knowledge of CH; in case of language or cultural differences, deployment of a translator or (cultural) mediator is recommended. Taking time and using simple language to explain the implications and management of the diagnosis, and the importance of early detection and adequate LT4 treatment are essential. Written materials can be helpful but should not replace this face-to-face discussion .
An abnormal neonatal screening result should be communicated by an experienced professional (e.g., member of pediatric endocrine team, pediatrician, or general physician) either by telephone or face to face, and supplemented with written information for the family (2/+00). A confirmed CH diagnosis should be communicated face to face by a medical specialist (2/+00).
In the organization of a (neonatal) screening program, both in industrialized and developing countries, communicating abnormal results is a key responsibility that should be carefully managed by trained personnel. Accurate prescreening information for families about the screening test and possible outcomes (e.g., false positives) improves participation and reduces possible parental anxiety. An abnormal neonatal screening result should be communicated quickly, but the way this should be done may differ, depending on biochemical severity and local circumstances (phone call directly to the family, web-based tool if available, etc.). The communication of a confirmed CH diagnosis should be carried out face to face by a medical specialist with sufficient knowledge of CH; in case of language or cultural differences, deployment of a translator or (cultural) mediator is recommended. Taking time and using simple language to explain the implications and management of the diagnosis, and the importance of early detection and adequate LT4 treatment are essential. Written materials can be helpful but should not replace this face-to-face discussion .
Summary In patients with a recent CH diagnosis, we strongly recommend starting LT4 treatment before conducting thyroid gland imaging studies (1/++0). We recommend imaging of the thyroid gland using either radioisotope scanning (scintigraphy) with or without the perchlorate discharge test, or US, or both (1/++0). X-ray of the knee may be performed to assess the severity of intrauterine hypothyroidism (2/+00). Evidence Although it does not change initial treatment, it is recommended to determine the etiology of CH at the time of diagnosis. However, this approach should never delay the start of treatment in newborns with CH. Early determination of the cause of CH provides the family with a precise diagnosis (including visual evidence) and, with that, strong arguments that their child has a congenital disorder necessitating lifelong daily treatment. Furthermore, an early accurate diagnosis—in most cases achievable by dual imaging—abolishes the need for further diagnostic testing and re-evaluation of the cause later on. Finally, (dual) imaging can give direction to genetic counseling and testing, providing information about the risk of recurrence and a possible early diagnosis in future siblings. Thyroid US US is an important diagnostic tool for determining the presence of the thyroid gland and, when present, its location, size, and echotexture. US, however, is less accurate than radionuclide scan for detection of an ectopic thyroid gland. It is a noninvasive nonirradiating cost-effective imaging technique, but highly observer dependent. Thyroid volume in newborns varies from 0.84 ± 0.38 to 1.62 ± 0.41 mL , without significant changes during the first 3 weeks of life . Thyroid size can be influenced by (long-term) TSH suppression during LT4 treatment. In that case, TSH should be measured at the time of the US so that thyroid size can be correctly interpreted. Thyroid US should be performed by an expert. Thyroid scintigraphy Scintigraphy is the most accurate diagnostic test for determining the etiology of CH, especially in case of TD. Technetium-99m ( 99m Tc) and iodine-123 ( 123 I) are both captured by sodium (Na)-iodide symporter (NIS) at the basal side of thyrocytes and are both suitable for imaging. 99m Tc is more widely available, less expensive, faster in use (image acquisition 15 minutes after administration), and has a shorter half-live than 123 I. 99m Tc is not organified, it is, therefore, difficult to provide quantification of the radionuclide uptake using 99m Tc. Images are of lower quality than with 123 I. The latter isotope needs later image acquisitions (at 2–3, and 24 hours), but provides more contrast and adds information about organification process, allowing perchlorate discharge testing when the thyroid is eutopic . Furthermore, it exposes infants to a lower dose of whole-body irradiation than 99m Tc (3–10 μCi/kg vs. 50–250 μCi/kg body weight) . When the thyroid is present and normally located, and if sodium perchlorate is available, perchlorate discharge testing can be performed to study the iodine retention capacity of the thyroid gland. Sodium perchlorate is administred and thyroid activity is measured before and 1 hour afterward. The perchlorate discharge test is considered positive when discharge of 123 I is more than 10% of the administered dose. Together with serum thyroglobulin measurement, the perchlorate discharge test provides useful information for targeted genetic testing to diagnose the various forms of CH caused by dyshormonogenesis . 
One pitfall of scintigraphy is lack of isotope uptake despite the presence of thyroid tissue. This can be due to TSH suppression at the time of the scintigraphy (when performed beyond 5 to 7 days after the start of LT4 treatment), previous iodine exposure, maternal blocking TSH receptor antibodies, and mutations in genes affecting iodine uptake ( NIS ) or TSH receptor ( TSHR ) defects. In these cases, thyroid US should be performed to demonstrate the presence or absence of thyroid tissue. When treatment-related TSH suppression is the cause, and treatment cannot be interrupted, thyroid scintigraphy and perchlorate discharge testing can also be performed after recombinant human TSH administration . Dual imaging The combination of thyroid US and scintigraphy provides high-resolution anatomical (US) and functional (scintigraphy) information, allowing to distinguish between permanent and possible transient CH . Each technique compensates for limitations and pitfalls of the other. Dual imaging is particularly effective in confirming athyreosis (when scintigraphy shows absence of isotope uptake) and detecting thyroid ectopy . X-ray of the knee At birth, bone maturation is delayed in the majority of patients with severe CH and is considered a disease severity parameter. It has been shown to correlate with neurodevelopmental outcome , educational level , hearing impairment , and can be assessed by performing a X-ray of the knee (presence or absence of the femoral and tibial epiphyses). LT4 treatment normalizes bone maturation within the first year of life . Although disease severity can be derived from the first diagnostic fT4 and TSH concentrations, a knee X-ray can be performed as an additional parameter reflecting the severity of intrauterine hypothyroidism.
In patients with a recent CH diagnosis, we strongly recommend starting LT4 treatment before conducting thyroid gland imaging studies (1/++0). We recommend imaging of the thyroid gland using either radioisotope scanning (scintigraphy) with or without the perchlorate discharge test, or US, or both (1/++0). X-ray of the knee may be performed to assess the severity of intrauterine hypothyroidism (2/+00).
Although it does not change initial treatment, it is recommended to determine the etiology of CH at the time of diagnosis. However, this approach should never delay the start of treatment in newborns with CH. Early determination of the cause of CH provides the family with a precise diagnosis (including visual evidence) and, with that, strong arguments that their child has a congenital disorder necessitating lifelong daily treatment. Furthermore, an early accurate diagnosis—in most cases achievable by dual imaging—abolishes the need for further diagnostic testing and re-evaluation of the cause later on. Finally, (dual) imaging can give direction to genetic counseling and testing, providing information about the risk of recurrence and a possible early diagnosis in future siblings. Thyroid US US is an important diagnostic tool for determining the presence of the thyroid gland and, when present, its location, size, and echotexture. US, however, is less accurate than radionuclide scan for detection of an ectopic thyroid gland. It is a noninvasive nonirradiating cost-effective imaging technique, but highly observer dependent. Thyroid volume in newborns varies from 0.84 ± 0.38 to 1.62 ± 0.41 mL , without significant changes during the first 3 weeks of life . Thyroid size can be influenced by (long-term) TSH suppression during LT4 treatment. In that case, TSH should be measured at the time of the US so that thyroid size can be correctly interpreted. Thyroid US should be performed by an expert. Thyroid scintigraphy Scintigraphy is the most accurate diagnostic test for determining the etiology of CH, especially in case of TD. Technetium-99m ( 99m Tc) and iodine-123 ( 123 I) are both captured by sodium (Na)-iodide symporter (NIS) at the basal side of thyrocytes and are both suitable for imaging. 99m Tc is more widely available, less expensive, faster in use (image acquisition 15 minutes after administration), and has a shorter half-live than 123 I. 99m Tc is not organified, it is, therefore, difficult to provide quantification of the radionuclide uptake using 99m Tc. Images are of lower quality than with 123 I. The latter isotope needs later image acquisitions (at 2–3, and 24 hours), but provides more contrast and adds information about organification process, allowing perchlorate discharge testing when the thyroid is eutopic . Furthermore, it exposes infants to a lower dose of whole-body irradiation than 99m Tc (3–10 μCi/kg vs. 50–250 μCi/kg body weight) . When the thyroid is present and normally located, and if sodium perchlorate is available, perchlorate discharge testing can be performed to study the iodine retention capacity of the thyroid gland. Sodium perchlorate is administred and thyroid activity is measured before and 1 hour afterward. The perchlorate discharge test is considered positive when discharge of 123 I is more than 10% of the administered dose. Together with serum thyroglobulin measurement, the perchlorate discharge test provides useful information for targeted genetic testing to diagnose the various forms of CH caused by dyshormonogenesis . One pitfall of scintigraphy is lack of isotope uptake despite the presence of thyroid tissue. This can be due to TSH suppression at the time of the scintigraphy (when performed beyond 5 to 7 days after the start of LT4 treatment), previous iodine exposure, maternal blocking TSH receptor antibodies, and mutations in genes affecting iodine uptake ( NIS ) or TSH receptor ( TSHR ) defects. 
In these cases, thyroid US should be performed to demonstrate the presence or absence of thyroid tissue. When treatment-related TSH suppression is the cause, and treatment cannot be interrupted, thyroid scintigraphy and perchlorate discharge testing can also be performed after recombinant human TSH administration . Dual imaging The combination of thyroid US and scintigraphy provides high-resolution anatomical (US) and functional (scintigraphy) information, allowing to distinguish between permanent and possible transient CH . Each technique compensates for limitations and pitfalls of the other. Dual imaging is particularly effective in confirming athyreosis (when scintigraphy shows absence of isotope uptake) and detecting thyroid ectopy . X-ray of the knee At birth, bone maturation is delayed in the majority of patients with severe CH and is considered a disease severity parameter. It has been shown to correlate with neurodevelopmental outcome , educational level , hearing impairment , and can be assessed by performing a X-ray of the knee (presence or absence of the femoral and tibial epiphyses). LT4 treatment normalizes bone maturation within the first year of life . Although disease severity can be derived from the first diagnostic fT4 and TSH concentrations, a knee X-ray can be performed as an additional parameter reflecting the severity of intrauterine hypothyroidism.
US is an important diagnostic tool for determining the presence of the thyroid gland and, when present, its location, size, and echotexture. US, however, is less accurate than radionuclide scan for detection of an ectopic thyroid gland. It is a noninvasive nonirradiating cost-effective imaging technique, but highly observer dependent. Thyroid volume in newborns varies from 0.84 ± 0.38 to 1.62 ± 0.41 mL , without significant changes during the first 3 weeks of life . Thyroid size can be influenced by (long-term) TSH suppression during LT4 treatment. In that case, TSH should be measured at the time of the US so that thyroid size can be correctly interpreted. Thyroid US should be performed by an expert.
Scintigraphy is the most accurate diagnostic test for determining the etiology of CH, especially in case of TD. Technetium-99m ( 99m Tc) and iodine-123 ( 123 I) are both captured by sodium (Na)-iodide symporter (NIS) at the basal side of thyrocytes and are both suitable for imaging. 99m Tc is more widely available, less expensive, faster in use (image acquisition 15 minutes after administration), and has a shorter half-live than 123 I. 99m Tc is not organified, it is, therefore, difficult to provide quantification of the radionuclide uptake using 99m Tc. Images are of lower quality than with 123 I. The latter isotope needs later image acquisitions (at 2–3, and 24 hours), but provides more contrast and adds information about organification process, allowing perchlorate discharge testing when the thyroid is eutopic . Furthermore, it exposes infants to a lower dose of whole-body irradiation than 99m Tc (3–10 μCi/kg vs. 50–250 μCi/kg body weight) . When the thyroid is present and normally located, and if sodium perchlorate is available, perchlorate discharge testing can be performed to study the iodine retention capacity of the thyroid gland. Sodium perchlorate is administred and thyroid activity is measured before and 1 hour afterward. The perchlorate discharge test is considered positive when discharge of 123 I is more than 10% of the administered dose. Together with serum thyroglobulin measurement, the perchlorate discharge test provides useful information for targeted genetic testing to diagnose the various forms of CH caused by dyshormonogenesis . One pitfall of scintigraphy is lack of isotope uptake despite the presence of thyroid tissue. This can be due to TSH suppression at the time of the scintigraphy (when performed beyond 5 to 7 days after the start of LT4 treatment), previous iodine exposure, maternal blocking TSH receptor antibodies, and mutations in genes affecting iodine uptake ( NIS ) or TSH receptor ( TSHR ) defects. In these cases, thyroid US should be performed to demonstrate the presence or absence of thyroid tissue. When treatment-related TSH suppression is the cause, and treatment cannot be interrupted, thyroid scintigraphy and perchlorate discharge testing can also be performed after recombinant human TSH administration .
The combination of thyroid US and scintigraphy provides high-resolution anatomical (US) and functional (scintigraphy) information, allowing to distinguish between permanent and possible transient CH . Each technique compensates for limitations and pitfalls of the other. Dual imaging is particularly effective in confirming athyreosis (when scintigraphy shows absence of isotope uptake) and detecting thyroid ectopy .
At birth, bone maturation is delayed in the majority of patients with severe CH and is considered a disease severity parameter. It has been shown to correlate with neurodevelopmental outcome , educational level , hearing impairment , and can be assessed by performing a X-ray of the knee (presence or absence of the femoral and tibial epiphyses). LT4 treatment normalizes bone maturation within the first year of life . Although disease severity can be derived from the first diagnostic fT4 and TSH concentrations, a knee X-ray can be performed as an additional parameter reflecting the severity of intrauterine hypothyroidism.
Summary All neonates with CH should be examined carefully for dysmorphic features suggestive for syndromic CH, and for congenital malformations (particularly cardiac) (1/+++). Evidence Permanent CH can be isolated or syndromic. Careful clinical examination during the first days of life is, therefore, necessary to detect dysmorphic features suggestive of a syndrome. Syndromic CH is mostly caused by mutations in genes encoding transcription factors or involved in early thyroid development. The Bamforth–Lazarus syndrome (OMIM No. 241850) is characterized by TD (mainly athyreosis or severe hypoplasia), cleft palate, and spiky hair with or without bilateral choanal atresia or bifid epiglottis, and is due to biallelic mutations in the FOXE1 gene . Another example of syndromic CH that can be recognized during neonatal period or early infancy is the brain–lung–thyroid (BLT) syndrome (OMIM No. 610978) due to NKX2-1 haploinsufficiency, characterized by various types of CH, infant respiratory distress syndrome, and benign hereditary chorea . Other examples of syndromic CH are Alagille syndrome type 1 (OMIM No. 118450) with thyroid in situ , liver (bile duct hypoplasia), and cardiac malformations ; Williams–Beuren (OMIM No. 194050) and DiGeorge syndromes (OMIM No. 188400) with a high prevalence of thyroid hypoplasia (50–70%) and subclinical hypothyroidism (25–30%) ; and Kabuki and Johanson–Blizzard syndromes with a eutopic thyroid gland. Pendred syndrome due to mutations in the SLC26A4 gene (OMIM No. 274600), with or without goiter, should be considered in case of congenital sensorineural hearing loss. Finally, the prevalence of congenital malformations, particularly cardiac defects, including septal defects, and renal abnormalities is higher in individuals with CH than in the general population, with differences in prevalence between studies ; indeed, the reported frequency of cardiac defects in CH is between 3% and 11%, compared with 0.5% to 0.8% in all live births. For Down's syndrome, see Section 1.3.
3.1. Starting treatment for primary CH
3.2. Monitoring treatment in primary CH
3.3. Treatment and monitoring of central CH
3.4. Diagnostic re-evaluation of thyroid function beyond the first 6 months of life
3.5. Treatment and monitoring of pregnant women with CH

3.1. Starting treatment for primary CH

Summary

LT4 alone is recommended as the medication of choice for the treatment of CH (1/++0).

LT4 treatment should be started as soon as possible, no later than 2 weeks after birth or immediately after confirmatory (serum) thyroid function testing in neonates in whom CH is detected by a second routine screening test (1/++0).

The LT4 starting dose should be up to 15 μg/kg per day, taking into account the whole spectrum of CH, ranging from mild to severe (1/++0).

Infants with severe CH, defined by a very low pretreatment serum fT4 (<5 pmol/L) or total T4 concentration in combination with elevated TSH (above the normal range based on time since birth and GA), should be treated with the highest starting dose (10–15 μg/kg per day) (1/++0).

Infants with mild CH (fT4 > 10 pmol/L in combination with elevated TSH) should be treated with the lowest initial dose (∼10 μg/kg per day); in infants with pretreatment fT4 concentrations within the age-specific reference interval, an even lower starting dose may be considered (from 5 to 10 μg/kg) (1/++0).

LT4 should be administered orally, once a day (1/++0).

The evidence favoring brand versus generic LT4 is mixed, but based on personal experience/expert opinion, we recommend brand rather than generic (2/++0).

Evidence

There are no randomized clinical trials that support a specific treatment approach in CH with high-quality evidence. Since the first enthusiastic reports on the successful treatment of “sporadic cretinism” with thyroid extracts derived from animal thyroid glands, all further adaptations and improvements have been based on retrospective or prospective observational studies only. However, a large series of such cohort studies, undertaken to correlate final outcome with different treatment strategies, is now available. Initially, somatic development in terms of growth and puberty was studied, but later on cognitive outcome, the most precious but also most vulnerable developmental outcome, became the focus of such studies. The highest level of evidence was gained by studies that assessed cognitive outcome (intelligence quotient [IQ]) in individuals with CH and unaffected sibling controls. Together, the available data allow for reliable conclusions and recommendations. One such conclusion is that a favorable outcome can be expected in most children with CH who are given the “right” treatment. In this respect, numerous outcome studies point to a strong impact of two main factors on cognitive outcome: the age at start of LT4 treatment and the LT4 starting dose.

Age at start of treatment and starting dose

Bearing in mind that these factors were not studied systematically, conclusions and recommendations can only be deduced from observational studies. Therefore, the recommendations on the optimal age at start of LT4 treatment and the optimal starting dose are deduced from reasonably powered studies that demonstrated no difference in cognitive outcome between individuals with CH and unaffected siblings. So far, only two such studies are available. Initially, two outcome studies in young adult CH patients and sibling controls showed an IQ gap of eight points.
In these observational studies, treatment was started at an average age of 24 days and with an average LT4 dose of <10 μg/kg per day. The first study reporting no gap, comparing 44 CH patients and 53 unaffected sibling controls with a median age at testing of 9 years, came from New Zealand and was published in 2013. Patients were treated with LT4 from a mean age of 9 days, with a starting dose between 10 and 15 μg/kg depending on CH severity. Neonates with athyreosis were treated with 15 μg/kg per day. TSH normalized within a median of 14 days after diagnosis. A power calculation predicted that the number of patients and siblings would be sufficient to detect a difference of 5.2 IQ points. There was no significant difference between the tested patients and siblings. The second study reporting no gap, comparing 76 CH patients and 40 sibling controls, came from Berlin and was published in 2018. The treatment approach resembled the New Zealand approach, with a median age at diagnosis of 8 days, a mean LT4 starting dose of 13.5 μg/kg per day, and TSH normalizing within a median time of 15 days. In contrast to the New Zealand study, the mean ages of the patients and controls were 18.1 and 19.8 years, respectively. There was no significant difference in overall IQ (102.5 vs. 102.5), nor were there differences in other (cognitive) tests of attention, memory, and fine motor skills, in quality-of-life scores, or in anthropometric measurements. In addition, there was no negative effect of episodes of overtreatment in terms of a suppressed TSH. Even in the children with the highest number of episodes of TSH suppression, IQ and other outcome parameters did not differ. Based on the evidence from four studies reporting sibling-controlled cognitive outcome data, one can conclude that even a child with severe CH can reach a normal IQ that does not differ from that of unaffected siblings, if LT4 treatment is started before the age of 10 days and the starting dose is at least 10 μg/kg, with 15 μg/kg in the most severe forms. More precise values for the optimal age at start of LT4 treatment or the starting dose leading to such a favorable outcome cannot be given, since this has not been studied systematically. However, a meta-analysis included in the Berlin study, comparing IQ differences between severe and mild CH cases with respect to the starting dose, revealed that this difference can only be overcome with a starting dose of at least 10 μg/kg, but not lower.
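To make the dose bands concrete, the following minimal Python sketch maps the pretreatment fT4 categories quoted in the Summary above to the recommended LT4 starting-dose ranges. The function and variable names are illustrative, the guideline gives bands rather than a formula, and the 5–10 pmol/L band, which the text does not explicitly dose, is left to clinical judgement.

```python
def lt4_starting_dose_range(ft4_pmol_l: float, ft4_in_reference: bool = False):
    """Return the (low, high) LT4 starting-dose range in ug/kg per day,
    following the severity bands quoted in the Summary above.

    Illustrative only: all bands assume an elevated TSH alongside the fT4.
    """
    if ft4_in_reference:
        return (5.0, 10.0)    # pretreatment fT4 within the age-specific reference interval
    if ft4_pmol_l < 5.0:
        return (10.0, 15.0)   # severe CH: highest starting dose
    if ft4_pmol_l > 10.0:
        return (10.0, 10.0)   # mild CH: ~10 ug/kg per day
    return None               # 5-10 pmol/L: not explicitly dosed in the text

# Example: a 3.5 kg neonate with severe CH (pretreatment fT4 of 4 pmol/L)
low, high = lt4_starting_dose_range(4.0)
print(f"starting dose: {3.5 * low:.1f}-{3.5 * high:.1f} ug per day")  # 35.0-52.5
```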
Hormone preparations and administration

Since only a few studies on the effect of different hormone preparations or methods of administration are available, recommendations are based on the results of the previously mentioned studies. The studies that reported a normal cognitive outcome used either crushed LT4 tablets dissolved in water or breast milk and administered with a spoon, or liquid LT4 preparations (both administered orally). T3 was not administered in any of these studies. Because the cognitive outcomes in these studies were favorable, it is recommended to use only LT4, administered as just described. The expert panel recognizes that crushing tablets is an off-label procedure, but it has been done this way successfully for many years. Clinical experience suggests that the bioavailability of liquid LT4 preparations is higher than that of tablets, with a possible risk of overtreatment if tablet doses are used. The higher bioavailability may also have dosing consequences when changing medication from tablets to liquid, and the other way around. In addition, CH patients treated with liquid LT4 may need more frequent fT4 and TSH measurements, and dose adjustments, during their first months of life. If intravenous treatment is necessary, the (starting) dose should be no more than 80% of the oral dose; subsequently, the dose should be adjusted guided by fT4 and TSH measurements. It should be stressed that only pharmaceutically produced medication should be prescribed. This applies to both tablets and liquid LT4 preparations. Brand rather than generic LT4 tablets should be used, particularly in severe CH and in infants. The expert panel is against the use of compounded solutions or suspensions. Finally, parents should be provided with written instructions about LT4 treatment.

3.2. Monitoring treatment in primary CH

Summary

We recommend measurement of serum fT4 and TSH concentrations before, or at least 4 hours after, the last (daily) LT4 administration (1/++0).

We recommend evaluation of fT4 and TSH according to age-specific reference intervals (1/++0).

The first treatment goal in neonates with primary CH is to rapidly increase the circulating amount of thyroid hormone, reflected by normalization of serum TSH; thereafter, TSH should be kept within the reference interval. If TSH is in the age-specific reference interval, fT4 concentrations above the upper limit of the reference interval can be accepted, and we recommend maintaining the same LT4 dose (1/++0).

Any reduction of the LT4 dose should not be based on a single higher-than-normal fT4 concentration, unless TSH is suppressed (i.e., below the lower limit of the reference interval) or there are signs of overtreatment (e.g., jitteriness or tachycardia) (1/++0).

The first clinical and biochemical follow-up evaluation should take place 1 to 2 weeks after the start of LT4 treatment (1 week at the latest in case of a starting dose close to 15 μg/kg per day or higher) (1/+00).

Subsequent (clinical and biochemical) evaluation should take place every 2 weeks until complete normalization of serum TSH is achieved; thereafter, the evaluation frequency can be lowered to once every 1 to 3 months until the age of 12 months (1/+00).

Between the ages of 12 months and 3 years, the evaluation frequency can be lowered to every 2 to 4 months; thereafter, evaluations should be carried out every 3 to 6 months until growth is completed (1/+00) (the full schedule is restated in the sketch after this summary).

If abnormal fT4 or TSH values are found, or if compliance is questioned, the evaluation frequency should be increased (2/+00).

After a change of LT4 dose or formulation, an extra evaluation should be carried out after 4 to 6 weeks (2/+00).

Adequate treatment throughout childhood is essential, and long-term under- or overtreatment, that is, TSH concentrations above or below the reference interval, should be avoided (1/++0).

In contrast to adults, in neonates, infants, and children, LT4 can be administered together with food (but with avoidance of soy protein and vegetable fiber); more importantly, LT4 should be administered at the same time every day, also in relation to food intake. This approach can improve compliance, and ensures LT4 absorption that is as consistent as possible and LT4 dose titration that is as good as possible (2/+00).

In case of an unexpected need for an LT4 dose increase, reduced absorption or increased metabolization of T4 due to other disease (e.g., gastrointestinal), food, or medication should be considered (2/+00); noncompliance may be the most frequent cause, especially in teenagers and adolescents.
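As a compact restatement of the evaluation frequencies just listed, here is a small Python sketch; the names are illustrative and not from the guideline, and the returned strings simply transcribe the recommendations above.

```python
def evaluation_interval(age_months: float, tsh_normalized: bool) -> str:
    """Suggested follow-up interval for primary CH, transcribed from the
    monitoring recommendations above (illustrative, not a clinical tool)."""
    if not tsh_normalized:
        return "every 2 weeks until serum TSH has completely normalized"
    if age_months < 12:
        return "every 1-3 months until the age of 12 months"
    if age_months < 36:
        return "every 2-4 months between 12 months and 3 years"
    return "every 3-6 months until growth is completed"

# Independent of this schedule, an extra evaluation is recommended
# 4-6 weeks after any change of LT4 dose or formulation, and more
# frequent controls whenever fT4/TSH are abnormal or compliance is in doubt.
print(evaluation_interval(8, tsh_normalized=True))  # every 1-3 months ...
```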
Evidence

Shortly after the start of LT4 treatment

Repeated measurement of serum fT4 and TSH, together with clinical assessment (especially for signs of overtreatment when using the highest starting dose), are the backbone of monitoring LT4 treatment in patients with primary CH. TSH normalizes more slowly than fT4. Therefore, the first treatment goal is normalization of fT4 as rapidly as possible. Since fT4 reflects the unbound, biologically active form of T4, measurement of fT4 is preferred to total T4. The second treatment goal is normalization of TSH within 4 weeks. Consequently, fT4 (or total T4) should guide dosing until TSH reaches the age-specific reference interval. Rapid normalization of TSH and keeping fT4 in the upper half of the age-specific reference interval have been shown to optimize neurodevelopmental outcome.

Follow-up after the first weeks of LT4 treatment

There is no evidence for one optimal follow-up scheme. Recent studies focusing on optimization of biochemical thyroid function testing suggest the importance of frequent laboratory monitoring and dose adjustment during the first year of life. Findings in these studies were that (i) patients with severe CH (athyreosis and dysgenesis vs. dyshormonogenesis, with high TSH values at diagnosis) need more dose adjustments during the first year of life; (ii) the highest doses within the recommended range of 10–15 μg/kg per day resulted in more dose adjustments because of hyperthyroxinemia; and (iii) monthly thyroid function testing led to frequent dose adjustments during the first year of life (75% at 0–6 months of age, and 36% at 7–12 months of age). However, neurodevelopmental outcome data, the most important long-term treatment goal in CH, were not available in any of these studies. With this in mind, the follow-up schemes chosen in the studies that reported normal IQ outcomes can be used as a recommendation. In the New Zealand and Berlin studies, treatment effectiveness in terms of normalization of serum parameters was tested weekly after the start of treatment until the parameters normalized. Thereafter, blood tests were done monthly during the first year and bimonthly during the second year in New Zealand, and every 3 months in the Berlin study. Obviously, follow-up schemes have to be personalized according to parents' capabilities and compliance. The main biochemical target parameter in primary CH is TSH. The Berlin study reported all obtained serum parameters during the first 2 years of life in all treated children. This revealed that when TSH was within the reference interval, T4 was often elevated but T3 was normal. Notably, in adult patients with severe acquired hypothyroidism as well, a higher serum fT4 is necessary to reach normal TSH concentrations. This may be due to the lack of thyroidal T3 production, which needs to be compensated by a higher fT4 concentration. Data on the effects of clearly increased serum (f)T4 concentrations are scarce. In two studies, long-term follow-up after periods of overtreatment during the first 2 years of life suggested a decreased IQ at the age of 11 years and an increased rate of attention deficit hyperactivity disorder. Earlier studies suggested adverse effects on attention span. However, Aleksander et al. showed no IQ differences between patients and siblings despite comparable periods of overtreatment.
As long as there is no evidence for a negative effect of periods of overtreatment, dose reduction in case of an elevated fT4 should only be done after a second fT4 measurement, unless TSH is suppressed. Besides overtreatment, “resetting” of the hypothalamus–pituitary–thyroid feedback axis after intrauterine hypothyroidism has been proposed as a possible mechanism, especially in patients younger than 12 months. Persistence of such mild hypothalamus–pituitary resistance has been reported in adult CH patients compared with patients with acquired hypothyroidism. In summary, there is no definitive evidence for one optimal follow-up scheme based on studies with cognitive outcome as the main parameter. However, a normal cognitive outcome has been achieved with monthly, bimonthly, and 3-monthly controls during the first 2 to 3 years of life, after TSH normalization in the first weeks after diagnosis. Furthermore, patients with the most severe forms of CH who receive the highest doses in the recommended LT4 starting range are at increased risk of frequent dose adjustments in the first year of life because of elevated fT4 levels. Since the long-term neurological consequences of hyperthyroxinemia/periods of overtreatment are still not clarified, the follow-up frequency should be individualized, with more controls in case of suboptimal fT4 or TSH values. After a dose adjustment, the next control is recommended 4 to 6 weeks later. Finally, adolescence and the period of transition to adult care are critical periods. Individualized follow-up schemes should be drawn up to assure normal growth and puberty in the adolescent, and fertility in the young adult.

Adverse effects of LT4

Adverse effects of long-term LT4 treatment are rare or absent if it is adequately prescribed. Cases of pseudotumor cerebri and craniosynostosis have been described. However, relative macrocrania at the age of 18 months, without any case of craniosynostosis, was reported in a cohort of 45 CH patients with documented fT4 concentrations above the reference interval during their first 6 to 9 months of life. In one cohort of young adults with CH, cardiovascular abnormalities were reported (impaired diastolic function and exercise capacity, and increased intima-media thickness, IMT); however, the clinical relevance of these findings remains unknown. Moreover, in a large nationwide study, the standardized mortality ratio in patients with CH was not increased for diseases of the circulatory system.

Cardiac insufficiency

LT4 has clear positive ino- and chronotropic effects on the heart. In newly diagnosed CH in newborns with congenital heart disease and impending heart failure, we therefore recommend applying a lower LT4 starting dose (approximately 50% of the recommended dose) and increasing it guided by serum fT4 and TSH measurements and the infant's clinical condition.

Impaired bioavailability due to diseases, drugs, or food

LT4 is mainly absorbed in the proximal small intestine. Undiagnosed or untreated celiac disease will reduce LT4 absorption. Children with short bowel syndrome will also have reduced absorption. Recently, rectal administration of LT4 has been shown to be effective in a child with this condition. Increased type 3 deiodinase activity in large hemangiomas can cause increased metabolic clearance of administered LT4 and, with that, necessitate a higher LT4 dose. Bioavailability of LT4 can also be reduced by concomitant use of other medication.
For example, proton pump inhibitors, calcium, or iron will decrease absorption, while antiepileptic medication (phenobarbital, phenytoin, and carbamazepine) and rifampicin will increase its metabolic clearance. Interactions need to be considered and can sometimes be overcome by avoiding concomitant ingestion. While in adults the recommended moment of LT4 intake is 30–60 minutes before food, such a recommendation is difficult to realize in infants. Pragmatically, LT4 should be administered at a fixed time, with an equal interval to food intake every day, to achieve LT4 absorption that is as constant as possible and, with that, dose titration that is as good as possible. Soy-containing food products have been repeatedly shown to inhibit LT4 absorption in children with CH.

3.3. Treatment and monitoring of central CH

Summary

In severe forms of central CH (fT4 < 5 pmol/L), we also recommend starting LT4 treatment as soon as possible after birth, at doses as in primary CH (at least 10 μg/kg per day, see Section 3.1), to bring fT4 rapidly within the normal range (1/++0).

In milder forms of central CH, we suggest starting treatment at a lower LT4 dose (5–10 μg/kg per day) to reduce the risk of overtreatment (1/++0).

In newborns with central CH, we recommend monitoring treatment by measuring fT4 and TSH according to the same schedule as for primary CH; serum fT4 should be kept above the mean/median value of the age-specific reference interval; if TSH is low before treatment, subsequent TSH determinations can be omitted (1/+00).

When under- or overtreatment is suspected in a patient with central CH, TSH, fT3, or T3 can be measured (1/+00).

When fT4 is around the lower limit of the reference interval, undertreatment should be considered, particularly if TSH is >1.0 mU/L (1/+00).

When serum fT4 is around or above the upper limit of the reference interval, overtreatment should be considered (assuming that LT4 has not been administered just before blood withdrawal), particularly if associated with clinical signs of thyrotoxicosis or a high (f)T3 concentration (1/+00).

Evidence

Just like primary CH, treatment of central CH consists of daily administration of LT4 (orally; tablets or liquid dosage form). The biggest differences between the treatment of primary and central CH lie in the monitoring of treatment, with serum fT4 (instead of TSH) being the most important parameter, and in the LT4 starting dose. It is important to realize that in central CH, a low TSH concentration does not point to overtreatment. The (biochemical) aim of LT4 treatment is to bring and keep the fT4 concentration in the upper half of the age-specific fT4 reference interval. Although randomized clinical trials testing this approach in children are lacking, studies in adults give some support. Central CH can be a severe condition (fT4 at diagnosis <5 pmol/L), but most cases can be classified as mild to moderate (fT4 at diagnosis 5–15 pmol/L). Although studies investigating the optimal starting dose in central CH are lacking, clinical experience has taught that an LT4 starting dose of 10–15 μg/kg in mild-to-moderate cases quickly results in supraphysiological fT4 concentrations. So, with the exception of severe cases, a lower starting dose, that is, 5–10 μg/kg, is advisable. With regard to the treatment monitoring frequency, the schedule for primary CH should be followed.
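The monitoring thresholds for central CH lend themselves to a small decision helper. The following Python sketch uses hypothetical names, takes its thresholds from the recommendations above, and is an illustration under those assumptions rather than a clinical rule.

```python
from typing import Optional

def central_ch_assessment(ft4_pmol_l: float, ref_low: float, ref_high: float,
                          tsh_mu_l: Optional[float] = None) -> str:
    """Rough interpretation of thyroid function tests in treated central CH,
    following the thresholds quoted above (illustrative only)."""
    if ft4_pmol_l >= ref_high:
        return "consider overtreatment (check clinical signs and (f)T3)"
    if ft4_pmol_l <= ref_low:
        suffix = "; TSH > 1.0 mU/L supports this" if (tsh_mu_l or 0.0) > 1.0 else ""
        return "consider undertreatment" + suffix
    midpoint = (ref_low + ref_high) / 2.0
    if ft4_pmol_l < midpoint:
        return "below target: aim for fT4 above the mean/median of the reference interval"
    return "on target: fT4 in the upper half of the reference interval"

# Example with an assumed neonatal reference interval of 12-30 pmol/L:
print(central_ch_assessment(13.0, 12.0, 30.0, tsh_mu_l=1.8))  # below target ...
```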
3.4. Diagnostic re-evaluation of thyroid function beyond the first 6 months of life

Summary

When no definitive diagnosis of permanent CH was made in the first weeks or months of life, re-evaluation of the HPT axis after the age of 2 to 3 years is indicated, particularly in children with a GIS and in those with presumed isolated central CH (1/++0).

For a precise diagnosis, LT4 treatment should be phased out over a 4 to 6 week period or simply stopped, and full re-evaluation should be carried out after 4 weeks, consisting of (at least) fT4 and TSH measurement. If primary hypothyroidism is confirmed (TSH ≥10 mU/L), consider thyroid imaging and, if possible, genetic testing; if central CH is likely (fT4 below the lower limit of the reference interval in combination with a low-normal or only mildly elevated TSH), consider evaluating the other anterior pituitary functions and genetic testing. If TSH is above the upper limit of the reference interval but <10 mU/L (primary CH), or fT4 is just above the lower limit of the reference interval (central CH), continue withdrawal and retest in another 3 to 4 weeks (1/++0).

If a child with no permanent CH diagnosis and a GIS requires an LT4 dose <3 μg/kg per day at the age of 6 months, re-evaluation can already be done at that time (1/++0).

We recommend avoiding iodine as an antiseptic during the peri- and neonatal period, as it can cause transient CH (1/++0).

Evidence

In recent years, the prevalence of transient CH has steadily increased. In a number of studies, factors have been identified that increase the likelihood of transient disease, such as sex (more often in boys), low birthweight, neonatal morbidity requiring intensive care, race/ethnicity (more often in nonwhite patients), and less severe CH at diagnosis (assessed by screening TSH, or diagnostic TSH or fT4). In contrast, factors such as prematurity, other congenital abnormalities, a family history of thyroid disease, abnormal thyroid morphology (thyroid hypoplasia at diagnosis), TSH elevation >10 mU/L after the age of 1 year (when infants outgrow the LT4 dose), and a higher LT4 dose requirement at 1 to 3 years of age are associated with permanent CH (with conflicting results between studies for the factor dose requirement). Recent studies have shown that early treatment withdrawal to assess the necessity of further treatment can be considered and done from the age of 6 months onward, particularly in patients with a GIS, a negative first-degree family history of CH, or a low LT4 dose requirement. Saba et al. investigated 92 patients with CH and a GIS and found 49 of them (54%) to have transient CH. In this study, the optimal LT4 dose cut-off values for predicting transient CH at the ages of 6 and 12 months were 3.2 and 2.5 μg/kg per day, respectively, with a sensitivity of 71% at both time points, and a specificity of 79% and 78% at the ages of 6 and 12 months, respectively (with values below these thresholds considered predictive of transient CH). In the study by Oron et al., 17 out of 84 patients with a GIS (20%) turned out to have transient CH. The optimal LT4 dose cut-off value at the age of 6 months was 2.2 μg/kg per day, with a sensitivity of 90% and a specificity of 57%. Both studies highlight the need for careful clinical and biological monitoring to identify children who do not require long-term treatment.
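To illustrate how the reported cut-offs would be applied, here is a minimal Python sketch; the names and structure are my own, while the thresholds come from the Saba et al. and Oron et al. figures above. It converts a daily LT4 dose into μg/kg per day and compares it with the age-specific cut-off.

```python
# Cut-offs (ug/kg per day) below which transient CH was considered more
# likely in children with a gland in situ, per the studies cited above.
SABA_CUTOFFS = {6: 3.2, 12: 2.5}   # sensitivity 71% at both ages
ORON_CUTOFF_6M = 2.2               # sensitivity 90%, specificity 57%

def dose_per_kg(daily_dose_ug: float, weight_kg: float) -> float:
    return daily_dose_ug / weight_kg

def below_saba_cutoff(daily_dose_ug: float, weight_kg: float, age_months: int) -> bool:
    """True if the LT4 requirement falls below the Saba et al. cut-off,
    hinting at transient CH. Screening aid only, not a diagnosis."""
    try:
        cutoff = SABA_CUTOFFS[age_months]
    except KeyError:
        raise ValueError("cut-offs were reported only for 6 and 12 months")
    return dose_per_kg(daily_dose_ug, weight_kg) < cutoff

# Example: 25 ug/day in an 8 kg infant at 6 months -> 3.1 ug/kg per day
print(below_saba_cutoff(25.0, 8.0, 6))  # True (3.1 < 3.2)
```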
Medication that interferes with thyroid function, in particular iodine and iodomimetics, may result in transient but profound hypothyroidism. The use of iodine as a skin antiseptic, such as povidone–iodine (PVP-I), is therefore not recommended in obstetrics and neonatology, since it easily reaches the fetal or neonatal thyroid gland (through the skin and placenta in mothers, and through the skin in neonates), causing transient hypothyroidism. This may be more profound in prematurely born babies, as escape from the Wolff–Chaikoff effect does not mature until term. Mothers should be asked about consumption of iodine-rich nutritional food or supplements, which can also induce transient CH.

3.5. Treatment and monitoring of pregnant women with CH

Summary

In women with CH who are planning pregnancy, we strongly recommend optimization of LT4 treatment; in addition, these women should be counseled regarding the higher need for LT4 during pregnancy (1/++0).

fT4 (or total T4) and TSH levels should be monitored every 4 to 6 weeks during pregnancy, aiming at TSH concentrations in accordance with current guidelines on treatment of hypothyroidism during pregnancy, that is, <2.5 mU/L throughout gestation in patients treated with LT4 (1/+00).

In pregnant women with central CH, the LT4 dose should be increased, aiming at an fT4 concentration above the mean/median value of the trimester-specific reference interval (1/+00).

After delivery, we recommend lowering the LT4 dose to the preconception dose; additional thyroid function testing should be performed at ∼6 weeks postpartum (1/++0).

All pregnant women should ingest ∼250 μg iodine per day (1/++0).

Evidence

Optimal management of pregnant women with CH requires knowledge and understanding of the normal physiological changes. In early pregnancy, before and during the development of the functioning fetal thyroid gland, the fetus depends on the supply of TH by the mother, requiring an optimal iodine status. Indeed, since the fetal thyroid gland is not functionally mature before weeks 18–20 of pregnancy, the fetus largely depends on the supply of maternal T4 during the early stages of intrauterine brain development, making fT4 the most important hormone for the fetus. During the second half of pregnancy, fetal thyroid hormones are of both maternal and fetal origin. Overt and subclinical maternal hypothyroidism have been associated with adverse pregnancy outcomes as well as with neurodevelopmental deficits in the offspring, particularly if the dysfunction occurs early in pregnancy. With respect to adverse pregnancy outcomes, maternal CH is associated with an increased risk of gestational hypertension, emergency cesarean section, induced labor for vaginal delivery, and preterm delivery. TSH ≥10 mU/L during the first 3 to 6 months of pregnancy is associated with a higher risk of preterm delivery and fetal macrosomia. These associations were not found in women with satisfactory control of hypothyroidism, that is, TSH <10 mU/L; yet, these women did have a higher risk of induced labor for vaginal delivery. Children born to mothers with CH were found to have a higher risk of poor motor coordination, but not of deficits in other developmental domains such as mobility, communication, and motor and language skills. However, children born to mothers with TSH ≥10 mU/L were more likely to have low motor or communication skills scores. Yet, it remains unclear whether these adverse effects modify subsequent neurodevelopment. During pregnancy, the TH requirement increases, and most LT4-treated women require a dose increase of up to 30%.
Women with athyreosis, the most severe form of CH, require the highest doses, and treatment should aim to keep TSH concentrations <2.5 mU/L throughout pregnancy. Therefore, careful monitoring of LT4 treatment in pregnant women with hypothyroidism is extremely important.
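As a final arithmetic illustration, the sketch below (a hypothetical helper based on the "up to 30%" figure above) estimates the anticipated LT4 requirement during pregnancy from the preconception dose; actual titration must of course follow fT4/TSH measured every 4 to 6 weeks, with a TSH target of <2.5 mU/L.

```python
def anticipated_pregnancy_dose_ug(preconception_dose_ug: float,
                                  increase_fraction: float = 0.30) -> float:
    """Estimated daily LT4 requirement during pregnancy, assuming the
    typical increase of up to 30% quoted above. Women with athyreosis
    may need more; this is an illustration, not a dosing rule."""
    return preconception_dose_ug * (1.0 + increase_fraction)

# Example: a woman taking 100 ug/day before conception
print(anticipated_pregnancy_dose_ug(100.0))  # 130.0 ug/day
```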
Summary LT4 alone is recommended as the medication of choice for the treatment of CH (1/++0). LT4 treatment should be started as soon as possible, not later than 2 weeks after birth or immediately after confirmatory (serum) thyroid function testing in neonates in whom CH is detected by a second routine screening test (1/++0). The LT4 starting dose should be up to 15 μg/kg per day, taking into account the whole spectrum of CH, ranging from mild to severe (1/++0). Infants with severe CH, defined by a very low pretreatment serum fT4 (<5 pmol/L) or total T4 concentration in combination with elevated TSH (above the normal range based on time since birth and GA), should be treated with the highest starting dose (10–15 μg/kg per day) (1/++0). Infants with mild CH (fT4 > 10 pmol/L in combination with elevated TSH) should be treated with the lowest initial dose (∼10 μg/kg per day); in infants with pretreatment fT4 concentrations within the age-specific reference interval, an even lower starting dose may be considered (from 5 to 10 μg/kg) (1/++0). LT4 should be administered orally, once a day (1/++0). The evidence favoring brand versus generic LT4 is mixed, but based on personal experience/expert opinion, we recommend brand rather than generic (2/++0). Evidence There are no randomized clinical trials that support a specific treatment approach in CH with high-quality evidence. Since the first enthusiastic reports on the successful treatment of “sporadic cretinism” with thyroid extracts derived from animal thyroid glands, all further adaptations and improvements have been based on retrospective or prospective observational studies only. However, today a large series of such cohort studies is available that were undertaken to correlate final outcome to different treatment strategies. Initially somatic development in terms of growth and puberty was studied, but later on cognitive outcome—the most precious, but also vulnerable developmental outcome—became the focus of such studies. The highest level of evidence was gained by those studies that assessed the cognitive outcome (intelligence quotient [IQ]) in individuals with CH and unaffected sibling controls. Together, the available data allow for reliable conclusions and recommendations. One such conclusion is that one can expect a favorable outcome in most children with CH who were given the “right” treatment. In this respect, numerous outcome studies point to a strong impact of two (main) factors that influence cognitive outcome: the age at start of LT4 treatment and the LT4 starting dose. Age at start of treatment and starting dose Bearing in mind that these factors were not studied systematically, one can only deduce conclusions and recommendations from observational studies. Therefore, the recommendations on the optimal age at start of LT4 treatment and the optimal starting dose are deduced from reasonably powered studies that eventually demonstrated no difference in cognitive outcome between individuals with CH and unaffected siblings. So far only two such studies are available. Initially, two outcome studies in young adult CH patients and sibling controls showed an IQ gap of eight points. In these observational studies, treatment was started at an average age of 24 days and with average LT4 dose <10 μg/kg per day. The first study that reported no gap comparing 44 CH and 53 unaffected sibling controls with a median age at time of testing of 9 years was from New Zealand and published in 2013 . 
Patients were treated with LT4 from a mean age of 9 days with a starting dose between 10 and 15 μg/kg depending on CH severity. Neonates with athyreosis were treated with 15 μg/kg per day. TSH normalized within a median of 14 days after diagnosis. Power calculation predicted that the number of patients and siblings would be sufficient to detect a difference of 5.2 IQ point. There was no significant difference between the tested patients and siblings. The second study reporting no gap comparing 76 CH patients and 40 sibling controls was from Berlin and was published in 2018 . The treatment approach resembled the New Zealand approach with a median age at diagnosis of 8 days, a mean LT4 starting dose of 13.5 μg/kg per day, and TSH normalizing within a median time of 15 days. In contrast to the New Zealand study, the mean ages of the patients and controls were 18.1 and 19.8 years, respectively. There was no significant difference in overall IQ (102.5 vs. 102.5), nor were there differences in other (cognitive) tests of attention, memory, fine motor skills, quality-of-life scores, and in anthropometric measurements. In addition, there was no negative effect of episodes of overtreatment in terms of a suppressed TSH. Even in the children with the highest number of episodes of TSH suppression, IQ and other outcome parameters did not differ. Based on the evidence from four studies reporting sibling-controlled cognitive outcome data, one can deduce and conclude that a even a child with severe CH can reach a normal IQ that does not differ from unaffected siblings, if LT4 treatment is started before the age of 10 days and the starting dose is at least 10 μg/kg, with 15 μg/kg in the most severe forms. More precise values for the optimal age at start of LT4 treatment or the starting dose leading to such a favorable outcome cannot be given since this has not been systematically studied. However, in a meta-analysis included in the Berlin study comparing IQ differences between severe and mild CH cases with respect to the starting dose revealed that this difference can only be overcome with a starting dose of at least 10 μg/kg, but not lower than that. Hormone preparations and administration Since there are only a few studies on the effect of different hormone preparations or methods of administration available, recommendations are based on the results of the previously mentioned studies. Those studies that reported a normal cognitive outcome did either use crunched LT4 tablets dissolved in water or breast milk administered through a spoon, or liquid LT4 preparations (both administered orally). In none of the studies T3 was administered. Because the cognitive outcomes in these studies were favorable, it is recommended to use only LT4, administered as just described. The expert panel recognizes that crushing tablets is an off-label procedure, but that it has been done this way succesfullly for many years. Clinical experience suggests that the bioavailability of liquid LT4 preparations is higher than tablets, with a possible risk of overtreatment if tablet doses are used. The higher bioavailability may also have dosing consequences for changing medication from tablets to liquid, and the other way around. In addition, CH patients treated with liquid LT4 may need more frequent fT4 and TSH measurements, and dose adjustments during their first months of life . 
If intravenous treatment is necessary, the (starting) dose should be no more than 80% of the oral dose; subsequently, the dose should be adjusted guided by fT4 and TSH measurements. It should be stressed that only pharmaceutically produced medication should be prescribed. This applies to both tablets and liquid LT4 preparations. Brand rather than generic LT4 tablets should be used, particularly in severe CH and in infants . The expert panel is against the use of compounded solutions or suspensions. Finally, parents should be provided with written instructions about LT4 treatment.
LT4 alone is recommended as the medication of choice for the treatment of CH (1/++0). LT4 treatment should be started as soon as possible, not later than 2 weeks after birth or immediately after confirmatory (serum) thyroid function testing in neonates in whom CH is detected by a second routine screening test (1/++0). The LT4 starting dose should be up to 15 μg/kg per day, taking into account the whole spectrum of CH, ranging from mild to severe (1/++0). Infants with severe CH, defined by a very low pretreatment serum fT4 (<5 pmol/L) or total T4 concentration in combination with elevated TSH (above the normal range based on time since birth and GA), should be treated with the highest starting dose (10–15 μg/kg per day) (1/++0). Infants with mild CH (fT4 > 10 pmol/L in combination with elevated TSH) should be treated with the lowest initial dose (∼10 μg/kg per day); in infants with pretreatment fT4 concentrations within the age-specific reference interval, an even lower starting dose may be considered (from 5 to 10 μg/kg) (1/++0). LT4 should be administered orally, once a day (1/++0). The evidence favoring brand versus generic LT4 is mixed, but based on personal experience/expert opinion, we recommend brand rather than generic (2/++0).
There are no randomized clinical trials that support a specific treatment approach in CH with high-quality evidence. Since the first enthusiastic reports on the successful treatment of “sporadic cretinism” with thyroid extracts derived from animal thyroid glands, all further adaptations and improvements have been based on retrospective or prospective observational studies only. However, today a large series of such cohort studies is available that were undertaken to correlate final outcome to different treatment strategies. Initially somatic development in terms of growth and puberty was studied, but later on cognitive outcome—the most precious, but also vulnerable developmental outcome—became the focus of such studies. The highest level of evidence was gained by those studies that assessed the cognitive outcome (intelligence quotient [IQ]) in individuals with CH and unaffected sibling controls. Together, the available data allow for reliable conclusions and recommendations. One such conclusion is that one can expect a favorable outcome in most children with CH who were given the “right” treatment. In this respect, numerous outcome studies point to a strong impact of two (main) factors that influence cognitive outcome: the age at start of LT4 treatment and the LT4 starting dose. Age at start of treatment and starting dose Bearing in mind that these factors were not studied systematically, one can only deduce conclusions and recommendations from observational studies. Therefore, the recommendations on the optimal age at start of LT4 treatment and the optimal starting dose are deduced from reasonably powered studies that eventually demonstrated no difference in cognitive outcome between individuals with CH and unaffected siblings. So far only two such studies are available. Initially, two outcome studies in young adult CH patients and sibling controls showed an IQ gap of eight points. In these observational studies, treatment was started at an average age of 24 days and with average LT4 dose <10 μg/kg per day. The first study that reported no gap comparing 44 CH and 53 unaffected sibling controls with a median age at time of testing of 9 years was from New Zealand and published in 2013 . Patients were treated with LT4 from a mean age of 9 days with a starting dose between 10 and 15 μg/kg depending on CH severity. Neonates with athyreosis were treated with 15 μg/kg per day. TSH normalized within a median of 14 days after diagnosis. Power calculation predicted that the number of patients and siblings would be sufficient to detect a difference of 5.2 IQ point. There was no significant difference between the tested patients and siblings. The second study reporting no gap comparing 76 CH patients and 40 sibling controls was from Berlin and was published in 2018 . The treatment approach resembled the New Zealand approach with a median age at diagnosis of 8 days, a mean LT4 starting dose of 13.5 μg/kg per day, and TSH normalizing within a median time of 15 days. In contrast to the New Zealand study, the mean ages of the patients and controls were 18.1 and 19.8 years, respectively. There was no significant difference in overall IQ (102.5 vs. 102.5), nor were there differences in other (cognitive) tests of attention, memory, fine motor skills, quality-of-life scores, and in anthropometric measurements. In addition, there was no negative effect of episodes of overtreatment in terms of a suppressed TSH. 
Even in the children with the highest number of episodes of TSH suppression, IQ and other outcome parameters did not differ. Based on the evidence from four studies reporting sibling-controlled cognitive outcome data, one can deduce and conclude that a even a child with severe CH can reach a normal IQ that does not differ from unaffected siblings, if LT4 treatment is started before the age of 10 days and the starting dose is at least 10 μg/kg, with 15 μg/kg in the most severe forms. More precise values for the optimal age at start of LT4 treatment or the starting dose leading to such a favorable outcome cannot be given since this has not been systematically studied. However, in a meta-analysis included in the Berlin study comparing IQ differences between severe and mild CH cases with respect to the starting dose revealed that this difference can only be overcome with a starting dose of at least 10 μg/kg, but not lower than that. Hormone preparations and administration Since there are only a few studies on the effect of different hormone preparations or methods of administration available, recommendations are based on the results of the previously mentioned studies. Those studies that reported a normal cognitive outcome did either use crunched LT4 tablets dissolved in water or breast milk administered through a spoon, or liquid LT4 preparations (both administered orally). In none of the studies T3 was administered. Because the cognitive outcomes in these studies were favorable, it is recommended to use only LT4, administered as just described. The expert panel recognizes that crushing tablets is an off-label procedure, but that it has been done this way succesfullly for many years. Clinical experience suggests that the bioavailability of liquid LT4 preparations is higher than tablets, with a possible risk of overtreatment if tablet doses are used. The higher bioavailability may also have dosing consequences for changing medication from tablets to liquid, and the other way around. In addition, CH patients treated with liquid LT4 may need more frequent fT4 and TSH measurements, and dose adjustments during their first months of life . If intravenous treatment is necessary, the (starting) dose should be no more than 80% of the oral dose; subsequently, the dose should be adjusted guided by fT4 and TSH measurements. It should be stressed that only pharmaceutically produced medication should be prescribed. This applies to both tablets and liquid LT4 preparations. Brand rather than generic LT4 tablets should be used, particularly in severe CH and in infants . The expert panel is against the use of compounded solutions or suspensions. Finally, parents should be provided with written instructions about LT4 treatment.
Bearing in mind that these factors were not studied systematically, one can only deduce conclusions and recommendations from observational studies. Therefore, the recommendations on the optimal age at start of LT4 treatment and the optimal starting dose are deduced from reasonably powered studies that eventually demonstrated no difference in cognitive outcome between individuals with CH and unaffected siblings. So far only two such studies are available. Initially, two outcome studies in young adult CH patients and sibling controls showed an IQ gap of eight points. In these observational studies, treatment was started at an average age of 24 days and with average LT4 dose <10 μg/kg per day. The first study that reported no gap comparing 44 CH and 53 unaffected sibling controls with a median age at time of testing of 9 years was from New Zealand and published in 2013 . Patients were treated with LT4 from a mean age of 9 days with a starting dose between 10 and 15 μg/kg depending on CH severity. Neonates with athyreosis were treated with 15 μg/kg per day. TSH normalized within a median of 14 days after diagnosis. Power calculation predicted that the number of patients and siblings would be sufficient to detect a difference of 5.2 IQ point. There was no significant difference between the tested patients and siblings. The second study reporting no gap comparing 76 CH patients and 40 sibling controls was from Berlin and was published in 2018 . The treatment approach resembled the New Zealand approach with a median age at diagnosis of 8 days, a mean LT4 starting dose of 13.5 μg/kg per day, and TSH normalizing within a median time of 15 days. In contrast to the New Zealand study, the mean ages of the patients and controls were 18.1 and 19.8 years, respectively. There was no significant difference in overall IQ (102.5 vs. 102.5), nor were there differences in other (cognitive) tests of attention, memory, fine motor skills, quality-of-life scores, and in anthropometric measurements. In addition, there was no negative effect of episodes of overtreatment in terms of a suppressed TSH. Even in the children with the highest number of episodes of TSH suppression, IQ and other outcome parameters did not differ. Based on the evidence from four studies reporting sibling-controlled cognitive outcome data, one can deduce and conclude that a even a child with severe CH can reach a normal IQ that does not differ from unaffected siblings, if LT4 treatment is started before the age of 10 days and the starting dose is at least 10 μg/kg, with 15 μg/kg in the most severe forms. More precise values for the optimal age at start of LT4 treatment or the starting dose leading to such a favorable outcome cannot be given since this has not been systematically studied. However, in a meta-analysis included in the Berlin study comparing IQ differences between severe and mild CH cases with respect to the starting dose revealed that this difference can only be overcome with a starting dose of at least 10 μg/kg, but not lower than that.
Since there are only a few studies on the effect of different hormone preparations or methods of administration available, recommendations are based on the results of the previously mentioned studies. Those studies that reported a normal cognitive outcome did either use crunched LT4 tablets dissolved in water or breast milk administered through a spoon, or liquid LT4 preparations (both administered orally). In none of the studies T3 was administered. Because the cognitive outcomes in these studies were favorable, it is recommended to use only LT4, administered as just described. The expert panel recognizes that crushing tablets is an off-label procedure, but that it has been done this way succesfullly for many years. Clinical experience suggests that the bioavailability of liquid LT4 preparations is higher than tablets, with a possible risk of overtreatment if tablet doses are used. The higher bioavailability may also have dosing consequences for changing medication from tablets to liquid, and the other way around. In addition, CH patients treated with liquid LT4 may need more frequent fT4 and TSH measurements, and dose adjustments during their first months of life . If intravenous treatment is necessary, the (starting) dose should be no more than 80% of the oral dose; subsequently, the dose should be adjusted guided by fT4 and TSH measurements. It should be stressed that only pharmaceutically produced medication should be prescribed. This applies to both tablets and liquid LT4 preparations. Brand rather than generic LT4 tablets should be used, particularly in severe CH and in infants . The expert panel is against the use of compounded solutions or suspensions. Finally, parents should be provided with written instructions about LT4 treatment.
Summary We recommend measurement of serum fT4 and TSH concentrations before, or at least 4 hours after the last (daily) LT4 administration (1/++0). We recommend evaluation of fT4 and TSH according to age-specific reference intervals (1/++0). The first treatment goal in neonates with primary CH is to rapidly increase the circulating amount of thyroid hormone, reflected by normalization of serum TSH; therafter, TSH should be kept within the reference interval. If TSH is in the age-specific reference interval, fT4 concentrations above the upper limit of the reference interval can be accepted and recommend maintaining the same LT4 dose (1/++0). Any reduction of the LT4 dose should not be based on a single higher than normal fT4 concentration, unless TSH is suppressed (i.e., below the lower limit of the reference interval) or there are signs of overtreatment (e.g., jitteriness or tachycardia) (1/++0). The first clinical and biochemical follow-up evaluation should take place 1 to 2 weeks after the start of LT4 treatment (1 week at the latest in case of a starting dose close to 15 μg/kg per day or an even higher dose) (1/+00). Subsequent (clinical and biochemical) evaluation should take place every 2 weeks until complete normalization of serum TSH is achieved; therafter, the evaluation frequency can be lowered to once every 1 to 3 months until the age of 12 months (1/+00). Between the ages of 12 months and 3 years, the evaluation frequency can be lowered to every 2 to 4 months; thereafter, evaluations should be carried out every 3 to 6 months until growth is completed (1/+00). If abnormal fT4 or TSH values are found, or if compliance is questioned, the evaluation frequency should be increased (2/+00). After a change of LT4 dose or formulation, an extra evaluation should be carried out after 4 to 6 weeks (2/+00). Adequate treatment throughout childhood is essential, and long-term under- or overtreatment, that is, TSH concentrations above or below the reference interval, should be avoided (1/++0). In contrast to adults, in neonates, infants and children LT4 can be administered together with food (but with avoidance of soy protein and vegetable fiber); more important, LT4 should be administered at the same time every day, also in relation to food intake. This approach can improve compliance, and ensures as consistent as possible LT4 absorption and as good as possible LT4 dose titration (2/+00). In case of an unexpected need for LT4 dose increase, reduced absorption or increased metabolization of T4 by other disease (e.g., gastrointestinal), food, or medication should be considered (2/+00); noncompliance may be the most frequent cause, especially in teenagers and adolescents. Evidence Shortly after the start of LT4 treatment Repeated measurement of serum fT4 and TSH, and clinical assessment (especially for signs of overtreatment when using the highest starting dose) are the backbone of monitoring LT4 treatment in patients with primary CH . TSH normalizes slower than fT4. Therefore, the first treatment goal is as rapid as possible normalization of fT4. Since fT4 reflects the unbound biologically active form of T4, measurement of fT4 is preferred to total T4 . The second treatment goal is normalization of TSH within 4 weeks. Consequently, fT4 (or total T4) should guide dosing until TSH reaches the age-specific reference interval . Rapid normalization of TSH and keeping fT4 in the upper half of the age-specific reference interval have been shown to optimize the neurodevelopmental outcome . 
Follow-up after the first weeks of LT4 treatment There is no evidence for a one optimal follow-up scheme. Recent studies focusing on optimization of biochemical thyroid function testing suggest the importance of frequent laboratory monitoring and dose adjustment during the first year of life. Findings in these studies were that (i) patients with severe CH (athyreosis and dysgenesis vs. dyshormonogenesis, with high TSH values at diagnosis) need more dose adjustments during the first year of life ; (ii) the highest doses within the recommended range of 10–15 μg/kg per day resulted in more dose adjustments because of hyperthyroxinemia ; and (iii) monthly thyroid function testing led to frequent dose adjustments during the first year of life (75% at 0–6 months of age, and 36% at 7–12 months of age) . However, in none of these studies neurodevelopmental outcome data were available, the most important long-term treatment goal in CH. With this in mind, the follow-up schemes that were chosen in the studies that reported normal IQ outcomes can be used as recommendation. In the New Zealand and the Berlin studies, treatment effectiveness in terms of normalization of serum parameters was tested weekly after the start of treatment until they normalized . Thereafter, in New Zealand, blood tests were done monthly during the first year and bimonthly during the second year, and every 3 months in the Berlin study. Obviously, follow-up schemes have to be personalized according to parents' capabilities and compliance. The main biochemical target parameter in primary CH is TSH. The Berlin study reported on all obtained serum parameters during the first 2 years of life in all treated children. This revealed that when TSH was within the reference interval, T4 was often elevated but T3 was normal. Noteworthy, also in adult patients with severe acquired hypothyroidism, a higher serum fT4 is necessary to reach normal TSH concentrations. This may be due to lack of thyroidal production of T3 that needs to be compensated by a higher fT4 concentration. Data on the effects of clearly increased serum (f)T4 concentrations are scarce. In two studies, long-term follow-up after periods of overtreatment during the first 2 years of life suggested a decreased IQ at the age of 11 years, and an increased rate of attention deficit hyperactivity disorder . Earlier studies suggested adverse effects on attention span . However, Aleksander et al. showed no IQ differences between patients and siblings despite comparable periods of overtreatment . As long as there is no evidence for a possible negative effect of periods of overtreatment, dose reduction in case of an elevated fT4 should only be done after a second fT4 measurement, unless TSH is suppressed. Besides overtreatment, “resetting” of the hypothalamus–pituitary–thyroid feedback axis after intrauterine hypothyroidism has been proposed as a possible mechanism, especially in patients younger than 12 months . Persistence of such mild hypothalamus–pituitary resistance has been reported in adult CH patients compared with patients with acquired hypothyroidism . In summary, there is no definitive evidence for one optimal follow-up scheme based on studies with cognitive outcome as the main parameter. However, a normal cognitive outcome has been achieved with monthly and bimonthly, and with 3-monthly controls during the first 2 to 3 years of life, after TSH normalization in the first weeks after diagnosis. 
Furthermore, patients with the most severe forms of CH and the highest range of the recommended LT4 starting dose are at an increased risk for frequent dose adjustments in the first year of life because of elevated fT4 levels. Since the long-term neurological consequences of hyperthyroxinemia/periods of overtreatment are still not clarified, the follow-up frequency should be individualized with more controls in case of suboptimal fT4 or TSH values. After dose adjustment, a next control is recommended 4 to 6 weeks later . Finally, adolescence and the period of transition to adult care are critical periods. Individualized follow-up schemes should be drawn up to assure normal growth and puberty in the adolescent, and fertility in the young adult . Adverse effects of LT4 Adverse effects of long-term LT4 treatment are rare or absent if adequately prescribed. Cases of pseudotumor cerebri or craniosynostosis have been described . However, relative macrocrania at the age of 18 months, but without any case of craniosynostosis, was reported in a cohort of 45 CH patients with documented fT4 concentrations above the reference interval during their first 6 to 9 months of life . In one cohort of young adults with CH, cardiovascular abnormalities were reported (impaired diastolic dysfunction and exercise capacity, and increased intima media thickness, IMT); however, the clinical relevance of these findings remains unknown. Moreover, in a large nationwide study, standardized mortality ratio in patients with CH was not increased for diseases of the circulatory system . Cardiac insufficiency LT4 has clear positive ino- and chronotropic effects on the heart. In newly diagnosed CH in newborns with congenital heart disease and impending heart failure, we therefore recommend to apply a lower LT4 starting dose—approximately 50% of the recommended dose—and to increase it guided by serum fT4 and TSH measurement, and the infant's clinical condition. Impaired bioavailability by diseases, drugs, or food LT4 is mainly absorbed in the proximal small intestine. Undiagnosed or untreated celiac disease will reduce LT4 absorption. Children with short bowel syndrome will also have reduced absorption . Recently, rectal administration of LT4 was been shown to be effective in a child with this condition . Increased type 3 deiodinase activity in large hemangiomas can cause increased metabolic clearance of administered LT4 and, with that, necessitate a higher LT4 dose . Bioavailability of LT4 can also be reduced by concomitant use of other medication. For example, proton pump inhibitors, calcium or iron, will decrease absorption, while antiepileptic medication (phenobarbital, phenytoin, and carbamazepine) and rifampicin will increase its metabolic clearance. Interactions need to be considered and can sometimes be overcome by avoiding concomitant ingestion . While in adults the recommended LT4 intake moment is 30–60 minutes before intake of food , such a recommendation is difficult to realize in infants . Pragmatically, LT4 should be administered at a fixed time with an equal interval to food intake every day to have a constant as possible LT4 absorption and, with that, as good as possible LT4 dose titration. Soy containing food products have been repeatedly shown to inhibit LT4 absorption in children with CH .
Summary

- We recommend measurement of serum fT4 and TSH concentrations before, or at least 4 hours after, the last (daily) LT4 administration (1/++0).
- We recommend evaluation of fT4 and TSH according to age-specific reference intervals (1/++0).
- The first treatment goal in neonates with primary CH is to rapidly increase the circulating amount of thyroid hormone, reflected by normalization of serum TSH; thereafter, TSH should be kept within the reference interval.
- If TSH is in the age-specific reference interval, fT4 concentrations above the upper limit of the reference interval can be accepted; in that case we recommend maintaining the same LT4 dose (1/++0).
- Any reduction of the LT4 dose should not be based on a single higher than normal fT4 concentration, unless TSH is suppressed (i.e., below the lower limit of the reference interval) or there are signs of overtreatment (e.g., jitteriness or tachycardia) (1/++0).
- The first clinical and biochemical follow-up evaluation should take place 1 to 2 weeks after the start of LT4 treatment (after 1 week at the latest in case of a starting dose close to 15 μg/kg per day or higher) (1/+00).
- Subsequent (clinical and biochemical) evaluations should take place every 2 weeks until complete normalization of serum TSH is achieved; thereafter, the evaluation frequency can be lowered to once every 1 to 3 months until the age of 12 months (1/+00).
- Between the ages of 12 months and 3 years, the evaluation frequency can be lowered to every 2 to 4 months; thereafter, evaluations should be carried out every 3 to 6 months until growth is completed (1/+00).
- If abnormal fT4 or TSH values are found, or if compliance is questioned, the evaluation frequency should be increased (2/+00).
- After a change of LT4 dose or formulation, an extra evaluation should be carried out after 4 to 6 weeks (2/+00).
- Adequate treatment throughout childhood is essential, and long-term under- or overtreatment, that is, TSH concentrations above or below the reference interval, should be avoided (1/++0).
- In contrast to adults, in neonates, infants, and children LT4 can be administered together with food (avoiding soy protein and vegetable fiber); more importantly, LT4 should be administered at the same time every day, also in relation to food intake. This approach can improve compliance and keeps LT4 absorption as consistent as possible, allowing optimal LT4 dose titration (2/+00).
- In case of an unexpected need for an LT4 dose increase, reduced absorption or increased metabolization of T4 due to other disease (e.g., gastrointestinal), food, or medication should be considered (2/+00); noncompliance may be the most frequent cause, especially in teenagers and adolescents.
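Read together, these recommendations define a simple age- and status-dependent lookup for the evaluation interval. The following Python sketch merely restates them for illustration; it is not clinical software, and the function name and its simplified inputs are hypothetical.

```python
# Illustrative restatement of the monitoring intervals recommended above.
# Not clinical software; function name and inputs are hypothetical.

def evaluation_interval_months(age_months: float, tsh_normalized: bool) -> tuple[float, float]:
    """Return the recommended (min, max) interval, in months, until the next
    clinical and biochemical evaluation of a child with primary CH."""
    if not tsh_normalized:
        return (0.5, 0.5)   # every 2 weeks until serum TSH has fully normalized
    if age_months < 12:
        return (1.0, 3.0)   # every 1-3 months until the age of 12 months
    if age_months < 36:
        return (2.0, 4.0)   # every 2-4 months between 1 and 3 years
    return (3.0, 6.0)       # every 3-6 months until growth is completed

# Abnormal fT4/TSH values, questionable compliance, or a recent change of dose
# or formulation would shorten these intervals (extra control after 4-6 weeks).
```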
Evidence

Shortly after the start of LT4 treatment

Repeated measurement of serum fT4 and TSH, and clinical assessment (especially for signs of overtreatment when using the highest starting dose), are the backbone of monitoring LT4 treatment in patients with primary CH. TSH normalizes more slowly than fT4. Therefore, the first treatment goal is normalization of fT4 as rapidly as possible. Since fT4 reflects the unbound, biologically active form of T4, measurement of fT4 is preferred to total T4. The second treatment goal is normalization of TSH within 4 weeks. Consequently, fT4 (or total T4) should guide dosing until TSH reaches the age-specific reference interval. Rapid normalization of TSH and keeping fT4 in the upper half of the age-specific reference interval have been shown to optimize the neurodevelopmental outcome.

Follow-up after the first weeks of LT4 treatment

There is no evidence for one optimal follow-up scheme. Recent studies focusing on optimization of biochemical thyroid function testing suggest the importance of frequent laboratory monitoring and dose adjustment during the first year of life. These studies found that (i) patients with severe CH (athyreosis and dysgenesis vs. dyshormonogenesis, with high TSH values at diagnosis) need more dose adjustments during the first year of life; (ii) the highest doses within the recommended range of 10–15 μg/kg per day resulted in more dose adjustments because of hyperthyroxinemia; and (iii) monthly thyroid function testing led to frequent dose adjustments during the first year of life (75% at 0–6 months of age, and 36% at 7–12 months of age). However, none of these studies reported neurodevelopmental outcome data, the most important long-term treatment goal in CH. With this in mind, the follow-up schemes chosen in the studies that reported normal IQ outcomes can be used as a recommendation. In the New Zealand and Berlin studies, treatment effectiveness in terms of normalization of serum parameters was tested weekly after the start of treatment until the parameters normalized. Thereafter, blood tests were done monthly during the first year and bimonthly during the second year in New Zealand, and every 3 months in the Berlin study. Obviously, follow-up schemes have to be personalized according to parents' capabilities and compliance.

The main biochemical target parameter in primary CH is TSH. The Berlin study reported all obtained serum parameters during the first 2 years of life in all treated children. This revealed that when TSH was within the reference interval, T4 was often elevated but T3 was normal. Of note, in adult patients with severe acquired hypothyroidism a higher serum fT4 is also necessary to reach normal TSH concentrations. This may be due to the lack of thyroidal T3 production, which needs to be compensated by a higher fT4 concentration. Data on the effects of clearly increased serum (f)T4 concentrations are scarce. In two studies, long-term follow-up after periods of overtreatment during the first 2 years of life suggested a decreased IQ at the age of 11 years, and an increased rate of attention deficit hyperactivity disorder. Earlier studies suggested adverse effects on attention span. However, Aleksander et al. showed no IQ differences between patients and siblings despite comparable periods of overtreatment.
As long as there is no definitive evidence of a negative effect of periods of overtreatment, dose reduction in case of an elevated fT4 should only be done after a second fT4 measurement, unless TSH is suppressed (this decision rule is sketched in code at the end of this section). Besides overtreatment, "resetting" of the hypothalamus–pituitary–thyroid feedback axis after intrauterine hypothyroidism has been proposed as a possible mechanism for the higher fT4 concentrations needed to normalize TSH, especially in patients younger than 12 months. Persistence of such mild hypothalamus–pituitary resistance has been reported in adult CH patients compared with patients with acquired hypothyroidism.

In summary, there is no definitive evidence for one optimal follow-up scheme based on studies with cognitive outcome as the main parameter. However, a normal cognitive outcome has been achieved with monthly, bimonthly, and 3-monthly controls during the first 2 to 3 years of life, after TSH normalization in the first weeks after diagnosis. Furthermore, patients with the most severe forms of CH treated with the highest range of the recommended LT4 starting dose are at increased risk of frequent dose adjustments in the first year of life because of elevated fT4 levels. Since the long-term neurological consequences of hyperthyroxinemia/periods of overtreatment are still not clarified, the follow-up frequency should be individualized, with more controls in case of suboptimal fT4 or TSH values. After a dose adjustment, the next control is recommended 4 to 6 weeks later. Finally, adolescence and the period of transition to adult care are critical periods. Individualized follow-up schemes should be drawn up to assure normal growth and puberty in the adolescent, and fertility in the young adult.

Adverse effects of LT4

Adverse effects of long-term LT4 treatment are rare or absent if it is adequately prescribed. Cases of pseudotumor cerebri and craniosynostosis have been described. However, relative macrocrania at the age of 18 months, without any case of craniosynostosis, was reported in a cohort of 45 CH patients with documented fT4 concentrations above the reference interval during their first 6 to 9 months of life. In one cohort of young adults with CH, cardiovascular abnormalities were reported (impaired diastolic function and exercise capacity, and increased intima-media thickness, IMT); however, the clinical relevance of these findings remains unknown. Moreover, in a large nationwide study, the standardized mortality ratio in patients with CH was not increased for diseases of the circulatory system.

Cardiac insufficiency

LT4 has clear positive ino- and chronotropic effects on the heart. In newly diagnosed CH in newborns with congenital heart disease and impending heart failure, we therefore recommend applying a lower LT4 starting dose—approximately 50% of the recommended dose—and increasing it guided by serum fT4 and TSH measurements and the infant's clinical condition.

Impaired bioavailability by diseases, drugs, or food

LT4 is mainly absorbed in the proximal small intestine. Undiagnosed or untreated celiac disease will reduce LT4 absorption, as will short bowel syndrome in children. Recently, rectal administration of LT4 was shown to be effective in a child with short bowel syndrome. Increased type 3 deiodinase activity in large hemangiomas can increase the metabolic clearance of administered LT4 and thereby necessitate a higher LT4 dose. Bioavailability of LT4 can also be reduced by concomitant use of other medication.
For example, proton pump inhibitors, calcium, or iron will decrease absorption, while antiepileptic drugs (phenobarbital, phenytoin, and carbamazepine) and rifampicin will increase its metabolic clearance. Interactions need to be considered and can sometimes be overcome by avoiding concomitant ingestion. While in adults the recommended moment of LT4 intake is 30–60 minutes before food, such a recommendation is difficult to realize in infants. Pragmatically, LT4 should be administered at a fixed time, with an equal interval to food intake, every day, to keep LT4 absorption as constant as possible and thereby allow optimal LT4 dose titration. Soy-containing food products have repeatedly been shown to inhibit LT4 absorption in children with CH.
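The dose-reduction rule discussed earlier in this section (do not act on a single elevated fT4 unless TSH is suppressed or there are clinical signs of overtreatment) can be written as a small decision function. A minimal sketch, purely illustrative, with hypothetical names and boolean inputs:

```python
# Hedged sketch of the dose-reduction guidance above; illustrative only,
# not a substitute for clinical judgment. All names are hypothetical.

def should_reduce_lt4_dose(ft4_elevated: bool,
                           ft4_elevated_on_repeat: bool,
                           tsh_suppressed: bool,
                           overtreatment_signs: bool) -> bool:
    """Reduce the dose immediately if TSH is suppressed or there are clinical
    signs of overtreatment (e.g., jitteriness, tachycardia); otherwise only
    after a confirmatory second elevated fT4 measurement."""
    if not ft4_elevated:
        return False
    if tsh_suppressed or overtreatment_signs:
        return True                   # act without waiting for confirmation
    return ft4_elevated_on_repeat     # otherwise require a second measurement
```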
Summary

- In severe forms of central CH (fT4 < 5 pmol/L), we also recommend starting LT4 treatment as soon as possible after birth, at the same doses as in primary CH (at least 10 μg/kg per day, see Section 3.1), to bring fT4 rapidly within the normal range (1/++0).
- In milder forms of central CH, we suggest starting treatment at a lower LT4 dose (5–10 μg/kg per day) to reduce the risk of overtreatment (1/++0).
- In newborns with central CH, we recommend monitoring treatment by measuring fT4 and TSH according to the same schedule as for primary CH; serum fT4 should be kept above the mean/median value of the age-specific reference interval; if TSH is low before treatment, subsequent TSH determinations can be omitted (1/+00).
- When under- or overtreatment is suspected in a patient with central CH, TSH, fT3, or T3 can be measured (1/+00).
- When fT4 is around the lower limit of the reference interval, undertreatment should be considered, particularly if TSH >1.0 mU/L (1/+00).
- When serum fT4 is around or above the upper limit of the reference interval, overtreatment should be considered (assuming that LT4 has not been administered just before blood withdrawal), particularly if associated with clinical signs of thyrotoxicosis or a high (f)T3 concentration (1/+00).

Evidence

Just like primary CH, central CH is treated with daily administration of LT4 (orally; tablets or liquid dosage form). The biggest differences between the treatment of primary and central CH lie in the monitoring of treatment—with serum fT4 (instead of TSH) being the most important parameter—and in the LT4 starting dose. It is important to realize that in central CH, a low TSH concentration does not point to overtreatment. The (biochemical) aim of LT4 treatment is bringing and keeping the fT4 concentration in the upper half of the age-specific fT4 reference interval. Although randomized clinical trials testing this approach in children are lacking, studies in adults give some support. Central CH can be a severe condition (fT4 at diagnosis <5 pmol/L), but most cases can be classified as mild to moderate (fT4 at diagnosis 5–15 pmol/L). Although studies investigating the optimal starting dose in central CH are lacking, clinical experience has taught that an LT4 starting dose of 10–15 μg/kg per day in mild-to-moderate cases quickly results in supraphysiological fT4 concentrations. So, with the exception of severe cases, a lower starting dose of 5–10 μg/kg per day is advisable. With regard to the treatment monitoring frequency, the schedule for primary CH should be followed.
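As a worked illustration of the starting-dose guidance above, the sketch below maps the diagnostic fT4 to the suggested dose range. The thresholds come from the text; the function itself and its name are hypothetical.

```python
# Illustrative sketch of the central CH starting-dose guidance above.
# Thresholds are from the text; the function is hypothetical, not clinical software.

def central_ch_starting_dose_ug_per_kg(ft4_pmol_per_l: float) -> tuple[float, float]:
    """Return the suggested LT4 starting-dose range (μg/kg per day)."""
    if ft4_pmol_per_l < 5.0:    # severe central CH
        return (10.0, 15.0)     # treat like primary CH (at least 10 μg/kg per day)
    return (5.0, 10.0)          # mild-to-moderate: lower dose to avoid overtreatment

# Example: a 4 kg neonate with a diagnostic fT4 of 9 pmol/L would start on
# roughly 20-40 μg LT4 per day (4 kg x 5-10 μg/kg).
```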
Summary

- When no definitive diagnosis of permanent CH was made in the first weeks or months of life, re-evaluation of the HPT axis after the age of 2 to 3 years is indicated, particularly in children with a GIS and in those with presumed isolated central CH (1/++0).
- For a precise diagnosis, LT4 treatment should be phased out over a 4- to 6-week period or simply stopped, and full re-evaluation should be carried out after 4 weeks, consisting of (at least) fT4 and TSH measurement. If primary hypothyroidism is confirmed (TSH ≥10 mU/L), consider thyroid imaging and, if possible, genetic testing; if central CH is likely (fT4 below the lower limit of the reference interval in combination with a low-normal or only mildly elevated TSH), consider evaluating the other anterior pituitary functions and genetic testing. If TSH is above the upper limit of the reference interval but <10 mU/L (primary CH), or fT4 is just above the lower limit of the reference interval (central CH), continue withdrawal and retest in another 3 to 4 weeks (1/++0).
- If a child with no permanent CH diagnosis and a GIS requires an LT4 dose <3 μg/kg per day at the age of 6 months, re-evaluation can already be done at that time (1/++0), as illustrated in the sketch at the end of this section.
- We recommend avoiding iodine as an antiseptic during the peri- and neonatal period, as it can cause transient CH (1/++0).

Evidence

In recent years, the prevalence of transient CH has steadily increased. In a number of studies, factors have been identified that increase the likelihood of transient disease, such as sex (more often in boys), low birthweight, neonatal morbidity requiring intensive care, race/ethnicity (more often in nonwhite patients), and less severe CH at diagnosis (assessed by screening TSH, or diagnostic TSH or fT4). In contrast, factors such as prematurity, other congenital abnormalities, a family history of thyroid disease, abnormal thyroid morphology (thyroid hypoplasia at diagnosis), TSH elevation >10 mU/L after the age of 1 year (when infants outgrow the LT4 dose), and a higher LT4 dose requirement at 1 to 3 years of age are associated with permanent CH (with conflicting results between studies for the factor dose requirement). Recent studies have shown that early treatment withdrawal to assess the necessity of further treatment can be considered and done from the age of 6 months onward, particularly in patients with a GIS, a negative first-degree family history of CH, or a low LT4 dose requirement. Saba et al. investigated 92 patients with CH and a GIS and found 49 of them (54%) to have transient CH. In this study, the optimal LT4 dose cut-off values for predicting transient CH at the ages of 6 and 12 months were 3.2 and 2.5 μg/kg per day, respectively, with a sensitivity of 71% at both time points, and a specificity of 79% and 78% at the ages of 6 and 12 months, respectively (with values below these thresholds considered predictive of transient CH). In the study by Oron et al., 17 out of 84 patients with a GIS (20%) turned out to have transient CH. The optimal LT4 dose cut-off value at the age of 6 months was 2.2 μg/kg per day, with a sensitivity of 90% and a specificity of 57%. Both studies highlight the need for careful clinical and biological monitoring to identify children who do not require long-term treatment. Medication that interferes with thyroid function, in particular iodine and iodomimetics, may result in transient but profound hypothyroidism.
The use of iodine as a skin antiseptic, such as povidone–iodine (PVP-I), is therefore not recommended in obstetrics and neonatology, since it easily reaches the fetal or neonatal thyroid gland, causing transient hypothyroidism (through skin and placenta in mothers, and skin in neonates). This effect may be more profound in prematurely born babies, as escape from the Wolff–Chaikoff effect does not mature until term. Mothers should be asked about consumption of iodine-rich nutritional food or supplements, which can also induce transient CH.
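The dose-per-kg criterion for early re-evaluation reduces to simple arithmetic. A minimal sketch, assuming the <3 μg/kg per day threshold for a 6-month-old with a GIS from the Summary; the study-specific cut-offs (3.2, 2.5, and 2.2 μg/kg per day) appear only in the comments, and all function names are hypothetical.

```python
# Illustrative sketch of the early re-evaluation criterion above (GIS = gland
# in situ). Threshold from the Summary; study cut-offs (Saba et al.: 3.2 and
# 2.5 μg/kg per day at 6 and 12 months; Oron et al.: 2.2 μg/kg per day at
# 6 months) are context only. Hypothetical names; not clinical software.

def dose_per_kg(daily_dose_ug: float, weight_kg: float) -> float:
    """LT4 requirement in μg/kg per day."""
    return daily_dose_ug / weight_kg

def early_reevaluation_candidate(daily_dose_ug: float, weight_kg: float,
                                 gland_in_situ: bool) -> bool:
    """Flag a 6-month-old with a GIS and an LT4 requirement < 3 μg/kg per day
    as a candidate for early re-evaluation of the HPT axis."""
    return gland_in_situ and dose_per_kg(daily_dose_ug, weight_kg) < 3.0

# Example: 25 μg/day at 7.5 kg -> 3.3 μg/kg per day (not yet a candidate);
# 18.75 μg/day at 7.5 kg -> 2.5 μg/kg per day (candidate, if GIS).
```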
Summary

- In women with CH who are planning pregnancy, we strongly recommend optimization of LT4 treatment; in addition, these women should be counseled regarding the higher need for LT4 during pregnancy (1/++0).
- fT4 (or total T4) and TSH levels should be monitored every 4 to 6 weeks during pregnancy, aiming at TSH concentrations in accordance with current guidelines on treatment of hypothyroidism during pregnancy, that is, <2.5 mU/L throughout gestation in patients treated with LT4 (1/+00).
- In pregnant women with central CH, the LT4 dose should be increased aiming at an fT4 concentration above the mean/median value of the trimester-specific reference interval (1/+00).
- After delivery, we recommend lowering the LT4 dose to the preconception dose; additional thyroid function testing should be performed at ∼6 weeks postpartum (1/++0).
- All pregnant women should ingest ∼250 μg iodine per day (1/++0).

Evidence

Optimal management of pregnant women with CH requires knowledge and understanding of the normal physiological changes. In early pregnancy, before and during the development of the functioning fetal thyroid gland, the fetus depends on the supply of TH by the mother, requiring an optimal iodine status. Indeed, since the fetal thyroid gland is not functionally mature before weeks 18–20 of pregnancy, the fetus largely depends on the supply of maternal T4 during the early stages of intrauterine brain development, making fT4 the most important hormone for the fetus. During the second half of pregnancy, fetal thyroid hormones are of both maternal and fetal origin.

Overt and subclinical maternal hypothyroidism have been associated with adverse pregnancy outcomes as well as with neurodevelopmental deficits in the offspring, particularly if the dysfunction occurs early in pregnancy. With respect to adverse pregnancy outcomes, maternal CH is associated with an increased risk of gestational hypertension, emergency cesarean section, induced labor for vaginal delivery, and preterm delivery. TSH ≥10 mU/L during the first 3 to 6 months of pregnancy is associated with a higher risk of preterm delivery and fetal macrosomia. These associations were not found in women with satisfactory control of hypothyroidism, that is, TSH <10 mU/L; yet these women did have a higher risk of induced labor for vaginal delivery. Children born to mothers with CH were found to have a higher risk of poor motor coordination, but not of deficits in other developmental domains such as mobility, communication, and motor and language skills. However, children born to mothers with TSH ≥10 mU/L were more likely to have low motor or communication skills scores. It remains unclear whether these adverse effects modify subsequent neurodevelopment.

During pregnancy, the TH requirement increases, and most LT4-treated women require a dose increase of up to 30%. Women with athyreosis, the most severe form of CH, require the highest doses, and treatment should aim to keep TSH concentrations <2.5 mU/L throughout pregnancy. Therefore, careful monitoring of LT4 treatment of pregnant women with hypothyroidism is extremely important.
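As a back-of-the-envelope illustration of the expected dose increase during pregnancy (up to ∼30%), consider the hypothetical helper below; actual dosing must of course be titrated every 4 to 6 weeks against the TSH and fT4 targets above.

```python
# Back-of-the-envelope sketch of the pregnancy dose increase mentioned above
# (up to ~30%); illustrative only, and all names are hypothetical.

def pregnancy_dose_range_ug(preconception_dose_ug: float,
                            max_increase: float = 0.30) -> tuple[float, float]:
    """Return the expected daily LT4 dose range (μg/day) during pregnancy."""
    return (preconception_dose_ug, preconception_dose_ug * (1.0 + max_increase))

# Example: a woman stable on 100 μg/day before conception may need up to
# 130 μg/day, titrated to keep TSH < 2.5 mU/L throughout gestation, and
# lowered back to the preconception dose after delivery.
```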
4.1. Neurodevelopmental outcomes

Summary

- Psychomotor development and school progression should be periodically evaluated in all children with CH; speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation (1/++0).
- In the small proportion of children with CH who do display significant psychomotor developmental delay, or who have syndromic CH with brain abnormalities, it is crucial to rule out causes of intellectual impairment other than CH (1/+00).
- Not just neonatal but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0).

Evidence

In the vast majority of early and adequately treated children with CH, neurodevelopmental and school outcomes are normal, and intellectual disability—defined as an IQ <70—has virtually disappeared. In the past, patients with severe CH treated with a low initial LT4 dose had lower IQ scores (although within the normal range) and subtle neurological deficits in cognitive and motor development when compared with control populations, including healthy siblings. In the past two decades, early treatment with a high initial LT4 dose (≥10 μg/kg per day) and improvements in the management of CH patients have resulted in better cognitive and motor developmental outcomes, comparable with those of sibling controls. However, despite early and adequate treatment, patients with severe CH may still have subtle cognitive and motor deficits, and lower educational attainment. These deficits may reflect prenatal brain damage due to TH insufficiency in utero, not completely reverted by postnatal treatment. Even though transplacental supply of maternal T4 may protect the fetal brain from severe neurological impairment, it may not be sufficient to protect it from severe fetal hypothyroidism. Children with CH may also display reduced hippocampal volume and abnormal cortical morphology across brain regions (thinning or thickening), which may explain subtle and specific deficits in memory, language, sensorimotor, and visuospatial function. In addition, early episodes of both under- and overtreatment may be associated with permanent behavioral problems in a limited number of preadolescent children with CH. Overtreatment during the first months of life (with the exception of fT4 above the normal range with non-suppressed TSH and/or without signs or symptoms of hyperthyroidism), a critical period for brain development, may be associated with attention deficit at school age and lower IQ scores. Finally, other factors such as socioeducational status and poor adherence to treatment may also negatively affect cognitive outcome and educational attainment. Therefore, psychomotor development and school progression should be periodically evaluated in all children with CH. In case of doubt, evaluation by a specialized team is indicated at specific ages (12, 18, 24, and 36 months; 5, 8, and 14 years) to monitor the progression of specific developmental skills. Speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation.
In the small proportion of children with CH who do display a significant delay in psychomotor development, it is necessary to rule out causes of intellectual impairment other than CH. Undiagnosed hearing impairment can adversely affect speech development, school performance, and quality of life. TH plays a role in the development of cochlear and auditory function. Despite early and adequate LT4 treatment, mild and subclinical hearing impairment has been reported in ∼20% to 25% of adolescents with CH. The risk of hearing loss was higher than in healthy controls (3%) and closely associated with the severity of CH. Young adults with CH reported hearing impairment more frequently (9.5%) than the general population (2.5%). Hearing loss was mostly bilateral, mild to moderate, and of the sensorineural type; it concerned high or very high frequencies, and in some cases required hearing aids. Even after exclusion of patients with Pendred syndrome, the risk of developing a hearing impairment seems to be more than three times higher in CH subjects than in the general population. Not just neonatal but also repeated hearing tests should be carried out before school age and, if required, during follow-up.

4.2. Development of goiter in thyroid dyshormonogenesis

Summary

- Children and adolescents with primary CH due to dyshormonogenesis may develop goiter and nodules; in these cases, serum TSH should be carefully targeted in the lower part of the normal range, and periodic ultrasound investigation is recommended to monitor thyroid volume (2/++0).
- Since a few cases of thyroid cancer have been reported, fine needle aspiration biopsy for cytology should be performed in case of suspicious nodules on ultrasound investigation (1/+00).

Evidence

Children and adolescents with primary CH due to dyshormonogenesis (mainly TPO gene, but also SLC5A5/NIS, SLC26A4/PDS, DUOX, and TG gene mutations) may have an increased risk of developing goiter and thyroid nodules, and may even have an increased risk of malignancy. However, to date only a few cases of thyroid cancer (either papillary or follicular) have been reported in patients with long-standing CH. In some cases, goiter was already present and thyroid nodules (isolated or multiple) developed despite apparently adequate LT4 treatment. In other cases, poor compliance with treatment, with persistently high TSH levels during adolescence, was the probable cause. Therefore, TSH should be targeted in the lower part of the normal range during treatment of dyshormonogenic CH. Despite the rare occurrence of thyroid carcinoma in CH patients, we recommend periodic neck ultrasound—for example, every 2 to 3 years—in children and adolescents with goitrous CH due to dyshormonogenesis (including NIS gene mutations), to identify nodules that may require fine needle aspiration biopsy to rule out thyroid carcinoma.

4.3. Growth, puberty, and fertility

Summary

- Adequately treated children with nonsyndromic CH have normal growth and puberty, and their fertility does not differ from that of individuals who do not have CH (1/+++).

Evidence

Early and adequately treated children with nonsyndromic CH have normal growth and pubertal development. Adult height is normal and comparable with that of siblings, with no effects of the severity of CH at diagnosis, CH etiology, or LT4 starting dose; moreover, in the majority of children, adult height is above the target height in both sexes. Onset of puberty occurs at the normal age in the vast majority of CH patients and progresses normally in both sexes.
The same applies to age at menarche and menstrual cycles. In adults, fertility is generally normal. However, women with CH may have an increased risk of adverse pregnancy outcomes. In addition, their offspring is at risk of poorer motor coordination (see also Section 3.5).

4.4. Bone, metabolic, and cardiovascular health

Summary

- Adequately treated children with nonsyndromic CH also have normal bone, metabolic, and cardiovascular health (1/++0).

Evidence

Thyroid hormones play an important role in skeletal growth and bone mineral homeostasis. At birth, skeletal maturation is delayed in the majority of CH patients with severe hypothyroidism; however, within the first months of life, LT4 treatment rapidly normalizes bone maturation. Since thyroid hormones have major effects on bone remodeling, LT4 overtreatment may increase bone turnover with higher bone resorption than formation, resulting in progressive bone loss. Yet, long-term studies in children and young adults with CH have shown normal bone mineral density, suggesting that early started and adequate LT4 treatment is not harmful to bone health. Given the importance of sufficient calcium intake, patients with CH, in addition to adequate LT4 treatment, should consume 800 to 1200 mg calcium daily; if dietary calcium intake is low, supplements should be added.

Body mass index and body composition are generally normal in children and adults with CH, and comparable with those of the general population. However, earlier adiposity rebound and increased risks of being overweight or obese have been reported in up to 37% of young adults with CH. Therefore, lifestyle interventions, including diet and physical exercise, should be encouraged to avoid metabolic abnormalities.

In addition to an increased risk of congenital heart disease, neonates with untreated CH may have increased aortic intima-media thickness (IMT), increased serum cholesterol levels, and impaired cardiac function, all reversed by early LT4 treatment. Young adults with CH have normal blood pressure, glucose and lipid metabolism, and carotid IMT. However, repeated episodes of inadequate treatment may place them at risk of subtle cardiovascular dysfunction such as low exercise capacity, impaired diastolic function, increased IMT, and mild endothelial dysfunction. Whether these subtle abnormalities result in impaired quality of life or an increased risk of cardiovascular disease needs to be clarified further. In any case, good adherence to treatment in adolescents and young adults with CH is mandatory for optimal metabolic and cardiovascular health.

4.5. Patient and professional education, adherence, and health-related quality of life

Summary

- Medical education about CH should be improved at all levels, with regular updates (1/+++).
- Education of parents, starting at the time of diagnosis, and later on of the patient, is essential not only throughout childhood but also during transition to adult care and in women during pregnancy (1/+++).
- Since adherence to treatment may influence outcomes, it should be promoted throughout life (1/++0).

Evidence

It goes without saying that medical professionals should have basic knowledge about CH. The education of parents, starting at diagnosis and updated regularly, and of CH patients throughout childhood is mandatory. A good understanding of CH is essential to manage parental anxiety and to promote treatment adherence throughout life. Both are important conditions for assuring optimal outcomes in CH.
Adequate education of patients is also important to improve self-esteem and health-related quality of life (HRQoL), and to assure treatment adherence, particularly during adolescence and pregnancy. The perception of the impact of CH on behavior varies with age and differs between children and their parents. Most, but not all, studies suggest that children and young adults with CH have an increased risk of lower HRQoL. Young adults with CH do not report problems concerning autonomy and sexual functioning. However, compared with the general population, they experience lower HRQoL with respect to cognitive and social functioning, daily activities, aggressiveness, and self-worth, a difference already present in childhood. Moreover, young adults with CH are more likely to report associated chronic diseases, hearing impairment, visual problems, and overweight than their peers. Fewer attain the highest socioeconomic category and full-time employment, and more are still living with their parents. CH severity at diagnosis, long-term treatment adequacy, and the presence of other chronic health conditions seem to be the main determinants of educational achievement and HRQoL scores. Yet, despite these subtle disadvantages, most patients are well integrated into society.

4.6. Transition to adult care

Summary

- When patients are transferred from pediatric to adult care, the main aims are continuity of care and, with that, optimal clinical outcomes and quality of life, and to increase understanding of CH and promote self-management (1/+++).

Evidence

The period of transition from pediatric to adult care can be challenging, since it is associated with an increased risk of poor treatment compliance and inadequate follow-up, which may have repercussions in terms of increased morbidity and poor educational and social outcomes. Family structure and parental involvement are important for preventing and tackling this problem. Finally, given the female preponderance in all thyroid diseases and the finding that (subclinical) hypothyroidism may be associated with subfertility and adverse pregnancy and offspring outcomes, improvement and maintenance of disease control in young women are crucial.
Summary Psychomotor development and school progression should be periodically evaluated in all children with CH; speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation (1/++0). In the small proportion of children with CH who do display significant psychomotor developmental delay and syndromic CH with brain abnormalities, it is crucial to rule out other causes of intellectual impairment than CH (1/+00). Not just neonatal but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0). Evidence In the vast majority of early and adequately treated children with CH, neurodevelopmental and school outcomes level are normal , and intellectual disability—defined as an IQ <70—has virtually disappeared . In the past, patients with severe CH treated with a low initial LT4 dose had lower IQ scores (although within normal range), and subtle neurological deficits in cognitive and motor development when compared with control populations, including healthy siblings . In the past two decades, early treatment with a high initial LT4 (≥10 μg/kg per day) and improvement in the management of CH patients has resulted in better cognitive and motor developmental outcomes, comparable with those of sibling controls . However, despite early and adequate treatment, patients with severe CH may still have subtle cognitive and motor deficits, and lower educational attainment . These deficits may reflect prenatal brain damage due to TH insufficiency in utero , not completely reverted by postnatal treatment. Even though transplacental supply of maternal T4 may protect the fetal brain from severe neurological impairment, it may not be sufficient to protect from severe fetal hypothyroidism . Children with CH may also display reduced hippocampal volume and abnormal cortical morphology among brain regions (thinning or thickening) , which may explain subtle and specific deficits in memory, language, sensorimotor, and visuospatial function . In addition, early episodes of both under- and overtreatment may be associated with permanent behavioral problems in a limited number of preadolescent children with CH . Overtreatment during the first months of life (with the exception of fT4 above the normal range with not supressed TSH and/or without signs or symptoms of hyperthyroidism), a critical period for brain development, may be associated with attention deficit at the school age , and lower IQ scores . Finally, other factors such as socioeducational status and poor adherence to the treatment may also negatively affect cognitive outcome and educational attainement. Therefore, psychomotor development and school progression should be periodically evaluated in all children with CH. In case of doubt, evaluation by a specialized team is indicated at specific ages (12, 18, 24, and 36 months, 5, 8, and 14 years) to monitor progression of specific developmental skills . Speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation. In the small proportion of children with CH who do display significant delay in psychomotor development, it is necessary to rule out other causes of intellectual impairment than CH. Undiagnosed hearing impairment can adversely impair speech development, school performance, and quality of life . TH plays a role in cochlear and auditory function development . 
Despite early and adequate LT4 treatment, mild and subclinical hearing impairment has been reported in ∼20% to 25% of adolescents with CH. The risk of hearing loss was higher than in healthy controls (3%), and closely associated with the severity of CH . Young adults with CH reported hearing impairment more frequently (9.5%) than the general population (2.5%) . Hearing loss was mostly bilateral, mild to moderate, of the sensorineural type, concerned high or very high frequencies, and in some cases required hearing aids. Even after exclusion of patients with Pendred syndrome, the risk of developing a hearing impairment seems to be more than three times higher in CH subjects than in the general population . Not just neonatal, but also repeated hearing tests should be carried out before school age and, if required, during follow-up.
Psychomotor development and school progression should be periodically evaluated in all children with CH; speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation (1/++0). In the small proportion of children with CH who do display significant psychomotor developmental delay and syndromic CH with brain abnormalities, it is crucial to rule out other causes of intellectual impairment than CH (1/+00). Not just neonatal but also repeated hearing tests should be carried out before school age and, if required, during further follow-up (2/++0).
In the vast majority of early and adequately treated children with CH, neurodevelopmental and school outcomes level are normal , and intellectual disability—defined as an IQ <70—has virtually disappeared . In the past, patients with severe CH treated with a low initial LT4 dose had lower IQ scores (although within normal range), and subtle neurological deficits in cognitive and motor development when compared with control populations, including healthy siblings . In the past two decades, early treatment with a high initial LT4 (≥10 μg/kg per day) and improvement in the management of CH patients has resulted in better cognitive and motor developmental outcomes, comparable with those of sibling controls . However, despite early and adequate treatment, patients with severe CH may still have subtle cognitive and motor deficits, and lower educational attainment . These deficits may reflect prenatal brain damage due to TH insufficiency in utero , not completely reverted by postnatal treatment. Even though transplacental supply of maternal T4 may protect the fetal brain from severe neurological impairment, it may not be sufficient to protect from severe fetal hypothyroidism . Children with CH may also display reduced hippocampal volume and abnormal cortical morphology among brain regions (thinning or thickening) , which may explain subtle and specific deficits in memory, language, sensorimotor, and visuospatial function . In addition, early episodes of both under- and overtreatment may be associated with permanent behavioral problems in a limited number of preadolescent children with CH . Overtreatment during the first months of life (with the exception of fT4 above the normal range with not supressed TSH and/or without signs or symptoms of hyperthyroidism), a critical period for brain development, may be associated with attention deficit at the school age , and lower IQ scores . Finally, other factors such as socioeducational status and poor adherence to the treatment may also negatively affect cognitive outcome and educational attainement. Therefore, psychomotor development and school progression should be periodically evaluated in all children with CH. In case of doubt, evaluation by a specialized team is indicated at specific ages (12, 18, 24, and 36 months, 5, 8, and 14 years) to monitor progression of specific developmental skills . Speech delay, attention and memory problems, and behavioral problems are reasons for additional evaluation. In the small proportion of children with CH who do display significant delay in psychomotor development, it is necessary to rule out other causes of intellectual impairment than CH. Undiagnosed hearing impairment can adversely impair speech development, school performance, and quality of life . TH plays a role in cochlear and auditory function development . Despite early and adequate LT4 treatment, mild and subclinical hearing impairment has been reported in ∼20% to 25% of adolescents with CH. The risk of hearing loss was higher than in healthy controls (3%), and closely associated with the severity of CH . Young adults with CH reported hearing impairment more frequently (9.5%) than the general population (2.5%) . Hearing loss was mostly bilateral, mild to moderate, of the sensorineural type, concerned high or very high frequencies, and in some cases required hearing aids. 
Even after exclusion of patients with Pendred syndrome, the risk of developing a hearing impairment seems to be more than three times higher in CH subjects than in the general population . Not just neonatal, but also repeated hearing tests should be carried out before school age and, if required, during follow-up.
Summary

Children and adolescents with primary CH due to dyshormonogenesis may develop goiter and nodules; in these cases, serum TSH should be carefully targeted in the lower part of the normal range, and periodic ultrasound investigation is recommended to monitor thyroid volume (2/++0). Since a few cases of thyroid cancer have been reported, fine needle aspiration biopsy for cytology should be performed in case of suspicious nodules on ultrasound investigation (1/+00).

Evidence

Children and adolescents with primary CH due to dyshormonogenesis (mainly TPO gene, but also SLC5A5/NIS, SLC26A4/PDS, DUOX, and TG gene mutations) may have an increased risk of developing goiter and thyroid nodules, and may even have an increased risk of malignancy. However, to date only a few cases of thyroid cancer (either papillary or follicular) have been reported in patients with long-standing CH. In some cases, goiter was already present and thyroid nodules (isolated or multiple) developed despite apparently adequate LT4 treatment. In other cases, poor compliance with treatment, with persistently high TSH levels during adolescence, was the probable cause. Therefore, TSH should be targeted in the lower part of the normal range during treatment of dyshormonogenic CH. Despite the rare occurrence of thyroid carcinoma in CH patients, we recommend periodic neck US (for example, every 2 to 3 years) in children and adolescents with goitrous CH due to dyshormonogenesis (including NIS gene mutations), to identify nodules that may require fine needle aspiration biopsy to rule out thyroid carcinoma.
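To make the monitoring rule above concrete, here is a minimal Python sketch (our own illustration; the reference interval shown is a placeholder, not a guideline value, and "lower part" is interpreted as the lower half of the interval) that flags whether a TSH value sits in the lower part of a given reference interval.

```python
# Minimal illustration of the TSH-targeting rule for dyshormonogenic CH
# described above. The reference interval and the definition of "lower
# part" (lower half) are placeholder assumptions, not guideline values.

def tsh_in_lower_part(tsh_mU_L: float,
                      ref_low: float = 0.5,
                      ref_high: float = 4.0) -> bool:
    """True if TSH lies within the lower half of the reference interval."""
    midpoint = (ref_low + ref_high) / 2
    return ref_low <= tsh_mU_L <= midpoint

# Example: a TSH of 1.2 mU/L is on target; 3.5 mU/L should prompt
# reassessment of dose or adherence.
assert tsh_in_lower_part(1.2)
assert not tsh_in_lower_part(3.5)
```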
Summary

Adequately treated children with nonsyndromic CH have normal growth and puberty, and their fertility does not differ from that of individuals who do not have CH (1/+++).

Evidence

Early and adequately treated children with nonsyndromic CH have normal growth and pubertal development. Adult height is normal and comparable with that of siblings, with no effect of severity of CH at diagnosis, CH etiology, or LT4 starting dose; moreover, in the majority of children, adult height is above the target height in both sexes. Onset of puberty occurs at the normal age in the vast majority of CH patients and progresses normally in both sexes. The same applies to age at menarche and menstrual cycles. In adults, fertility is generally normal. However, women with CH may have an increased risk of adverse pregnancy outcomes. In addition, their offspring are at risk for poorer motor coordination (see also Section 3.5).
Summary

Adequately treated children with nonsyndromic CH also have normal bone, metabolic, and cardiovascular health (1/++0).

Evidence

Thyroid hormones play an important role in skeletal growth and bone mineral homeostasis. At birth, skeletal maturation is delayed in the majority of CH patients with severe hypothyroidism; however, within the first months of life, LT4 treatment rapidly normalizes bone maturation. Since thyroid hormones have major effects on bone remodeling, LT4 overtreatment may increase bone turnover, with higher bone resorption than formation resulting in progressive bone loss. Yet long-term studies in children and young adults with CH have shown normal bone mineral density, suggesting that early and adequate LT4 treatment is not harmful to bone health. Given the importance of sufficient calcium intake, patients with CH, in addition to adequate LT4 treatment, should consume 800 to 1200 mg calcium daily; if dietary calcium intake is low, supplements should be added.

Body mass index and composition are generally normal in children and adults with CH, and comparable with those of the general population. However, earlier adiposity rebound and an increased risk of being overweight or obese have been reported in up to 37% of young adults with CH. Therefore, lifestyle interventions, including diet and physical exercise, should be encouraged to avoid metabolic abnormalities.

In addition to an increased risk of congenital heart disease, neonates with untreated CH may have increased aortic intima-media thickness (IMT), elevated serum cholesterol levels, and impaired cardiac function, all reversed by early LT4 treatment. Young adults with CH have normal blood pressure, glucose and lipid metabolism, and carotid IMT. However, repeated episodes of inadequate treatment may place them at risk of subtle cardiovascular dysfunction such as low exercise capacity, impaired diastolic function, increased IMT, and mild endothelial dysfunction. Whether these subtle abnormalities result in impaired quality of life or in an increased risk of cardiovascular disease needs to be further clarified. In any case, good adherence to treatment in adolescents and young adults with CH is mandatory for optimal metabolic and cardiovascular health.
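As a small worked example of the calcium recommendation above, this Python sketch (illustrative; the function name and the default target chosen within the 800 to 1200 mg range are our own assumptions) computes the supplement needed when dietary intake falls short.

```python
# Illustration of the daily calcium recommendation above (800-1200 mg).
# The chosen default target and the function name are our own assumptions.

def calcium_supplement_mg(dietary_intake_mg: float,
                          daily_target_mg: float = 800.0) -> float:
    """Supplement needed to reach the daily calcium target, in mg."""
    if not 800.0 <= daily_target_mg <= 1200.0:
        raise ValueError("target should lie within the recommended range")
    return max(0.0, daily_target_mg - dietary_intake_mg)

# A child ingesting ~500 mg/day from diet would need ~300 mg as supplement.
print(calcium_supplement_mg(500))  # 300.0
```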
Summary

Medical education about CH should be improved at all levels, with regular updates (1/+++). Education of parents, starting at the time of diagnosis, and later of the patient, is essential not only throughout childhood but also during transition to adult care and in women during pregnancy (1/+++). Since adherence to treatment may influence outcomes, it should be promoted throughout life (1/++0).

Evidence

It goes without saying that medical professionals should have basic knowledge about CH. The education of parents, starting at diagnosis and updated regularly, and of CH patients throughout childhood is mandatory. A good understanding of CH is essential to manage parental anxiety and to promote treatment adherence throughout life; both are important conditions for optimal outcomes in CH. Adequate education of patients is also important to improve self-esteem and health-related quality of life (HRQoL), and to ensure treatment adherence, particularly during adolescence and pregnancy.

The perception of the impact of CH on behavior varies with age and differs between children and their parents. Most, but not all, studies suggest that children and young adults with CH have an increased risk of lower HRQoL. Young adults with CH do not report problems concerning autonomy and sexual functioning. However, compared with the general population, they experience lower HRQoL with respect to cognitive and social functioning, daily activities, aggressiveness, and self-worth, and this was already present in childhood. Moreover, young adults with CH are more likely to report associated chronic diseases, hearing impairment, visual problems, and overweight than their peers. Fewer attain the highest socioeconomic category and full-time employment, and more are still living with their parents. CH severity at diagnosis, long-term treatment adequacy, and the presence of other chronic health conditions seem to be the main determinants of educational achievement and HRQoL scores. Yet, despite these subtle disadvantages, most patients are well integrated into society.
Summary

When patients are transferred from pediatric to adult care, the main aims are continuity of care and, with that, optimal clinical outcomes and quality of life, as well as increasing understanding of CH and promoting self-management (1/+++).

Evidence

The period of transition from pediatric to adult care can be challenging, since it is associated with an increased risk of poor treatment compliance and inadequate follow-up that may have repercussions in terms of increased morbidity and poor educational and social outcomes. Family structure and parental involvement are important for preventing and tackling this problem. Finally, given the female preponderance in all thyroid diseases and the finding that (subclinical) hypothyroidism may be associated with subfertility and adverse pregnancy and offspring outcomes, improvement and maintenance of disease control in young women are crucial.
5.1. Criteria for genetic counseling

Summary

Genetic counseling should be targeted rather than general (offered to all CH patients), and performed by an experienced professional (2/++0). Counseling should include explaining the inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0). Parents with a child, or families with a member with CH, should have access to information about the two major forms of primary CH (TD and dyshormonogenesis) and, if included in the neonatal screening, about central CH (1/+++).

Evidence

Genetic counseling is highly recommended for patients and families with one or more members affected with CH. Precise criteria were already established for the CH consensus guideline published in 2014, where proposed criteria for genetic counseling are described. Detailed phenotypic description of the index patient's form of CH is essential and should include the presence or absence of associated malformations (syndromic vs. isolated CH), guiding genetic counseling and, if possible and necessary, genetic testing. Patients and family members should be informed about the inheritance and the risk of recurrence, and about the presence of associated disorders in case of syndromic CH. Accurate genotyping/genetic testing of patients with CH by mutation analysis of candidate genes may (i) explain the disease; (ii) predict the risk of CH and extrathyroidal defects in family members (to be performed in all cases of syndromic primary CH, and in central CH); (iii) identify carriers of NKX2-1 gene mutations, who are at risk of life-threatening respiratory disease; (iv) enable "personalized" LT4 treatment to prevent goiter formation, which may occur in CH due to TPO or TG gene mutations if TSH concentrations are not carefully kept in the lower part of the reference interval; and (v) identify patients with mild TSH resistance in whom long-term LT4 treatment may be nonbeneficial.

5.2. Genetics of CH

Summary

If genetic testing is performed, its aim should be to improve diagnosis, treatment, or prognosis (1/++0). Before doing so, the possibilities and limits of genetic testing should be discussed with parents or families (1/++0). When available, genetic testing should be performed by means of modern techniques, such as CGH array, NGS of gene panels (targeted NGS), or WES (1/++0). Preferably, genetic testing or studies should be preceded by a careful phenotypic description of the patient's CH, including the morphology of the thyroid gland (2/++0). Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0). Any syndromic association should be studied genetically, not only to improve genetic counseling but also to identify new candidate genes explaining the association (1/++0). Further research is needed to better define the patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0).

Evidence

Primary CH

TD due to thyroid maldevelopment is the most frequent cause of permanent primary CH, explaining ∼65% of cases. In contrast to TD, with conditions such as athyreosis or thyroid ectopy, the remaining 35% is best described as GIS, of which <50% is due to inherited defects of TH synthesis (dyshormonogenesis). TD is considered a sporadic disease. However, the familial component cannot be ignored, suggesting a genetic predisposition and a probably complex mode of inheritance. In only 5% of TD cases is a genetic cause identified, with mutations in TSHR or in genes encoding transcription factors involved in thyroid development (TTF1/NKX2.1, PAX8, FOXE1, NKX2-5, and GLIS3).

In recent years, novel and faster genetic and molecular tests, together with the availability of large, well-phenotyped patient cohorts, have led to the discovery of new genetic causes of CH. Heterozygous mutations in the JAG1 gene, responsible for Alagille syndrome and encoding the jagged protein in the Notch pathway, have been identified in TD patients (mainly with orthotopic thyroid hypoplasia). By WES in familial TD cases, Carré et al. found borealin (encoded by BOREALIN), a major component of the chromosomal passenger complex, to be also involved in thyrocyte migration and adhesion, explaining cases of thyroid ectopy. Mutations or deletions in the NTN1 gene have been found in patients with TD; netrin is part of a family of laminin-related proteins involved in cell migration and possibly in the development of the pharyngeal vessels. Finally, mutations in the TUBB1 (tubulin, beta 1 class VI) gene have recently been identified in patients from three families with TD (mostly ectopy) and abnormal platelet physiology (basal activation and exaggerated platelet aggregation); functional studies in knockout mice validated the role of Tubb1 in thyroid development, function, and disease. With respect to the cause of the mild nonautoimmune subclinical hypothyroidism in neonates and infants with Down's syndrome, new insights were provided by a study in Dyrk1A mice, showing abnormal thyroid development and function; how overexpression of this gene causes thyroid abnormalities remains to be elucidated. Another, more frequent form of syndromic CH is BLT syndrome due to NKX2-1 haploinsufficiency; extensive genetic analysis of a large group of affected patients revealed novel variants, expanding the BLT syndrome phenotype. An accompanying table summarizes the genes associated with TD.

In contrast to TD, thyroid dyshormonogenesis is inherited in an autosomal recessive pattern and, except for Pendred syndrome, CH is isolated in most cases. Genes involved in TH synthesis are SLC5A5 (NIS), SLC26A4 (PDS), TPO, TG, DUOX2, DUOXA2, and IYD (DEHAL1); these seven genes encode proteins for the various steps in this process. The use of modern genetic techniques, such as single nucleotide polymorphism arrays and NGS (WES/whole genome sequencing), has provided new insights into the genetics of CH. First, NGS has identified new genes and/or extended the assumed thyroid phenotype resulting from mutations in genes responsible for TH synthesis, causing dyshormonogenesis. For instance, biallelic mutations in SLC26A7 cause goitrous CH. SLC26A7 is a member of the same transporter family as SLC26A4 (pendrin), an anion exchanger with affinity for iodide and chloride (among others); however, in contrast to pendrin, SLC26A7 does not mediate cellular iodide efflux, and affected individuals have normal hearing. Mutations in SLC26A4/PDS, TPO, and DUOX2 have been unexpectedly found in patients with nongoitrous CH and thyroid hypoplasia, narrowing the gap between TD and dyshormonogenesis. Recently, DUOX2 mutations have also been reported in patients with thyroid ectopy; however, further studies are needed to confirm and explain this striking finding. Moreover, the first CH patients with both DUOX1 and DUOX2 mutations have been reported, suggesting that CH can have a digenic cause. DUOX2 mutations have also been found in patients with early-onset inflammatory bowel disease, suggesting an extrathyroidal role for DUOX2. An accompanying table gives the genes implicated in thyroid dyshormonogenesis.

Also, recent NGS studies in cohorts of CH patients screened for mutations in sets of CH genes revealed that a significant proportion of these patients have multiple variations in more than one thyroid-specific gene. Strikingly, these variations were found in genes encoding both thyroid transcription factors and proteins involved in TH synthesis, independently of the thyroid phenotype. Variations in more than one gene (oligogenicity) should, therefore, be considered a plausible hypothesis for the genetic etiology of CH. These novel data may also provide an explanation for the sporadic presentation of CH and the observed complex modes of inheritance. In such a context, JAG1 may act as a gene modifier in a multifactorial architecture of CH.

Central CH

Thanks to NGS, the number of probable genetic causes of isolated central CH and of central CH within the framework of MPHD has increased. Isolated central CH due to biallelic TSHβ gene mutations is associated with severe hypothyroidism and characterized by the typical manifestations of CH (hypotonia, jaundice, umbilical hernia, macroglossia, etc.). If left untreated, these patients develop cretinism comparable with that of patients with severe primary CH. Therefore, central CH must be ruled out in all infants with signs or symptoms of CH and a low, normal, or only slightly elevated TSH concentration. To date, defective thyrotropin-releasing hormone (TRH) action due to biallelic mutations in the TRHR gene has been described in only a few families. Although prolonged neonatal jaundice was reported in one female, even complete TRH resistance does not cause severe neonatal hypothyroidism. The diagnosis in three of the four probands with biallelic TRHR mutations was made during childhood because of delayed growth accompanied by lethargy and fatigue, or by overweight. However, complete TRH resistance has also been diagnosed by genetic testing in a pregnant woman.

Immunoglobulin superfamily member 1 gene (IGSF1) mutations are the molecular cause of a recently described X-linked syndrome including mild-to-moderate central CH. In this syndrome, central CH is associated with abnormal testicular growth leading to adult macro-orchidism (+2.0 standard deviation score), a tendency toward pubertal delay, low prolactin, and, rarely, reversible growth hormone deficiency. Some female carriers can also manifest central CH. Recent data indicate IGSF1 as the most frequently implicated gene in congenital central CH. Mutations in the TBL1X gene are the second most frequent cause of X-linked central CH. TBL1X, transducin-like protein 1, is an essential subunit of the nuclear receptor corepressor-silencing mediator for retinoid and TH receptor complex, the major TH receptor corepressor involved in T3-regulated gene expression. In addition to central CH, many patients exhibit hearing loss. Finally, mutations in IRS4 are another cause of X-linked mild central CH; since IRS4 is involved in leptin signaling, the cause of the central CH may be disrupted leptin signaling. Central CH is more frequently part of MPHD and can be associated with one or more other pituitary hormone deficiencies. In addition, a certain percentage of affected patients have morphological abnormalities of the pituitary gland or hypothalamus, or other neurological defects. An accompanying table presents the genes implicated in central hypothyroidism.
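The gene lists above lend themselves to a phenotype-guided, targeted-NGS panel definition. Below is a minimal Python sketch (our own illustration; the dictionary layout and panel names are assumptions, and the lists are limited to genes named in this section) grouping candidate genes by phenotypic category, in line with the recommendation that testing be preceded by careful phenotyping.

```python
# Illustrative grouping of the candidate genes named in this section,
# as one might define a targeted-NGS panel. The dictionary layout and
# category names are our own assumptions; gene lists come from the text.

CH_GENE_PANEL: dict[str, list[str]] = {
    "thyroid dysgenesis": [
        "TSHR", "NKX2-1", "PAX8", "FOXE1", "NKX2-5", "GLIS3",
        "JAG1", "BOREALIN", "NTN1", "TUBB1",
    ],
    "dyshormonogenesis": [
        "SLC5A5", "SLC26A4", "SLC26A7", "TPO", "TG",
        "DUOX1", "DUOX2", "DUOXA2", "IYD",
    ],
    "central CH": [
        "TSHB",  # TSHβ
        "TRHR", "IGSF1", "TBL1X", "IRS4",
    ],
}

def genes_for_phenotype(phenotype: str) -> list[str]:
    """Return candidate genes for a phenotypic category, guiding targeted NGS."""
    return CH_GENE_PANEL.get(phenotype, [])

# Phenotype-first testing, as the summary recommends: describe the CH form
# (including thyroid morphology), then select the matching gene subset.
print(genes_for_phenotype("dyshormonogenesis"))
```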
5.3. Antenatal diagnostics, evaluation of fetal thyroid function, and management of fetal hypothyroidism

Summary

We recommend antenatal diagnosis in cases of goiter fortuitously discovered during systematic ultrasound examination of the fetus, in relation to thyroid dyshormonogenesis (1/+++); of familial recurrence of CH due to dyshormonogenesis (25% recurrence rate) (1/+++); and of known defects of genes involved in thyroid function or development with potential germline transmission (1/++0). Special issues should be considered for syndromic cases with potential mortality and possible germline mosaicism (as for NKX2-1 gene mutation/deletion with severe pulmonary dysfunction and possible transmission through germline mosaicism); in such circumstances, the discussion of prenatal diagnosis should be open. The therapeutic management of affected fetuses should comply with the laws in force in the country concerned (1/++0). The familial recurrence of CH due to dysgenesis (2% of familial occurrences) requires further study to determine the feasibility and clinical relevance of antenatal detection.

For the evaluation of fetal thyroid volume, we recommend ultrasound scans at 20 to 22 weeks of gestation to detect fetal thyroid hypertrophy and potential thyroid dysfunction in the fetus. Goiter or an absence of thyroid tissue can also be documented by this technique. Measurements should be made as a function of GA, and thyroid perimeter and diameter should be measured to document goiter (1/+++). If a (large) fetal goiter is diagnosed, prenatal care should be provided in a specialized center of prenatal care (1/+++). We recommend cordocentesis, rather than amniocentesis, as the reference method for assessing fetal thyroid function; norms have been established as a function of GA. This examination should be carried out only if prenatal intervention is considered (1/+++). In most cases, fetal thyroid function can be inferred from the context and ultrasound criteria, and fetal blood sampling is, therefore, only exceptionally required (2/++0).

We strongly recommend fetal treatment by intra-amniotic LT4 injections in a euthyroid pregnant woman with a large fetal goiter associated with hydramnios and/or tracheal occlusion; in a hypothyroid pregnant woman, we recommend treating the woman (rather than the fetus) with LT4 (1/++0). For goitrous nonimmune fetal hypothyroidism leading to hydramnios, we recommend intra-amniotic injections of LT4 to decrease the size of the fetal thyroid gland; the injections should be performed by multidisciplinary specialist teams (1/+++). The expert panel proposes the use of 10 μg/kg estimated fetal weight per 15 days in the form of intra-amniotic injections. The risks to the fetus and the psychological burden on the parents should be factored into the risk–benefit evaluation (2/+00).

Evidence

Antenatal diagnostics is advised in case of a fortuitously discovered fetal goiter during fetal US examination in a mother negative for anti-TSHR antibodies, an earlier child with primary CH due to dyshormonogenesis (and a 25% risk of recurrence), or an earlier child with (severe) syndromic CH. How to evaluate fetal thyroid function and manage (nonautoimmune) fetal hypothyroidism has been described in the 2014 CH consensus guidelines. In short, fetal thyroid size can be assessed by US at 20 to 22 weeks, and at 32 weeks of gestation. When thyroid measurement values based on diameter or perimeter are above the 95th percentile, the mother and fetus should be referred to a specialized center for prenatal care. If prenatal intervention is considered, cordocentesis can be performed to assess fetal thyroid function. Conditions that may be a reason for fetal treatment are a large fetal goiter with progressive hydramnios and risk of premature delivery, or concerns about tracheal occlusion. If fetal treatment is considered in a euthyroid pregnant woman, one option is to administer intra-amniotic LT4 injections at a dosage of 10 μg/kg estimated fetal weight per 15 days. Studies have confirmed the feasibility and safety of intra-amniotic LT4 injection and strongly suggest that this treatment is effective in decreasing goiter size. However, none of the many LT4 regimens used ensures euthyroidism at birth; it is, therefore, not possible to formulate guidelines from current data. These further diagnostics and interventions should only be performed by an experienced multidisciplinary team in a specialized center of prenatal care after a careful benefit–risk evaluation. Determining the indications and optimal modes of prenatal treatment for nonimmune fetal goitrous hypothyroidism will require larger, well-designed studies, best conducted through international cooperation between multidisciplinary medical teams. Alternative ways of treating the fetus by administering drugs to the mother should also be investigated. In a hypothyroid pregnant woman, the preferred approach is to treat the woman (rather than the fetus) with LT4. Finally, adequate iodine intake should be ensured for all pregnant women (250 μg/day).
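As a worked example of the proposed dosing rule, the sketch below (illustrative only; the function and constant names are ours) converts an estimated fetal weight into the per-injection LT4 dose given every 15 days.

```python
# Worked example of the proposed intra-amniotic dosing rule above:
# 10 ug LT4 per kg of estimated fetal weight, injected every 15 days.
# Function and variable names are our own; this is not clinical software.

DOSE_UG_PER_KG = 10.0
INTERVAL_DAYS = 15

def intraamniotic_lt4_dose(estimated_fetal_weight_kg: float) -> float:
    """Per-injection LT4 dose in micrograms for a given estimated fetal weight."""
    if estimated_fetal_weight_kg <= 0:
        raise ValueError("estimated fetal weight must be positive")
    return DOSE_UG_PER_KG * estimated_fetal_weight_kg

# For an estimated fetal weight of 1.8 kg: 10 * 1.8 = 18 ug per injection,
# repeated every 15 days, under multidisciplinary specialist care.
print(intraamniotic_lt4_dose(1.8), "ug every", INTERVAL_DAYS, "days")
```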
Summary Genetic counseling should be targeted rather than general (to all CH patients), and done by an experienced professional (2/++0). Counseling should include explaining inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0). Parents with a child, or families with a member with CH, should have access to information about the two major forms of primary CH—TD and dyshormonogenesis—and, if included in the neonatal screening, about central CH (1/+++). Evidence Genetic counseling is highly recommended for patients and families with one or more affected member(s) with CH. Precise criteria were already established for the CH consensus guideline published in 2014 . describes proposed criteria for genetic counseling. Detailed phenotypic description of the index patient's CH form is essential and should include the presence or absence of associated malformations (syndromic vs. isolated CH), guiding genetic counseling and, if possible and necessary, genetic testing. Patients and family members should be informed about the inheritance and the risk of recurrence, and the presence of associated disorders in case of syndromic CH. Accurate genotyping/genetic testing of patients with CH by mutation analysis of candidate genes can or may (i) explain the disease; (ii) predict the risk of CH and extrathyroidal defects in family members (to be performed in all cases of syndromic primary CH, and in central CH); (iii) identify carriers of NKX2-1 gene mutations who are at risk of life-threatening respiratory disease ; (iv) enable “personalized” LT4 treatment to prevent goiter formation, which may occur in CH due to TPO or TG gene mutations if TSH concentrations are not carefully kept in the lower part of the reference interval; and (v) identify patients with mild TSH resistance in whom long-term LT4 treatment may be nonbeneficial .
Genetic counseling should be targeted rather than general (to all CH patients), and done by an experienced professional (2/++0). Counseling should include explaining inheritance and the risk of recurrence of the patient's primary or central form of CH, based on the CH subtype, the family history, and, if known, the (genetic) cause (1/++0). Parents with a child, or families with a member with CH, should have access to information about the two major forms of primary CH—TD and dyshormonogenesis—and, if included in the neonatal screening, about central CH (1/+++).
Genetic counseling is highly recommended for patients and families with one or more affected member(s) with CH. Precise criteria were already established for the CH consensus guideline published in 2014 . describes proposed criteria for genetic counseling. Detailed phenotypic description of the index patient's CH form is essential and should include the presence or absence of associated malformations (syndromic vs. isolated CH), guiding genetic counseling and, if possible and necessary, genetic testing. Patients and family members should be informed about the inheritance and the risk of recurrence, and the presence of associated disorders in case of syndromic CH. Accurate genotyping/genetic testing of patients with CH by mutation analysis of candidate genes can or may (i) explain the disease; (ii) predict the risk of CH and extrathyroidal defects in family members (to be performed in all cases of syndromic primary CH, and in central CH); (iii) identify carriers of NKX2-1 gene mutations who are at risk of life-threatening respiratory disease ; (iv) enable “personalized” LT4 treatment to prevent goiter formation, which may occur in CH due to TPO or TG gene mutations if TSH concentrations are not carefully kept in the lower part of the reference interval; and (v) identify patients with mild TSH resistance in whom long-term LT4 treatment may be nonbeneficial .
Summary If genetic testing is performed, its aim should be improving diagnosis, treatment, or prognosis (1/++0). Before doing so, possibilities and limits of genetic testing should be discussed with parents or families (1/++0). When available, genetic testing should be performed by means of new techniques, such as CGH array, NGS of gene panels (targeted NGS), or WES (1/++0). Preferably, genetic testing or studies should be preceded by careful phenotypic description of the patient's CH, including morphology of the thyroid gland (2/++0). Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0). Any syndromic association should be studied genetically, not only to improve genetic counseling, but also to identify new candidate genes explaining the association (1/++0). Further research is needed to better define patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0). Evidence Primary CH TD due to thyroid maldevelopment is the most frequent cause of permanent primary CH, explaining ∼65% of cases . In contrast to TD with conditions such as athyreosis or thyroid ectopy, the other 35% is best described as GIS of which <50% is due to inherited defects of TH synthesis (dyshormonogenesis). TD is considered a sporadic disease. However, the familial component cannot be ignored, suggesting a genetic predisposition and a probably complex inheritance mode . In only 5% of TD cases, a genetic cause is identified with mutations in TSHR , or in genes encoding transcription factors involved in thyroid development ( TTF1/NKX2.1 , PAX8 , FOXE1 , NKX2-5 , and GLIS3 ) . During the past years, novel and faster genetic and molecular tests, and the availability of large well-phenotyped cohorts of patients have led to the discovery of new genetic causes of CH. Heterozygous mutations in the JAG1 gene, responsible for Alagille syndrome and encoding the jagged protein in the Notch pathway, have been identified in TD patients (mainly with orthotopic thyroid hypoplasia) . By WES in familial TD cases, Carré et al. found borealin (encoded by BOREALIN ), a major component of chromosomal passenger complex, to be also involved in thyrocyte migration and adhesion, explaining cases of thyroid ectopy . Mutations or deletion in the NTN1 gene have been found in patients with TD. Netrin is part of a family of laminin-related proteins, involved in cell migration and possibly in the development of pharyngeal vessels . Finally, mutations in the TUBB1 (tubulin, beta 1 class VI) gene have recently been identified in patients from three families with TD (mostly ectopy) and abnormal platelet physiology (basal activation and exaggerated platelet aggregation) . Functional studies in knockout mice validated the role of Tubb1 in thyroid development, function, and disease. With respect to the cause of the mild nonautoimmune subclinical hypothyroidism in neonates and infants with Down's syndrome, new insights were provided by a study in Dyrk1A mice, showing abnormal thyroid development and function . How overexpression of this gene causes thyroid abnormalities remains to be elucidated. Another more frequent form of syndromic CH is BLT syndrome due to NKX2-1 haploinsufficiency. Extensive genetic analysis of a large group of affected patients revealed novel variants, expanding BLT syndrome phenotype . summarizes genes associated with TD. 
In contrast to TD, thyroid dyshormonogenesis is inherited in an autosomal recessive pattern and, except for Pendred syndrome, CH is isolated in most cases. Genes involved in TH synthesis are SLC5A5 (NIS) , SLC26A4 (PDS) , TPO , TG , DUOX2 , DUOXA2 , and IYD (DEHAL1). These seven genes encode proteins for the various steps in this process. The use of modern genetic techniques, such as single nucleotide polymorphisms arrays and NGS (WES/whole genome sequencing), has provided new insights into the genetics of CH. First, NGS has identified new genes and/or extended the assumed thyroid phenotype, resulting from mutations in genes responsible for TH synthesis, causing dyshormonogenesis. For instance, biallelic mutations in SLC26A7 cause goitrous CH . SLC26A7 is a member of the same transporter family as SLC26A4 (pendrin), an anion exchanger with affinity for iodide and chloride (among others). However, in contrast to pendrin, SLC26A7 does not mediate cellular iodide efflux and affected individuals have normal hearing . Mutations in SLC26A4/PDS , TPO (222a) and DUOX2 have been unexpectedly found in patients with nongoitrous CH and thyroid hypoplasia, narrowing the gap between TD and dyshormonogenesis. Recently, DUOX2 mutations have also been reported in patients with thyroid ectopy; however, further studies are needed to confirm and explain this striking finding . Moreover, the first CH patients with both DUOX1 and DUOX2 mutations have been reported, suggesting that CH can have a digenic cause . DUOX2 mutations have also been found in patients with early-onset inflammatory bowel disease, suggesting an extrathyroidal role for DUOX2 . gives genes implicated in thyroid dyshormonogenesis. Also, recently, NGS studies in cohorts of CH patients screened for mutations in sets of CH genes revealed that a significant proportion of these patients has multiple variations in more than one thyroid-specific gene . Strikingly, these variations were found in genes encoding both thyroid transcription factors and proteins involved in TH synthesis, independently of the thyroid phenotype. These variations in more than one gene (oligogenicity) should, therefore, be considered as a plausible hypothesis for the genetic aetiology of CH . These novel data may also provide an explanation for the sporadic presentation of CH and observed complex modes of inheritance. In such context, JAG1 may act as a gene modifier in a multifactorial architecture of CH . Central CH Thanks to NGS, the number of probable genetic causes of isolated central CH and central CH within the framework of MPHD has increased . Isolated central CH due to biallelic TSHβ gene mutations is associated with severe hypothyroidism and characterized by the typical manifestations of CH (hypotonia, jaundice, umbilical hernia, macroglossia, etc.). If left untreated, these patients develop cretinism comparable with patients with severe primary CH . Therefore, central CH must be ruled out in all infants with signs or symptoms of CH and a low, normal, or only slightly elevated TSH concentration. To date, defective thyrotropin-releasing hormone (TRH) action due to biallelic mutations in the TRHR gene has been described in only a few families . Although prolonged neonatal jaundice was reported in one female, even complete TRH resistance does not cause severe neonatal hypothyroidism. The diagnosis in three of the four probands with biallelic TRHR mutations was made during childhood because of delayed growth accompanied by lethargy and fatigue or by overweight. 
However, complete TRH resistance diagnosed by genetic testing has been diagnosed in a pregnant woman . Immunoglobulin superfamily member 1 gene ( IGSF1 ) mutations are the molecular cause of a recently described X-linked syndrome, including mild-to-moderate central CH. In this syndrome, central CH is associated with abnormal testicular growth leading to adult macro-orchidism (+2.0 standard deviation score), a tendency toward pubertal delay, low prolactin, and, rarely, reversible growth hormone deficiency . Some female carriers can also manifest central CH. Recent data indicate IGSF1 as the most frequently implicated gene in congenital central CH . Mutations in the TBL1X gene are the second most frequent cause of X-linked central CH. TBL1X, transducin-like protein 1, is an essential subunit of the nuclear receptor corepressor-silencing mediator for retinoid and TH receptor complex, the major TH receptor CoR involved in T3-regulated gene expression. In addition to central CH, many patients exhibit hearing loss . Finally, mutations in IRS4 are another cause of X-linked mild central CH. Since IRS4 is involved in leptin signaling, the cause of the central CH may be disrupted leptin signaling . Central CH is more frequently part of MPHD and can be associated with one or more other pituitary hormone deficiences. In addition, a certain percentage of affected patients has morphological abnormalities of the pituitary gland or hypothalamus, or other neurological defects . presents genes implicated in central hypothyroidism.
If genetic testing is performed, its aim should be improving diagnosis, treatment, or prognosis (1/++0). Before doing so, possibilities and limits of genetic testing should be discussed with parents or families (1/++0). When available, genetic testing should be performed by means of new techniques, such as CGH array, NGS of gene panels (targeted NGS), or WES (1/++0). Preferably, genetic testing or studies should be preceded by careful phenotypic description of the patient's CH, including morphology of the thyroid gland (2/++0). Not only thyroid dyshormonogenesis, but also familial occurrence of dysgenesis and central hypothyroidism should lead to further genetic testing (1/++0). Any syndromic association should be studied genetically, not only to improve genetic counseling, but also to identify new candidate genes explaining the association (1/++0). Further research is needed to better define patients or patient groups that will benefit most from these new diagnostic possibilities (2/++0).
Primary CH TD due to thyroid maldevelopment is the most frequent cause of permanent primary CH, explaining ∼65% of cases . In contrast to TD with conditions such as athyreosis or thyroid ectopy, the other 35% is best described as GIS of which <50% is due to inherited defects of TH synthesis (dyshormonogenesis). TD is considered a sporadic disease. However, the familial component cannot be ignored, suggesting a genetic predisposition and a probably complex inheritance mode . In only 5% of TD cases, a genetic cause is identified with mutations in TSHR , or in genes encoding transcription factors involved in thyroid development ( TTF1/NKX2.1 , PAX8 , FOXE1 , NKX2-5 , and GLIS3 ) . During the past years, novel and faster genetic and molecular tests, and the availability of large well-phenotyped cohorts of patients have led to the discovery of new genetic causes of CH. Heterozygous mutations in the JAG1 gene, responsible for Alagille syndrome and encoding the jagged protein in the Notch pathway, have been identified in TD patients (mainly with orthotopic thyroid hypoplasia) . By WES in familial TD cases, Carré et al. found borealin (encoded by BOREALIN ), a major component of chromosomal passenger complex, to be also involved in thyrocyte migration and adhesion, explaining cases of thyroid ectopy . Mutations or deletion in the NTN1 gene have been found in patients with TD. Netrin is part of a family of laminin-related proteins, involved in cell migration and possibly in the development of pharyngeal vessels . Finally, mutations in the TUBB1 (tubulin, beta 1 class VI) gene have recently been identified in patients from three families with TD (mostly ectopy) and abnormal platelet physiology (basal activation and exaggerated platelet aggregation) . Functional studies in knockout mice validated the role of Tubb1 in thyroid development, function, and disease. With respect to the cause of the mild nonautoimmune subclinical hypothyroidism in neonates and infants with Down's syndrome, new insights were provided by a study in Dyrk1A mice, showing abnormal thyroid development and function . How overexpression of this gene causes thyroid abnormalities remains to be elucidated. Another more frequent form of syndromic CH is BLT syndrome due to NKX2-1 haploinsufficiency. Extensive genetic analysis of a large group of affected patients revealed novel variants, expanding BLT syndrome phenotype . summarizes genes associated with TD. In contrast to TD, thyroid dyshormonogenesis is inherited in an autosomal recessive pattern and, except for Pendred syndrome, CH is isolated in most cases. Genes involved in TH synthesis are SLC5A5 (NIS) , SLC26A4 (PDS) , TPO , TG , DUOX2 , DUOXA2 , and IYD (DEHAL1). These seven genes encode proteins for the various steps in this process. The use of modern genetic techniques, such as single nucleotide polymorphisms arrays and NGS (WES/whole genome sequencing), has provided new insights into the genetics of CH. First, NGS has identified new genes and/or extended the assumed thyroid phenotype, resulting from mutations in genes responsible for TH synthesis, causing dyshormonogenesis. For instance, biallelic mutations in SLC26A7 cause goitrous CH . SLC26A7 is a member of the same transporter family as SLC26A4 (pendrin), an anion exchanger with affinity for iodide and chloride (among others). However, in contrast to pendrin, SLC26A7 does not mediate cellular iodide efflux and affected individuals have normal hearing . 
Mutations in SLC26A4/PDS , TPO (222a) and DUOX2 have been unexpectedly found in patients with nongoitrous CH and thyroid hypoplasia, narrowing the gap between TD and dyshormonogenesis. Recently, DUOX2 mutations have also been reported in patients with thyroid ectopy; however, further studies are needed to confirm and explain this striking finding . Moreover, the first CH patients with both DUOX1 and DUOX2 mutations have been reported, suggesting that CH can have a digenic cause . DUOX2 mutations have also been found in patients with early-onset inflammatory bowel disease, suggesting an extrathyroidal role for DUOX2 . gives genes implicated in thyroid dyshormonogenesis. Also, recently, NGS studies in cohorts of CH patients screened for mutations in sets of CH genes revealed that a significant proportion of these patients has multiple variations in more than one thyroid-specific gene . Strikingly, these variations were found in genes encoding both thyroid transcription factors and proteins involved in TH synthesis, independently of the thyroid phenotype. These variations in more than one gene (oligogenicity) should, therefore, be considered as a plausible hypothesis for the genetic aetiology of CH . These novel data may also provide an explanation for the sporadic presentation of CH and observed complex modes of inheritance. In such context, JAG1 may act as a gene modifier in a multifactorial architecture of CH . Central CH Thanks to NGS, the number of probable genetic causes of isolated central CH and central CH within the framework of MPHD has increased . Isolated central CH due to biallelic TSHβ gene mutations is associated with severe hypothyroidism and characterized by the typical manifestations of CH (hypotonia, jaundice, umbilical hernia, macroglossia, etc.). If left untreated, these patients develop cretinism comparable with patients with severe primary CH . Therefore, central CH must be ruled out in all infants with signs or symptoms of CH and a low, normal, or only slightly elevated TSH concentration. To date, defective thyrotropin-releasing hormone (TRH) action due to biallelic mutations in the TRHR gene has been described in only a few families . Although prolonged neonatal jaundice was reported in one female, even complete TRH resistance does not cause severe neonatal hypothyroidism. The diagnosis in three of the four probands with biallelic TRHR mutations was made during childhood because of delayed growth accompanied by lethargy and fatigue or by overweight. However, complete TRH resistance diagnosed by genetic testing has been diagnosed in a pregnant woman . Immunoglobulin superfamily member 1 gene ( IGSF1 ) mutations are the molecular cause of a recently described X-linked syndrome, including mild-to-moderate central CH. In this syndrome, central CH is associated with abnormal testicular growth leading to adult macro-orchidism (+2.0 standard deviation score), a tendency toward pubertal delay, low prolactin, and, rarely, reversible growth hormone deficiency . Some female carriers can also manifest central CH. Recent data indicate IGSF1 as the most frequently implicated gene in congenital central CH . Mutations in the TBL1X gene are the second most frequent cause of X-linked central CH. TBL1X, transducin-like protein 1, is an essential subunit of the nuclear receptor corepressor-silencing mediator for retinoid and TH receptor complex, the major TH receptor CoR involved in T3-regulated gene expression. In addition to central CH, many patients exhibit hearing loss . 
Finally, mutations in IRS4 are another cause of X-linked mild central CH. Since IRS4 is involved in leptin signaling, the cause of the central CH may be disrupted leptin signaling . Central CH is more frequently part of MPHD and can be associated with one or more other pituitary hormone deficiences. In addition, a certain percentage of affected patients has morphological abnormalities of the pituitary gland or hypothalamus, or other neurological defects . presents genes implicated in central hypothyroidism.
TD due to thyroid maldevelopment is the most frequent cause of permanent primary CH, explaining ∼65% of cases . In contrast to TD with conditions such as athyreosis or thyroid ectopy, the other 35% is best described as GIS of which <50% is due to inherited defects of TH synthesis (dyshormonogenesis). TD is considered a sporadic disease. However, the familial component cannot be ignored, suggesting a genetic predisposition and a probably complex inheritance mode . In only 5% of TD cases, a genetic cause is identified with mutations in TSHR , or in genes encoding transcription factors involved in thyroid development ( TTF1/NKX2.1 , PAX8 , FOXE1 , NKX2-5 , and GLIS3 ) . During the past years, novel and faster genetic and molecular tests, and the availability of large well-phenotyped cohorts of patients have led to the discovery of new genetic causes of CH. Heterozygous mutations in the JAG1 gene, responsible for Alagille syndrome and encoding the jagged protein in the Notch pathway, have been identified in TD patients (mainly with orthotopic thyroid hypoplasia) . By WES in familial TD cases, Carré et al. found borealin (encoded by BOREALIN ), a major component of chromosomal passenger complex, to be also involved in thyrocyte migration and adhesion, explaining cases of thyroid ectopy . Mutations or deletion in the NTN1 gene have been found in patients with TD. Netrin is part of a family of laminin-related proteins, involved in cell migration and possibly in the development of pharyngeal vessels . Finally, mutations in the TUBB1 (tubulin, beta 1 class VI) gene have recently been identified in patients from three families with TD (mostly ectopy) and abnormal platelet physiology (basal activation and exaggerated platelet aggregation) . Functional studies in knockout mice validated the role of Tubb1 in thyroid development, function, and disease. With respect to the cause of the mild nonautoimmune subclinical hypothyroidism in neonates and infants with Down's syndrome, new insights were provided by a study in Dyrk1A mice, showing abnormal thyroid development and function . How overexpression of this gene causes thyroid abnormalities remains to be elucidated. Another more frequent form of syndromic CH is BLT syndrome due to NKX2-1 haploinsufficiency. Extensive genetic analysis of a large group of affected patients revealed novel variants, expanding BLT syndrome phenotype . summarizes genes associated with TD. In contrast to TD, thyroid dyshormonogenesis is inherited in an autosomal recessive pattern and, except for Pendred syndrome, CH is isolated in most cases. Genes involved in TH synthesis are SLC5A5 (NIS) , SLC26A4 (PDS) , TPO , TG , DUOX2 , DUOXA2 , and IYD (DEHAL1). These seven genes encode proteins for the various steps in this process. The use of modern genetic techniques, such as single nucleotide polymorphisms arrays and NGS (WES/whole genome sequencing), has provided new insights into the genetics of CH. First, NGS has identified new genes and/or extended the assumed thyroid phenotype, resulting from mutations in genes responsible for TH synthesis, causing dyshormonogenesis. For instance, biallelic mutations in SLC26A7 cause goitrous CH . SLC26A7 is a member of the same transporter family as SLC26A4 (pendrin), an anion exchanger with affinity for iodide and chloride (among others). However, in contrast to pendrin, SLC26A7 does not mediate cellular iodide efflux and affected individuals have normal hearing . 
Mutations in SLC26A4/PDS , TPO (222a) and DUOX2 have been unexpectedly found in patients with nongoitrous CH and thyroid hypoplasia, narrowing the gap between TD and dyshormonogenesis. Recently, DUOX2 mutations have also been reported in patients with thyroid ectopy; however, further studies are needed to confirm and explain this striking finding . Moreover, the first CH patients with both DUOX1 and DUOX2 mutations have been reported, suggesting that CH can have a digenic cause . DUOX2 mutations have also been found in patients with early-onset inflammatory bowel disease, suggesting an extrathyroidal role for DUOX2 . gives genes implicated in thyroid dyshormonogenesis. Also, recently, NGS studies in cohorts of CH patients screened for mutations in sets of CH genes revealed that a significant proportion of these patients has multiple variations in more than one thyroid-specific gene . Strikingly, these variations were found in genes encoding both thyroid transcription factors and proteins involved in TH synthesis, independently of the thyroid phenotype. These variations in more than one gene (oligogenicity) should, therefore, be considered as a plausible hypothesis for the genetic aetiology of CH . These novel data may also provide an explanation for the sporadic presentation of CH and observed complex modes of inheritance. In such context, JAG1 may act as a gene modifier in a multifactorial architecture of CH .
Thanks to NGS, the number of probable genetic causes of isolated central CH and of central CH within the framework of MPHD has increased. Isolated central CH due to biallelic TSHβ gene mutations is associated with severe hypothyroidism and characterized by the typical manifestations of CH (hypotonia, jaundice, umbilical hernia, macroglossia, etc.). If left untreated, these patients develop cretinism comparable to that of patients with severe primary CH. Therefore, central CH must be ruled out in all infants with signs or symptoms of CH and a low, normal, or only slightly elevated TSH concentration. To date, defective thyrotropin-releasing hormone (TRH) action due to biallelic mutations in the TRHR gene has been described in only a few families. Although prolonged neonatal jaundice was reported in one female, even complete TRH resistance does not cause severe neonatal hypothyroidism. The diagnosis in three of the four probands with biallelic TRHR mutations was made during childhood because of delayed growth accompanied by lethargy and fatigue, or because of overweight. However, complete TRH resistance has also been diagnosed by genetic testing in a pregnant woman. Mutations in the immunoglobulin superfamily member 1 gene (IGSF1) are the molecular cause of a recently described X-linked syndrome that includes mild-to-moderate central CH. In this syndrome, central CH is associated with abnormal testicular growth leading to adult macro-orchidism (+2.0 standard deviation score), a tendency toward pubertal delay, low prolactin, and, rarely, reversible growth hormone deficiency. Some female carriers can also manifest central CH. Recent data indicate that IGSF1 is the most frequently implicated gene in congenital central CH. Mutations in the TBL1X gene are the second most frequent cause of X-linked central CH. TBL1X (transducin β-like protein 1, X-linked) is an essential subunit of the nuclear receptor corepressor (NCoR)–silencing mediator of retinoid and thyroid hormone receptors (SMRT) complex, the major TH receptor corepressor involved in T3-regulated gene expression. In addition to central CH, many patients exhibit hearing loss. Finally, mutations in IRS4 are another cause of X-linked mild central CH. Since IRS4 is involved in leptin signaling, disrupted leptin signaling may be the cause of the central CH. Central CH is more frequently part of MPHD and can be associated with one or more other pituitary hormone deficiencies. In addition, a certain percentage of affected patients have morphological abnormalities of the pituitary gland or hypothalamus, or other neurological defects. Table presents genes implicated in central hypothyroidism.
Summary

We recommend antenatal diagnosis in cases of goiter fortuitously discovered during systematic ultrasound examination of the fetus, in relation to thyroid dyshormonogenesis (1/+++); a familial recurrence of CH due to dyshormonogenesis (25% recurrence rate) (1/+++); and known defects of genes involved in thyroid function or development with potential germline transmission (1/++0). Special issues should be considered for syndromic cases with potential mortality and possible germline mosaicism (as for NKX2-1 gene mutation/deletion with severe pulmonary dysfunction, which can be transmitted through germline mosaicism). In such circumstances, the discussion of the prenatal diagnosis should be open. The therapeutic management of affected fetuses should comply with the laws in force in the country concerned (1/++0). The familial recurrence of CH due to dysgenesis (2% of familial occurrences) requires further study to determine the feasibility and clinical relevance of antenatal detection. For the evaluation of fetal thyroid volume, we recommend ultrasound scans at 20 to 22 weeks' gestation to detect fetal thyroid hypertrophy and potential thyroid dysfunction in the fetus. Goiter or an absence of thyroid tissue can also be documented by this technique. Measurements should be made as a function of GA, and thyroid perimeter and diameter should be measured to document goiter (1/+++). If a (large) fetal goiter is diagnosed, prenatal care should be provided in a specialized center of prenatal care (1/+++). We recommend cordocentesis, rather than amniocentesis, as the reference method for assessing fetal thyroid function. Norms have been established as a function of GA. This examination should be carried out only if prenatal intervention is considered (1/+++). In most cases, fetal thyroid function can be inferred from context and ultrasound criteria, and fetal blood sampling is, therefore, only exceptionally required (2/++0). We strongly recommend fetal treatment by intra-amniotic LT4 injections in a euthyroid pregnant woman with a large fetal goiter associated with hydramnios and/or tracheal occlusion; in a hypothyroid pregnant woman, we recommend treating the woman (rather than the fetus) with LT4 (1/++0). For goitrous nonimmune fetal hypothyroidism leading to hydramnios, we recommend intra-amniotic injections of LT4 to decrease the size of the fetal thyroid gland. The injections should be performed by multidisciplinary specialist teams (1/+++). The expert panel proposes the use of 10 μg/kg estimated fetal weight per 15 days in the form of intra-amniotic injections. The risks to the fetus and the psychological burden on the parents should be factored into the risk–benefit evaluation (2/+00).

Evidence

Antenatal diagnostics is advised in the case of a fortuitously discovered fetal goiter during fetal US examination in an anti-TSHR antibody-negative mother, an earlier child with primary CH due to dyshormonogenesis (and a 25% risk of recurrence), and an earlier child with (severe) syndromic CH. How to evaluate fetal thyroid function and how to manage (nonautoimmune) fetal hypothyroidism have been described in the 2014 CH consensus guidelines. In short, fetal thyroid size can be assessed by US at 20 to 22 weeks, and at 32 weeks' gestation. When thyroid measurement values based on diameter or perimeter are above the 95th percentile, the mother and fetus should be referred to a specialized center for prenatal care.
If prenatal intervention is considered, cordocentesis can be performed to assess fetal thyroid function. Conditions that may be a reason for fetal treatment are a large fetal goiter with progressive hydramnios and a risk of premature delivery, or concerns about tracheal occlusion. If fetal treatment is considered in a euthyroid pregnant woman, one approach is to administer intra-amniotic LT4 injections at a dosage of 10 μg/kg estimated fetal weight per 15 days. Studies have confirmed the feasibility and safety of intra-amniotic LT4 injection and strongly suggest that this treatment is effective for decreasing goiter size. However, none of the many LT4 regimens used ensures euthyroidism at birth. It is, therefore, not possible to formulate guidelines from current data. These further diagnostics and interventions should only be performed by an experienced multidisciplinary team in a specialized center of prenatal care after a careful benefit–risk evaluation. Determination of the indications and optimal modes of prenatal treatment for nonimmune fetal goitrous hypothyroidism will require larger, well-designed studies, best conducted through international cooperation between multidisciplinary medical teams. Alternative ways of treating the fetus by administering drugs to the mother should also be investigated. In a hypothyroid pregnant woman, the preferred approach is to treat the woman (rather than the fetus) with LT4. Finally, adequate iodine intake should be ensured for all pregnant women (250 μg/day).
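The proposed regimen reduces to a simple weight-based calculation. As a minimal arithmetic sketch of the stated formula only (the fetal weight below is an invented example value, and this is an illustration, not clinical guidance):

```r
# Regimen stated in the text: 10 ug LT4 per kg estimated fetal weight,
# repeated every 15 days as an intra-amniotic injection.
lt4_dose_ug <- function(estimated_fetal_weight_kg) {
  10 * estimated_fetal_weight_kg
}

lt4_dose_ug(2.0)  # -> 20 ug per injection for an estimated fetal weight of 2.0 kg
```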
This update of the consensus guidelines on CH recommends worldwide neonatal screening and appropriate diagnostics, including genetics, to assess the cause of both primary and central hypothyroidism. The expert panel recommends the immediate start of correctly dosed LT4 treatment; frequent follow-up, including laboratory testing and dose adjustments, to keep TH levels in their target ranges; timely assessment of the need to continue treatment; attention to neurodevelopmental and neurosensory functions and, if necessary, consultation of other health professionals; and education of the child and family about CH. Harmonization of diagnostics, treatment, and follow-up will optimize patient outcomes. Lastly, all individuals with CH are entitled to a well-planned transition of care from pediatrics to adult medicine. This consensus guidelines update should be used to further optimize the detection, diagnosis, treatment, and follow-up of children with all forms of CH in light of the most recent evidence. It should also be helpful in convincing health authorities of the benefits of neonatal screening for CH. Despite ∼50 years of neonatal screening for CH, some important questions remain, such as the genetic etiology of TD, the assumed harm of subclinical CH (i.e., a normal fT4 in combination with an elevated TSH), and the cause of the gradually increasing incidence of CH with GIS. Further epidemiological and experimental studies are needed to understand the increased incidence of this condition.
Effect of training status on muscle excitation and neuromuscular fatigue with resistance exercise with and without blood flow restriction in young men

INTRODUCTION

Resistance exercise (RE) training to task failure, or near-task failure, is generally considered essential for maximizing skeletal muscle hypertrophy (Morton, Sonne, et al., ; Refalo et al., , , ). Mechanistically, RE training-induced muscle hypertrophy is likely enhanced when the number of motor units recruited and their firing frequencies are maximized (i.e., greater muscle excitation) (Jenkins et al., ; Schoenfeld, ; Valerio et al., ), particularly when associated with neuromuscular fatigue due to metabolic stress (Schoenfeld, ). Moreover, RE training using loads ≥60% of one repetition maximum (1-RM), or high-load resistance exercise (HLRE), is important for optimizing skeletal muscle excitation while also enhancing the recruitment of type II muscle fibers (Morton, Sonne, et al., ). Furthermore, HLRE has been shown to elicit greater neural drive compared to moderate resistance exercise when performed to failure under free-flow conditions (Miller et al., ). Contextually, surface electromyography (sEMG) is commonly used to quantify skeletal muscle excitation during acute bouts of RE (Lacerda et al., ; Morton, Sonne, et al., ), while reductions in the post- compared to pre-exercise maximum voluntary isometric contractions (MVICs) are used to quantify neuromuscular fatigue (Hill et al., ; Izquierdo et al., ; Karabulut et al., ). Emerging evidence suggests that blood flow-restricted RE (BFR-RE) is an effective exercise modality for inducing skeletal muscle hypertrophy, even when performed using relatively low loads (e.g., 20%–30% 1-RM, LLBFR) (Patterson et al., ). LLBFR may be advantageous in individuals for whom HLRE may be contraindicated (e.g., post-anterior cruciate ligament reconstruction, post-fracture rehabilitation, etc.) (Banwan Hasan & Awed, ; Ohta et al., ; Patterson et al., , ). Clinically, LLBFR is gaining traction as a viable pre- and post-surgical rehabilitation procedure in patients due to its ability to improve and maintain muscle mass while minimizing injury risk (Hughes et al., ; Ogawa et al., ). Mechanistically, LLBFR has been suggested to result in greater skeletal muscle excitation measured using sEMG than load-matched free-flow RE (Lacerda et al., ; Loenneke et al., ) while also achieving levels of skeletal muscle activation observed in response to free-flow HLRE in some (Takarada et al., ) but not all studies (Biazon et al., ). BFR-RE-induced increases in muscle hypertrophy have been hypothesized to be due to a combination of increased fiber recruitment, accumulation of metabolites, transient cellular swelling, and stimulation of muscle protein synthesis (Loenneke et al., ; Pearson & Hussain, ; Wilson et al., ), which are indicative of greater muscle excitation and enhanced cellular stress (Schoenfeld, ). A recent meta-analysis that systematically reviewed low-load resistance exercise with and without BFR (LLBFR and low-load resistance exercise (LLRE), respectively) concluded that LLBFR produced greater exercise-induced muscle excitation than LLRE (Centner & Lauber, ).
Although BFR-RE has been shown to enhance skeletal muscle excitation in both trained and untrained adults, few studies have directly compared the acute impact of BFR-RE on skeletal muscle excitation, total muscle activation, and neuromuscular fatigue in adults of different training statuses relative to traditional free-flow HLRE. Therefore, the purpose of this study was to determine whether there are differences in the muscle excitation and muscle activation of the vastus lateralis, measured using sEMG, in resistance-trained (RT) versus untrained (UT) college-aged males performing BFR-RE at low loads (25% 1-RM, LLBFR) or medium loads (50% 1-RM, MLBFR) compared to a traditional free-flow HLRE program (75% 1-RM). Furthermore, we sought to determine whether there were differences in neuromuscular fatigue, measured during a standardized isometric fatigue test, in RT and UT college-aged males following acute bouts of LLBFR, MLBFR, and HLRE. We hypothesized that the RT group would have higher absolute muscle excitation and lower relative muscle excitation during LLBFR, MLBFR, and HLRE than the UT group. We also hypothesized that LLBFR and MLBFR would result in muscle excitation, muscle activation, and neuromuscular fatigue similar to HLRE despite lower training volumes.
METHODS

2.1 Ethics statement

The Louisiana State University's Institutional Review Board approved the study protocol and consent form for this study (IRBAM-22-0600), which was registered at clinicaltrials.gov (NCT05586451). All participants provided written informed consent before their participation in the study, and all procedures were conducted in accordance with the Declaration of Helsinki.

2.2 Participants

Thirty-two participants qualified for this study. Two participants were excluded due to noncompliance (i.e., they completed only one study visit), and one participant was excluded due to technical (equipment) problems during their exercise visits. Thus, twenty-nine healthy college-aged males completed this study. The resistance-trained participants (RT, n = 15) were required to report performing RE at least 3 days per week for the previous 2 years. The untrained participants (UT, n = 14) exercised less than 2 days per week and were required to report not having performed RE training for at least 6 months before starting the study. All participants were free of any cardiovascular or metabolic diseases or other abnormalities preventing them from performing exercise. All participants were tobacco- and medication-free, normotensive, and had no history of thromboembolism, sickle cell trait, or sickle cell anemia.

2.3 Study design

A randomized, repeated-measures design was used to test the impact of acute HLRE, LLBFR, and MLBFR bouts on muscle excitation, muscle activation, and neuromuscular fatigue in RT and UT college-aged males. The participants completed one screening visit, one strength testing visit, and three acute exercise visits (HLRE, LLBFR, MLBFR). Research Randomizer (https://randomizer.org/) was used to block randomize the three exercise conditions, stratified by training status. The participants were not blinded to which exercise condition they were completing during a given trial.

2.4 Screening visit

The screening visit consisted of informed consent, a medical history, the Physical Activity Readiness Questionnaire for Everyone (PAR-Q+) (Warburton, ), the International Physical Activity Questionnaire (Sember et al., ), the Muscle Strengthening Exercise Questionnaire (MSEQ) (Shakespear-Druery et al., ), demographics, anthropometric measurements, and blood pressure. Height and body mass were measured using a stadiometer (Seca, Germany) and an electronic scale (Seca, Germany). Participants also completed a whole-body dual-energy X-ray absorptiometry (DXA) scan (Horizon-A, Hologic Inc., Danbury, CT, USA) as previously described (Davis et al., ; Wong et al., ). In addition, we quantified the thigh bone-free lean mass of the dominant leg using region of interest (ROI) analyses described previously (Hirsch et al., ). Participants were also familiarized with the exercise equipment, testing protocol, and BFR.

2.5 Strength testing visit

All participants completed their strength testing visits at least 48 h after the screening visit and, if they were resistance-trained, ~48 h or more after their last leg training session to avoid the potential confounding effect of muscle soreness on strength outcomes. All strength testing and exercise sessions were performed on the dominant leg, defined as the leg the participant felt most comfortable kicking a ball with. While participants were sitting on the isokinetic dynamometer (Biodex System 3, Shirley, NY), the total thigh length was measured from the greater trochanter to the lateral border of the patella's base.
Surface EMG (sEMG) electrodes (Biopac Systems, Inc.™, Goleta, CA) were placed on the belly of the vastus lateralis at 1/3 of the thigh length from the lateral border of the patella's base. The inter-electrode distance was 20 mm, with the positive electrode superior to the negative electrode, and the ground electrode was placed on the patella according to the SENIAM guidelines (Hermens et al., ). Using the isotonic setting, we measured the knee extension one repetition maximum (1-RM), determined as the maximum amount of torque that could be lifted through a full range of motion and quantified in Nm. We then measured the participants' peak isokinetic torque (Nm) for the knee extension at 60°/s with the range of motion set at 70° (80°–10°, where 0° = full extension). Participants then performed a knee extension isometric endurance test with the joint angle set at 60° of flexion (0° = full extension). Participants performed an MVIC for 5 s, followed by a 5 s rest, and continued for 4 min (24 total MVICs). We adopted the isometric endurance test to reduce signal noise in the sEMG readings (Armatas et al., ).

2.6 Exercise visits

All exercise visits were performed at least 48 h after the previous visit and at least 48 h after the last leg training day for trained participants to avoid the confounding effects of muscle soreness on the study outcomes. Figure presents the overall study flow for each exercise visit. Participants were instructed not to consume any food or beverages except water for at least 10 h before the start of their study days. Participants were provided a standardized breakfast (Boost™ Max, 30 g protein, 1 g sugar), which they were asked to consume 2 h before each exercise visit. The standardized meal was used to mimic a pre-workout meal. All exercise visits were performed in the morning and, when possible, at the same time of day for each participant. Upon arrival and after a 5-min rest period in a seated position, a pre-exercise blood sample was obtained by venipuncture of an antecubital vein. Next, participants completed a 5-min warm-up on a treadmill at a self-selected pace (≥1.5 mph) before being positioned on the Biodex. Participants were equipped with sEMG probes over the vastus lateralis of their dominant leg as described above (strength testing visit). The participants performed two pre-exercise MVICs with a 1-min break between them and then performed one of the three randomly assigned exercise sessions (HLRE, LLBFR, or MLBFR). After completing the assigned exercise session, two MVICs were performed at ~30 s and ~90 s post-exercise (1-min break between postexercise MVICs), followed by a postexercise blood draw. The postexercise blood draws were taken ~3–5 min after completing the exercise session. Details of the HLRE, LLBFR, and MLBFR protocols are given below.

2.7 Exercise protocols

High-load resistance exercise (HLRE): Participants performed three sets of isotonic knee extensions on the Biodex at 75% of 1-RM for 12 repetitions with a 1-min break between sets. One repetition (concentric and eccentric) was completed every 2 s to minimize sEMG signal noise.

Low-load blood flow restricted resistance exercise (LLBFR): For the LLBFR, we followed the consensus guidelines for BFR (Patterson et al., ). In brief, participants performed four sets of isotonic knee extensions on the Biodex at 25% of 1-RM for 30, 15, 15, and 15 repetitions with a 1-min break between sets, with one repetition every 2 s. The Delfi™ Personal Tourniquet System (PTS) and ~11.4 cm wide Easi-Fit Tourniquets (Vancouver, CA) were used to induce BFR. An Easi-Fit Tourniquet was attached at the most proximal portion of the exercising thigh. Each participant's limb occlusion pressure (LOP) was determined using the PTS's built-in Doppler system, which measured and regulated the cuff pressure at 60% of LOP throughout the entire exercise session. BFR was initiated immediately before the start of the first exercise set and terminated immediately following the completion of the last exercise set (~6–6.5 min of occlusion).

Medium-load blood flow restricted exercise (MLBFR): Participants performed four sets of isotonic knee extensions on the Biodex at 50% of 1-RM for 15, 8, 7, and 7 repetitions with a 1-min break between sets, with one repetition every 2 s. The BFR during the MLBFR was performed as described for the LLBFR (~4.5–5.25 min of occlusion). We chose this protocol to double the resistance exercise intensity while matching the training volume achieved during the LLBFR condition.
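Because load was prescribed relative to each participant's 1-RM, the volume matching between conditions can be verified with simple arithmetic. A minimal sketch in R follows (illustrative only; in the study, absolute volume was computed from each participant's measured 1-RM in Nm):

```r
# Relative training volume of the three protocols, in "fraction of 1-RM x reps" units
protocols <- data.frame(
  treatment = c("HLRE", "LLBFR", "MLBFR"),
  load_frac = c(0.75, 0.25, 0.50),                   # prescribed fraction of 1-RM
  reps      = c(3 * 12, 30 + 15 + 15 + 15, 15 + 8 + 7 + 7)
)
protocols$rel_volume <- protocols$load_frac * protocols$reps
protocols
#   treatment load_frac reps rel_volume
# 1      HLRE      0.75   36      27.00
# 2     LLBFR      0.25   75      18.75
# 3     MLBFR      0.50   37      18.50
```

LLBFR and MLBFR are volume-matched by design, whereas HLRE is roughly 45% higher, in line with the between-treatment differences reported in the Results.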
2.8 Blood draws and pre-exercise plasma glucose

Following an overnight fast (≥10 h) and 2 h after a standardized protein shake (Boost™ Max, 30 g protein, 1 g sugar) for breakfast, a pre-exercise venous blood sample was collected from an antecubital vein into K2EDTA tubes (BD, Franklin Lakes, NJ). All blood draws were performed in a semi-recumbent position. The whole blood was centrifuged at 500 × g for 10 min at 4°C, and the plasma was stored at −80°C until analysis. Plasma glucose concentrations were assessed using the glucose oxidase method (Analox GL5, Analox Instruments, Lunenberg, MA).

2.9 Surface electromyography (sEMG) and signal processing

Participants were fitted with portable sEMG electrodes (Biopac Systems, BIONOMADIX, Goleta, CA) on their vastus lateralis to measure muscle excitation and total muscle activation as described above. The electrodes were connected to an amplifier and digitizer (Biopac Systems, EMG-R2, Inc.™, Goleta, CA). The raw data were sampled at a rate of 2000 Hz and analyzed using AcqKnowledge 5.0 software (Biopac Systems, Inc.™, Goleta, CA). The band-pass filter was set at 5–500 Hz, and the signal was amplified (gain: ×2000). The sEMG data were analyzed using a 30 ms moving window when performing the root mean square (RMS) analyses. Muscle activations were initially identified using the locate muscle activation function in BIOPAC's EMG analysis toolkit, followed by manual clean-up to ensure that the RMS data were quantified from the onset to the offset of each muscle action (e.g., MVIC or repetition). Thus, the epochs for the MVICs were 5 s, and those for the individual repetitions were ~2 s (inclusive of both the concentric and eccentric phases). Next, the maximal RMS amplitudes (AMP) per MVIC and per repetition were quantified in mV to determine muscle excitation. In addition, the integrated area under the EMG curve (iEMG) per MVIC and per repetition was quantified in mV∙s to determine total muscle activation. During the HLRE, LLBFR, and MLBFR, the maximal RMS AMP and iEMG for each complete repetition were quantified. The maximal RMS AMP measured for each repetition was normalized to the maximal RMS AMP measured during the pre-exercise MVIC to quantify the relative muscle excitation. The iEMG measured per repetition was summed (∑iEMG) to quantify the total muscle activation per exercise session.
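To make the processing chain concrete, the sketch below reproduces the described steps in base R (the authors used AcqKnowledge; the toy signal, MVIC value, and function names here are illustrative assumptions, not their code):

```r
fs  <- 2000                          # sampling rate (Hz)
win <- round(0.030 * fs)             # 30 ms moving window -> 60 samples

# RMS envelope computed over a moving window
rms_envelope <- function(x, win) {
  n <- length(x) - win + 1
  vapply(seq_len(n), function(i) sqrt(mean(x[i:(i + win - 1)]^2)), numeric(1))
}

set.seed(1)
rep_signal <- rnorm(2 * fs, sd = 0.05)    # toy stand-in for one ~2 s repetition (mV)

env     <- rms_envelope(rep_signal, win)
rms_amp <- max(env)                       # maximal RMS AMP for the repetition (mV)
iemg    <- sum(abs(rep_signal)) / fs      # integrated rectified EMG over the epoch (mV*s)

# Normalization to the pre-exercise MVIC (hypothetical MVIC amplitude)
mvic_rms_amp   <- 0.40                    # mV
rel_excitation <- 100 * rms_amp / mvic_rms_amp   # relative muscle excitation, %MVIC
```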
2.10 Statistical analysis

Data were analyzed using RStudio (2024.04.2 Build 764). Table presents participant characteristics (mean ± SD) stratified by training status (trained vs. untrained), prepared using the tidyverse (Wickham et al., ) and gtsummary (Sjoberg et al., ) packages. Differences between the trained and untrained groups were determined using Fisher's exact test for categorical variables and Welch's two-sample t-test for continuous variables, via gtsummary (Sjoberg et al., ). Table presents the exercise data (mean ± SD) stratified by training status (trained and untrained) and treatment (HLRE, LLBFR, and MLBFR), prepared using the tidyverse (Wickham et al., ), gtsummary (Sjoberg et al., ), and flextable (Gohel & Skintzos, ) packages. Linear mixed-effects models fit by restricted maximum likelihood (REML) were used to detect differences between training statuses, treatments, and their interaction (Bates et al., ; Kuznetsova et al., ). Specifically, the lmer function in the lme4 and lmerTest packages was used to fit the linear mixed-effects models. Participant ID was included in each model as a random effect (i.e., lmer(y ~ training_status + treatment + training_status:treatment + (1 | ID))). The Kenward–Roger method was used to determine the denominator degrees of freedom (Kenward & Roger, ). Data are presented as LSMEANS ± 95% confidence intervals. Post hoc linear contrasts were performed using the emmeans package and the pairs function (Lenth, ). For the endurance test, the linear mixed models included the main effects of repetition (REP1–REP24), training status (trained and untrained), and their interaction, with participant ID as a random effect, as previously described. A similar model assessed differences in maximal RMS amplitudes measured during the muscle endurance test. Likewise, linear mixed-effects models were fit for the primary study outcomes, where the main effects were training status, treatment, and their interaction, and participant ID was a random effect. The primary outcomes were knee extensor peak torque and maximum RMS amplitude during the pre-exercise MVICs; the percent changes in peak torque and maximum RMS amplitude measured during the postexercise MVICs relative to the pre-exercise MVIC; and the relative and absolute amounts of muscle activation achieved during the three exercise treatments. The maximal RMS amplitude measured for each repetition was normalized to the maximal RMS amplitude measured during the pre-exercise MVIC to quantify the relative muscle activation, expressed as a percentage (%MVIC). The mean relative muscle activation across all repetitions within a given exercise treatment was used as the dependent variable. To quantify the total muscle activation, the iEMG measured for each repetition was summed across all repetitions (∑iEMG) within a given exercise treatment, and this ∑iEMG was used as the dependent variable. The secondary outcomes included plasma glucose and cortisol measured before and after the exercise treatments. Figures were created using the ggplot_the_response function (Walker, ). For all statistical tests, an alpha level of 0.05 was used.
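As a minimal sketch of this model specification (the long-format data frame and its column names are our assumptions, simulated here so the code runs; with pure noise the random-effect variance may be near zero and produce a singular-fit warning):

```r
library(lme4)
library(lmerTest)   # adds p-values and Kenward-Roger df (requires pbkrtest)
library(emmeans)

# Simulated long-format data standing in for the study's outcomes
set.seed(42)
dat <- expand.grid(id = factor(1:29), treatment = c("HLRE", "LLBFR", "MLBFR"))
dat$training_status <- ifelse(as.integer(dat$id) <= 15, "RT", "UT")
dat$y <- rnorm(nrow(dat), mean = 50, sd = 10)    # placeholder outcome

# Main effects, interaction, and a random intercept per participant
fit <- lmer(y ~ training_status * treatment + (1 | id), data = dat)
anova(fit, ddf = "Kenward-Roger")    # F-tests with Kenward-Roger denominator df

# Least-squares means and post hoc pairwise contrasts within training status
emm <- emmeans(fit, ~ treatment | training_status, lmer.df = "kenward-roger")
pairs(emm)
```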
2.11 Sample size considerations

A sample size of at least n = 12 per group was selected based on sample sizes from prior studies (8–12 subjects per group) (Cook et al., ; Kubo et al., ; Sousa et al., ). Using the sample size procedures outlined by Beck ( ), G*Power suggests that a sample size of 10 participants per group provides 80% power to detect an effect size of 1.0 at α = 0.05 for within-participant differences in muscle activation based on paired data (e.g., LLBFR vs. MLBFR). Likewise, a sample size of 12 participants per group provides 80% power to detect an effect size of 1.2 at α = 0.05 for between-participant differences in muscle excitation based on independent data (e.g., RT vs. UT). Although these effect sizes are often considered large, pre- to post-training effect sizes for muscle excitation and changes in strength have been reported to be greater than 1.0 following only 6 weeks of HLRE and LLBFR (Sousa et al., ).
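These figures can be approximated in base R without G*Power (a sketch assuming a two-sided α = 0.05 and sd = 1, so that delta equals Cohen's d):

```r
# Within-participant contrast: n = 10 pairs, d = 1.0
power.t.test(n = 10, delta = 1.0, sd = 1, sig.level = 0.05, type = "paired")
# -> power ~0.80

# Between-group contrast: n = 12 per group, d = 1.2
power.t.test(n = 12, delta = 1.2, sd = 1, sig.level = 0.05, type = "two.sample")
# -> power ~0.80
```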
RESULTS

3.1 Participant characteristics

The overall participant characteristics stratified by training status are presented in Table . The RT participants reported 8.6 times more vigorous (p < 0.001), 8 times more moderate (p = 0.003), and 2 times more total (p = 0.002) MET-minutes per week of physical activity. By design, the RT participants reported a greater number of resistance exercise sessions (p < 0.001) and more minutes per training session (p < 0.001) than the UT participants, as estimated by the MSEQ. However, neither the knee extension 1-RM nor the peak isokinetic torque measures differed between the RT and UT groups (p = 0.23 and p = 0.34, respectively). The RT participants had 10% more thigh lean mass than the UT participants (p = 0.03). The pre-exercise plasma glucose concentrations did not differ between treatments, training statuses, or their interaction (all p > 0.05); averaged over all treatment levels, they were 5.6 mM (5.4–5.9 mM) in the RT and 5.7 mM (5.4–5.9 mM) in the UT participants.

3.2 Isometric knee extensor endurance test

During the knee extensor endurance test, the RT participants produced 24% higher average peak torque than the UT participants (p Training Status = 0.001, Figure ). Peak torque per repetition declined throughout the endurance test (p Repetition < 0.0001, Figure ) independent of training status (p Repetition*Training Status = 0.32, Figure ). Likewise, both groups showed similar reductions in peak torque when comparing the first 4 MVICs with the last 4 MVICs (−18 ± 19% vs. −17.3 ± 9%, p = 0.89, Welch's two-sample t-test). The RT participants produced 38% higher absolute maximal RMS AMP (mV) during the knee extensor endurance test than the UT participants (p Training Status = 0.012, Figure ). However, the absolute maximal RMS AMP per repetition did not change throughout the endurance test (p Repetition = 0.86 and p Repetition*Training Status = 0.20, Figure ).

3.3 Exercise session data

The total volume (load × repetitions, or Nm × repetitions) was higher during the HLRE treatment than during both the LLBFR (46% higher, p between-treatments < 0.001) and MLBFR (46% higher, p between-treatments < 0.001) treatments, independent of training status (p Training Status = 0.28 and p Training Status*Treatment = 0.29) (Table ). By design, the total volume did not differ between the LLBFR and MLBFR treatments (p > 0.05). The total concentric work measured during the HLRE treatment was higher than during the LLBFR (22% higher, p between-treatments < 0.001) and MLBFR (32% higher, p between-treatments < 0.001) treatments, independent of training status (p Training Status = 0.66 and p Treatment*Training Status = 0.28) (Table ). The total concentric work did not differ between the LLBFR and MLBFR treatments among the RT participants (p > 0.05) (Table ). However, within the UT participants, the total concentric work was 11% higher during LLBFR than MLBFR (p between-treatments = 0.026) (Table ). The overall correlation between total volume and total concentric work was high (r = 0.82, p < 0.001). One trained participant could complete only the first two sets during the LLBFR treatment. The average heart rates during each exercise session did not differ between treatments, independent of training status (all p > 0.05) (Table ).
However, the RPEs were higher in response to the LLBFR and MLBFR treatments than to the HLRE treatment within the UT participants (p between-treatments < 0.05) (Table ).

3.4 Muscle excitation measured during the exercise sessions

The mixed-effects models revealed differences in the normalized RMS AMP (%MVIC) per training session among the three exercise treatments (p Treatment = 0.002), independent of training status (p Training Status = 0.40 and p Treatment*Training Status = 0.80) (Figure ), indicative of differences in relative muscle excitation. Specifically, the RMS AMP (%MVIC) was 26.7% higher during the HLRE than the LLBFR sessions (p between-treatments = 0.0009, averaged across all levels of training status) and 23.2% higher during the MLBFR than the LLBFR sessions (p between-treatments = 0.004, averaged across all levels of training status) (Figure ). Moreover, within the RT participants, the RMS AMP (%MVIC) was 28.2% higher during the HLRE than the LLBFR sessions (p within-training status = 0.016) and 29.1% higher during the MLBFR than the LLBFR sessions (p within-training status = 0.013) (Figure ). In addition, the RMS AMP (%MVIC) was 25.4% higher during the HLRE than the LLBFR sessions within the UT participants (p within-training status = 0.017) (Figure ).

3.5 Total muscle activation measured during the exercise sessions

The mixed-effects models revealed differences in the ∑iEMG (mV∙s) per training session among the three exercise treatments (p Treatment = 0.002) and between training statuses (p Training Status = 0.049), but not for their interaction (p Treatment*Training Status = 0.22) (Figure ), indicative of differences in total muscle activation. Specifically, the ∑iEMG was 33.9% higher in the RT than in the UT participants (p Training Status = 0.049, averaged over the levels of treatment) (Figure ). The ∑iEMG was 50.0% higher in the RT than in the UT participants during the HLRE treatment (p between-training status = 0.026) and 36.2% higher in the RT than in the UT participants during the LLBFR treatment (p between-training status = 0.043) (Figure ). The ∑iEMG was also 19.3% higher during the LLBFR than the HLRE treatment (p between-treatments = 0.034, averaged across all levels of training status) and 38.1% higher during the LLBFR than the MLBFR treatment (p between-treatments = 0.0005, averaged across all levels of training status) (Figure ). Moreover, within the RT participants, the ∑iEMG was 30.0% higher during the HLRE than the MLBFR treatment (p within-training status = 0.027) and 49.1% higher during the LLBFR than the MLBFR treatment (p within-training status < 0.001) (Figure ).

3.6 Peak torque measured during the pre- and post-exercise knee extensor MVICs

The RT participants produced 21% higher peak torque during the pre-exercise MVICs than the untrained participants (p Training Status = 0.003, averaged across all levels of treatment) (Figure ). Notably, no differences in peak torque measured during the pre-exercise MVICs were observed between treatments (p Treatment = 0.48) (Figure ). There were no differences in the percent change (%Δ) in peak torque measured during the postexercise MVIC relative to the pre-exercise MVIC between treatments (p Treatment = 0.52), training statuses (p Training Status = 0.72), or their interaction (p Treatment*Training Status = 0.47) at the 30-s postexercise timepoint (Figure ).
However, when averaged over all levels of training status, peak torque was −7.6% (p within-treatment = 0.043) and −11.6% (p within-treatment = 0.002) lower during the 30-s postexercise MVIC than during the pre-exercise MVIC following the HLRE and LLBFR conditions, respectively (Figure ). Moreover, the RT participants produced lower peak torque during the 30-s postexercise MVIC than during the pre-exercise MVIC following the HLRE (−11.1%, p within-treatment = 0.034) and LLBFR (−13.2%, p within-treatment = 0.012) treatments (Figure ). In contrast, the UT participants did not show significantly lower peak torque during the 30-s postexercise MVIC than during the pre-exercise MVIC, regardless of treatment (Figure ). Figure suggests that there were differences in the %Δ in peak torque measured during the 90-s postexercise MVIC relative to the pre-exercise MVIC between treatments (p Treatment = 0.012), but not between training statuses (p Training Status = 0.72) or for their interaction (p Treatment*Training Status = 0.16). Specifically, when averaged over all levels of training status, the peak torque measured during the 90-s postexercise MVIC was lower than during the pre-exercise MVIC following the HLRE treatment (−5.6%, p within-treatment = 0.002) (Figure ). Moreover, the UT participants showed lower peak torque during the 90-s postexercise MVIC than during the pre-exercise MVIC following the HLRE treatment (−9.4%, p within-treatment = 0.0004) (Figure ). The %Δ in peak torque following the HLRE treatment was also greater than that following the MLBFR treatment (p between = 0.001) within the UT participants (Figure ).

3.7 Muscle excitation during the pre- and post-exercise knee extensor MVICs

The RT participants had a 28.6% higher maximal RMS AMP (mV) during the pre-exercise MVICs than the UT participants (p Training Status = 0.027, averaged across all levels of treatment) (Figure ), indicative of greater absolute muscle excitation. Notably, there were no differences in the maximal RMS AMP measured during the pre-exercise MVICs between treatments (p Treatment = 0.63) or for their interaction (p Treatment*Training Status = 0.25) (Figure ). Figure suggests that there were no differences in the %Δ RMS AMP during the 30-s postexercise MVIC relative to the pre-exercise MVIC between treatments (p Treatment = 0.44), training statuses (p Training Status = 0.27), or their interaction (p Treatment*Training Status = 0.16). When averaged over all levels of training status, the maximal RMS AMP measured during the 30-s postexercise MVICs was lower than during the pre-exercise MVIC following HLRE (−13.6%, p within-treatment = 0.0003), LLBFR (−8.7%, p within-treatment = 0.019), and MLBFR (−8.7%, p within-treatment = 0.019) (Figure ). However, the RT participants showed lower maximal RMS AMP during the 30-s postexercise MVIC than during the pre-exercise MVIC following the HLRE (−20.2%, p within-treatment = 0.0001) and LLBFR (−12.6%, p within-treatment = 0.017) treatments (Figure ). The UT participants showed a lower maximal RMS AMP during the 30-s postexercise MVIC than during the pre-exercise MVIC following LLBFR (−10.5%, p within-treatment = 0.047) (Figure ). Figure suggests that there were no differences in the %Δ RMS AMP during the 90-s postexercise MVIC relative to the pre-exercise MVIC between treatments (p Treatment = 0.55), training statuses (p Training Status = 0.42), or their interaction (p Treatment*Training Status = 0.26).
The maximal RMS AMP measured during the 90‐s postexercise MVICs was lower following HLRE (−13.2%, p within‐treatment = 0.0003), LLBFR (−8.2%, p within‐treatment = 0.023), and MLBFR (−9.6%, p within‐treatment = 0.019) treatments when averaged over all levels of training status (Figure ). However, the RT participants showed lower maximal RMS AMP following the HLRE (−17.1%, p within‐treatment = 0.0006) and LLBFR (−12.4%, p within‐treatment = 0.016) treatments during the 90‐s postexercise MVIC than the pre‐exercise MVIC (Figure ). The UT participants showed a decline in maximal RMS AMP following LLBFR (−12.3%, p within‐treatment = 0.016) treatments during the 90‐s postexercise MVIC than the pre‐exercise MVIC (Figure ). 3.8 Testing for the potential confounding effect of muscle size on muscle excitation Since muscle size could be a confounding variable related to greater maximal RMS AMP in the RT group compared to the UT group (Skarabot et al., ), and prior work has normalized RMS AMP data by muscle cross‐sectional area (Keller et al., ), we performed exploratory analyses using the thigh lean mass as a covariate (Karp et al., ; Tanner, ). However, the addition of thigh lean mass as a covariate to the RMS AMP models neither changed their main effects nor their interactions and did not reach the significance level (all p > 0.05); these additional analyses were excluded from the present study.
Participant characteristics
The overall participant characteristics stratified by training status are presented in Table . The RT participants reported 8.6 times more vigorous ( p < 0.001), 8 times more moderate ( p = 0.003), and 2 times more total ( p = 0.002) MET‐minutes per week of physical activity than the UT participants. By design, the RT participants reported a greater number of resistance exercise sessions ( p < 0.001) and minutes per training session ( p < 0.001) than the UT participants, as estimated by the MSEQ. However, neither the knee extension 1‐RM nor the peak isokinetic torque measures differed between the RT and UT groups ( p = 0.23 and p = 0.34, respectively). The RT participants had 10% more thigh lean mass than the UT participants ( p = 0.03). The pre‐exercise plasma glucose concentrations did not differ between treatments, training status groups, or their interaction (all p > 0.05); averaged over all treatment levels, they were 5.6 mM (5.4–5.9 mM) in the RT and 5.7 mM (5.4–5.9 mM) in the UT participants.
Isometric knee extensor endurance test
During the knee extensor endurance test, the RT participants produced 24% higher average peak torque than the UT participants ( p Training Status = 0.001, Figure ). Peak torque per repetition declined throughout the endurance test ( p Repetition < 0.0001, Figure ) independent of training status ( p Repetition*Training Status = 0.32, Figure ). Likewise, both groups showed similar reductions in peak torque when comparing the peak torque achieved during the first 4 MVICs with the last 4 MVICs (−18 ± 19% vs. −17.3 ± 9%, p = 0.89, Welch's two‐sample t‐test). The RT participants produced 38% higher absolute maximal RMS AMP (mV) during the knee extensor endurance test than the UT participants ( p Training Status = 0.012, Figure ). However, the absolute maximal RMS AMP per repetition did not change throughout the endurance test ( p Repetition = 0.86 and p Repetition*Training Status = 0.20, Figure ).
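For transparency, the group comparison above can be sketched in a few lines of R; this is a hedged illustration, not the authors' script. The label "Welch's Two Sample t-test" matches R's default t.test output, but the software and the variable names (rt_torque_list, ut_torque_list) are assumptions.
pct_decline <- function(torque) {                      # torque: ordered MVIC peak torques
  first4 <- mean(torque[1:4])
  last4  <- mean(torque[(length(torque) - 3):length(torque)])
  100 * (last4 - first4) / first4                      # negative values indicate fatigue
}
decline_rt <- sapply(rt_torque_list, pct_decline)      # one value per RT participant
decline_ut <- sapply(ut_torque_list, pct_decline)      # one value per UT participant
t.test(decline_rt, decline_ut)                         # Welch's test is R's default (var.equal = FALSE)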
Exercise session data
The total volume (load*repetitions or Nm*repetitions) was higher during the HLRE treatment compared to both the LLBFR (46% higher, p between‐treatments < 0.001) and MLBFR (46% higher, p between‐treatments < 0.001) treatments, independent of training status ( p Training Status = 0.28 and p Training Status*Treatment = 0.29) (Table ). By design, the total volume was not different between the LLBFR and MLBFR treatments ( p > 0.05). The total concentric work measured during the HLRE treatment was higher than the LLBFR (22% higher, p between‐treatments < 0.001) and MLBFR (32% higher, p between‐treatments < 0.001) treatments, independent of training status ( p Training Status = 0.66 and p Treatment*Training Status = 0.28) (Table ). The total concentric work was not different between the LLBFR and MLBFR treatments among the RT participants ( p > 0.05) (Table ). However, the total concentric work was 11% higher during LLBFR than MLBFR ( p between‐treatments = 0.026) within the UT participants (Table ). The overall correlation between the total volume and the total concentric work was high ( r = 0.82, p < 0.001). One trained participant could only complete the first two sets during the LLBFR treatment. The average heart rates during each exercise session were not different between treatments, independent of training status (all p > 0.05) (Table ). However, the RPEs were higher in response to the LLBFR and MLBFR treatments than the HLRE treatment within the UT participants ( p between‐treatments < 0.05) (Table ).
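As a hedged illustration of the session-level derivations above (the data frame d and its column names are hypothetical placeholders), the two quantities and their reported correlation reduce to:
d$total_volume <- d$load * d$repetitions            # load*repetitions, per session
cor.test(d$total_volume, d$total_concentric_work)   # Pearson correlation, reported above as r = 0.82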
Muscle excitation measured during the exercise sessions
The mixed effects models revealed that there were differences in the normalized RMS AMP (%MVIC) per training session between the three exercise treatments ( p Treatment = 0.002), independent of training status ( p Training Status = 0.40 and p Treatment*Training Status = 0.80) (Figure ), indicative of differences in relative muscle excitation. Specifically, the RMS AMP (%MVIC) was 26.7% higher during the HLRE than the LLBFR sessions ( p between‐treatments = 0.0009, averaged across all levels of training status) and 23.2% higher during the MLBFR than the LLBFR sessions ( p between‐treatments = 0.004, averaged across all levels of training status) (Figure ). Moreover, the RMS AMP (%MVIC) was 28.2% higher during the HLRE than the LLBFR sessions ( p within‐training status = 0.016) and 29.1% higher during the MLBFR than the LLBFR sessions within the RT participants ( p within‐training status = 0.013) (Figure ). In addition, the RMS AMP (%MVIC) was 25.4% higher during the HLRE than the LLBFR sessions within the UT participants ( p within‐training status = 0.017) (Figure ).
Total muscle activation measured during the exercise sessions
The mixed effects models revealed that there were differences in ∑iEMG (mV∙s) per training session between the three exercise treatments ( p Treatment = 0.002) and training status ( p Training Status = 0.049), but not their interaction ( p Treatment*Training Status = 0.22) (Figure ), indicative of differences in total muscle activation. Specifically, the ∑iEMG was 33.9% higher in the RT than in the UT participants ( p Training Status = 0.049, averaged over the levels of treatment) (Figure ). The ∑iEMG was 50.0% higher in the RT than in the UT participants during the HLRE ( p between‐training status = 0.026) and 36.2% higher in the RT than in the UT participants during the LLBFR treatment ( p between‐training status = 0.043) (Figure ). The ∑iEMG was also 19.3% higher during the LLBFR than during the HLRE treatment ( p between‐treatments = 0.034, averaged across all levels of training status) and 38.1% higher during the LLBFR than during the MLBFR treatment ( p between‐treatments = 0.0005, averaged across all levels of training status) (Figure ). Moreover, within the RT participants, the ∑iEMG was 30.0% higher in the HLRE than in the MLBFR treatment ( p within‐training status = 0.027) and 49.1% higher in the LLBFR than in the MLBFR treatment ( p within‐training status < 0.001) (Figure ).
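The two sEMG outcomes contrasted above can be illustrated with a minimal sketch. This is not the authors' exact processing pipeline (which is specified in the Methods), only the standard definitions, assuming a sampled EMG trace emg (mV) recorded at fs Hz:
rms_amp  <- function(emg) sqrt(mean(emg^2))                        # RMS amplitude, mV
norm_rms <- function(emg, mvic_rms) 100 * rms_amp(emg) / mvic_rms  # normalized to the MVIC RMS, %MVIC
sum_iemg <- function(emg, fs) sum(abs(emg)) / fs                   # integrated (rectified) EMG, mV*s
Framed this way, the dissociation in the results is intuitive: RMS AMP reflects excitation intensity per unit time, whereas ∑iEMG accumulates over the whole session, so the higher repetition count of LLBFR can raise ∑iEMG even when its per-repetition excitation is lower.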
Peak torque measured during the pre‐ and post‐exercise knee extensor MVICs
The RT participants produced 21% higher peak torque measured during the pre‐exercise MVICs than the UT participants ( p Training Status = 0.003, averaged across all levels of treatment) (Figure ). Notably, no differences in peak torque were measured during the pre‐exercise MVICs between treatments ( p Treatment = 0.48) (Figure ). There were no differences in the percent change (%Δ) in peak torque measured during the postexercise MVIC relative to the pre‐exercise MVIC between treatments ( p Treatment = 0.52), training status ( p Training Status = 0.72), or their interaction ( p Treatment*Training Status = 0.47) at the 30‐s postexercise timepoint (Figure ). However, the peak torque was lower during the 30‐s postexercise MVIC than the pre‐exercise MVIC following the HLRE (−7.6%, p within‐treatment = 0.043) and LLBFR (−11.6%, p within‐treatment = 0.002) conditions, when averaged over all levels of training status (Figure ). Moreover, the RT participants produced lower peak torque following the HLRE (−11.1%, p within‐treatment = 0.034) and LLBFR (−13.2%, p within‐treatment = 0.012) treatments during the 30‐s postexercise MVIC compared to the pre‐exercise MVIC (Figure ). In contrast, the UT participants did not show significantly lower peak torque during the 30‐s postexercise MVIC than the pre‐exercise MVIC regardless of treatment (Figure ). Figure suggests that there were differences in the %Δ in peak torque measured during the 90‐s postexercise MVIC relative to the pre‐exercise MVIC between treatments ( p Treatment = 0.012), but not between training status ( p Training Status = 0.72) or their interaction ( p Treatment*Training Status = 0.16). Specifically, the peak torque measured during the 90‐s postexercise MVIC was lower than the pre‐exercise MVIC following the HLRE treatment (−5.6%, p within‐treatment = 0.002) when averaged over all levels of training status (Figure ). Moreover, the UT participants showed a lower peak torque during the 90‐s postexercise MVIC than the pre‐exercise MVIC following the HLRE treatment (−9.4%, p within‐treatment = 0.0004) (Figure ). Moreover, the %Δ in peak torque following the HLRE treatment was greater than the MLBFR treatment ( p between = 0.001) within the UT participants (Figure ).
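For reference, the percent change metric used in the preceding and following sections is taken here to follow the conventional definition, an assumption consistent with the negative signs reported for declines:
\%\Delta = 100 \times \frac{x_{\text{post}} - x_{\text{pre}}}{x_{\text{pre}}}
so a postexercise MVIC weaker than the pre‐exercise MVIC yields a negative %Δ.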
Muscle excitation during the pre‐ and post‐exercise knee extensor MVICs
The RT participants had a 28.6% higher maximal RMS AMP (mV) during the pre‐exercise MVICs than the UT participants ( p Training Status = 0.027, averaged across all levels of treatment) (Figure ), indicative of greater absolute muscle excitation. Notably, there were no differences in the maximal RMS AMP measured during the pre‐exercise MVICs between treatments ( p Treatment = 0.63) or their interaction ( p Treatment*Training Status = 0.25) (Figure ). Figure suggests that there are no differences in the %Δ RMS AMP between treatments ( p Treatment = 0.44), training status ( p Training Status = 0.27), or their interaction ( p Treatment*Training Status = 0.16) during the 30‐s postexercise MVIC relative to the pre‐exercise MVIC. The maximal RMS AMP measured during the 30‐s postexercise MVICs was lower than the pre‐exercise MVIC following HLRE (−13.6%, p within‐treatment = 0.0003), LLBFR (−8.7%, p within‐treatment = 0.019), and MLBFR (−8.7%, p within‐treatment = 0.019) when averaged over all levels of training status (Figure ). However, the RT participants showed lower maximal RMS AMP following the HLRE (−20.2%, p within‐treatment = 0.0001) and LLBFR (−12.6%, p within‐treatment = 0.017) treatments during the 30‐s postexercise MVIC than the pre‐exercise MVIC (Figure ). The UT participants showed a lower maximal RMS AMP following LLBFR (−10.5%, p within‐treatment = 0.047) during the 30‐s postexercise MVIC than the pre‐exercise MVIC (Figure ). Figure suggests that there are no differences in the %Δ RMS AMP between treatments ( p Treatment = 0.55), training status ( p Training Status = 0.42), or their interaction ( p Treatment*Training Status = 0.26) during the 90‐s postexercise MVIC relative to the pre‐exercise MVIC. The maximal RMS AMP measured during the 90‐s postexercise MVICs was lower following HLRE (−13.2%, p within‐treatment = 0.0003), LLBFR (−8.2%, p within‐treatment = 0.023), and MLBFR (−9.6%, p within‐treatment = 0.019) treatments when averaged over all levels of training status (Figure ). However, the RT participants showed lower maximal RMS AMP following the HLRE (−17.1%, p within‐treatment = 0.0006) and LLBFR (−12.4%, p within‐treatment = 0.016) treatments during the 90‐s postexercise MVIC than the pre‐exercise MVIC (Figure ). The UT participants showed a decline in maximal RMS AMP following the LLBFR treatment (−12.3%, p within‐treatment = 0.016) during the 90‐s postexercise MVIC relative to the pre‐exercise MVIC (Figure ).
Testing for the potential confounding effect of muscle size on muscle excitation
Since muscle size could be a confounding variable related to greater maximal RMS AMP in the RT group compared to the UT group (Skarabot et al., ), and prior work has normalized RMS AMP data by muscle cross‐sectional area (Keller et al., ), we performed exploratory analyses using the thigh lean mass as a covariate (Karp et al., ; Tanner, ). However, adding thigh lean mass as a covariate to the RMS AMP models neither changed their main effects nor their interactions, and the covariate did not reach the significance level (all p > 0.05); these additional analyses are therefore not reported further.
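As a sketch of this covariate check, assuming the mixed models were fit with R's lme4 package (the software for these models is not named in this section, so the package and all variable names are assumptions):
library(lme4)
m0 <- lmer(rms_amp ~ treatment * training_status + (1 | participant), data = d)
m1 <- update(m0, . ~ . + thigh_lean_mass)   # add the covariate to the fixed effects
anova(m0, m1)                               # refits with ML and compares the two fits
Per the text above, the covariate neither improved fit nor altered the main effects or interactions.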
DISCUSSION
We sought to determine differences in muscle excitation and total muscle activation of the vastus lateralis in resistance‐trained (RT) versus untrained (UT) college‐aged males performing acute bouts of LLBFR, MLBFR, and HLRE. Moreover, we examined whether there were differences in neuromuscular fatigue following the acute bouts of exercise. The present results suggest that the (relative) normalized muscle excitation (RMS AMP, %MVIC) measured during the HLRE and MLBFR treatments was higher than during the LLBFR treatment, independent of training status. As expected, the total muscle activation (∑iEMG) in response to the three exercise treatments was higher in the RT compared to the UT participants. The total muscle activation was also higher during the LLBFR treatment than the MLBFR and HLRE treatments. These data suggest that (i) training status had minimal impact on (relative) normalized muscle excitation in response to the three treatments, (ii) the RT participants achieved greater total muscle activation during the acute bouts of exercise, and (iii) despite lower (relative) normalized muscle excitation, LLBFR led to greater total muscle activation compared to the volume‐matched MLBFR and the HLRE treatments. Our results also show that the RT participants had lower peak torque measured during the ~30‐s postexercise MVIC following both HLRE and LLBFR, which returned to pre‐exercise levels (no change) by the 90‐s postexercise MVIC. In contrast, the UT participants only showed a lower peak torque during the ~90‐s postexercise MVIC than their pre‐exercise MVIC following HLRE. Our results also suggest that less muscle excitation was measured during the 30‐s and 90‐s postexercise MVICs following the HLRE and LLBFR treatments among the RT participants, whereas muscle excitation was lower during the 30‐s and 90‐s postexercise MVICs following the MLBFR treatment among the UT participants. These data suggest that reductions in muscle excitation may contribute to some of the exercise‐induced reductions in peak torque measured during an isometric knee extension MVIC. Although the mechanisms of resistance training‐induced muscle hypertrophy are complex and remain to be fully elucidated, the degree of muscle excitation and total muscle activation achieved during training remains a potential factor activating some of the molecular transducers of muscle hypertrophy (Schoenfeld, ). Moreover, some have suggested that blood flow restriction can enhance the relative muscle excitation induced by low‐load exercise (Hill et al., ; Loenneke et al., ), which could translate into greater muscle hypertrophy in LLBFR relative to LLRE (Davis et al., ). Consistent with our findings, another recent study showed that HLRE resulted in greater voluntary muscle excitation than LLBFR in untrained adults (Biazon et al., ); however, the exercise volume was higher in the HLRE condition compared to the LLBFR condition in both studies. Moreover, their study also showed that the HLBFR condition resulted in higher voluntary muscle excitation than the LLBFR condition (Biazon et al., ), which is consistent with our findings. Of note, our MLBFR condition was performed at 50% 1‐RM and was matched for volume to the LLBFR condition, while the HLRE was performed at 80% of 1‐RM and entailed a higher training volume than the LLBFR condition in the latter study (Biazon et al., ).
One could also argue that muscle hypertrophy may be more related to total neural drive (i.e., effort) than simple measures of muscle excitation and/or total muscle activation determined by sEMG (Morton, Colenso‐Semple, et al., ). Along these lines, recent data suggest that individuals seeking greater neural drive should perform HLRE rather than moderate LLRE (Miller et al., ). Unfortunately, measurements of total neural drive were beyond the scope of the present study. Thus, future studies are needed to measure the impact of HLRE, LLBFR, and MLBFR on total neural drive using methods outlined in Farina et al. (Farina et al., ). Exercise‐induced neuromuscular fatigue is often characterized by lower knee extensor peak torque measured during a postexercise MVIC compared to the pre‐exercise MVIC (Hill et al., ). Our data suggest that HLRE, LLBFR, and MLBFR lead to lower peak torque during the ~30‐s postexercise MVIC than the pre‐exercise MVIC when averaged across all levels of training status. In the present study, our RT participants returned to their pre‐exercise peak torque by the 90‐s postexercise timepoint. These findings are consistent with previous data suggesting that BFR‐RE causes transient reductions in force production that are quickly resolved (Husmann et al., ; Loenneke et al., ). Consistent with the present study, Loenneke et al. ( ) showed similar results; BFR‐RE significantly reduced peak torque immediately postexercise, but peak torque rebounded within an hour of recovery in RT adults (Loenneke et al., ). The studies by Loenneke and Husmann performed LLBFR at 30% 1‐RM with the traditional 30, 15, 15, and 15 repetitions. Our participants performed LLBFR at 25% 1‐RM, which may have been one of the contributing factors for our participants reaching baseline levels more quickly. Moreover, based on the pre‐exercise MVIC peak torques reported in the studies above (>250 Nm), their RT participants appear to be stronger than our RT participants (205 Nm). A recent study also demonstrated that LLRE and LLBFR performed at 20% 1‐RM lead to immediate reductions in maximal voluntary contraction force production in moderately RT adults (Pignanelli et al., ). Although the force production quickly rebounded, it remained lower than the baseline for up to 4 h. Our UT participants showed no reduction in knee extensor MVIC peak torque at 30‐s postexercise but experienced a significant decline at 90‐s postexercise in the HLRE treatment. While our findings align with some studies (Fatela et al., ), they differ from others (Hill et al., ), possibly due to variability in MVIC measurements and study power. For example, Fatela et al. ( ) examined the impact of BFR‐RE at 20% 1‐RM under a range of occlusion pressures (40%, 60%, and 80% LOP) in healthy UT adults. The primary finding of their study was that although there was an immediate reduction in knee extensor MVIC peak torque following the 80% LOP treatment, they did not observe reductions in MVIC peak torque following the 60% LOP treatment, which is comparable to the LOP implemented in the present study. In contrast, recent data suggest that both LLRE and LLBFR are sufficient to reduce the knee extensor MVIC peak torque using a protocol similar to ours (1 × 30, 3 × 15 repetition protocol) in recreationally active adults with no differences between treatments (Hill et al., ).
Although it is not readily apparent why our UT participants did not show significant declines in their knee extensor MVIC peak torques following LLBFR, it may be due to insufficient power and slightly higher than expected variability in the MVIC peak torque measurements following the LLBFR condition. However, our UT participants showed reductions in their knee extensor MVIC peak torque measurements at the 90‐s postexercise timepoint following HLRE, which differed significantly from the LLBFR and MLBFR treatments. Our data suggest that HLRE, LLBFR, and MLBFR lead to reductions in the voluntary muscle excitation measured during the ~30 and ~90‐s postexercise MVICs relative to the pre‐exercise MVIC when averaged across all levels of training status, which is consistent with prior studies. Our RT participants had significant reductions in muscle excitation measured during the postexercise knee extensor MVICs relative to the pre‐exercise MVIC following the HLRE and LLBFR conditions at the 30‐s and 90‐s timepoints. Our results are consistent with Husmann et al. ( ), who demonstrated significant reductions in voluntary muscle excitation measured during the immediate postexercise knee extensor MVIC relative to the pre‐exercise MVIC, which remained depressed until the 8‐min timepoint (Husmann et al., ). Moreover, Hill et al. ( ) also showed that LLBFR reduces voluntary muscle excitation measured during the immediate postexercise MVIC relative to the pre‐exercise MVIC in UT adults. Likewise, recent data also suggest that LLRE and LLBFR, when performed to failure, result in reductions in voluntary muscle excitation during postexercise knee extensor MVICs relative to pre‐exercise MVICs in moderately trained adults (Pignanelli et al., ). In contrast, our UT participants did not have reductions in voluntary muscle excitation measured during their postexercise MVICs relative to the pre‐exercise MVIC following the HLRE or LLBFR conditions. Consistent with our findings, Fatela et al. ( ) also showed no reduction in voluntary muscle excitation measured during the postexercise MVIC following LLBFR compared to the pre‐exercise MVIC when performed at 40% or 60% LOP (Fatela et al., ). However, the voluntary muscle excitation measured during the ~30 and ~90‐s postexercise MVICs was lower than the pre‐exercise MVIC following MLBFR in our UT participants. The reason for the reduction in voluntary muscle excitation following the MLBFR condition in our UT participants remains to be determined.
4.1 Strengths and limitations
One strength of the present study is its combined within‐ and between‐subject design. Another strength of the present study is that we used the Delfi PTS' patented LOP detection technology to measure and maintain the 60% LOP precisely during the two BFR conditions based on Doppler blood flow measurements, which trained Doppler ultrasound technicians have previously validated (Masri et al., ). Moreover, the Delfi PTS technology is designed to apply consistent pressure throughout an exercise session (Hughes et al., ). Our study has a few notable limitations. Although our RT participants had higher thigh bone‐free lean mass, pre‐exercise MVICs, and muscle endurance compared to the UT participants, their isotonic 1‐RMs were not significantly different, which could have limited our ability to observe differences in muscle excitation, total muscle activation, and neuromuscular fatigue between RT and UT participants.
Although the total muscle activation during the HLRE and LLBFR appears to be qualitatively higher in the RT than in the UT participants, these differences did not reach statistical significance. We suspect these differences may have reached the level of statistical significance if the difference in isotonic 1‐RMs had been greater and/or if we had more participants per group. Moreover, considering that our participants were healthy, college‐aged males, there may have been a few UT participants who were either naturally strong and/or under‐reported their resistance training history, further attenuating between‐training status differences in 1‐RM. Another potential limitation of the present study is that we did not measure muscle excitation during an MVIC plus maximum potentiated singlets or a peak twitch torque contraction to quantify voluntary muscle excitation more directly (Hill et al., ). Another limitation of this study was that it only included males. Future studies need to explore the impact of HLRE, LLBFR, and MLBFR on neuromuscular fatigue, muscle excitation, muscle activation, and sex‐based (male vs. female) differences in these parameters.
CONCLUSION
Our study showed that our resistance‐trained participants had greater absolute muscle excitation than untrained college‐aged males during their knee extensor MVICs. However, muscle excitation was significantly lower postexercise for all three acute exercise conditions, independent of training status. Although LLBFR resulted in lower relative muscle excitation than the HLRE or MLBFR treatments, the total muscle activation during LLBFR was higher than during both the HLRE and MLBFR treatments. This finding suggests that the greater number of repetitions combined with BFR may also be an essential driver of total muscle activation. Future studies should focus on the effects of the same training conditions used in this study on muscular fatigue, strength, excitation, and hypertrophy after a training intervention instead of an acute bout of exercise.
BD, GS, NJ, TA, and BI developed the study design; BD, GS, and NJ collected data; BD, VF, and BI analyzed the data; BD, GS, NJ, TA, and BI wrote and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
All authors report no conflicts of interest. Delfi, Inc. provided the Personalized Tourniquet System used in the present study. However, Delfi, Inc. had no input into the data analysis, interpretation, or writing of the present manuscript.
Relationship between tooth macrowear and jaw morphofunctional traits in representative hypercarnivores
We use a carnivoran study system, well-known for its strong link between tooth wear and feeding ecology , to test two hypotheses:
H 1 : Bone cracking and scavenging ecological morphologies (ecomorphs; ; ), represented among living carnivorans by large hyaenids with mechanically demanding diets, exhibit morphofunctional compensation for the decreased force-to-area ratio at worn tooth crowns for a given input muscle force as tooth wear increases. There should be a significant difference in mechanical efficiency, strain energy, and/or jaw dimensions across tooth macrowear categories. By contrast, meat specialists (represented in this study by large felids), whose diets do not impose comparably high mechanical demands, are not expected to exhibit morphofunctional compensation for tooth wear.
H 2 : The fossil taxon Hyaenodon , long interpreted as an ecological avatar of extant large-bodied hyenas, should exhibit similar relationships between tooth macrowear and morphofunctional trait variation as extant bone cracking and scavenging ecomorphs represented by some hyenas. Such similarity would reflect similar ecomorphological adaptation between Hyaenodon and extant hyaenids. It is worth noting that a Miocene hyaenid adaptive radiation produced a diversity of jackal-like and wolf-like forms, and living hyaenids include an ant-specialist ; for the purpose of this study, we focused our comparisons on the three bone cracking and scavenging hyaenid genera Crocuta, Hyaena , and Parahyaena .
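The mechanics behind H 1 can be stated as a worked relation (a restatement of the force-to-area ratio named above, not an addition to the hypotheses). For a bite force F transmitted through an occlusal contact of area A, the stress available to fracture a food item is
\sigma = \frac{F}{A}, \qquad F_{\text{required}} = \sigma_{\text{fracture}} \times A_{\text{occlusal}} ,
so as macrowear enlarges the occlusal contact area, generating the same food-fracturing stress requires a proportionally larger output force; H 1 predicts that taxa with mechanically demanding diets compensate for this geometric penalty through the jaw and muscle traits quantified below.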
All morphofunctional data analyzed in this study are based on 2D photographs of hemimandible specimens in two museum collections: the American Museum of Natural History (AMNH) and the University of Michigan Museum of Zoology (UMMZ). A total of 54 specimens representing six genera were included in the analyses . Each AMNH specimen was placed onto the scanning area of a Dell AIO A960 Flatbed Scanner in its natural resting position with the lateral side facing the scanning bed. A metric scale bar was placed next to the specimen. A color image at a resolution of 600 dpi was then captured and saved as a tiff image file. UMMZ specimen images were downloaded from the Animal Diversity Web ( https://animaldiversity.org ) under a CC BY-NC-SA 3.0 license by P. Myers.
Tooth macrowear analysis
We categorized wear stages of all canine and carnassial teeth in the dataset using the scheme defined in . Each tooth was given a score from 1 to 3, where a score of 1 indicates little to no occlusal wear with little or no dentine exposed, 2 indicates moderate occlusal wear with dentine exposure, and 3 indicates extensive occlusal wear with dentine exposure larger in area than the remaining enamel at the wear surface.
Jaw measurements
We used FIJI to take all linear measurements. Each image was opened in FIJI, calibrated by setting the scale according to the length of 10 mm on the scale bar included in each photograph, and then measured using the line tool. Jaw length was measured on all specimens as the distance between the anterior boundary of the first lower incisor and the mandibular bone, and the posterior-most point on the condylar process. Two additional measurements were taken as proxies for the bending strength of the mandibular ramus below the canine and carnassial bite positions, respectively: depth of the ramus at the post-carnassial position, and depth of the ramus at the post-canine position. Lastly, we recorded total jaw model volume from the finite element models of each specimen, the construction of which is detailed below.
Biomechanical performance estimates
Each specimen image was converted into a high contrast image that represents the jaw in black pixels and surrounding space in white pixels. We used the magnetic lasso tool in GIMP 2.10.20 to select the jaw, reversed the object selection, and removed background pixels. The high contrast image was then exported as PNG files and next converted into an outline bound by nodes within Inkscape version 0.48. The outlines were saved as .dxf files. Next, the outline shape was extruded with an arbitrary thickness of height 10 using OpenSCAD version 2014.01.29 and converted into a mesh file in STL format. The extruded shape was then improved for triangular element count, aspect ratio, and evenness in Geomagic Wrap 2019. The imported stl meshes were first refined to represent at least 60k triangular faces, then cleaned using the ‘quick smooth’ tool. The meshes were then decimated to a target triangle face count of 50k, with triangle face dimensional aspect ratio constrained to 10 or less. Lastly, the meshes were subjected to the mesh improvement tool ‘mesh doctor’ and then alternated with mesh decimation until the mesh improvement tool no longer detected any mesh issues. The final clean meshes were then exported as stl files and used for 2D finite element modeling. We used Strand7 finite element analysis software version 2.4.6 to estimate biomechanical performance traits from the extruded mandibular meshes. Meshes were checked and cleaned using the automatic clean mesh tool in Strand7. If errors were detected during this mesh cleaning step, the mesh was taken through the improvement procedure outlined in the previous paragraph and reimported into Strand7 until no errors were detected. The mesh file was then exported once again as stl files for muscle and tooth enamel mesh generation. The vetted mesh file from Strand7 was then reimported into Geomagic Wrap to generate muscle and tooth enamel mesh groups. Three muscle groups were delineated on the ascending ramus of the jaw shape based on previous descriptions of musculoskeletal anatomy in spotted hyenas and carnivorans in general . The temporalis, superficial masseter, and deep masseter muscles were included in the biting simulation models; given the 2D approach, muscles that largely contribute to lateral jaw movements such as pterygoideus muscles were not modeled. The enamel crown of the canine and cheek dentitions on the mesh models were highlighted based on the enamel crown areas visible from specimen photographs. The highlighted triangle faces were then copied and pasted as a separate mesh group to allow different material properties to be defined during the model simulation step (see below). Photographs of cranial specimens for all six genera included in the analyses were used to generate reference cranial meshes using the same protocol described above for mandible mesh generation. The reference cranial meshes (one for each genus) were then imported and scaled to each mandible mesh for mandibular muscle force contraction vector estimation. We scaled the cranial reference to each mandible mesh by aligning the dorsal face of the mandible condyle with the ventral face of the mandibular fossa on the temporal bone, and the distal face of the lower canine tooth to the mesial face of the upper canine tooth, respectively. The cranial reference mesh was then rotated away from the mandible mesh by 30 degrees, representing an average gape for carnivorans . Next, centroid points were generated for each of the three muscle groups.
Muscle origination areas were highlighted on the cranial reference mesh, extruded to a thickness of 1 mm, and a ‘center of mass’ point was calculated using the function of the same name in Geomagic Wrap. These 3D centroid points were used as a reference to create 2D centroid coordinates directly on the surface of the original 2D muscle highlights. The x and y values of the centroid coordinates were recorded for each jaw-cranial mesh combination. Next, muscle forces, joint and bite point constraints, and material properties were defined to fully parameterize the jaw model. The amount of force generated by each muscle insertion area (towards the centroid points on cranial reference meshes) was set to be proportional to the surface area represented by the muscle insertion mesh, multiplied by 0.3 N per mm 2 , based on a maximum muscle contraction force of 0.3 N/mm 2 . We used muscle insertion area as a proxy for muscle contractile force rather than estimated physiological cross section area because 3D information is not available from the 2D specimen photographs. It is important to note that the underlying assumption of our approach is that muscle insertion area is a good approximation of its force production capability. We argue that this is a reasonable assumption, as it standardizes our interspecific comparisons of biomechanical response to biting scenarios as a product of overall muscle contraction rather than species-specific muscle activation ratios, for which no empirical data are available. We used the BoneLoad program to generate distributed force vectors over muscle insertion areas to mimic muscle contraction. The force loaded meshes were then reimported into Strand7, where free body movement constraints and material properties were defined. Although all parts of the jaw model are represented by 2D plate elements, we defined a thickness of 10% of the maximum model length to enable calculation of in-plane bending stress. A negligible thickness of 0.0001 mm was assigned to muscle attachment meshes to simulate the direct action of muscle fibers pulling on the underlying bone. Young’s (elastic) modulus of 20 GPa (gigapascals) and Poisson ratio of 0.3 were assigned to the bone and muscle portions of the mesh model. The tooth enamel portion of the model was assigned a modulus of 80 GPa and Poisson ratio of 0.3. Three different bite scenarios were simulated: canine bite, canine pull, and carnassial (m1 in carnivorans, m3 in Hyaenodon ) bite . In all three cases we placed a full nodal constraint at the center of the condylar process that prevented any translational or rotational movement. In the canine bite scenario, a partial nodal constraint was placed at the tip of the canine tooth to prevent dorsoventral movement while allowing anteroposterior movement. This scenario simulated full muscle contraction during jaw closure and food contact at the tip of the canine. In the canine pull scenario, an anteriorly directed force equivalent to 10% of total muscle input force was placed at the same canine constraint as in the canine bite scenario, and all other conditions are identical to the canine bite scenario. This scenario simulated full muscle contraction during jaw closure, with a bite point at the canine and an external force from struggling prey. Lastly, in the carnassial bite scenario, the jaw joint constraint is as in the other two scenarios, but a cusp nodal constraint is placed at the carnassial paraconid instead of the canine tooth.
This scenario simulated jaw closure with full muscle contraction during mastication at the carnassial tooth. All three bite scenarios were solved using Strand7’s linear static solver function. We then extracted both qualitative and quantitative data from the three bite scenarios. Output nodal reaction forces at the tooth cusp constraints were measured for the canine and carnassial bite scenarios and divided by total input muscle force to derive mechanical efficiency. Stored strain energy (in Joules), a measure of the work done by an input load in deforming a structure under load given a set of constraint conditions, was measured for each of the three scenarios. Lastly, heatmap visualizations of von Mises stress, which summarizes the combined normal and shear stresses experienced by a structure under load, were generated from one model for each of the extant genera, and for all fossil specimens modeled.
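A minimal sketch of the loading and performance arithmetic described above is given below in R; the insertion areas and the bite reaction force are hypothetical placeholder values, and only the 0.3 N/mm 2 specific tension comes from the text.
specific_tension <- 0.3                                   # N per mm^2 of insertion area (from text)
areas <- c(temporalis = 450, superficial_masseter = 220,  # hypothetical insertion areas, mm^2
           deep_masseter = 180)
muscle_forces <- specific_tension * areas                 # input force per muscle group, N
total_input   <- sum(muscle_forces)                       # total input muscle force, N
bite_reaction <- 310                                      # hypothetical FE output reaction force, N
mech_eff <- bite_reaction / total_input                   # mechanical efficiency (output/input)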
Statistical analyses
We evaluated data support for our stated hypotheses using analysis of variance (ANOVA). The tooth macrowear categories were used as groups, and ANOVA tests were conducted separately for the canine and carnassial macrowear of a bone cracking hyaenid ( Crocuta crocuta ), two scavenging hyaenids ( Hyaena hyaena and Parahyaena brunnea ), two large meat specialist felids ( Panthera leo and Acinonyx jubatus ), and the fossil genus Hyaenodon . The five extant taxa were chosen as representative extant species in their respective ecomorphs that have been used in comparisons to, and in ecomorphological reconstructions of, fossil carnivores . We note that other, unsampled extant taxa of similar ecomorphs may not necessarily exhibit similar tooth macrowear to jaw mechanics relationships exhibited by the sampled taxa; thus, functional or evolutionary pattern extrapolations beyond the taxonomic sampling covered in this study should be done with caution. Morphofunctional traits evaluated against macrowear categories included input muscle force (in Newtons), output bite point reaction force (in Newtons), mechanical efficiency (output bite point reaction force/input muscle force), strain energy (J), total model volume (mm 3 ), jaw length (mm), and jaw width (mm). Additionally, results were visualized as boxplots using R programming packages ggplot2 and ggpubr . All statistical tests were conducted in R using the aov function in the core R library.
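A minimal sketch of one such test, using the aov call named above (the data frame dat and its column names are hypothetical placeholders):
crocuta <- subset(dat, genus == "Crocuta")                        # one taxon partition
fit <- aov(m1_mech_efficiency ~ factor(m1_wear), data = crocuta)  # macrowear category (1-3) as the grouping factor
summary(fit)                                                      # F and p, reported per trait and wear position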
Tooth macrowear analysis All but one meat specialist carnassial examined (20 out of 21) exhibited little to no macrowear. By contrast, all three categories of macrowear are recorded for the canine position of meat specialists . The majority of canine and carnassial macrowear scores are 2 or 3 in the scavenger data partition, and in bone crackers about half of the specimens have macrowear scores of 2 or 3. The majority of Hyaenodon specimens have a macrowear category of 2 or 3 in both canine and carnassial tooth positions. Jaw measurements Meat specialists in our dataset have a mean jaw length of 167.06 mm, mean jaw depth at canine of 31.60 mm, and mean jaw depth at m1 of 31.46 mm. Scavengers have a mean jaw length of 165.24 mm, canine jaw depth of 32.58 mm, and m1 jaw depth of 37.72 mm. Bone crackers have a mean jaw length of 162.97 mm, canine jaw depth of 29.88 mm, and carnassial jaw depth of 39.13 mm. Lastly, the Hyaenodon specimens studied have a mean jaw length of 179.78 mm, canine jaw depth of 24.96 mm, and carnassial jaw depth of 35.40 mm. Based on these measurements, meat specialists have a nearly 1:1 ratio of jaw depth at the carnassial vs . the canine, scavengers have 16% deeper mandibular ramus at the carnassial compared to the canine position, and bone crackers have ~30% deeper ramus at the carnassial compared to the canine position. In this regard, Hyaenodon is closest to bone crackers in having 41.6% deeper jaws at the carnassial compared to the canine position. No clear patterns of jaw measurement differences across macrowear categories are present for either the canine or carnassial data of all ecomorph partitions . Furthermore, none of the ANOVA tests returned statistically significant results ( p values range from 0.90 to 0.06; ). Biomechanical performance estimates Bone crackers at later macrowear stages tend to possess larger muscle insertion areas and therefore larger muscle input forces than other feeding ecologies, even though they are not overall the largest individuals in the dataset . Canine bite mechanical efficiency values do not exhibit clear trends across macrowear categories in any feeding ecologies; however, bone crackers show increasing carnassial bite mechanical efficiency with increasing macrowear ( , ; F = 9.31, p = 0.02). Hyaenodon exhibit increasing canine mechanical efficiency ( F = 8.95, p = 0.03; ) but no change in carnassial mechanical efficiency with increasing macrowear. Meat specialists tend to exhibit increased strain energy (lower work efficiency or stiffness) at macrowear category 3 compared to other categories in canine biting , and a larger spread of strain energy values at macrowear category 1 in carnassial bite simulations . None of the strain energy patterns are statistically significant . In m1 bite reaction force, bone crackers alone exhibit a significant increase with increasing macrowear ( F = 6.32, p = 0.04; , ), mirroring the pattern observed in m1 mechanical efficiency . Heatmap visualization of von Mises stress in exemplary jaw models shows qualitatively that meat specialists tend to experience higher stresses than other feeding ecologies. In all canine bite simulations, the largest region of elevated stress is in the transition between the horizontal and ascending rami, immediately posterior to the carnassial . Bone crackers exhibit the lowest stress in the core of the horizontal ramus compared to other ecomorphs, displaying parallel strips of elevated stress at the dorsal and ventral edges of the mandible, respectively. 
All Hyaenodon specimens studied show a similar strip of low stress region along the length of the horizontal ramus in patterns most similar to bone crackers . In canine pull simulations the overall von Mises stress distributions are similar to those observed in canine bite simulations. The major difference is a relatively more stressed ventral border along the horizontal ramus when a canine bite is combined with an anterior pulling force . The carnassial bite simulations differ from canine simulations in having more limited regions of high stress . Meat specialists and scavengers tend to exhibit a continuous path of elevated stress connecting the dorsal and ventral horizontal stress paths ventral to the carnassial bite position. The bone cracking Crocuta and the morphologically robust scavenger Parahyaena show the least amount of elevated stress along that dorsoventral path. Similarly, the von Mises stress patterns for carnassial biting in Hyaenodon specimens tend to show two separate elevated stress paths at the dorsal and ventral margins of the ramus, respectively. As expected, the unloaded region of the mandible anterior to the bite point does not show elevated von Mises stress in any of the models visualized.
Bite simulation and macrowear analyses of hypercarnivore mandible models show that for felid meat specialists and hyaenid scavengers there is no evidence of morphofunctional compensation in mandibular performance with increased tooth wear. However, there is a statistically significant increase in carnassial bite mechanical efficiency with increasing macrowear in bone cracking spotted hyenas. There is no correlation between macrowear and either jaw strain energy (a measure of work efficiency or stiffness) or jaw dimensional changes in any of the feeding ecologies studied. These results provide only partial support for our first hypothesis (H1), that bone crackers and scavengers exhibit morphofunctional compensation in mandible performance with increasing tooth macrowear whereas meat specialists do not. The extinct carnivore Hyaenodon shared no statistically significant wear‐dependent morphofunctional shifts with any of the extant feeding ecologies. Instead, the fossil taxon exhibits increased canine bite mechanical efficiency with increased tooth macrowear, differing from the bone crackers, which show a carnassial mechanical efficiency increase with macrowear. These findings provide no biomechanical support for the prior interpretation of Hyaenodon (as the name also suggests) as ecological equivalents of hyaenids in their respective paleoguilds. Therefore, we reject our second hypothesis (H2), that Hyaenodon and extant bone cracking and scavenging hyaenids share similar patterns of morphofunctional compensation with increasing tooth macrowear. Previous studies on the functional morphology of Hyaenodon suggest a semi‐arboreal locomotor ecology for H. exiguus , with comparable or more specialized dental crown features than the most specialized feliforms , reduced zygomatic arch robustness associated with capability for higher gape , and similarity to hyenas or lions in dental microwear depending on geographic region . What emerges from these studies and the new findings reported in the current study is that (1) none of the sampled taxa (large felids and bone cracking and scavenging hyaenids) converge on Hyaenodon in terms of the morphofunctional traits analyzed, and (2) there is diversity in the range of dietary ecologies within the genus Hyaenodon . Therefore, the lack of a match in the morphofunctional traits measured in this study between Hyaenodon and extant hypercarnivore feeding ecologies may reflect a combination of unique niches occupied by Hyaenodon , a possible mixture of ecomorphs represented in our Hyaenodon dataset, or more generally a fundamental phylogenetic difference in form‐function relationships between the extant carnivorans sampled and the extinct hyaenodontid lineage represented by Hyaenodon . We combined different species of Hyaenodon into a single dataset because of the small fossil sample sizes available; this may have reduced the functional morphological signal available in the data by mixing multiple ecomorphs. Future research that focuses on larger single‐taxon samples of Hyaenodon will permit a test of this interpretation. The absence of morphofunctional correlates of tooth macrowear in the meat specialist and scavenging hypercarnivore species studied indicates that either (1) tooth wear has no significant impact on biting performance, or (2) tooth wear does influence biting performance but there is no morphofunctional compensation.
In the case of meat specialists, it may be that advanced tooth macrowear is rarer than in other ecomorphs with more mechanically demanding diets , rendering morphofunctional compensation unnecessary or too subtle to be detected with the current dataset. Behavioral and natural history observations from living felids (which are collectively categorized as meat specialists) offer possible explanations for the observed macrowear patterns in the large felids studied. In some extant puma populations, both age-dependent and life stage-dependent differences in predation patterns have been observed . Dispersing pumas tend to go after smaller prey, whereas older pumas tend to take down larger prey. Both observations suggest that behavioral shifts play a role in predation and constitute another dimension of compensation for individual condition and age (which includes tooth wear) beyond morphofunctional traits. The presence of other predators, including the relative abundance of co-occurring wolves in North America, can also mediate dietary choices including the size and condition of prey species in pumas —with pumas consuming prey with the greatest range of body sizes as compared to neotropical carnivores . On the other hand, no significant dietary differences were observed among individuals of a high-density jaguar ( Panthera onca ) population . Thus, there may be a large range of behavioral plasticity that masks any morphofunctional response to decreased masticatory capability with increased tooth macrowear, at least in meat specialists. More generally, the potential presence of interactions and trade-offs between feeding and hunting (prey handling) strategies at the macroevolutionary scale may impose bounds on how tooth macrowear and jaw mechanics can vary (for example, the craniodental complex in sabertooths; ; ). The potential patterns and mechanisms underlying these complex interactions require a comprehensive examination of macrowear and jaw mechanics across a broader phylogenetic sample of meat specialists that is beyond the scope of the current study. In the bone cracking spotted hyenas ( Crocuta crocuta ), there is a range of hunting group sizes that correlates with individual age. Older spotted hyenas tend to hunt alone more frequently than younger individuals . In contrast, the scavengers brown hyenas ( Parahyaena brunnea ) and striped hyenas ( Hyaena hyaena ) tend to hunt and scavenge solo regardless of age, and instead scavenge in larger groups where all individuals access a similar food source . Our findings are consistent with these observed behavioral differences. Bone crackers (as represented by the spotted hyena, the only living taxon categorized as such a specialist by ) exhibit significantly increased carnassial bite mechanical efficiency with tooth macrowear typical of older individuals who hunt alone more frequently. Scavenging hyaenids show a non-significant increase in mechanical efficiency from the lowest macrowear category to the higher categories , and correspondingly do not show age-related differences in hunting strategy. 
Furthermore, the absence of wear‐dependent morphofunctional changes in other measured traits (jaw dimensions, canine and carnassial bite strain energy, canine bite mechanical efficiency) in living bone crackers may be in part explained by social rank‐structured feeding behavior in that species; higher‐ranked individuals have preferential access to food resources regardless of whether those same individuals were responsible for the acquisition of a particular meal . Priority access to softer parts of a prey carcass by virtue of high social rank would permit some individuals to obtain high quality food even if their masticatory system performs suboptimally for mechanically demanding tasks because of tooth wear and damage. One strength of our 2D‐based approach to estimating biomechanical performance is the ability to incorporate larger sample sizes in our finite element modeling compared to most previous studies of similar scope. The current paradigm of using FEA to correlate organismal form and function often relies on only one or two specimen models per species because of the time‐consuming nature of FE protocols . As such, no previous studies have examined individual differences in the biomechanical traits analyzed herein as a consequence of tooth wear and tear. On the other hand, the 2D modeling approach limits our examination of biomechanical performance to the dorsoventral plane. The origin of mammalian mastication/chewing has been speculated to involve pitch and yaw components that provide more nuanced movements of the hemimandibles and thus angles of occlusion . Although the plane of wear on the carnassial teeth of carnivorans and hyaenodontids is largely in the dorsoventral direction, indicating the principal movement of occlusion to be dorsoventrally oriented, there may be important shear forces on the masticatory system that the results in this study could not account for. Future studies of form‐function linkage in a tooth macrowear context would benefit from a critical analysis of the extent to which 3D information is consistent with, or adds substantially to, the 2D biomechanical data collected in the present study.
In this study we hypothesized that bone cracking and scavenging hypercarnivores should exhibit morphofunctional compensation with more severe tooth macrowear, whereas meat specialists do not have the mechanical need to make such adjustments. We found only partial support for this prediction, with results showing that the carnassial bite mechanical efficiency of bone cracking ecomorphs is the only performance attribute that is significantly correlated with the extent of tooth macrowear. We further hypothesized that the extinct carnivore Hyaenodon , commonly thought to be functionally convergent with extant hyaenids, would share similar patterns of morphofunctional compensation with tooth wear. We found that Hyaenodon is unique among the hypercarnivores studied in exhibiting a canine bite mechanical efficiency increase with tooth macrowear. The incorporation of tooth macrowear patterns into assessments of morphofunctional traits provides an explicit link between form‐function relationships at the interspecific level and tooth wear and age at the individual level. These findings suggest that, rather than treating feeding ecologies as static and characterizable by single‐specimen models, the morphofunctional trajectories of tooth use, tooth wear, and jaw mechanics can provide an added dimension of biomechanical performance profiling for a given taxon. These observations highlight the mammalian masticatory system as having a dynamic performance profile through its useful lifespan, and encourage a more nuanced understanding of past and present carnivore guilds by considering wear‐dependent performance changes as a possible source of selection.
10.7717/peerj.18435/supp-1 Supplemental Information 1 Raw morphofunctional data for all specimen models used in this study. Raw data containing model inputs/outputs from extruded 2D jaw models subjected to linear static finite element simulations. Ecomorph, carnivore category assigned to taxon. Finput_N, total input muscle force. Foutput_C_N, output node reaction force at canine tooth. MEc, mechanical efficiency (Foutput:Finput) of canine bite simulation. Foutput_m1_N, output node reaction force at carnassial tooth (lower first molar). MEm1, mechanical efficiency of carnassial bite simulation. SEc_J, stored strain energy value from canine bite simulation. SEm1_J, stored strain energy value from carnassial bite simulation. SEcpull_J, stored strain energy value from canine pull simulation. Volume_mm3, jaw model volume. L_mm, jaw model length. Dm1_mm, jaw model depth at carnassial tooth position. Dc_mm, jaw model depth at canine position. macrowear_c, macrowear score for canine tooth. macrowear_m1, macrowear score for carnassial tooth.
Association Between Single Nucleotide Polymorphisms in the Aquaporin‐4 Gene and Longitudinal Changes in White Matter Free Water and Cognitive Function in Non‐Demented Older Adults | 7638d66b-af77-4552-af9d-1c09d552ad3c | 11867789 | Cardiovascular System[mh] | Background The glymphatic system has been described as a “garbage truck” of the human brain (Nedergaard ). Studies demonstrated that cerebrospinal fluid (CSF) can flow into the perivascular spaces (PVS) surrounding penetrating arteries and enter the brain interstitial space through aquaporin‐4 (AQP4) water channels located on the endfeet of astroglial cells, where CSF and interstitial fluid (ISF) mix and efflux, taking away brain metabolic wastes (Benveniste et al. ; Iliff et al. ; Nedergaard and Goldman ). Impairment of the glymphatic system may lead to the accumulation of pathological proteins, such as amyloid‐beta (Aβ) and tau (Tiwari et al. ), the core pathologies of Alzheimer's disease (AD) (Keshavan et al. ), and contribute to the development of various neurological disorders (Fang et al. ; Vittorini et al. ). Along the glymphatic pathway, the AQP4 channels are a crucial component (Mestre et al. ). They form abundant pores in the cell membrane, creating a high permeability that is essential for the rapid CSF –ISF exchange and waste clearance (Rasmussen et al. ); Zeppenfeld et al. ( ). Utilized brain autopsy data from non‐AD and AD patients and found that the loss of perivascular AQP4 localization was associated with an increased amyloid burden. Through stereotactic injection of HiLyte 555 Tau into the brain of AQP4 KO mice, Ishida et al. (Ishida et al. ) found that tau was cleared from the brain to the CSF by an AQP4‐dependent mechanism. Interestingly, single‐nucleotide polymorphisms (SNPs) in the AQP4 gene may alter the functionality of AQP4 proteins and influence glymphatic function. Several recent studies found that variations in the AQP4 gene might accelerate Aβ deposition and cognitive decline in humans (Burfeind et al. ; Chandra et al. ; Rainey‐Smith et al. ). Nonetheless, there is a lack of human studies validating whether an altered glymphatic function mediates the association between gene variations and AD progression. Because AQP4 is also related to many other water exchange processes and degeneration mechanisms (Saadoun et al. ; Verkman ), clarifying the role of glymphatic clearance is necessary. Currently, a few non‐invasive magnetic resonance imaging (MRI) markers are considered to be related to glymphatic alterations. First, dilation of PVS, the main conduit of glymphatic flow, has been found to be associated with aging, hypertension, Aβ, and a variety of neurodegenerative disorders (Zeng et al. ). The dilation was considered a compensatory restructuring of PVS in response to reduced glymphatic flow (Brown et al. ). Second, interstitial water content assessed by free water (FW) imaging may reflect fluid stagnation due to glymphatic dysfunction. A previous study (Gomolka et al. ) found increased parenchymal water diffusivity in mice with AQP4 water channel deletion, supporting the association between parenchymal water content measured by MRI and glymphatic dysfunction. Third, diffusion tensor image analysis along the perivascular space (DTI‐ALPS) (Taoka et al. ), an index measuring fluid transport in the perivenous space along the deep medullary veins, has demonstrated a good association with glymphatic clearance measured by DCE‐MRI (Zhang, Huang, et al. ; Zhang, Zhou, et al. ). 
While it remains debated whether these markers reflect global glymphatic function, they were chosen given the paucity of available MRI markers. Several previous studies have demonstrated their close association with clinical status and pathological deposition (Hong et al. ; Kamagata et al. ; Perosa et al. ). In the present study, we aimed to investigate whether AQP4 SNPs are associated with glymphatic function and clinical progression in non‐demented older adults. Specifically, we examined (1) the association between AQP4 SNPs and glymphatic markers through both cross‐sectional and longitudinal analyses and (2) whether changes in glymphatic markers mediate the association between AQP4 SNPs and AD progression.
Materials and Methods
All procedures conducted in studies involving human participants adhered to the ethical standards set forth by the Institutional and National Research Committees, as well as the 1964 Declaration of Helsinki and its subsequent amendments or equivalent ethical guidelines. Written informed consent was obtained from all participants, their authorized representatives, and study partners prior to the initiation of any protocol‐specific procedures in the ADNI study. All data were downloaded in September 2023 from ADNI 2 & 3. Further information is available at http://www.adni‐info.org .
2.1 Study Participants
The inclusion criteria included (1) non‐demented participants, including cognitively unimpaired participants (CU, N = 126) and mild cognitive impairment participants (MCI, N = 116); the diagnoses were made by neurologists at admission. The CUs were defined as subjects who had a Clinical Dementia Rating (CDR) scale score of 0; a Mini‐Mental State Examination (MMSE) score between 24 and 30 (inclusive); Wechsler memory scale logical memory (WMS‐LM) delayed recall performance ≥ 9 for subjects with 16 or more years of education, ≥ 5 for subjects with 8–15 years of education, and ≥ 3 for 0–7 years of education; no clinical depression (GDS‐15 score < 6); and absence of dementia. MCI was defined as subjects who had preserved activities of daily living, non‐dementia, and objective cognitive impairments, as shown on the delayed recall test of the WMS‐LM, as well as a CDR score of 0.5; (2) with available T1‐weighted structural images, diffusion tensor imaging (DTI) images, and Aβ PET data; (3) with gene sequencing information; and (4) with neuropsychological assessments. Exclusion criteria were as follows (Li et al. ): (1) significant medical, neurological, and psychiatric illness, (2) head trauma history, (3) use of non‐AD‐related medication known to influence cerebral function, and (4) alcohol or drug abuse. To ensure the maximum sample size, for participants with multiple follow‐up timepoints, we considered the first timepoint that met all inclusion criteria as the baseline timepoint. Available follow‐up data (varying from 0 to 10.25 years) were also collected (see flowchart in Figure ). The vascular risk factor score (VRFs), covering hypertension, diabetes, hyperlipidemia, and smoking, was recorded based on participants' medical history. Each factor was coded as 0 (absent) or 1 (present), resulting in a total score ranging from 0 to 4.
2.2 Neuropsychological Assessment
Primary measures were global cognitive function, episodic memory, and executive function. Global cognition assessments (Oh et al. ) included the MMSE, Montreal Cognitive Assessment (MoCA), and CDR (Tsang et al. ). Logical Memory‐Delayed Recall (Tedeschi Dauar et al. ) was used to assess episodic memory. The Trail Making Test Part A and Part B (TMT‐A and TMT‐B) (Dikmen et al. ) were used to assess executive function.
2.3 Genetic Analysis
Participants carrying at least one apolipoprotein E (APOE) ε4 allele were classified as carriers, while the rest were categorized as noncarriers. Whole‐genome sequencing data were downloaded from the ADNI database. The ADNI WGS SNP data are stored in variant call format (VCF), holding the statistically relevant SNPs called using Illumina's CASAVA SNP Caller. SNP pruning was undertaken using PLINK (Purcell et al. ). Genetic variants of AQP4 underwent quality control procedures. Specifically, we removed the SNPs that were not in Hardy–Weinberg equilibrium ( p < 0.05) (Hosking et al. ) and had a minor allele frequency of < 5% (Tabangin et al. ). Linkage disequilibrium‐based SNP pruning (Hill and Robertson ) was performed to reduce statistical redundancy and maintain coverage of the AQP4 gene. From the remaining SNPs, six SNPs that had been shown to be clinically relevant in AD were chosen. The results are shown in Figure . The information on these SNPs is displayed in Table and Figure . Participants possessing one or two copies of the minor allele were classified as “carriers” for the SNP. Details are provided in the .
2.4 Imaging Acquisition
The ADNI imaging data were acquired from different centers using harmonized protocols, and the full list of acquisition parameters can be seen on the ADNI website ( https://adni.loni.usc.edu/methods/documents/mri‐protocols/ ). All imaging data were acquired using 3 T scanners from three vendors (GE, Siemens, and Philips). Here, we list some representative imaging parameters. The structural images were obtained with a 3D Spoiled Gradient Recalled Echo T1‐weighted sequence, with the following parameters: flip angle = 11°; matrix = 256 × 256 pixels; voxel size = 1 × 1 × 1.2 mm³; echo time (TE) = minimum; inversion time (TI) = 400 msec; repetition time (TR) = 7.34 msec; 196 sagittal slices. The axial T2 FLAIR sequence used the following parameters: flip angle = 90.0/125.0°; matrix = 256 × 256 pixels; voxel size = 0.9 × 0.9 × 5.0 mm³; TE = 147.9–154.0 msec; TI = 2250.0 msec; TR = 11000.0 msec. DTI images were obtained with a spin echo sequence using the following imaging parameters: b = 0/1000 s/mm²; non‐diffusion‐weighted images = 5; 41 MPG axes; TR = 9050 msec; TE = 63.0 msec; flip angle = 90°; in‐plane voxel size = 1.37 × 1.37 mm; slice thickness = 2.70 mm; 46 axial slices. The amyloid PET was performed using two tracers, florbetapir or florbetaben. The detailed acquisition procedures are described in the ADNI PET Technical Procedures Manual ( http://adni.loni.usc.edu/wp‐content/uploads/2010/05/ADNI2_PET_Tech_Manual_0142011.pdf ). For the cutoffs of brain amyloid, the values were 1.11 for 18F‐florbetapir (18[F]‐AV45) SUVR and 1.08 for 18F‐florbetaben (FBB) SUVR (Royse et al. ); according to these cutoffs, participants were categorized as Aβ positive (A+) or Aβ negative (A−). For continuous brain amyloid burden, cortical amyloid SUVRs obtained from the different PET tracers were harmonized by UC Berkeley and Lawrence Berkeley National Laboratory. These SUVRs were normalized using the whole cerebellum and then transformed into Centiloids.
2.5 Imaging Analysis
2.5.1 PVS Visual Rating
PVS in the basal ganglia (BG) and white matter (WM) regions were assessed on T1‐weighted images by two postgraduate students (LX; LL) according to a previously proposed rating scale (Zhu et al. ). Briefly, in the BG region, PVS was rated 1: < 5 enlarged PVS (ePVS); rated 2: 5–10 ePVS; rated 3: > 10 ePVS but still countable; and rated 4: uncountable. In the WM region, PVS severity was rated 1: < 10 ePVS in total; rated 2: > 10 ePVS in total but no more than 10 ePVS in a single section; rated 3: 10–20 ePVS in the section containing the greatest number of ePVS; and rated 4: > 20 ePVS in any single section.
2.5.2 DTI‐ALPS Calculation
The processing of DTI data was conducted using FSL 6.0 ( https://fsl.fmrib.ox.ac.uk/fsl ). The preprocessing steps included skull stripping, denoising, removal of Gibbs artifacts, EPI distortion correction, and eddy current correction. DTI‐ALPS was calculated following a previously described method (Taoka et al. ). Briefly, the FA maps and diffusivity maps [Dxx, Dyy, Dzz] were obtained from the preprocessed diffusion data using dtifit and then co‐registered to the MNI space through the b0 images. Four regions of interest (ROIs), each containing five voxels (40 mm³), were manually placed in the areas of the bilateral projection fibers (proj) and association fibers (assoc). The left and right ALPS indices were calculated as [(Dxx‐proj + Dxx‐assoc) / (Dyy‐proj + Dzz‐assoc)] on each side. The mean ALPS index is defined as the average of the bilateral ALPS indices.
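For illustration, the ALPS‑index arithmetic reduces to a few lines of R; the function below is a minimal sketch, and the diffusivity values are hypothetical placeholders rather than measured ROI means.

```r
# Minimal sketch of the ALPS-index arithmetic; inputs are the mean diffusivities
# sampled from the four 5-voxel ROIs on the MNI-space Dxx/Dyy/Dzz maps
alps_index <- function(Dxx_proj, Dxx_assoc, Dyy_proj, Dzz_assoc) {
  (Dxx_proj + Dxx_assoc) / (Dyy_proj + Dzz_assoc)
}

# Hypothetical ROI means (mm^2/s) for each hemisphere
left  <- alps_index(0.00110, 0.00105, 0.00060, 0.00070)
right <- alps_index(0.00108, 0.00102, 0.00062, 0.00068)

mean_alps <- mean(c(left, right))   # mean ALPS index used in the analyses
```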
2.5.3 FW Calculation
The FW map was calculated using the script from MarkVCID ( https://markvcid.partners.org/ ), a consortium of US academic medical centers whose mission is to identify and validate biomarkers for the small vessel diseases of the brain that produce vascular contributions to cognitive impairment and dementia. FW was estimated using a two‐compartment model (Pasternak et al. ) that fits the diffusion data of water molecules to two tensors. First, the extracellular FW was quantified to obtain the isotropic FW compartment and the FW volume fraction. Second, after removing the influence of FW, the anisotropic tissue compartment was obtained through a second round of DTI modeling, yielding FW‐corrected DTI indices. The FW map represents the fractional volume (ranging from 0 to 1) of the FW compartment in every voxel. Finally, the generated FW maps were registered to the structural images through the b0 images, and the mean FW in the WM was extracted.
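As an illustrative sketch (not part of the MarkVCID script), extracting the mean WM FW from a registered FW map could look as follows in R; the file names and the binary WM mask are assumptions.

```r
# Illustrative extraction of mean white-matter FW; file names and the binary
# WM mask are assumptions, not outputs guaranteed by the MarkVCID pipeline
library(RNifti)

fw <- readNifti("sub-001_FW.nii.gz")        # voxelwise FW fraction (0-1)
wm <- readNifti("sub-001_WM_mask.nii.gz")   # WM mask in the same space

mean_fw_wm <- mean(fw[wm > 0])              # mean FW over WM voxels
```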
2.5.4 Statistical Analysis
All statistical analyses were performed in R (version 4.3.0) and SPSS Statistics, Version 25.0 (IBM). Demographic characteristics between groups were compared using the t‐test for normally distributed continuous variables, the Mann–Whitney U test for non‐normally distributed continuous variables, and chi‐square tests for categorical variables. To reduce site effects, we conducted multicenter data harmonization on the diffusion metrics FW and DTI‐ALPS using the ComBat method, with age and sex included as biological variables (Fortin et al. ). To remove possible influences of WM microstructural properties on the DTI‐ALPS assessment (Huang et al. ), we also included the whole‐brain WM mean diffusivity (MD) as a covariate in the analyses involving the DTI‐ALPS index. All analyses were performed in the whole group, the A+ group, and the A− group separately.
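A minimal sketch of the harmonization step is shown below, using the neuroCombat R package that implements the method of Fortin et al.; all data objects are simulated placeholders standing in for the real ADNI variables.

```r
# Sketch of site harmonization with neuroCombat; data below are simulated
library(neuroCombat)

set.seed(1)
n    <- 242
fw   <- rnorm(n, 0.20, 0.02)                       # hypothetical FW values
alps <- rnorm(n, 1.40, 0.10)                       # hypothetical ALPS values
site <- sample(paste0("site", 1:10), n, replace = TRUE)
age  <- rnorm(n, 73, 7)
sex  <- factor(sample(c("F", "M"), n, replace = TRUE))

dat <- rbind(FW = fw, ALPS = alps)                 # features x subjects
mod <- model.matrix(~ age + sex)                   # biology to preserve

harmonized <- neuroCombat(dat = dat, batch = site, mod = mod)
dat_combat <- harmonized$dat.combat                # harmonized FW / ALPS
```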
2.5.5 Correlation Among the Three Imaging Markers
First, we tested the correlations among the three imaging markers. Pearson correlation was used to investigate the correlation between FW and DTI‐ALPS, and Spearman's correlation was used to investigate the correlations of FW and DTI‐ALPS with the PVS scores.
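For illustration, these tests correspond to the following base-R calls, run here on a simulated stand-in for the per-subject data frame (the column names are hypothetical).

```r
# Correlation tests on a simulated stand-in for the per-subject data frame
set.seed(1)
df <- data.frame(FW     = rnorm(242, 0.20, 0.02),
                 ALPS   = rnorm(242, 1.40, 0.10),
                 PVS_BG = sample(1:4, 242, replace = TRUE))  # ordinal rating

cor.test(df$FW, df$ALPS, method = "pearson")     # FW vs DTI-ALPS
cor.test(df$FW, df$PVS_BG, method = "spearman")  # FW vs BG-PVS rating (ties
                                                 # trigger an exact-p warning)
```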
2.5.6 The Association Between AQP4 SNPs and Glymphatic Markers
For the baseline cross‐sectional analysis, multiple linear regression was used to assess the associations between AQP4 SNPs and glymphatic markers, with age, sex, and VRFs as covariates.
Model1: glymphatic marker ~ SNP1 + … + SNP6 + age + sex + VRFs.
For the association between AQP4 SNPs and longitudinal changes in each glymphatic marker, linear mixed models (lme4 package in R) were employed, with AQP4 SNPs and their interactions with time as independent variables and each glymphatic marker as the dependent variable. Age, sex, and VRFs were introduced as covariates.
Model2: glymphatic marker ~ SNP1 × time + … + SNP6 × time + SNP1 + … + SNP6 + time + age + sex + VRFs + (1 + time | Subject).
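As a minimal sketch of Model2, the lme4/lmerTest call could be written as follows on simulated longitudinal data; `long` is a hypothetical long-format data frame with one row per subject visit, 0/1 SNP carrier indicators snp1 to snp6, and time in years from baseline. Models 3–6 below follow the same pattern.

```r
# Sketch of Model2 on simulated longitudinal data
library(lmerTest)   # lmer() with p values for fixed effects

set.seed(1)
n <- 242; visits <- 4
long <- data.frame(
  Subject = rep(seq_len(n), each = visits),
  time    = rep(0:(visits - 1) * 2, n),              # years from baseline
  age     = rep(rnorm(n, 73, 7), each = visits),
  sex     = rep(sample(0:1, n, TRUE), each = visits),
  VRFs    = rep(sample(0:4, n, TRUE), each = visits)
)
for (s in paste0("snp", 1:6)) long[[s]] <- rep(sample(0:1, n, TRUE), each = visits)
long$FW <- 0.2 + 0.002 * long$time + rnorm(nrow(long), 0, 0.02)

m2 <- lmer(FW ~ (snp1 + snp2 + snp3 + snp4 + snp5 + snp6) * time +
             age + sex + VRFs + (1 + time | Subject), data = long)
summary(m2)   # the snp x time rows test SNP effects on FW change
```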
2.5.7 The Association Between the AQP4 SNP‐Related Glymphatic Marker and Clinical Characteristics
For the baseline cross‐sectional analysis, multiple linear regression was used to assess the associations between the AQP4 SNP‐related glymphatic marker (i.e., FW) and amyloid accumulation and cognitive performance. When analyzing the association between FW and amyloid accumulation, age, sex, VRFs, and APOE ε4 carrier status were included as covariates. When analyzing the association between FW and cognitive performance, education was additionally added as a covariate. The equations for each model were as follows:
Model3: Amyloid PET Centiloids ~ FW + age + sex + VRFs + APOE ε4.
Model4: cognitive measure ~ FW + age + sex + VRFs + APOE ε4 + education.
Linear mixed models were also employed to investigate the associations between baseline FW and changes in amyloid accumulation and cognition. When analyzing the association between baseline FW and changes in amyloid accumulation, age, sex, VRFs, and APOE ε4 status were included as covariates. When analyzing the association between baseline FW and the changes in cognition, education was additionally included as a covariate.
Model5: Amyloid PET Centiloids ~ FW × time + FW + time + age + sex + VRFs + APOE ε4 + (1 + time | Subject).
Model6: cognitive measure ~ FW × time + FW + time + age + sex + VRFs + education + APOE ε4 + (1 + time | Subject).
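A sketch of Model6 with extraction of the FW × time interaction term could look as follows, continuing the simulated `long` data from the previous sketch with hypothetical outcome and covariate columns.

```r
# Sketch of Model6 and extraction of the FW x time interaction
library(lmerTest)

long$education <- rep(sample(12:20, 242, TRUE), each = 4)
long$APOE4     <- rep(sample(0:1, 242, TRUE), each = 4)
long$FW0 <- ave(long$FW, long$Subject, FUN = function(x) x[1])  # baseline FW
long$CDR <- 0.1 + 0.3 * long$FW0 * long$time / 10 + rnorm(nrow(long), 0, 0.2)

m6 <- lmer(CDR ~ FW0 * time + age + sex + VRFs + education + APOE4 +
             (1 + time | Subject), data = long)
summary(m6)$coefficients["FW0:time", ]            # beta, t, and p
confint(m6, parm = "FW0:time", method = "Wald")   # 95% CI, as reported
```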
2.5.8 Mediation Analyses
After the previous steps, we found associations between the SNPs, FW, and cognitive measures. To test whether FW mediated the associations between SNPs and cognitive measures, we built several mediation models (Figures ). In the Aβ positive group: (1) SNP rs72878794 → FW → CDR ratio, (2) SNP rs72878794 → FW → Logical Memory‐Delayed Recall score ratio, and (3) SNP rs72878794 → FW → TMT‐A time ratio. In the Aβ negative group: (1) SNP rs9951307 → FW → CDR ratio, and (2) SNP rs9951307 → FW → TMT‐A time ratio. Specifically, we used linear mixed models to extract the annual change ratios of FW and the cognitive measures and used the R “mediation” package to build and analyze the mediation models. In all models, the AQP4 SNP was the independent variable, FW was the mediator, and the cognitive measure was the dependent variable. On the path SNP → FW, we adjusted for age, sex, and VRFs; on the path SNP → cognition, we adjusted for age, sex, VRFs, education, and APOE ε4 status. A 95% bootstrap confidence interval based on 10,000 bootstrap replicates was used to estimate significance. The FDR correction method was used for multiple‐comparison correction between glymphatic markers and cognitive scores at the level of the three groups to control false positives. Considering the explorative nature of the current study, we report all statistical results in the paper. The p‐value for statistical significance was set at 0.05, two‐tailed.
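For illustration, one of these mediation models could be specified as below; the variable names are hypothetical per-subject annual change ratios on simulated data, and the package is distributed on CRAN as mediation.

```r
# Sketch of one mediation model (SNP -> FW change -> CDR change) on simulated
# per-subject annual change ratios; names are hypothetical
library(mediation)

set.seed(1)
n <- 95
subj <- data.frame(
  snp       = sample(0:1, n, TRUE),        # minor allele carrier status
  age       = rnorm(n, 73, 7),
  sex       = sample(0:1, n, TRUE),
  VRFs      = sample(0:4, n, TRUE),
  education = sample(12:20, n, TRUE),
  APOE4     = sample(0:1, n, TRUE)
)
subj$FW_ratio  <- 0.01 - 0.004 * subj$snp + rnorm(n, 0, 0.01)
subj$CDR_ratio <- 0.02 + 0.5 * subj$FW_ratio + rnorm(n, 0, 0.02)

med.fit <- lm(FW_ratio ~ snp + age + sex + VRFs, data = subj)      # SNP -> FW
out.fit <- lm(CDR_ratio ~ snp + FW_ratio + age + sex + VRFs +
                education + APOE4, data = subj)                    # -> cognition

med <- mediate(med.fit, out.fit, treat = "snp", mediator = "FW_ratio",
               boot = TRUE, sims = 10000)   # 10,000 bootstrap replicates
summary(med)   # ACME is the indirect (mediated) effect with bootstrap 95% CI
```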
Results
3.1 Demographics
This study included 242 non‐demented subjects, among which 95 subjects were A+ and 147 subjects were A−. Subjects in the A+ group were older, carried more APOE ε4 alleles, and had higher amyloid PET Centiloids than the A− group. Subjects in the A+ group had significantly worse cognitive performance on the MoCA, TMT‐A, and TMT‐B than the A− group. There were no significant differences in glymphatic markers between the A+ group and the A− group; please see details in Table . Among the three imaging markers (Figure ), BG‐PVS was correlated with FW ( r = 0.190, p = 0.003), and FW was correlated with DTI‐ALPS ( r = −0.428, p < 0.001).
3.2 Associations Between AQP4 SNPs and the Glymphatic Imaging Markers
At baseline, none of the AQP4 SNPs was associated with glymphatic markers in the three groups. The detailed results are shown in Tables –S3. For the longitudinal analysis, PVS ratings were not included due to the high PVS burden at baseline. As shown in Table , PVS rating scores in both the BG and WM regions were high (3 ~ 4) in the whole group, the A+ group, and the A− group, which introduced a ceiling effect and limited statistical power. Therefore, we only analyzed the changes in FW and DTI‐ALPS. As shown in Table , in the whole group, there was no association between AQP4 SNPs and FW changes. In the A+ group, rs72878794 minor allele carrier status was associated with a slower increase in FW (SNP × time: β = −0.0040, t(46.25) = −2.062, p = 0.045, 95% CI = −0.0078 ~ −0.0001). In the A− group, we observed that rs9951307 minor allele carrier status was associated with a faster increase in FW (SNP × time: β = 0.0033, t(81.19) = 2.245, p = 0.027, 95% CI = 0.0004 ~ 0.0062). No significant associations were found between the AQP4 SNPs and DTI‐ALPS changes.
3.3 Associations Between FW and Clinical Variables
In the cross‐sectional analysis of the whole group (Table and Figure ), a higher FW was associated with a lower MMSE score (β‐std = −0.162, t(232) = −2.207, p = 0.028, 95% CI = −0.307 ~ −0.017), a lower MoCA score (β‐std = −0.190, t(235) = −2.782, p = 0.006, 95% CI = −0.325 ~ −0.056), a higher CDR score (β‐std = 0.226, t(233) = 3.150, p = 0.002, 95% CI = 0.084 ~ 0.367), a lower Logical Memory‐Delayed Recall score (β‐std = −0.256, t(140) = −2.966, p = 0.004, 95% CI = −0.427 ~ −0.085), and a longer TMT‐B time (β‐std = 0.223, t(234) = 3.282, p = 0.001, 95% CI = 0.089 ~ 0.357). For the A+ group (Table and Figure ), more details can be found in the . For the A− group (Table and Figure ), we observed associations between a higher FW and a lower MoCA score (β‐std = −0.197, t(140) = −2.202, p = 0.029, 95% CI = −0.373 ~ −0.020), a higher CDR score (β‐std = 0.313, t(138) = 3.427, p = 0.001, 95% CI = 0.133 ~ 0.494), a lower Logical Memory‐Delayed Recall score (β‐std = −0.039, t(81) = −2.716, p = 0.008, 95% CI = −0.536 ~ −0.083), and a longer TMT‐B time (β‐std = 0.255, t(139) = 2.974, p = 0.003, 95% CI = 0.085 ~ 0.424). For the longitudinal analysis in the whole group (Table ), a higher baseline FW was associated with a faster decline in MMSE score (FW × time: β = −2.952, t(119.06) = −3.425, p = 0.001, 95% CI = −4.679 ~ −1.239), a faster increase in CDR score (FW × time: β = 0.550, t(97.28) = 5.799, p < 0.001, 95% CI = 0.361 ~ 0.737), and a faster increase in TMT‐A completion time (FW × time: β = 20.491, t(245.26) = 4.543, p < 0.001, 95% CI = 11.564 ~ 29.354).
In the A+ group (Table ), a higher baseline FW was associated with a faster increase in CDR score (FW × time: β = 0.675, t(36.68) = 4.087, p < 0.001, 95% CI = 0.330 ~ 1.006), a slower decline in Logical Memory‐Delayed Recall score (FW × time: β = 7.152, t(32.11) = 2.451, p = 0.020, 95% CI = 1.256 ~ 13.066), and a faster increase in TMT‐A completion time (FW × time: β = 28.398, t(41.90) = 2.560, p = 0.014, 95% CI = 6.255 ~ 50.780). In the A− group (Table ), higher baseline FW was likewise associated with a faster increase in CDR score (FW × time: β = 0.447, t(60.11) = 3.869, p < 0.001, 95% CI = 0.218 ~ 0.679) and a faster increase in TMT‐A completion time (FW × time: β = 18.070, t(50.06) = 3.369, p = 0.001, 95% CI = 6.962 ~ 28.750). In both cross‐sectional and longitudinal analyses, we did not observe any association between FW and amyloid accumulation; the results are shown in Table .

3.4 Mediation Analyses Between AQP4 SNPs, FW, and Cognitive Performance

FW did not show a mediating role between AQP4 SNPs and cognition in any model. Specifically, in the Aβ positive group, the effect of rs72878794 on CDR via FW was not significant (β = −0.0047, p = 0.08); the effect of rs72878794 on Logical Memory‐Delayed Recall via FW was not significant (β = 0.0246, p = 0.45); and the effect of rs72878794 on TMT‐A via FW was not significant (β = −0.1710, p = 0.34). In the Aβ negative group, the effect of rs9951307 on the CDR ratio via FW was not significant (β = −0.0005, p = 0.56), and the effect of rs9951307 on the TMT‐A ratio via FW was not significant (β = 0.0557, p = 0.08). Detailed results are shown in Figures .
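For readers unfamiliar with the interaction terms reported above, the following minimal R sketch shows how a "FW × time" (or "SNP × time") effect can be specified. The variable names are assumptions, and the covariates mirror the adjustments named in the methods; the Satterthwaite degrees of freedom produced by lmerTest are consistent with the non-integer df reported above.

    library(lme4)
    library(lmerTest)  # lmer with t-tests and Satterthwaite degrees of freedom

    # Baseline FW predicting the rate of cognitive change: the
    # FW_baseline:time coefficient is the "FW x time" effect.
    fit <- lmer(CDR ~ FW_baseline * time + age + sex + vrf + education +
                  apoe4 + (1 + time | subject), data = long_df)
    summary(fit)

    # FDR adjustment across the cognitive measures within each group
    p_adj <- p.adjust(p_vec, method = "fdr")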
Discussion

In this study, we explored the effects of AQP4 SNPs on glymphatic markers and AD progression in non‐demented participants. Results showed that AQP4 SNP rs72878794 minor allele carrier status was associated with a slower increase in FW in the Aβ positive group, whereas AQP4 SNP rs9951307 minor allele carrier status was associated with a faster increase in FW in the Aβ negative group. FW was associated with global cognition, memory, and executive function in both cross‐sectional and longitudinal analyses. These results may provide insights into the influence of AQP4 SNPs on the glymphatic system and AD progression.

This is the first in vivo study revealing an effect of AQP4 SNPs on WM FW. FW is a diffusion imaging marker that measures the fraction of water content with unrestricted and isotropic diffusion, and an elevated FW may reflect an increase in ISF (Duering et al. ; Zhang, Huang, et al. ; Zhang, Zhou, et al. ). Because AQP4 proteins are crucial for CSF–ISF exchange, their structural and functional variations may alter water permeability and change glymphatic flow. A previous study found that mice with genetic deletion of AQP4 (AQP4 KO) had larger interstitial spaces and higher brain water content, despite a similar CSF production rate and vascular density (Gomolka et al. ). Furthermore, the increased interstitial space was associated with higher water diffusivity. These results suggested that AQP4 KO could reduce glymphatic flow and lead to stagnation of fluid in the interstitial space. Although this evidence is useful for understanding the modulation of glymphatic flow and for developing potential treatments, no study has investigated the effect of AQP4 SNPs on glymphatic function in humans, where the effects are likely mild compared with those of extreme gene knock‐out approaches.

AQP4 is encoded by a 3 kb gene located on chromosome 18. Depending on which of two transcription start sites is selected, it can be translated into the AQP4‐M1 or AQP4‐M23 isoform; the isoforms differ in structural characteristics and in their ability to form orthogonal arrays of particles, thereby affecting the water permeability of AQP4 (Nagelhus and Ottersen ; Smith et al. ). rs72878794 is located in the promoter region upstream of exon 0 of the AQP4 gene (Figure ). Its polymorphism may lead to differential expression of the isoforms and subsequent downstream events (Zhang, Huang, et al. ; Zhang, Zhou, et al. ; Hook‐Barnard and Hinton ). In the cortex of AD patients, a decrease in the M1‐to‐M23 isoform ratio was associated with changes in the localization of AQP4 (Zeppenfeld et al. ), but exactly how these changes influence AD pathology accumulation awaits further investigation. Chandra et al. found that AQP4 SNP rs72878794 minor allele carrier status was associated with decreased Florbetapir SUVRs, and they inferred that altered glymphatic function might mediate the association (Chandra et al. ). Here, we found that rs72878794 minor allele carrier status was associated with a slower longitudinal increase in FW. Consistent with Chandra's study, this result implies a protective role of rs72878794 minor allele carrier status.

rs9951307 is located at the C‐terminal end of AQP4, approximately 15 kb downstream of the AQP4 UGA canonical stop codon (Rainey‐Smith et al. ), in a region that lacks known transcription factor‐binding sites. Burfeind's study (Burfeind et al. ), using a longitudinal aging cohort, found that rs9951307 minor allele carrier status was associated with slower cognitive decline in subjects diagnosed with AD. However, we found that rs9951307 minor allele carrier status was associated with a faster, and thus detrimental, increase in FW in the Aβ‐negative group. Several factors may explain this discrepancy. Burfeind's study included subjects diagnosed with AD; because clinically diagnosed AD represents a late disease stage, complex degenerative pathologies may be involved and may obscure the effect of AQP4 SNPs. Studying non‐demented populations may reduce these confounding factors and provide insight into early pathological mechanisms. Our finding is consistent with a study showing that rs9951307 minor allele status was associated with severe brain edema in stroke patients (Kleffner et al. ). Since rs9951307 does not change the amino acid sequence, it is unlikely to affect the function of AQP4. Its association with clinical features might instead be due to other gene variations in linkage disequilibrium with rs9951307, but further experiments are needed to gain insight into the detailed mechanisms.

FW has been found to be sensitive to cognitive impairment. Maillard et al. (Maillard et al. ) found that higher baseline FW was associated with lower global cognition, memory, and executive function, and that longitudinal changes in FW were also associated with cognitive decline. Similar relationships have been reported in community cohorts, in patients with cerebrovascular disease (Huang et al. ), and across the AD continuum (Kamagata et al. ; Zhu et al. ). In this study, we confirmed these associations in both cross‐sectional and longitudinal analyses. However, there was no significant correlation between FW and amyloid accumulation, and we did not investigate the association between FW and tau because of the small sample size. It should be noted that ISF stagnation may influence the clearance of a wide range of metabolic wastes and toxic proteins and may cause brain degeneration through pathological mechanisms other than amyloid accumulation. Further studies are needed to clarify the detailed mechanism.

We did not observe any associations between the SNPs and PVS or ALPS. Although the three imaging markers are all related to the glymphatic system, they reflect different structural or functional properties, as discussed above. These properties are influenced by distinct pathological processes, such as arterial stiffness or venous collagenosis. As a result, the three markers may not change simultaneously under a specific disease condition but may instead display varied alteration patterns across different neurological diseases.

This study is subject to several limitations. First, we included only SNPs that are associated with AD, and thus we cannot exclude a potential impact on our results of previously reported SNPs that are not associated with AD. Second, because of the requirement for both genetic data and multi‐sequence imaging data, the sample size was relatively small for uncovering associations between genetic variations and clinical or imaging features. This limitation may explain the lack of significant findings regarding PVS and DTI‐ALPS. Similarly, although AQP4 SNPs were associated with FW, and FW was associated with cognitive scores, we did not observe a mediation effect, possibly due to a small effect size.
While ADNI is already the largest multi‐center project focused on AD, a substantial portion of the data has not yet been released, and future studies with larger sample sizes are necessary to validate our findings. Third, the involvement of multiple research centers may contribute to variability in diffusion metrics; however, as the ADNI acquisition protocol has been harmonized by a specialized imaging core and we performed multi‐center harmonization, these differences should have been minimized. Lastly, although we used the three most widely used markers to represent glymphatic‐related changes, the pathophysiological underpinnings of these markers are still undergoing validation, and caution should be exercised when interpreting the results.
Conclusion

AQP4 SNPs are associated with FW accumulation, which is in turn associated with longitudinal cognitive decline.
ADNI Consortia Information

Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp‐content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf .
Lingyun Liu and Qingze Zeng: conceptualization; methodology; writing – review and editing; data curation. Xiao Luo: conceptualization; methodology; data curation. Hui Hong: methodology; writing – review and editing. Yi Fang: conceptualization; methodology. Linyun Xie, Yao Zhang, Miao Lin, Shuyue Wang, Kaicheng Li, and Xiaocao Liu: data curation. Ruiting Zhang and Yanxing Chen: writing – review. Yunjun Yang and Peiyu Huang: conceptualization; methodology; writing – review and editing.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from all participants or their authorized representatives, and from study partners, before any protocol‐specific procedures were carried out in the ADNI study. More details can be found at http://www.adni‐info.org .
The authors have nothing to report.
The authors declare no conflicts of interest.
Data S1.
Telehealth for Pediatric Cardiology Practitioners in the Time of COVID-19
While the current COVID-19 pandemic has generated urgency in adapting patient care within the confines of personal distancing, significant contributions to the practice of telemedicine in the field of pediatric cardiology have previously been reported. Telemedicine is defined as the "specific application of technology to conduct clinical medicine at a distance and establishment of a connection between physicians and patients in a multitude of settings" . This includes its value in fetal echocardiography and fetal cardiac monitoring, neonatal consultation for the sick newborn with suspected congenital heart disease, the care of pediatric patients presenting for follow-up care or to the local emergency department with acquired heart disease, and providing expertise in the management of the adult congenital patient . Multiple publications have detailed the ability of telehealth to extend subspecialty availability and expertise to rural or community practices without an on-site pediatric cardiologist . A study of telehealth in a Portuguese pediatric cardiology practice noted improved access for patients living in rural areas and the ability of global outreach to cardiologists working in low- and middle-income countries . Reviews also exist of the current technologies vital to the day-to-day operation of a robust telecardiology service, including tele-echocardiography, tele-auscultation, and remote rhythm monitoring . Reports offering guidance in creating a high-quality pediatric telemedicine program have been the subject of numerous works, with particular attention to the details of initial planning, options regarding various clinical care models, and proposed metrics for ongoing quality assessment . This extensive body of literature culminated in a scientific statement by the American Heart Association in 2017 addressing the use of telemedicine in pediatric cardiology . Outcome data regarding telemedicine within the field of pediatric cardiology are promising. Several studies have shown improvement in providing high-quality care to a greater number of patients and improvement in prompt and accurate diagnoses with its utilization . Furthermore, telemedicine has been shown to yield significant cost savings and financial gain to the healthcare system as a whole . In addition to its use in clinical medicine, the application of remote education in the form of tele-education of providers and trainees is well established . While many aspects of telemedicine have had major success, it is not without its own set of obstacles. Barriers to its use include lack of standardization of telemedicine components, complex legal issues and licensure requirements, insurance reimbursement, and provider and patient acceptance . While government and insurance entities rapidly accepted this technology so that providers could continue to care for their patients, the recommendations and regulations continue to evolve.
In general, patients who might benefit from such a transition to telehealth include those with adequate resources to complete the visit, including a computer- or phone-based system, a stable connection (internet, phone line), the ability to dedicate time to the visit, and the ability to afford the cost of data required for video calls. Provider requirements for incorporating telehealth include adequate institutional support such as a HIPAA-compliant platform, tech support, equipment, training, care maps and clinical pathways that are consistent across the program, scheduling support, and outpatient templates that accommodate telehealth. Telehealth may also represent a paradigm shift in reaching patients with barriers to in-person appointments. Decreased travel costs and savings in travel time may help those living far from their pediatric cardiologist or lacking transportation . Telehealth may also allow a glimpse into the home environment of patients, providing insights into their lives that may never have been known otherwise . Telehealth may help decrease no-show rates in clinic, helping pediatric cardiology teams to provide care to a greater number of patients each day. Finally, telehealth may facilitate greater accessibility to subspecialty pediatric cardiovascular care at a population level, with increased patient outreach and ease of access. With the backlog of patient visits, telehealth may facilitate decreased wait times for appointments and allow a more efficient triage system for in-office visits and tests. In spite of all these advantages, telehealth may not be appropriate for all pediatric cardiology patients. For instance, telehealth may not be best suited to emergent situations in which rapid notification of emergency medical providers is indicated for patient safety (with the rare exceptions where the telehealth care model is designed to assist local care providers in the acute stabilization of a patient). Telehealth represents a knowledge-rich but resource-poor environment: while a specialist with knowledge is present, there are often few interventions that can be performed remotely until the patient is physically brought to a medical facility. Telehealth may aid providers in triage by determining the best disposition for evaluation if a patient is calling from home with concerns. Video assessment may help guide whether a patient can be seen in the outpatient clinic in several hours or days, or whether presentation to the nearest medical center or activation of emergency medical services is more appropriate. Telemedicine may not be appropriate for a population requiring objective data to render a complete evaluation. Seeing patients without a means of assessing vital signs such as heart rate, oxygen saturation, or weight trends at home may not be the best alternative in newborns or in any patient who is critically ill. Physical examination signs such as a subtle change in murmur, pulse quality, or volume status may also be poorly conveyed by video or by an examination performed by a non-medical family member over telehealth.
Prior to beginning any telehealth encounters, it is important that the telehealth team be equipped to deliver high-quality care. A recent AAP webinar on telehealth may serve as a useful reference . Many institutions were able to expand basic telehealth infrastructure to accommodate the surge in need during the COVID-19 pandemic and may have different policies and procedures that must be followed. Before arranging the telehealth encounter, consent from the patient or parent must be obtained and kept on file; it can be obtained either verbally or in writing. While certain HIPAA regulations regarding telehealth were relaxed during the initial phase of the COVID-19 crisis, a HIPAA-compliant telehealth modality is preferred and will be needed going forward as regulations resume post-pandemic. Non-HIPAA-compliant platforms with direct communication to the patient such as FaceTime™ (Apple, Cupertino, CA), Duo™ (Google, Mountain View, CA), WhatsApp™ (Facebook, Inc., Menlo Park, CA), and Google Hangouts™ (Google, Mountain View, CA) are all permissible at the present time. Several HIPAA-compliant platforms are also commercially available for office-based practices. Larger institutions may have the infrastructure to incorporate video telehealth via the electronic medical record (EMR), allowing easier access to patient data during the encounter, patient scheduling, billing, and insurance information. Some platforms allow group appointments, permitting multiple participants to be present for the visit. Several of these platforms also offer translation features. Most aspects of the in-person encounter can be readily adapted to a telehealth encounter, as described in the following sections. A helpful list of telehealth do's and don'ts is found in Table .
While it is important for cardiologists to protect themselves, the healthcare team, patients, and families from unnecessary exposure, not all patients are suitable for an initial visit via telemedicine. There are certain circumstances in which the patient's initial encounter with the cardiac team should be a face-to-face visit, as teleconsultation in these patients may prolong the time to diagnosis of a potentially life-threatening illness. In these cases, patients should be seen in person for a complete evaluation by a pediatric cardiologist. Broad categories of younger patients who may be less suitable for an initial evaluation by telehealth include infants with a murmur or cyanosis, infants or toddlers with failure to thrive, and newborns requiring post-natal evaluation after pre-natal suspicion of congenital heart disease. Older patients with concerns for exertional chest pain or syncope, cardiomegaly on chest imaging, an abnormal EKG with symptoms, or a family history of sudden cardiac death in a first-degree relative may all be less suitable for an initial telehealth visit. When tests such as an EKG or echocardiogram are essential for assessment, an office visit is preferred. Other circumstances, such as clearance for a non-deferrable surgery, time-dependent diagnoses such as Kawasaki disease follow-up, or an evaluation prior to chemotherapy, should also be handled in person. In all situations, the provider should use clinical judgment to determine whether an in-person or telehealth visit is best for the individual patient scenario. Initial consultation via telemedicine may be appropriate for chief complaints such as palpitations, chest pain at rest, dizziness/syncope, dyslipidemia (if a lipid profile is available), and hypertension, as well as for patients with an abnormal EKG or a family history of genetic disorders. Based on the initial consultation, more information will be available and further testing may be ordered. Additionally, an initial telehealth consult may uncover "red flag" symptoms that would prompt a more urgent in-person visit. Once personal distancing restrictions have been lifted, the non-urgent patient may then follow up in person for a complete physical examination and further testing as per the standard of care. Follow-up visits for which home teleconsultation may not be optimal include the assessment of post-operative patients, where an echocardiogram and chest X-ray are often required. An examination of a post-operative wound could be done by telemedicine, but ideally these patients should be seen in person. Established patients with congenital heart disease whose clinical course may deteriorate without close surveillance should be neither delayed nor scheduled with telemedicine. Conversely, stable established patients without expected changes in cardiovascular status whose routine follow-up happens to fall during this pandemic period should be deferred until restrictions are lifted. The cornucopia of clinical scenarios precludes declaring a hard-and-fast rule with regard to these patients; therefore, established patients should be triaged at the discretion of their cardiologist, who is familiar with the patient's family, individual pathology, history, and risk factors.

History

Acquisition of patient history can be achieved as easily over teleconference as in person. Recall of patient and family history in general is not location-dependent, and cues from the home environment may prompt parents to recall history that may have been missed in a clinical setting.
Additionally, physicians may gather useful data regarding the home environment and use prompts from the visible setting to elicit more history than might have been obtained in clinic. Whenever possible, the cardiologist should obtain the most recent clinic or ER note that prompted the consultation, so that prior vital signs and (potentially) cardio-diagnostic testing can be reviewed. If the family and physician are not fluent in the same language, it is imperative that third-party medical translator services aid in communication during these interactions. Symptom assessment should include questions regarding breathlessness, dyspnea, loss of appetite, and changes in activity level, with special attention to trends over time. One example of a symptom suited for telehealth is assessment of "bendopnea," in which dyspnea is elicited when the patient bends forward at the waist while sitting. This can be assessed easily by telehealth and can provide evidence of heart failure severity .

Physical Exam

Acquisition of vital signs in the home setting can be challenging in the pediatric population. Unlike with adults, an appropriately sized blood pressure cuff for babies and young children is rarely available in the average home. However, teenage patients may be able to use adult-sized home blood pressure cuffs, and orthostatic vital signs (if relevant) can be obtained with proper guidance. Despite the inability to auscultate the heart and lungs or to palpate the liver edge or pulses, a reasonable amount of information can be gathered from home teleconsultation. The general physical appearance and psychiatric state can be easily observed during history-taking. The physician can inspect for cyanosis, dysmorphisms of the face and ears, and even dentition. In babies and younger children, labored breathing, pectus deformity, umbilical hernia, surgical scars, and ostomy tube sites can be assessed. A reasonable examination of the extremities can be done, observing for deformities, features of connective tissue disease, and clubbing. An assessment of the patient's skin for cyanosis, hemangiomas, acanthosis nigricans, stretch marks, or stigmata of other systemic or genetic diseases is possible. In patients who are developmentally able, gait can be assessed.

Assessment

The summary of the assessment should always include the limitations of the examination and therefore of the overall assessment. This must be stated clearly to the parent to prevent misunderstanding regarding the utility of the interaction. Physicians can formulate their impressions based on available information and may recommend remote testing. Physicians can also review the EMR for other encounters; often, a recent set of vitals can be located. In addition, the results of blood work, EKGs, and chest X-rays performed in the past can be assessed. All of these data feed into a comprehensive assessment and decision-making plan for the patient. The application of teleconsultation in pediatric cardiology is a phenomenon borne of necessity, with limited prior data. There are benefits of this technology in a time of personal distancing, including the ability to establish relationships with families in remote locations, or families in whom finances or transportation would have otherwise precluded an outpatient visit . For situations in which parents are no longer living together, both parents can be present at an appointment via teleconference, easing communication barriers.
The elimination of travel time to and from appointments minimizes disruption to school and workdays . Surveys of families who have engaged with their physicians in this way demonstrated a high degree of parent satisfaction with this type of interaction . Most importantly, use of telemedicine has been shown to improve outcomes in single ventricle patients , as well as in adults with heart failure , hypertension , and dyslipidemia . Expanding the use of telehealth may help cardiologists to capitalize on these benefits and perhaps establish a precedent for future interactions.

Additional Testing

The availability of remote cardiac testing allows for the acquisition of EKGs at a local pediatrician's clinic or urgent care office, and ambulatory rhythm monitors can be delivered to the patient's home. Whenever possible, physicians should abide by appropriate use criteria when ordering diagnostic testing, as certain diagnostic tests require prolonged contact between the patient and the medical team, increasing the risk of exposure . However, a strategy of allowing a patient to present for testing such as an EKG or echocardiogram prior to a planned telehealth encounter may convey several advantages for both the patient and the provider. The patient's overall time spent in a medical environment would be shorter than with an all-inclusive in-person clinic visit, and the patient would come into contact with fewer medical staff during the course of the encounter. Finally, the availability of testing results to reference and review at the time of the telehealth encounter would benefit both the patient and the telehealth provider, add further value to the encounter, and aid in medical decision-making.

Follow-Up

If an initial telehealth visit is performed for a new patient, an in-person examination can be performed whenever permissible to meet the standard of care for an initial visit. In patients in whom structural or acquired heart disease is suspected, a face-to-face visit with appropriate diagnostic imaging should be arranged so that a physical exam may be performed and a definitive diagnosis achieved. In established patients with known heart disease, such as a NICU or newborn nursery follow-up, echo-only appointments may be arranged to minimize contact with clinic staff while obtaining needed information. Patients in whom a physical exam is typically normal may continue to follow up as needed via telehealth until personal distancing restrictions are lifted. This strategy is of optimal utility in children with dyslipidemia, in whom laboratory studies can be obtained at a local facility and transmitted to the ordering physician. Patients with high blood pressure, vasovagal presyncope or syncope, and postural orthostatic tachycardia syndrome may also be suited to this type of interaction. It is, however, important to recognize that after a finite number of telehealth visits, irrespective of diagnosis, patients should have an in-person assessment with a cardiologist. Patients presenting for second opinions, especially from distant locations, may be especially suited for telehealth, as robust data from the parent center are typically available for review.

Disorders of Cardiac Rhythm and Conduction

Cardiac arrhythmias are a common cause of morbidity, and the Task Force on Children and Youth estimates that up to 30,000 children will have a newly diagnosed cardiac arrhythmia or conduction abnormality yearly .
Advancements in remote real-time monitoring make electrophysiology particularly well suited for telemedicine. The establishment of digital electrocardiography has allowed for consultation and detection of diagnoses with abnormalities on resting electrocardiograms, such as Brugada or prolonged QT syndromes. Additionally, remote viewing of rhythms on cardiac monitoring in the inpatient or emergency department setting allows for real-time interpretation and treatment initiation with telecardiology supervision . Beyond the use of telemedicine in the hospital setting, many technological capabilities have enabled pediatric cardiologists to provide care outside of the traditional face-to-face in-office setting. Since pediatric arrhythmias are predominantly paroxysmal in nature, their detection relies on capturing the event. This is most often done through the use of 24-h ambulatory monitors (Holter monitors), external event or telemetry monitors, or implantable loop recorders. Holter and rhythm monitors can be placed in the office or mailed to patient homes . Patients who have implantable pacemakers and cardioverter-defibrillators have the capacity to record and transmit data remotely through a landline or cellular phone, allowing identification of problems and initiation of treatment without the need for clinic visits . These remote monitoring capabilities additionally allow the care team to check the functionality of the device, battery, and leads, and to review intra-cardiac electrograms. In addition to the standard devices used in outpatient arrhythmia monitoring, there is a growing market of direct-to-consumer devices that enable families to monitor their children and provide sharable information for their physician. The data obtained from these devices have several limitations that make their application in clinical decision-making a challenge.

Fetal Cardiology

Fetal cardiology has been subjected to a strict triage process during the pandemic, as performance of a fetal echocardiogram requires significant contact between the mother and the members of the fetal cardiac team. The fetal echocardiogram cannot suitably be performed via telehealth, as imaging is a significant component of the visit. However, telehealth can be used to triage the indications and timing of the study. Both the interaction with the sonographer and that with the cardiologist are lengthy and increase exposure risks. Therefore, the decision to perform a fetal echocardiogram must weigh the risk of exposure against the utility of the interaction. Recent statements by the American Society of Echocardiography have strongly discouraged performance of non-urgent imaging . These referrals have been stratified into low-, medium-, and high-acuity categories. Performance of fetal echocardiography should therefore be limited to cases with a time-sensitive component, cases that pose a risk to fetal viability, or cases in which medical or other interventions may be indicated .
The existing use of telehealth and remote monitoring for specific diagnoses and conditions may help guide a broader incorporation of telehealth practices into other pediatric cardiology populations.

Single Ventricle Heart Disease

Technology has already been leveraged to help pediatric cardiologists monitor the single ventricle population during the critical interstage period . While the interstage period is defined as the time between the Norwood surgery and the Glenn surgery, many care teams have expanded this monitoring to all single ventricle infants, including those with ductal stents, pulmonary artery (PA) bands, and Blalock–Thomas–Taussig (BTT) shunts without arch augmentation. Infants are sent home with a scale and pulse oximeter in order to monitor weight gain and oxygen saturations, and families are well educated about "red flags" that prompt a phone call to the care team if certain thresholds are reached. The use of electronic tablets or cell phone applications by many institutions allows families to input daily weights and oxygen saturations into devices so that their cardiology teams can monitor them remotely . The equipment is supplied by the care team to allow remote, automated transmission of data. While there is some ability to bill for these devices and remote monitoring, most of the funding is covered by the care team or grants. These devices are HIPAA-compliant and can be used to conduct virtual visits. This program is currently being extended to other heart failure patients (such as those with a ventricular septal defect or AV canal defect) who require closer monitoring. Patients with Tetralogy of Fallot or other shunt-dependent lesions can also be evaluated using these types of platforms.

Ventricular Assist Devices

Children with ventricular assist devices (VADs) are able to be discharged after device placement, with 59% of patients with intracorporeal continuous flow devices able to leave the hospital according to the most recent Pedimacs registry data . Adapting VAD outpatient visits to a telehealth setting is feasible and can enable surveillance similar to that of in-person visits. While patients should be seen in person for their first several visits after hospital discharge, given the need for close monitoring of device complications and continued education in the immediate post-implant period, telehealth should be considered an option for patients requiring routine outpatient checks while stable on long-term support. Prior to the telehealth visit, patients can upload, via phone camera or apps linked to the electronic medical record, images of their driveline site and dressing, images of their VAD controller with values for flow and power visible, the VAD log file detailing long-term trends in device flows and power, and images of any bruising, swelling, or changes of concern to the patient or their family. Alarm logs should also be submitted and the circumstances surrounding each alarm carefully clarified; alarms associated with symptoms require in-person evaluation. These data should be reviewed for concerning changes or trends. Submission of these data to the VAD team at regular intervals between telehealth visits can further aid continued outpatient surveillance. Use of outpatient laboratory visits or home-based draws can help provide continued laboratory surveillance while limiting patients' exposure to a hospital environment. Home international normalized ratio (INR) monitors are available and can help monitor anticoagulation.
Medical emergencies can be life-threatening without prompt intervention in patients supported by VADs, and telehealth should never be considered a substitute for in-person evaluation in emergent situations.
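The interstage "red flag" programs described above are, at their core, simple threshold screens applied to home-recorded saturations and weights. A minimal R sketch of such a screen is shown below; the thresholds are illustrative assumptions only, not the criteria of any specific program, and every care team defines its own values.

    # Illustrative red-flag screen for a daily home-monitoring entry.
    # All thresholds below are assumed examples, not clinical guidance.
    flag_entry <- function(spo2, weight_kg, weight_kg_3d_ago) {
      gain <- weight_kg - weight_kg_3d_ago
      flags <- c(
        low_saturation   = spo2 < 75,     # below expected single-ventricle range
        high_saturation  = spo2 > 90,     # possible pulmonary overcirculation
        weight_loss      = gain < 0,      # any interval weight loss
        poor_weight_gain = gain < 0.02    # < 20 g gain over 3 days
      )
      names(flags)[flags]  # return the names of triggered flags
    }

    flag_entry(spo2 = 72, weight_kg = 3.48, weight_kg_3d_ago = 3.50)
    # -> "low_saturation" "weight_loss" "poor_weight_gain"

In practice, a flagged entry would prompt a call from the care team rather than any automated action.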
Due to the COVID-19 pandemic, the responsibilities for many aspects of care have shifted with the rapid utilization of telehealth, placing an increased workload on the provider performing the visit. Rooming, vitals, completing the EMR history, medication reconciliation, placing orders and communicating the follow-up plan, and note composition are now all largely completed by the provider. In some situations, the provider may even need to collect the payments. While some programs may utilize this as a means of consolidating their staff to only a few essential persons, others may re-deploy office-based staff to assist with these virtual tasks. Other staffing and resource utilization changes may arise from changes in patient volumes secondary to the COVID-19 pandemic and the widespread stay-at-home orders. Decreased in-person clinic visits have led to a decreased use of in-office diagnostic testing such as echocardiograms and EKGs. Similarly, there may be a decrease in the need for echocardiogram and MRI readers during the pandemic as these diagnostic modalities are utilized less frequently while non-emergent testing and visits are deferred. It will be a challenge for programs that measure physician productivity with traditional measures such as the relative value unit (RVU), and different metrics must be considered to assess physician contributions to patient care in the telehealth era. Care teams may have to work longer hours once personal distancing restrictions are lifted to make up for the backlog of patient visits deferred during the pandemic. In the process of re-opening after the pandemic, it may become particularly important to incorporate telehealth in order to decrease patient wait times and increase access to care.
As the pandemic unfolded, the Centers for Medicare and Medicaid Services (CMS) rapidly approved specific billing codes for telehealth. These codes were set at parity with regular office visit codes and provide significant financial support to practices and institutions affected by a loss of in-person patient volume. This also provides a financial incentive for practices and institutions to adopt and expand their telehealth offerings. A list of billing codes useful for telehealth encounters can be found in Table .
The most recent SPCTPD/ACC/AAP/AHA training guidelines for pediatric cardiology fellowship programs, released in 2015, do not address the incorporation of telehealth into pediatric cardiology education. A recent AHA Scientific Statement on telemedicine in pediatric cardiology briefly addressed the need for additional training, noting that even cardiology fellowship programs associated with centers possessing existing telehealth capabilities rarely include telehealth competency requirements in their fellows' curriculum. The AHA statement further notes that few pediatric cardiology fellows are graduating with the skills needed to incorporate telehealth into their practice. In response to the COVID-19 crisis, the Accreditation Council for Graduate Medical Education (ACGME) released guidance on fellows and their involvement in telehealth, immediately authorizing fellows to participate in telehealth with appropriate supervision to care for patients during the pandemic. Given that many staff and fellow pediatric cardiologists lack formal telehealth training, this pandemic represents an opportunity for learning together. Numerous guides and best practices have been published by the American Academy of Pediatrics, the American College of Cardiology, and other medical organizations to address the groundswell of interest and the rapid incorporation of telehealth; some of these resources are geared towards trainees. This may lead to the addition of telehealth training to fellowship curricula and encourage industry to develop platforms that allow multiple providers to join a telehealth visit, so that learners of all levels may continue to participate in patient care. As staff cardiologists incorporate more telehealth into their practices, fellows should similarly be incorporated into the telehealth workflow. Many innovative strategies have been developed and successfully deployed to continue fellowship training and education during the COVID-19 crisis, including regular webinars focused on the congenital cardiologist; telehealth similarly represents an opportunity to continue providing education in patient care in a different format during this trying period. In a time when personal distancing and separation are keeping individuals apart, including faculty from fellows, telehealth is a means to come together for patient care in a safe manner.
The expanded role of telehealth brought about by the COVID-19 pandemic is unlikely to disappear after this pandemic has abated. For telehealth to be sustainable, care maps and strategies for optimal utilization of this tool in the care of pediatric patients with heart disease must be developed. Pediatric cardiologists will need to work with advocacy groups and legislators to ensure that adequate reimbursement for telehealth encounters takes place, which will allow for the continued use of this technology. Exploring means of providing patients with access to remote monitoring equipment and equipment to measure objective vital sign data will additionally help lead to higher-quality telehealth encounters with improved patient care. For example, many families do not possess the necessary medical equipment for obtaining reliable vital signs at home. Pre-packaged kits containing equipment such as a finger pulse oximeter and an age-appropriate blood pressure cuff and sphygmomanometer could be easily assembled and mailed to families to aid in the collection of accurate vital signs during a telehealth encounter. This may represent an easily implemented means to increase the quality and safety of a home telehealth encounter. This intervention would require support from payors for reimbursement and from the biomedical industry for providing age-appropriate pediatric equipment.
Telehealth will continue to be incorporated into pediatric cardiac clinical practice after the acute phase of the COVID-19 pandemic has passed. We will learn as a field which conditions and diagnoses are best suited for telehealth, though telehealth visits will never replace in-person visits entirely. Patients with other barriers to in-person care may see increased access to care as a result of telehealth expansion. Telehealth also holds the promise of decreasing the no-show rate and patient wait times for appointments. This pandemic lends itself to collaborative learning, with pediatric cardiology being no exception. There is potential to gather evidence about the optimal frequency of patient follow-up visits, the usefulness of appropriate use criteria for echocardiography, and the best utilization of emerging telehealth technology. It may steer our field toward more appropriate resource utilization for pediatric cardiology care. The COVID-19 pandemic may serve as a catalyst in improving resource utilization, improving quality of care, and advancing pediatric cardiology practices to improve patient care outcomes.
Impact of template-based synoptic reporting on completeness of surgical pathology reports

Synoptic reporting contributes to the quality of surgical pathology reports for cancer specimens by increasing completeness and standardization. The College of American Pathologists (CAP) defines synoptic reporting by (1) completeness in terms of adherence to a checklist of required data elements (RDE), and (2) a laboratory value-like paired format consisting of the RDE and the matching response with (3) different RDE presented in separate lines. Comprehensive sets of cancer protocols are published by the CAP and by the International Collaboration on Cancer Reporting (ICCR). Synoptic reporting—according to the above definition—is an important step on the way to higher levels of data structuring, which include use of discrete data fields and the link to underlying ontologies such as SNOMED-CT. As CAP-accredited pathology laboratories are required to use the CAP cancer protocols, a number of commercially available solutions have emerged, which provide suitable database structures, maintenance of protocols, and interfaces to local laboratory information systems. Outside the USA or in languages other than English—with the noteworthy exception of the nation-wide database provided by PALGA in the Netherlands—a significant burden falls upon pathology departments to implement synoptic reporting and to continuously update protocols. Standardized cancer reporting protocols have been made available by the ICCR in French, Spanish, and Portuguese. Regarding synoptic reporting in German, there have been individual efforts to assure usage of standardized vocabulary, but general database solutions are not yet widely applicable. Based on these considerations, we attempted to facilitate implementation of synoptic reporting at our institution by separating issues related to the actual content and its formatting from those related to the setup of a database. For this purpose, we took advantage of the autotext features of our laboratory information system to create templates in synoptic format (matching the corresponding CAP protocols and translated to German; for certain protocols, also to French). The CAP protocols then served as checklists for the pathologists, either for dictating or for entering the responses themselves. This system was intended as a transitory solution, the experiences from which would then instruct the design and implementation of an underlying database. Here, we assess the effect of this template-based, database-free synoptic reporting system on completeness of surgical pathology reports.

The synoptic reporting template for lung cancer was introduced in July 2016 (Version 3.4.0.0 of the CAP protocol) and underwent a major update (Version 4.0.0.2) in April 2018. The colon cancer template (Version 4.0.0.1 of the CAP protocol) was implemented in November 2017. After implementation of either protocol, all pertinent reports were rendered in synoptic format. We analyzed 100 consecutive lung cancer and colon cancer synoptic reports each. Reports from the first 3 months after implementation of either protocol were excluded in order to reduce potential effects related to the pathologists' or transcriptionists' learning curve or early minor modifications of the templates. A total of 100 consecutive narrative reports for lung and colon cancer each served as control.
In order to minimize confounding effects, we chose these from the period immediately before implementation of the respective synoptic protocol. Carcinomas of the rectum were excluded from evaluation of both the synoptic and the narrative reports. All cases from the two study periods had been reported in German. Reports were reviewed and each data element was classified as "present," "missing," or "not applicable." While the CAP protocols subsume lymphatic and vascular invasion under the umbrella term of lymphovascular invasion, we chose to report them as separate items in accordance with the TNM classification. Therefore, they were also analyzed separately for completeness. Within our institutional policy for implementation of synoptic reporting, we decided to include all mandatory data elements as per the respective CAP protocols, whereas the inclusion of optional data elements was at the discretion of each of the department's subspecialty groups. Since "Treatment effect" became a mandatory data element only with the introduction of version 4.0.0.0 of each of the protocols, it was excluded from analysis for the purpose of the present study. Completeness for each data element was defined as 100% minus the percentage of missing elements. Non-applicable data elements as per the CAP protocols—which usually had not been specifically mentioned in narrative reports—were excluded from analysis. The templates had been tailored to use formatting features available in our laboratory information system, particularly bold font and italics as well as empty lines. On the other hand, vertical alignment of responses was not feasible, as the laboratory information system did not provide a suitable way of using tables or tabulator stops within templates (Tables and ). Each template contained a heading that specified the synoptic nature of the report as well as the name of the protocol. This heading was followed by the TNM formula, as we intended it to be as easily retrievable as possible. At the end of each synoptic template, the version of the protocol and its source were mentioned. This final line also served to separate the synoptic report from possible additional narrative elements. Multiple hash signs were used to mark comments as well as optional or conditional elements. These provided a way to remind users of information that we felt might otherwise be forgotten. An English translation of parts of a protocol is given in Fig. . Pathologists were also provided with a PDF version of the original CAP protocols, to which we had added comments for internal use and suggested translations of English terminology.
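To make the template mechanics concrete, the following mock-up sketches the structure described above (heading, TNM formula near the top, hash-marked internal notes, and a closing version line), together with the signature gate that blocks sign-out while a hash sign remains. The wording is a hypothetical English stand-in, not the actual German or French autotext.

```python
# Illustrative mock-up of a synoptic autotext template: heading first,
# TNM formula near the top, "###" marking an internal reminder that must
# be resolved before sign-out, and the protocol version as the last line.
LUNG_TEMPLATE = """SYNOPTIC REPORT - Lung carcinoma (resection)
TNM: pT__ pN__
Histologic type: ___
Histologic grade: ___
### reminder: specify distance to closest margin in mm ###
Margins: ___
Protocol: CAP Lung 4.0.0.2 (translated)"""

def ready_for_signature(report: str) -> bool:
    """Mimic the signature gate: any leftover '#' blocks electronic sign-out."""
    return "#" not in report

draft = LUNG_TEMPLATE.replace("___", "adenocarcinoma")  # toy fill-in of blanks
assert not ready_for_signature(draft)  # internal note still present, so blocked
```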
General findings
When analyzing the 100 consecutive reports for each cancer type, we found that the synoptic templates had been used for each applicable resection specimen, corresponding to 100% adherence to the synoptic format. Occasional minor terminological variations were identified (such as "no" rather than "not identified" as a response). When we considered these as unequivocally understandable for clinical colleagues, we counted them as valid responses. Synoptic reports typically spanned more lines than narrative reports, but, at least subjectively, the pertinent pieces of information were more easily retrievable from synoptic reports than from narrative reports. Given that any remaining hash signs from an internal note would block electronic signature, no such erroneously remaining notes appeared in the reports. We did not observe any other recurrent formatting issues among synoptic reports either.
Lung cancer
For lung cancer, the overall completeness rate was 96% for synoptic as compared to 67% for narrative reports. For mandatory data elements, completeness was 98% among synoptic and 65% among narrative reports. Detailed results are shown in Fig. . Of note, the only (optional) element that was more frequently reported with the narrative format was "Additional pathologic findings." Narrative reports showed the highest rate of completeness (≥ 98%) for the histological type and all elements covered by a previously existing template for the TNM formula (i.e., T and N stages, lymphatic invasion, vascular invasion, and resection status), with the exception of histologic grade, which was missing in 14% of cases. Among synoptic reports, the specification of the closest margin was the only mandatory element that was reported in less than 90% of cases, possibly due to how the pertinent remark was given in the template.
Colon cancer
Overall completeness was 97% for synoptic reports as compared to 93% for narrative reports (Fig. ). In contrast to lung cancer specimens, a dictation template had been systematically used for colon cancer resections before implementation of synoptic reporting. All elements covered by this dictation template were reported in > 90% of cases. In contrast, three data elements not covered by the dictation template were only infrequently reported in narrative reports. These included tumor deposits, which represent a mandatory data element as per the CAP protocol. Consequently, the most pronounced increase in reporting frequency in synoptic reports was seen for these three data elements. Again, the only data element with a (slight) decrease in completeness in synoptic reports was "Additional pathologic findings."
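For clarity, the completeness metric underlying these percentages (defined in the Methods as 100% minus the percentage of missing elements, with non-applicable elements excluded) can be written out in a few lines. This is a sketch of the definition, not code used in the study.

```python
def completeness(responses: list[str]) -> float:
    """Completeness of one data element across reports.

    responses: one of 'present', 'missing', 'not applicable' per report;
    'not applicable' entries are excluded from the denominator."""
    applicable = [r for r in responses if r != "not applicable"]
    if not applicable:
        return float("nan")
    missing = sum(r == "missing" for r in applicable)
    return 100.0 * (1 - missing / len(applicable))

# e.g., one element scored across five reports -> 75.0%
print(completeness(["present", "missing", "present", "not applicable", "present"]))
```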
Among synoptic reports, the specification of the closest margin was the only mandatory element that was reported in less than 90% of cases, possibly due to how the pertinent remark was given in the template. Overall completeness was 97% for synoptic reports as compared to 93% for narrative reports (Fig. ). In contrast to lung cancer specimens, a dictation template had been systematically used for colon cancer resections before implementation of synoptic reporting. All elements covered by this dictation template were reported in > 90% of cases. In contrast, three data elements not covered by the dictation template were only infrequently reported in narrative reports. These included tumor deposits, which represent a mandatory data element as per the CAP protocol. Consequently, the most pronounced increase in reporting frequency in synoptic reports was seen for these three data elements. Again, the only data element with a (slight) decrease in completeness in synoptic reports was “Additional pathologic findings.” In the present study, we have assessed the effect of our template-based approach on completeness of colon and lung cancer resection reports and found very high rates of 98% completeness across all mandatory data elements using synoptic reports. To our knowledge, this is the first study to assess completeness of reports associated with synoptic format but without concurrent implementation of some kind of database structure. These values in terms of completeness are in a range at least similar as those reported for database solutions , specifically above 95% and, in many cases, reaching 100%. Equally in accordance with the published literature, 100% completeness was not reached for all data elements. This is probably, to some extent, an inevitable consequence of necessary flexibility in any reporting system. In many instances, it will not be reasonable to entirely block validation if a pathologist is unable to give a response to a certain data element for a specific case. Only a single optional data element (“Additional pathologic findings”) for each cancer protocol was reported less frequently with the synoptic format than in the narrative reports, possibly because the template contained a note that only relevant findings should be reported, which may have prompted pathologists not to report non-neoplastic findings lacking clinical significance. Parenthetically, synoptic format might also be used to improve consistency and completeness of reporting of clincally relevant non-neoplastic findings, even though this would be beyond the scope of the present study. Conversely, > 95% completeness was already achieved with narrative reports for data elements covered by the previously existing dictation template for colon cancer or the TNM template used for both cancer types. Limitations of our study arise from the fact that we assessed completeness of reports only for two protocols for relatively common cancer types reported in majority by non-subspecialized pathologists. Therefore, the increase in completeness might have been smaller for cases handled by subspecialized pathologists. One study, however, analyzing the impact of synoptic reporting on reports for malignant melanoma found increased completeness irrespective of subspecialization . On the other hand, the rate of completeness was already relatively high for narrative reports in our institution, which appears to be due to the previously existing dictation templates, which already included many of the required data elements. 
Finally, the transfer of our findings to other settings may be limited by the fact that our specific approach of template-based reporting was tailored to our laboratory information system. Such differences between pathology departments might result in different actual figures for completeness of narrative and synoptic reports, respectively. The general finding, however, i.e., high levels of completeness with template-based synoptic reports, should be largely translatable to other institutions as most laboratory information systems would be expected to feature at least basic functionalities for utilization of templates. While we did not attempt to quantitatively assess pathologists’ adherence to specific wordings, we found that, generally, the terminology used in synoptic reports was very consistent with the one suggested by our translations of the CAP protocols. On the other hand, we occasionally observed minor deviations such as typographical errors, which might have been avoided with a well-designed database solution. Despite its benefits being widely acknowledged, synoptic reporting is arguably significantly underutilized. A mixture of psychological and technological factors contributes to this phenomenon. Specifically, the setup of a database structure for synoptic reporting on a single-institutional level and the continuous maintenance of a broad variety of protocols imposes a significant burden on a department of pathology. Furthermore, frustration with workflow issues associated with less than optimal database solutions may interfere significantly with the acceptance of synoptic reporting by pathologists . These considerations prompted us to implement synoptic reporting at our institution through a transitory phase in which we introduced synoptic protocols for a variety of cancer types while deferring an underlying database structure. For this purpose, we relied on the inbuilt autotext function of our laboratory information system. Thereby, reports are created, which are essentially equivalent to synoptic reports generated from a database for the readers, i.e., surgeons, radiation and medical oncologists, or pathologists (e.g., for the presentation in a multidisciplinary tumor conference). Furthermore, the anticipated multiple minor changes required in the early phase of practical implementation of each protocol were easy to make in this system. We reasoned that this approach would facilitate getting started with synoptic reporting and that the experience gained from this phase would then improve a subsequent database solution and smoothen the transition. The easy adaptability of our template-based system enabled us to rapidly implement more than 20 surgical and biomarker protocols. It was particularly useful in the early phase when multiple minor changes with regard to wording or formatting could be made very easily and fast in comparison to often lengthy software development cycles. 
On the other hand, our approach should not be considered a definite solution for several reasons: (1) An underlying database structure is indispensable for fully automated data retrieval and data exchange; (2) the template-based system faces limitations when it comes to complex conditional data elements (e.g., extranodal extension being only relevant when a lymph node metastasis is present); (3) background information on specific data elements or responses cannot be easily highlighted to the pathologist with the template-based format; (4) continuous updates of the protocols result in a significant workload. We, nevertheless, believe that the experiences from this transitory phase of template-based synoptic reporting provided critical input for the ongoing implementation of a database for synoptic reporting on a national level . This experience relates to a variety of issues, including consistent translation of English protocols, policies regarding the inclusion of additional data elements, the handling of optional or conditional data elements, and formatting of the reports. All of these issues can be challenging to address individually but may become disproportionately more complicated when interfering with information technology-related issues. In conclusion, we have shown that template-based synoptic reporting without underlying database structure may be a useful transitional step on the path toward higher levels of data structuring, especially for initiatives within a single institution. From a longer-term perspective, however, the pathology community needs to find ways to provide systems, which reduce the burden of local implementation of synoptic reporting with regard to both consistent terminology across languages and a framework for data structure and data exchange with local laboratory information systems. Only, thereby, synoptic reporting will be able to unfold its full potential for cancer care and cancer research on a global level. |
Acute kidney injury: the experience of a tertiary center of Pediatric Nephrology

Acute kidney injury (AKI) is an abrupt deterioration of kidney function. The spectrum of manifestations is wide, ranging from subtle analytic changes in renal function to symptomatic organ failure. According to the literature, AKI affects almost one-third of hospitalized children, and its incidence is increasing worldwide. Within a non-critically ill setting, a recent study carried out at a tertiary care children's hospital with over two thousand patients described that AKI was observed in at least 5% of patients. The frequency of AKI is particularly elevated in critically ill patients, as it is reported as the most common complication in children admitted to a pediatric intensive care unit (PICU). A multinational prospective study involving almost five thousand children and young adults aged 3 months to 25 years admitted to a PICU reported an incidence of AKI of 26.9%. However, the overall incidence of AKI within the pediatric population is somewhat uncertain, since it depends on the population studied. A relevant body of research has focused on high-risk patients, particularly those who have been exposed to nephrotoxins, have undergone cardiac surgery, or have been admitted to a PICU. Multiple pathophysiological mechanisms might be involved in AKI. Pre-renal etiologies are currently the most commonly associated with pediatric AKI, followed by intrinsic or renal disorders, such as glomerulonephritis. Since few effective specific therapeutic approaches are available today, knowledge of the risk factors for AKI is of paramount importance. Factors like prematurity or chronic comorbidities and events such as volume depletion, nephrotoxin exposure, sepsis, and major surgery (cardiac surgery, mainly with cardiopulmonary bypass) are the preponderant factors for the development of AKI. Concerning short-term outcomes, several studies have concluded that AKI in hospitalized pediatric patients may lead to prolonged mechanical ventilation, longer length of stay, and greater mortality. Also, AKI may be associated with later development of proteinuria, hypertension, and chronic kidney disease. In the present study, we aimed to characterize the presentation, etiology, evolution, and outcome of all cases of AKI in pediatric patients aged 29 days to 17 years and 365 days admitted to a tertiary pediatric nephrology center in Portugal in the last decade.

Study Design and Sample
We conducted a retrospective observational single-center cohort study of children and adolescents aged 29 days to 17 years and 365 days admitted to the Nephrology Unit of Centro Materno-Infantil do Norte for a period of 10 consecutive years (from January 2012 to December 2021) with the diagnosis of AKI. All patients with an AKI diagnosis at discharge were included, unless there was a previous diagnosis of chronic kidney disease (16 patients were excluded from the present analysis since stages 2–4 chronic kidney disease was present and the observed injury was considered an acute-on-chronic kidney injury).

Data Collection and Variables Definition
Clinical data were retrieved from the electronic clinical records of the included patients.
AKI severity was assessed using the Kidney Disease Improving Global Outcomes (KDIGO) stages 1–3, which were defined based on the baseline and maximum inpatient serum creatinine (SCr) values recorded, as follows: stage 1 AKI was defined as a SCr value of 1.5 to 1.9 times the baseline value or a ≥0.3 mg/dL increase, or urine volume <0.5 mL/kg/h for 6 to 12 hours; stage 2 AKI was defined as a SCr value of 2.0 to 2.9 times the baseline value or urine volume <0.5 mL/kg/h for ≥12 hours; stage 3 AKI comprised a SCr value ≥3.0 times the baseline value, or an increase in SCr to ≥4.0 mg/dL, or the initiation of renal replacement therapy, or a decrease in estimated glomerular filtration rate (GFR) to <35 mL/min per 1.73 m², or urinary volume <0.3 mL/kg/h for ≥24 hours, or anuria for ≥12 hours. The baseline SCr value was considered to be the lowest value within 6 months prior to admission (including the value at admission); all creatinine measurements were performed by the enzymatic method. GFR was calculated based on the revised Schwartz formula, k×(height(cm)/SCr(mg/dL)), using a k constant of 0.413. Proteinuria was defined as a urinary protein/creatinine ratio (uP/C) >0.2 mg/mg. Hematuria was defined as ≥5 red blood cells per high-power field in urine microscopy analysis. Both in-hospital and office BP measurements were evaluated with validated oscillometric sphygmomanometers with an adequately sized cuff in the right arm, with the child in a seated position and the antecubital fossa supported at heart level, at least twice (ideally three times), with a 1-minute interval between measurements. The last available value was considered for analysis. Age-, sex-, and height-specific SBP and DBP reference values were considered for BP classification, according to the reference values of the European Hypertension Society guidelines (hypertension if the systolic or diastolic values were at or above the 95th percentile). The need for renal biopsy and kidney replacement therapy was recorded in all patients. Data on admission to the intensive care unit, including the need for mechanical ventilation and the use of inotropes, were recorded. The diagnosis of acute interstitial nephritis was based on clinical criteria in all patients, but in 4 cases a kidney biopsy was performed. The following risk factors were considered: comorbidities, which included previous kidney, cardiovascular, hemato-oncologic, or autoimmune diseases; exposure to nephrotoxins; prematurity; the presence of congenital anomalies of the kidney and urinary tract (CAKUT); and nephrolithiasis. The outcomes considered were sequelae and death. Sequelae were defined as the presence of at least one of the following: proteinuria, hypertension, or reduced GFR, defined as GFR <90 mL/min/1.73 m², based on clinical and analytical monitoring 3 to 6 months after discharge.
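As a minimal sketch (not study code), the creatinine-based part of the KDIGO staging and the revised Schwartz estimate described above can be written as follows; the urine-output criteria and the stage 3 special cases are only noted in comments, and the example values are invented.

```python
def schwartz_gfr(height_cm: float, scr_mg_dl: float, k: float = 0.413) -> float:
    """Revised (bedside) Schwartz estimate: GFR = k * height(cm) / SCr(mg/dL)."""
    return k * height_cm / scr_mg_dl

def kdigo_stage_by_creatinine(baseline_scr: float, max_scr: float) -> int:
    """Creatinine-based KDIGO stage (0 = no AKI by this criterion alone).

    Urine-output criteria and the stage 3 special cases (start of renal
    replacement therapy, SCr >= 4.0 mg/dL, eGFR < 35 mL/min/1.73 m2) are
    not modeled here."""
    ratio = max_scr / baseline_scr
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (max_scr - baseline_scr) >= 0.3:
        return 1
    return 0

# e.g., baseline 0.4 mg/dL rising to 1.3 mg/dL in a 140 cm child:
stage = kdigo_stage_by_creatinine(0.4, 1.3)  # -> 3 (ratio 3.25)
egfr = schwartz_gfr(140, 1.3)                # -> ~44 mL/min/1.73 m2
```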
Ethics
The project "Acute kidney injury - the experience of a tertiary center of Pediatric Nephrology" was approved by the Department of Education and Research and by the Ethical Commission of Centro Hospitalar Universitário do Porto. It complies with the Helsinki Declaration, the guidelines for the ethical conduct of medical research involving children, and the current national legislation.

Statistical Analysis
Standard statistical analysis was performed using IBM SPSS Statistics for Macintosh, Version 28.0.1.0 (Armonk, NY: IBM Corp, USA). The variables are presented as median and 25th–75th percentiles or n, as appropriate.
The sequela characterization according to AKI stages is shown in . Of the original 46 patients, 2 were lost to follow-up and consequently were not included in the analysis. No deaths occurred. The majority of patients (n = 26, 59.1%) had at least one sequela 3–6 months after discharge. The frequency of sequela increased across AKI KDIGO stages. The most frequent sequelae were proteinuria (n = 15, 38.5%; median (P25–75) uP/C 0.30 (0.27–0.44) mg/mg), followed by reduced GFR (n = 11, 27.5%; median (P25–75) GFR 75 (62–83) mL/min/1.73 m 2 ) and hypertension (n = 4, 9.1%). Among the patients without any sequelae at follow-up, the median values of uP/C and GFR were 0.10 (0.06–0.15) mg/mg and 108 (100–119) mL/min/1.73 m 2 , respectively. Although within the normal range, the median GFR at follow-up increased across AKI stages [KDIGO stage 1 vs stage 2 vs stage 3: 93 (93-93) vs 107 (103–115) vs 115 (106–137) mL/min/1.73 m 2 , respectively, p = 0.035]. Twelve of the 15 patients with proteinuria were started on angiotensin-converting enzyme inhibitor or angiotensin II receptor blocker therapy during follow-up. In the present study, we report the etiology, severity, and outcomes of AKI among patients admitted to a pediatric Nephrology Unit at a tertiary care hospital in the last decade. Most of AKI cases were associated with intrinsic renal causes, especially acute interstitial nephritis, acute glomerulonephritis, and hemolytic uremic syndrome, followed by prerenal causes, namely dehydration/shock. Although several studies report that pediatric AKI is mainly derived from pre-renal etiologies , , , the predominance of renal causes might be related to the highly differentiated nature of our center. Since we are the reference center for pediatric patients with kidney diseases for the entire northern region of the country, the proportion of renal etiologies might be overrepresented in our sample. Patients with comorbidities are known to be highly susceptible to AKI , , . Nephrotic syndrome, for instance, is a frequent cause of kidney disease in children, and AKI is described as a potential complication , . Although the incidence of AKI in children with nephrotic syndrome is variable among studies, a study found AKI in about half of its population . In our study, though, only one patient presented AKI with a nephrotic syndrome relapse. Cardiovascular diseases, such as heart failure and congenital heart disease, also impose a significant risk for AKI . AKI is particularly common in children undergoing cardiac surgery, with studies suggesting a significant correlation between moderate to severe forms of injury and postoperative mortality , which is in line with our findings. AKI is a common comorbidity of hemato-oncologic diseases, and reports indicate that, in these patients, stages 2 and 3 AKI are associated with greater mortality . Autoimmune diseases may also lead to the development of AKI, having the potential for rapid progression to severe forms of injury . In our cohort, about a third of the patients had a previous history of either kidney, cardiovascular, hemato-oncologic, or autoimmune diseases. It is also of notice that most of these patients developed moderate to severe AKI, corresponding to KDIGO stages 2 and 3, therefore consistent with previous reports. 
Oligoanuria was reported in approximately half of the admitted patients, with almost all of these patients developing stage 2 or 3 AKI, suggesting that these changes in urinary output might represent a risk for more severe forms of disease and therefore potentially worse outcomes, as previously reported in the literature. Patients with more severe AKI were those with more disturbances in biochemical parameters, such as hyponatremia, hyperkalemia, and metabolic acidosis. These findings suggest that severe renal insults are associated with more pronounced hydro-electrolytic disorders and are consistent with studies that suggest an association between electrolyte abnormalities, mainly metabolic acidosis, and worse prognosis in children with AKI. We reported that all patients requiring kidney replacement therapy were categorized in the most severe AKI stage, which is in agreement with previous studies. Peritoneal dialysis was the most commonly used kidney replacement therapy, which is consistent with several studies reporting that peritoneal dialysis is a well-tolerated method, easy to perform, and with known effectiveness in the context of pediatric AKI. Also, hemodialysis requires well-functioning vascular access and hemodynamically stable patients, and is therefore reserved for more specific settings. Although continuous renal replacement therapies tend to be the modality of choice in critically ill and hemodynamically unstable patients, peritoneal dialysis was the most common kidney replacement therapy used for the patients in the PICU, and no continuous therapy was used in our population within the study period. In our study, we found that most patients who required kidney replacement therapy were also admitted to the PICU during the course of the hospitalization, highlighting the severity inherent to stage 3 AKI. Although previous studies have found a correlation between AKI severity and the need for and duration of mechanical ventilation, we did not find a statistically significant difference in the need for mechanical ventilation across AKI stages. This may be due to the small number of patients within our population who required PICU treatment and mechanical ventilation. The low utilization rates of mechanical ventilation and vasoactive drugs in our study cohort might suggest a lower severity of cases compared to other series and might contribute to the absence of deaths in our cohort. Although there were no deaths in our study, we highlight that almost 60% of the patients had at least one sequela 3 to 6 months after hospital discharge, with more than 25% showing reduced GFR at follow-up, thus not completely recovering normal renal function. The finding of increasing median GFR values at the follow-up visit across AKI stages, with higher values among patients with more severe AKI, seems counterintuitive but can represent an initial stage of hyperfiltration in patients with more severe nephron loss during the AKI episode, as previously reported in the literature. We acknowledge that our study had some limitations, particularly the retrospective design and the experience of a single tertiary care center. Despite these limitations, we believe we have described a fairly representative population of pediatric AKI patients from the northern region of our country over a long period of time.
We believe that the presented study contributes to increasing knowledge of AKI epidemiology, an area in need of more studies to raise awareness of the long-term consequences of AKI in pediatrics. In conclusion, AKI was common in the pediatric setting, mainly in patients with previous comorbidities, but it also affected children without a known risk factor, emphasizing the importance of early suspicion of this condition. We also found that higher severity of AKI was associated with electrolyte disturbances, the need for kidney replacement therapies, and admission to the PICU. Our results suggest that AKI may be associated with significant morbidity, particularly the development of proteinuria and a reduction in GFR, and therefore renal function impairment. This highlights the need for more studies focusing on the long-term impact of AKI in order to better understand the potential for transient or permanent consequences, with an important impact on the long-term follow-up and management of these patients.
Growth Hormone Neuroprotective Effects After an Optic Nerve Crush in the Male Rat

Animals
Male Wistar rats of 6 weeks of age were used. The animals were reared and kept at the vivarium of the Institute of Neurobiology-UNAM under a 12:12-hour light-dark cycle and controlled room temperature (RT = 21°C). Purina chow pellets and purified water were provided ad libitum. All experimental protocols were conducted according to the bioethical guidelines established by the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and were approved by the Institute of Neurobiology Research Ethics Committee (protocol 122A).

Optic Nerve Injury and Experimental Design
Three groups were established (n = 6 per group), in which ONC or sham surgery was performed. At least three independent experiments were conducted for each time point, using the tissues for varying purposes. All procedures were exclusively conducted on the left eyes. The right eyes remained untouched to avoid behavioral or feeding alterations due to blindness. Animals were anesthetized with ketamine (80 mg/kg) and xylazine (8 mg/kg) administered intraperitoneally (IP). Briefly, the ON was crushed through an incision in the lower conjunctiva of the left eye. After carefully separating the extraocular muscles without bleeding, the ON was exposed. Using self-closing forceps (Dumont Tweezers #7; Dumoxel), the ON was then crushed approximately 2 mm behind the ON head for a duration of 10 seconds. Care was taken to avoid permanent damage to the ophthalmic artery, and each experimental eye fundus was checked after the crush to verify that the retinal blood flow was not altered. In the sham group, the same procedure was replicated, but without crushing the ON. Treatments were administered as follows: doses (0.5 mg/kg) of purified recombinant bGH (Boostin-S, Intervet) were subcutaneously (SC) injected immediately after injury and every 12 hours. Tissue collection was performed either 24 hours or 14 days after treatment initiation. In the 14-day protocol, an intravitreal injection of cholera toxin subunit B conjugated to Alexa Fluor 488 (CTB-AF488; Invitrogen C34775), an anterograde axonal marker, was administered 2 days before euthanasia. For gene expression quantification using quantitative PCR (qPCR) and protein expression analysis by Western blotting (WB), the retinas were collected by microdissection using a stereoscopic microscope and stored at –80°C until their analysis. For all histological analyses, eyes and optic nerves were promptly fixed for further processing.

Immunofluorescence and Axonal Transport Analysis
Eyeballs and optic nerves were fixed in a solution of 4% paraformaldehyde with 3% sucrose diluted in PBS at 4°C for 6 hours. Approximately 30 µL of this solution was injected through the cornea of the eyes with a 31-gauge insulin syringe to ensure optimal fixation. Later, tissues were cryoprotected with sucrose concentrations of 10%, 20%, and 30% in PBS for 12 hours in each concentration. The fixed tissues were then frozen and mounted onto aluminum sectioning blocks using Tissue-Tek OCT (Sakura Finetek, Torrance, CA, USA). Sections of 15 µm thickness were obtained using a cryostat (Leica CM3050 S, Buffalo Grove, IL, USA) and mounted on glass slides treated with silane to enhance tissue adhesion. The eyes were cut along a naso-temporal plane at the equator, whereas the ONs were longitudinally sectioned from the ON head to the optic chiasma.
For anterograde transport analysis by CTB-AF488 tracing, images of the ON were directly captured after sectioning, and a digital reconstruction of the nerve was further made. Image acquisition was conducted using an Olympus BX51 fluorescence microscope (Tokyo, Japan), and subsequent analysis was performed using Image Pro 10 software (Media Cybernetics, Rockville, MD, USA). The retinal sections were photographed and measured approximately two-to-three-disc diameters (approximately 0.70 mm) from the ON. For immunohistochemical (IHC) analysis, sections were blocked with 5% Blotting-Grade Blocker non-fat dry milk (Bio-Rad, Hercules, CA, USA) in PBS. They were further incubated overnight at 4°C with primary antibodies (detailed in ) against ß-III-tubulin, growth-associated protein 43 (GAP43), brain-specific homeobox/POU domain protein 3A (Brn3a), B-cell lymphoma extra-large (Bcl-xL), and glial fibrillary acidic protein (GFAP). The primary antibodies were diluted in PBS with 0.05% Triton X-100 and 1% non-fat dry milk. Subsequently, sections were incubated for 2 hours at RT with secondary fluorescent antibodies (see ) diluted 1:1000 in the same solution, and with 4′,6-diamidino-2-phenylindole (DAPI; 100 ng/mL; Sigma-Aldrich, St. Louis, MO, USA) for nuclear counterstaining. Negative controls without primary antibodies were included in the analysis.

Gene Expression Quantification by Real-Time Quantitative PCR
Retinas were collected, rapidly frozen on dry ice, and stored at –80°C until use. Total RNA was purified from retinal lysates using the Zymo Direct-zol purification kit and TRIzol (Zymo Research Corp., Irvine, CA, USA). Complementary DNA (cDNA) was synthesized from 1 µg of total RNA using the High-Capacity Reverse Transcription Kit with ribonuclease inhibitor (Applied Biosystems, Waltham, MA, USA) according to the kit instructions. Target gene expression was quantified through qPCR using a QuantStudio sequence detection system (Applied Biosystems) and SYBR Green (Maxima; Thermo Fisher Scientific, Waltham, MA, USA) in a reaction mix with a final volume of 10 µL. Reactions were conducted under the following conditions: initial denaturation at 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds, 60°C for 15 seconds, and 72°C for 15 seconds. Dissociation curves were incorporated after each qPCR run to ensure primer specificity. The relative abundance of mRNA was determined using the comparative threshold cycle (Ct) method, with the formula 2^(−ΔΔCt). Quantification was expressed relative to ribosomal protein S18 (RPS18) mRNA.
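The comparative Ct calculation reduces to a few lines; the sketch below illustrates the 2^(−ΔΔCt) normalization against RPS18 with invented Ct values, and is not the software used in the study.

```python
# Minimal sketch of relative quantification by the comparative Ct
# (2^-ddCt) method, normalized to a reference gene (RPS18 here).
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    d_ct_sample = ct_target - ct_ref              # normalize sample to RPS18
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control group
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g., target Ct 24.1 vs RPS18 Ct 15.0 in a treated retina,
# against 26.0 vs 15.2 in the sham control (values invented):
fold_change = relative_expression(24.1, 15.0, 26.0, 15.2)  # ~3.2-fold
```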
Western Blot Analysis
Total proteins were extracted from retinas after tissue homogenization using a GE 130PB sonicator (Cole-Parmer, Vernon Hills, IL, USA) for 30 seconds in radio-immunoprecipitation assay (RIPA) buffer (Abcam, Cambridge, UK), in the presence of a protease inhibitor cocktail (Roche, Basel, Switzerland). Equivalent amounts of proteins (40 µg) were separated using 12% SDS-PAGE under reducing conditions and transferred to nitrocellulose membranes (Bio-Rad). To block free binding sites, membranes were incubated with 5% non-fat milk (Bio-Rad) in tris-buffered saline (TBS) for 1 hour at RT. Subsequently, membranes were incubated overnight at RT with the appropriate antibody (see ) in 1× TBS with 0.05% Tween (TTBS) and 1% non-fat dry milk. After washing the membranes with TTBS (3 × 5 minutes), they were incubated for 2 hours with the corresponding HRP-conjugated secondary antibody (see ). Bands were visualized using ECL blotting detection reagent (Amersham-Pharmacia, Buckinghamshire, UK) and exposed to autoradiography films (Fujifilm, Tokyo, Japan). Kaleidoscope molecular weight markers (Bio-Rad) were used as reference for determining apparent molecular weights (MWs). Images were captured using a Gel Doc EZ Imager (Bio-Rad), and the optical densities of immunoreactive (IR) bands were analyzed using Image Lab software (Bio-Rad). Target IRs were normalized using glyceraldehyde-3-phosphate dehydrogenase (GAPDH) IR as a loading control.

Electroretinogram
Retinal function in all groups was assessed before and after the ONC or sham surgery by ERGs (see ), as previously described with minor modifications. Prior to recording, the rats were dark-adapted overnight, after which their manipulations were carried out under dim red light. The animals were anesthetized with 70% ketamine and 30% xylazine (1 mg/kg body weight, IP). Anesthesia was maintained with supplemental doses (0.05 mL) if needed. The cornea of the left eye was moistened with hypromellose (5 mg/mL) ophthalmic solution, and pupils were dilated with tropicamide-phenylephrine (50 mg/8 mg/mL) drops. ERGs were recorded under dark-adapted conditions and, after a 20-minute period of light adaptation, under photopic conditions. ERG responses to photic stimuli (0.7 ms flashes of 0.38 log cd.s/m², MGS-2 white Mini-Ganzfeld Stimulator; LKC Technologies) were recorded with a silver ring electrode placed on the left cornea, while ground and reference needle electrodes were placed SC in the tail and near the left eye, respectively. For all the recordings, the bandpass was set from 0.1 Hz to 1 kHz and the sampling frequency was 2 kHz. A total of 12 responses were averaged for both scotopic and photopic ERGs.

ERG Data Processing and Analysis
The quantitative analysis of photic ERGs (i.e., A- and B-wave amplitudes, implicit or peak times, and OPs) was performed according to the ISCEV standard guidelines, with low- and high-pass filters set at 75 and 1 Hz for the A- and B-waves, respectively, and at 300 and 75 Hz for the OPs, respectively. OPs were detected at flash intensities from 0.244 to 7.726 cd.s/m² and analyzed separately (OP1 to OP4). Data were analyzed using custom-made MATLAB scripts (MATLAB R2019a; MathWorks).
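As an illustration of the band-pass step described above (1–75 Hz for the A- and B-waves, 75–300 Hz for the OPs, at the 2 kHz sampling rate), a generic zero-phase filter can be sketched as follows; this is a SciPy stand-in, not the authors' MATLAB pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # Hz, sampling frequency reported in the Methods

def bandpass(trace: np.ndarray, lo: float, hi: float, fs: float = FS,
             order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass (filtfilt preserves peak times)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

erg = np.random.randn(FS)            # stand-in for one averaged 1 s sweep
ab_waves = bandpass(erg, 1.0, 75.0)  # A- and B-wave band
ops = bandpass(erg, 75.0, 300.0)     # oscillatory potentials band
```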
The animals were reared and kept at the vivarium of the Institute of Neurobiology-UNAM under a 12:12-hour light-dark cycle and controlled room temperature (RT, 21°C). Purina chow pellets and purified water were provided ad libitum. All experimental protocols were conducted according to the bioethical guidelines established by the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and were approved by the Institute of Neurobiology Research Ethics Committee (protocol 122A). Three groups were established (n = 6 per group), in which ONC or sham surgery was performed. At least three independent experiments were conducted for each time point, using the tissues for varying purposes. All procedures were exclusively conducted on the left eyes. The right eyes remained untouched to avoid behavioral or feeding alterations due to blindness. Animals were anesthetized with ketamine (80 mg/kg) and xylazine (8 mg/kg) administered intraperitoneally (IP). Briefly, the ON was crushed through an incision in the lower conjunctiva of the left eye. After carefully separating the extraocular muscles without bleeding, the ON was exposed. Using self-closing forceps (Dumont Tweezers #7; Dumoxel), the ON was then crushed approximately 2 mm behind the ON head for a duration of 10 seconds. Care was taken to avoid permanent damage to the ophthalmic artery, and each experimental eye fundus was checked after the crush to verify that the retinal blood flow was not altered. In the sham group, the same procedure was replicated, but without crushing the ON. Treatments were administered as follows: doses (0.5 mg/kg) of purified recombinant bGH (Boostin-S, Intervet) were subcutaneously (SC) injected immediately after injury and every 12 hours. Tissue collection was performed either 24 hours or 14 days after treatment initiation. In the 14-day protocol, an intravitreal injection of cholera toxin subunit B conjugated to Alexa Fluor 488 (CTB-AF488; Invitrogen C34775), an anterograde axonal marker, was administered 2 days before euthanasia. For gene expression quantification using quantitative PCR (qPCR) and protein expression analysis by Western blotting (WB), the retinas were collected by microdissection using a stereoscopic microscope and stored at –80°C until their analysis. For all histological analyses, eyeballs and optic nerves were promptly fixed in a solution of 4% paraformaldehyde with 3% sucrose diluted in PBS at 4°C for 6 hours. Approximately 30 µL of this solution was injected through the cornea of the eyes with a 31-gauge insulin syringe to ensure optimal fixation. Later, tissues were cryoprotected with sucrose concentrations of 10%, 20%, and 30% in PBS for 12 hours in each concentration. The fixed tissues were then frozen and mounted onto aluminum sectioning blocks using Tissue-Tek OCT (Sakura Finetek, Torrance, CA, USA). Sections of 15 µm thickness were obtained using a cryostat (Leica CM3050 S, Buffalo Grove, IL, USA) and mounted on glass slides treated with silane to enhance tissue adhesion. The eyes were cut along a naso-temporal plane at the equator, whereas the ONs were longitudinally sectioned from the ON head to the optic chiasma. For anterograde transport analysis by CTB-AF488 tracing, images of the ON were directly captured after sectioning, and a digital reconstruction of the nerve was then made.
Image acquisition was conducted using an Olympus BX51 fluorescence microscope (Tokyo, Japan), and subsequent analysis was performed using Image Pro 10 software (Media Cybernetics, Rockville, MD, USA). The retinal sections were photographed and measured at approximately two to three disc diameters (approximately 0.70 mm) from the ON. For immunohistochemical (IHC) analysis, sections were blocked with 5% Blotting-Grade Blocker non-fat dry milk (Bio-Rad, Hercules, CA, USA) in PBS. They were further incubated overnight at 4°C with primary antibodies (detailed in ) against β-III-tubulin, growth-associated protein 43 (GAP43), brain-specific homeobox/POU domain protein 3A (Brn3a), B-cell lymphoma extra-large (Bcl-xL), and glial fibrillary acidic protein (GFAP). The primary antibodies were diluted in PBS with 0.05% Triton X-100 and 1% non-fat dry milk. Subsequently, sections were incubated for 2 hours at RT with secondary fluorescent antibodies (see ) diluted 1:1000 in the same solution, and with 4′,6-diamidino-2-phenylindole (DAPI; 100 ng/mL; Sigma-Aldrich, St. Louis, MO, USA) for nuclei counterstaining. Negative controls without primary antibodies were included in the analysis.
Gene Expression Quantification by Real-Time Quantitative PCR
Retinas were collected, rapidly frozen in dry ice, and stored at –80°C until use. Total RNA was purified from retinal lysates using the Zymo Direct-zol purification kit and TRIzol (Zymo Research Corp., Irvine, CA, USA). Complementary DNA (cDNA) was synthesized from 1 µg of total RNA using the High-Capacity Reverse Transcription Kit with ribonuclease inhibitor (Applied Biosystems, Waltham, MA, USA) according to the kit instructions. Target gene expression was quantified through qPCR using a QuantStudio sequence detection system (Applied Biosystems) and SYBR Green (Maxima; Thermo Fisher Scientific, Waltham, MA, USA) in a reaction mix with a final volume of 10 µL. Reactions were conducted under the following conditions: initial denaturation at 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds, 60°C for 15 seconds, and 72°C for 15 seconds. Dissociation curves were incorporated after each qPCR run to ensure primer specificity. The relative abundance of mRNA was determined using the comparative threshold cycle (Ct) method, with the 2^−ΔΔCt formula. Quantification was expressed relative to ribosomal protein S18 (RPS18) mRNA.
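To make the comparative Ct calculation concrete, below is a minimal Python sketch of the 2^−ΔΔCt computation (not part of the original analysis pipeline; the function name and the example Ct values are hypothetical). It normalizes each target gene to RPS18 and uses the sham group as the calibrator.

```python
import numpy as np

def relative_expression(ct_target, ct_rps18, ct_target_sham, ct_rps18_sham):
    """2^-ddCt relative quantification: normalize the target gene to the
    RPS18 reference, then calibrate against the mean sham delta-Ct."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_rps18)          # per-sample dCt
    d_ct_sham = np.asarray(ct_target_sham) - np.asarray(ct_rps18_sham)
    dd_ct = d_ct - d_ct_sham.mean()                              # ddCt vs sham
    return 2.0 ** (-dd_ct)                                       # fold change

# Hypothetical mean Ct values (one per animal) for an injured group vs sham
fold_change = relative_expression([24.1, 24.6, 23.9], [16.2, 16.4, 16.1],
                                  [23.0, 23.2, 22.9], [16.1, 16.3, 16.2])
```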
Western Blot Analysis
Total proteins were extracted from retinas after tissue homogenization using a GE 130PB sonicator (Cole-Parmer, Vernon Hills, IL, USA) for 30 seconds in radio-immunoprecipitation assay (RIPA) buffer (Abcam, Cambridge, UK), in the presence of a protease inhibitor cocktail (Roche, Basel, Switzerland). Equivalent amounts of protein (40 µg) were separated using 12% SDS-PAGE under reducing conditions and transferred to nitrocellulose membranes (Bio-Rad). To block free binding sites, membranes were incubated with 5% non-fat milk (Bio-Rad) in tris-buffered saline (TBS) for 1 hour at RT. Subsequently, membranes were incubated overnight at RT with the appropriate antibody (see ) in 1× TBS with 0.05% Tween (TTBS) and 1% non-fat dry milk. After washing the membranes with TTBS (3 × 5 minutes), they were incubated for 2 hours with the corresponding HRP-conjugated secondary antibody (see ). Bands were visualized using ECL blotting detection reagent (Amersham-Pharmacia, Buckinghamshire, UK) and exposed to autoradiography films (Fujifilm, Tokyo, Japan). Kaleidoscope molecular weight markers (Bio-Rad) were used as reference for determining apparent molecular weights (MWs). Images were captured using a Gel Doc EZ Imager (Bio-Rad), and the optical densities of immunoreactive (IR) bands were analyzed using Image Lab software (Bio-Rad). Target IRs were normalized using glyceraldehyde-3-phosphate dehydrogenase (GAPDH) IR as a loading control.
Electroretinogram
Retinal function in all groups was assessed before and after the ONC or sham surgery by ERGs (see ), as previously described, with minor modifications. Prior to recording, the rats were dark-adapted overnight, after which all manipulations were carried out under dim red light. The animals were anesthetized with a 70% ketamine/30% xylazine mixture (1 mg/kg body weight, IP). Anesthesia was maintained with supplemental doses (0.05 mL) if needed. The cornea of the left eye was moistened with hypromellose (5 mg/mL) ophthalmic solution, and pupils were dilated with tropicamide-phenylephrine (50 mg/8 mg/mL) drops. ERGs were recorded under dark-adapted conditions and, after a 20-minute period of light adaptation, under photopic conditions. ERG responses to photic stimuli (0.7 ms flashes of 0.38 log cd.s/m², MGS-2 white Mini-Ganzfeld Stimulator; LKC Technologies) were recorded with a silver ring electrode placed on the left cornea, while ground and reference needle electrodes were placed SC in the tail and near the left eye, respectively. For all recordings, the bandpass was set from 0.1 hertz (Hz) to 1 kHz and the sampling frequency was 2 kHz. A total of 12 responses were averaged for both scotopic and photopic ERGs.
ERG Data Processing and Analysis
The quantitative analysis of photic ERGs (i.e., A- and B-wave amplitudes, implicit or peak times, and OPs) was performed according to the ISCEV standard guidelines, with low- and high-pass filters set at 75 and 1 Hz, respectively, for the A- and B-waves, and at 300 and 75 Hz, respectively, for the OPs. OPs were detected at flash intensities from 0.244 to 7.726 cd.s/m² and analyzed separately (OP1 to OP4). Data were analyzed using custom-made MATLAB scripts (MATLAB R2019a; MathWorks).
Statistical Analysis
In all the figures, values are expressed as mean ± SEM. Outliers were detected using the ROUT method (Q = 1%), and significant differences between groups were determined by one-way Brown-Forsythe and Welch ANOVA followed by Fisher's LSD post hoc test, or by unpaired t-tests, with Prism 10 software (GraphPad, San Diego, CA, USA). For ERG analysis, significant differences between the pretreated and post-treated data of the same animals were determined by unpaired t-tests. P values less than 0.05 were considered statistically significant and are represented with asterisks as follows: * P < 0.05, ** P < 0.01, *** P < 0.001, and **** P < 0.0001. Cell and axon counting was done manually, and fluorescence intensity was quantified as the total mean grey value or the mean grey value per area using the Analyze tool in Fiji. Sample sizes varied across analyses owing to outlier removal and to bioethical considerations, as animals with infection, stress, evident pain, or abnormal behavior were euthanized.
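As an illustration of the wave-parameter extraction described above (the actual MATLAB scripts are not shown here, so the filter order, function names, and window handling are assumptions), the following Python sketch separates the slow A-/B-wave band (1–75 Hz) from the OP band (75–300 Hz) and reads amplitudes and implicit times from a flash-locked average sampled at 2 kHz.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # sampling frequency in Hz, as in the recordings

def bandpass(trace, low, high, fs=FS, order=2):
    """Zero-phase Butterworth bandpass (filtfilt avoids latency shifts)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

def erg_parameters(avg_trace, fs=FS):
    """A-/B-wave amplitudes and implicit times from an averaged scotopic
    ERG, with the flash assumed at t = 0 (sample 0)."""
    t = np.arange(avg_trace.size) / fs
    waves = bandpass(avg_trace, 1, 75)       # slow A-/B-wave band
    ops = bandpass(avg_trace, 75, 300)       # oscillatory potentials
    i_a = int(np.argmin(waves))              # A-wave trough
    i_b = i_a + int(np.argmax(waves[i_a:]))  # B-wave peak after the trough
    return {"a_amplitude": -waves[i_a], "a_implicit_time": t[i_a],
            "b_amplitude": waves[i_b] - waves[i_a], "b_implicit_time": t[i_b],
            "op_trace": ops}
```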
GH Treatment Improved Optic Nerve Integrity 14 Days After Damage
We first assessed the general structure of the ON using IHC for β-III-tubulin (green) in longitudinal sections 14 days after the injury ( A–M). In the sham group, there was a high β-III-tubulin-IR that exhibited a uniform linear distribution along the nerve (see A, C), and DAPI-stained nuclei (blue) showed a symmetrical distribution of cells in line with the axons (see B, C). In the injured nerves, the fluorescence intensity associated with this IR was drastically decreased ( P < 0.01) in comparison to the control group (see M), particularly toward the brain section (red arrow), and the thickness of the nerve fibers appeared reduced (red dotted line; see D, F); in addition, nuclei staining showed a higher density of cells at the crush site (see E, F). Following GH treatment, these parameters appeared to improve, indicating enhanced ON integrity (see G, I, M), but the high density of nuclei was still observed (see H). The negative control, without primary antibody, did not show any immunofluorescence (IF; see J–L).
GH Promoted RGC Survival 14 Days After ONC Injury
Fourteen days after ONC, we evaluated the surviving RGCs by IHC using Brn3a as a specific label ( N–Z). ONC injury resulted in a substantial loss of RGCs compared to a sham retina ( P < 0.0001; see Z). The average number of these cells in a sham retina was 9.23 ± 1.26 cells/100 µm (see N–P, Z), whereas the average number of surviving RGCs in the untreated ONC group was 1.11 ± 0.90 cells/100 µm (see Q–S, Z). In the group treated with GH, the number of surviving RGCs significantly increased to 2.64 ± 1.16 cells/100 µm ( P < 0.001; see T–V, Z) compared to the ONC group. However, it was still significantly lower than in the sham group ( P < 0.0001; see Z). These results demonstrate that GH partially improved the survival of RGCs 2 weeks after ONC. The negative control, without primary antibody, did not show any IF (see W–Y).
Expression of Neurotrophic, Synaptogenic, and Glial Markers 24 Hours After ONC
The corresponding figure shows the effect of the treatments (ONC and ONC + GH) 24 hours after the lesion upon the expression of several gene markers (see ) involved in the following processes: neuroprotection and cell survival (NGF, BDNF, NT-3, CNTF, GDNF, BMP4, Bcl-2, Brn3a, IGF-1, TrkA, TrkB, TrkC, p75NTR, and CNTFR); synaptogenesis (SNAP25, NRXN1, NLGN1, DLG4, and GAP43); glial activity (GFAP, GLAST, and GLUL); and excitotoxicity (GLAST, GLUL, GLT1, GluR2, and GRIK4), as determined by qPCR. For the genes that are known to have strong neurotrophic actions or promote cell survival, the ONC injury significantly downregulated NT-3 ( P < 0.0001), CNTF ( P < 0.01), Bcl-2 ( P < 0.01), and Brn3a ( P < 0.001), whereas BDNF, BMP4, and IGF-1 remained unchanged; in contrast, NGF and GDNF were clearly upregulated ( P < 0.001 and P < 0.05, respectively) in comparison to the sham control (black reference dashed line). In turn, GH treatment upregulated IGF-1 ( P < 0.05 compared to ONC), slightly diminished the upregulation of NGF (from P < 0.001 in ONC to P < 0.01 in ONC + GH), and somewhat attenuated the downregulation of NT-3 (from P < 0.0001 in ONC to P < 0.001 in ONC + GH), Bcl-2 (from P < 0.01 to P < 0.05), and Brn3a (from P < 0.001 to P < 0.01). GH also annulled the ONC-induced effects on the expression of CNTF and GDNF and downregulated the expression of BMP4 ( P < 0.05), whereas BDNF remained unaltered (see ). TrkA, p75NTR, and CNTFR did not change in either condition. Instead, TrkB and TrkC were both downregulated by ONC ( P < 0.01) compared to the sham control, and GH treatment slightly attenuated the downregulation of TrkC (from P < 0.01 to P < 0.05) but had no effect upon TrkB (see ). Our results further showed that ONC injury strongly downregulated four out of five synaptogenic markers evaluated: SNAP25 ( P < 0.01), NRXN1 ( P < 0.05), NLGN1 ( P < 0.05), and GAP43 ( P < 0.0001), as compared to the sham controls, whereas DLG4 was unaltered.
GH treatment mitigated such downregulations and stimulated a recovery in the expression of SNAP25 ( P < 0.05 in comparison to the ONC group), NRXN1, and NLGN1 to levels similar to the sham controls. In the case of GAP43, although GH increased its expression in comparison to the ONC group ( P < 0.05), it was still lower than the sham levels ( P < 0.001; see ). As for the markers related to glial activity (GFAP, GLAST, and GLUL) and/or excitotoxicity (GLAST, GLUL, GLT1, GluR2, and GRIK4), ONC injury did not modify the expression of GFAP but downregulated the expression of GLAST ( P < 0.05), GLUL ( P < 0.01), GLT1 ( P < 0.01), GluR2 ( P < 0.05), and GRIK4 ( P < 0.05; see ). Notably, GH treatment restored the expression of GLAST, GLUL, and GRIK4 to control levels. On the other hand, GH further downregulated GluR2 ( P < 0.01) and had no effect on the ONC-induced downregulation of GLT1 ( P < 0.01; see ).
Expression of Neurotrophic, Synaptogenic, and Glial Markers 14 Days After ONC
We also studied the expression levels of the above-mentioned genes 2 weeks after ONC injury, combined or not with GH treatment. At this time point, the ONC injury upregulated NGF ( P < 0.001) and BMP4 ( P < 0.05) in comparison to the sham controls but did not affect the expression levels of BDNF, NT-3, CNTF, GDNF, Bcl-2, and IGF-1. In contrast, it strongly downregulated the expression of Brn3a ( P < 0.0001). In turn, GH treatment partially reversed the upregulation of NGF expression ( P < 0.05) compared to ONC, although it was still significantly increased compared to basal sham levels ( P < 0.05). GH also upregulated GDNF ( P < 0.05) compared to both ONC and sham controls; further upregulated BMP4 in relation to ONC ( P < 0.01); did not change the strong downregulation of Brn3a caused by the lesion ( P < 0.0001 in ONC + GH); and downregulated the IGF-1 expression levels as compared to ONC ( P < 0.05). Of note, neither condition altered BDNF, NT-3, CNTF, or Bcl-2 expression levels in comparison to the sham group (see ). Likewise, the expression of the neurotrophic receptors TrkA and CNTFR was not modified by ONC with or without GH, whereas TrkB and TrkC were both downregulated by ONC injury ( P < 0.01 and P < 0.05, respectively). However, GH treatment reversed the downregulation of TrkB to control levels but further enhanced the downregulation of TrkC ( P < 0.01). In the case of p75NTR, the injury did not affect its expression significantly, but the GH treatment induced an upregulation ( P < 0.05; see ). As for the synaptogenic markers, the expression levels of SNAP25 and NLGN1 were upregulated by both the injury and GH treatment ( P < 0.05 and P < 0.01, respectively), whereas NRXN1 did not show any changes. Notably, DLG4 was upregulated by the ONC injury, but GH treatment attenuated such upregulation; in contrast, GAP43 was downregulated by the injury ( P < 0.001), whereas GH treatment rescued its expression ( P < 0.05 compared to ONC) to control levels (see ). Among the glial activity (GFAP, GLAST, and GLUL) and/or excitotoxicity (GLAST, GLUL, GLT1, GluR2, and GRIK4) markers, only two genes (GFAP and GLT1) showed significant changes at this time point, whereas the other four remained unchanged in both conditions. GFAP was strongly upregulated by ONC injury ( P < 0.0001), but GH treatment significantly reduced such a response ( P < 0.05), although its expression levels were still higher in comparison to the control group ( P < 0.05).
Last, GLT1 was also upregulated by both ONC ( P < 0.05) and ONC + GH ( P < 0.05) in comparison to the sham controls (see ).
GH Effect on the Regeneration and Anterograde Axonal Transport
The functional relevance of the above-mentioned effects of GH on anterograde axonal transport after ONC injury was then analyzed. In the sham group, uninterrupted CTB-AF488 (green) transport was observed throughout the entire nerve (see A). In contrast, in the ONC group, a reduction of CTB-AF488 fluorescence in the section proximal to the retina was observed (see B, yellow bracket), corresponding to the very few RGCs observed in that group (see Q, Z). The ONC + GH group showed higher fluorescence in the same section (see C, yellow bracket), because it had more than twice as many surviving RGCs (see T, Z). A magnification of the area around the crush injury site (white asterisks) clearly demonstrated that the ONC group showed no axons projecting beyond that point. We measured the number of CTB-positive axons within the region extending from the injury site up to 1 mm toward the brain, covering the entire width of the ON. We found a mean of 0.23 CTB-positive axons in the ONC group (see F). In contrast, GH treatment resulted in a few axons projecting beyond the injury site (see E, white arrows), with a mean of 2.45 CTB-positive axons in the same area, which was significantly higher than in the ONC group ( P < 0.0001; see F). To further explore the effect of GH treatment upon the previously observed CTB-labeled axons (green) in the ON, we conducted a colocalization analysis, also monitoring the presence of GAP43-IR axons (red), a widely used marker for axonal regeneration. The ONC group showed no GAP43-IR-positive axons (see G, I, M), and, as previously mentioned, there were also almost no CTB-labeled axons projecting beyond the injury site (white asterisks, see H, I, F). In contrast, the GH-treated group had numerous GAP43-positive axons beyond the crush site ( P < 0.01 versus ONC; see J, L, M, yellow arrows). However, these did not colocalize with the CTB-labeled axons (white arrows, see K, L). A WB for GAP43 performed with a protein extract of rat retinal homogenate is shown as a control for antibody specificity (see N). Additional figures show GAP43 IHC and colocalization with CTB-AF488 labeling and DAPI staining (blue) in the sham group, as well as DAPI staining of the colocalization (yellow) of GAP43 and CTB in the ONC and ONC + GH groups.
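Axon counting in this study was manual; purely as an illustration of the quantity being reported (CTB-positive profiles within 1 mm distal to the crush), here is a hypothetical semi-automated sketch using scikit-image, where the threshold choice and the size filter are assumptions rather than the published procedure.

```python
import numpy as np
from skimage import filters, measure

def count_ctb_profiles(ctb_image, crush_col, px_per_mm, min_area=10):
    """Count CTB-positive objects in the 1-mm window distal (toward the
    brain) to the crush site in a longitudinal ON image.
    ctb_image: 2D array, columns running from retina (left) to brain (right).
    crush_col: column index of the crush site; px_per_mm: image scale."""
    window = ctb_image[:, crush_col:crush_col + int(px_per_mm)]
    mask = window > filters.threshold_otsu(window)   # simple global threshold
    labels = measure.label(mask)
    # Discard specks; remaining connected components approximate axon profiles
    return sum(1 for region in measure.regionprops(labels)
               if region.area >= min_area)
```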
Survival Associated Proteins Respond to GH Treatment
We next evaluated the abundance ratios of the Bcl-2 family members Bcl-2/Bax and Bcl-xL/Bad by WB ( A–C), and the presence of NT-3-IF by IHC ( D–O), in retinal tissues obtained 24 hours after ONC. In comparison to the sham controls, the ONC group did not show any changes in the Bcl-2/Bax ratio but exhibited a significant reduction of the Bcl-xL/Bad ratio ( P < 0.05; see A–C). In contrast, the ONC + GH group showed a higher ratio of both Bcl-2/Bax and Bcl-xL/Bad (see A–C). In the GH-treated group, the Bcl-2/Bax ratio was higher than the levels of both the sham group ( P < 0.05) and the ONC group ( P < 0.01; see A, B). Similarly, the Bcl-xL/Bad ratio was much higher in the GH-treated group compared to the ONC group ( P < 0.05; see A, C). At this same time point, NT-3-IF (red) in the retinal ganglion cell layer (RGCL) was clearly reduced in the ONC group compared to sham retinas (see D–I); however, GH restored its presence in the lesioned RGCL (see J–L). It should be noted that although GH treatment promoted a clear increase of the positive signal for NT-3, its distribution was not as well organized as in the sham group (see J–L). The negative control, without primary antibody, showed no IF (see M–O). The presence of Bcl-xL in the surviving cells of the RGCL was also analyzed through IHC 14 days after ONC (see P–AC). Results showed that, in the sham group, 82.69 ± 8.89% of cells in the RGCL were positive for Bcl-xL (red arrows), with a mean fluorescence (red) intensity of 8681.15 ± 3562.01 arbitrary units (a.u.; see P–R, AB, AC). In turn, the ONC group had significantly fewer positive cells (58.27 ± 9.99%, P < 0.001) and lower mean fluorescence intensity (5376.28 ± 2282.29 a.u., P < 0.001; see S–U, AB, AC). Conversely, both parameters were significantly higher in the ONC + GH group compared to the ONC group ( P < 0.01 and P < 0.05), with 67.59 ± 11.39% positive cells and a mean fluorescence intensity of 7621.39 ± 2917.09 a.u., respectively (see V–X, AB, AC). Green arrows indicate Bcl-xL-negative cells. The negative control, without primary antibody, showed no positive signal (see Y–AA). A WB for NT-3 and Bcl-xL performed with a protein extract of rat retinal homogenate is shown as a control for antibody specificity (see AD).
GFAP Expression Is Prevented With GH Treatment
Glial cells play a significant role in mediating cell death in the retina, and the activation of Müller cells can be detrimental in the ONC model. Thus, besides assessing the expression of glial-related genes 24 hours and 14 days after the lesion (see , ), we also examined glial activation by analyzing the presence of GFAP-IF (green) in the retina at 14 days after ONC. Our previous observations of GFAP mRNA expression 14 days after injury (see ) were confirmed by IHC in the retina (see ). The sham group showed a positive signal for GFAP-IR only in the RGCL (see A–C, M). However, in the ONC group, the intensity of the GFAP-IF signal clearly increased ( P < 0.01; see M), and its distribution extended from the RGCL up to the INL (see D–F). In contrast, in the GH-treated group, the intensity and distribution of the GFAP signal were markedly reduced compared with the lesioned group ( P < 0.05) and were more similar to the sham control (see G–I, M). The negative control, without a primary antibody, showed no positive signal (see J–L). A WB for GFAP performed with protein extract of rat retinal homogenate is shown as a control for antibody specificity (see N).
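The fluorescence readouts above (total mean grey value and mean grey value per area) were obtained in Fiji; an equivalent minimal sketch in Python/NumPy is shown below, with the mask argument standing in for a manually drawn region of interest (function and variable names are hypothetical).

```python
import numpy as np

def grey_value_metrics(channel, roi_mask=None):
    """Fiji-style intensity metrics for one 8- or 16-bit fluorescence channel.
    Returns the total mean grey value of the image and, when an ROI mask is
    supplied, the mean grey value per area restricted to that region."""
    channel = np.asarray(channel, dtype=float)
    total_mean = channel.mean()
    if roi_mask is None:
        return total_mean, None
    roi = np.asarray(roi_mask).astype(bool)
    mean_per_area = channel[roi].sum() / roi.sum()  # mean grey value per area
    return total_mean, mean_per_area
```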
GH Protected Retinal Function in the ONC Model
Complementary to the functional assessment of anterograde axonal transport in the ON, the retinal function in the GH-treated ONC model was also analyzed. Under dark-adapted conditions that mobilize the rod pathway, the ERG response appeared reduced in the animals subjected to the ONC as compared to the sham group, whereas GH treatment tended to prevent this functional damage ( A). When the three groups were compared, no differences were observed in the amplitude of the A-wave, but the amplitude of the B-wave was decreased at flashes of 0.0076, 0.0244, and 0.0763 cd.s/m² in the ONC group compared to the sham group ( P < 0.05; Bi). Notably, no differences were found between the ONC + GH and sham groups (see Bi). The implicit times of the two waves showed an increase in the ONC group compared to the sham group ( Bii). For the A-wave, the ONC group had an increased implicit time at almost all flash intensities (0.0244, 0.0762, 0.2446, 0.7729, 2.450, and 7.726 cd.s/m²) compared to the sham group ( P < 0.05 and P < 0.01), whereas the ONC + GH group was no different from the sham group at any flash intensity and showed a significantly lower implicit time at 0.0762, 0.2446, 0.7729, 2.450, and 7.726 cd.s/m² than the ONC group ( P < 0.05 and P < 0.01; see Bii). Furthermore, the ONC group showed an increased implicit time of the B-wave compared to the control at all flash intensities (0.0076, 0.0244, 0.0762, 0.2446, 0.7729, and 2.450 cd.s/m²), whereas the ONC + GH group only showed an increased implicit time at 0.0762 cd.s/m² compared to the sham group ( P < 0.05). This group showed a significantly lower implicit time at 0.2446, 0.7729, and 2.450 cd.s/m² when compared to the ONC group ( P < 0.05 and P < 0.01; see Bii). Detailed values are provided in the supplementary material. At the highest light flash (7.727 cd.s/m²), the OPs of the three groups were compared. Results showed that OP2, OP3, and OP4 were significantly decreased in amplitude in the ONC group compared to the sham group ( P < 0.01; Ci). The OP4 in the ONC + GH group was also significantly lower than in the sham control group ( P < 0.05), but this group showed no differences compared to sham in OP1, OP2, and OP3; OP2 was also significantly higher than in the ONC group ( P < 0.05; see Ci). For the implicit times, the ONC group showed an increase in OP1, OP2, OP3, and OP4 ( P < 0.001, P < 0.01, and P < 0.05, respectively) compared to the sham group, whereas the ONC + GH group showed no differences with the sham group and successfully decreased those of OP1, OP2, and OP3 ( P < 0.05; see Cii). Detailed values are provided in the supplementary material. Results in D to F showed no differences in the ERG waves or the OPs before and after the sham intervention, meaning that the observed effects can be attributed specifically to the ONC or GH treatments. In contrast, the implicit time of the A-wave and the amplitude of the B-wave were significantly increased and decreased, respectively, after the ONC ( G, Hi, Hii) in comparison to the response before the lesion. The ONC also reduced the amplitude and delayed most of the OPs at most flash intensities ( Ii, Iii). Most notably, GH treatment prevented the ONC-induced reduction of the B-wave amplitude at all flash intensities except the highest, and also prevented the injury-induced increase in the implicit time of the A-wave at all flash intensities (see J, Ki, Kii). Additionally, the ONC-induced reduction of amplitude and delay in the OPs were partially mitigated by GH ( Li, Lii). These data showed the preventive effect of GH treatment on the retinal dysfunction associated with ONC injury. It is interesting to note that the photopic ERG was not modified by ONC alone or combined with GH.
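For the group comparisons reported above, outliers were removed with the ROUT method and groups were compared with Brown-Forsythe/Welch ANOVA in Prism. Because ROUT is GraphPad-specific, the sketch below substitutes a simple median-absolute-deviation screen and pairwise Welch t-tests (unequal variances) as a rough stand-in, not a reimplementation of the published analysis.

```python
import numpy as np
from scipy.stats import ttest_ind

def drop_outliers_mad(values, k=3.5):
    """Median-absolute-deviation outlier screen (a stand-in for ROUT)."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12
    return x[np.abs(x - med) / (1.4826 * mad) <= k]

def pairwise_welch(groups):
    """Pairwise Welch t-tests (unequal variances) over a dict of samples,
    e.g. B-wave amplitudes for {'sham': ..., 'ONC': ..., 'ONC+GH': ...}."""
    cleaned = {name: drop_outliers_mad(vals) for name, vals in groups.items()}
    names = list(cleaned)
    return {(a, b): ttest_ind(cleaned[a], cleaned[b], equal_var=False).pvalue
            for i, a in enumerate(names) for b in names[i + 1:]}
```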
This study aimed to explore the molecular, cellular, and functional effects of GH treatment in response to an ONC injury in the retina. To this end, we assessed RGC survival, ON integrity, axonal transport, and the expression of various neurotrophic, survival, glial-related, and synaptogenic markers. Additionally, we evaluated the impact on retinal electrical activity in response to photic stimuli following the lesion, with or without GH administration, under both acute (24 hours) and subacute (14 days) conditions. We report, we believe for the first time, a favorable effect of GH in ameliorating the retinal dysfunction and nerve damage provoked by an ONC lesion, offering valuable insights into the potential neuroprotective and regenerative effects of GH in the harmed eye. RGCs undergo apoptosis in response to ONC, with cell death typically peaking around 5 days post-injury; the rate of apoptosis then gradually declines, leaving around 25% of RGCs surviving roughly 2 weeks after the injury and causing permanent damage to the ON.
GH was administered via SC injections for 24 hours or 14 days, starting at the time of the injury and repeated at 12-hour intervals. The findings of this study showed a marked deleterious effect provoked by the ONC lesion and, with GH treatment, a significant enhancement of RGC survival 14 days post-ONC, as well as partial structural recovery and protection of the ON. The strong and rapid loss of RGCs and ON axons just 2 weeks after damage emphasizes the drastic nature of this injury model. Even under these conditions, GH was able to partially counteract the harm. These results align with several studies showing the ability of GH to improve cell survival and preserve tissue structure in the nervous system after injury. In postmortem human retinas, apoptotic RGCs identified by TUNEL staining did not colocalize with endogenous GH expression, whereas non-apoptotic RGCs were GH-positive, suggesting that locally produced GH might play a role in regulating cell death and survival. Additionally, a growth hormone releasing hormone (GHRH) agonist enhanced RGC survival 14 days after ONC and upregulated GH levels in the vitreous humor. Our findings confirm the neuroprotective effects of GH on RGCs post-injury, although it remains unclear whether this is a direct effect of GH or whether a mediator (e.g., IGF-1 or a neurotrophic factor) is involved, which is currently under exploration. Furthermore, a similar assessment performed in a chronic intraocular pressure elevation model may be of interest to determine whether GH treatment has a more effective neuroprotective or regenerative effect. It is known that after a nerve lesion, β-III-tubulin-IR distal to the injury site significantly decreases, compromising the structural organization that is crucial for maintaining functional axonal transport. Our observations showed that the ONC provoked a significant reduction in β-III-tubulin-IR in the affected ON, both at the site of the crush and distally from it. Moreover, the damaged nerve exhibited a diminished thickness at the crush site, indicating poor preservation of the axonal cytoarchitecture. In contrast, GH treatment appeared to partially mitigate these effects, because GH-treated nerves did not show a thickness reduction at the crush site as large as that in the ONC group, and the total IF farther from the crush site was not significantly different from the sham group, suggesting that GH preserves ON integrity. Our study also investigated the effect of the treatments upon the mRNA expression of several markers in the retina. It has been reported that the neuroprotective effect of GH involves the regulation of neurotrophic factors in response to injury in various areas of the CNS, including the chicken retina, and neurotrophic factors such as BDNF, CNTF, NT-3, GDNF, and NGF, as well as their receptors, are crucial for the survival of RGCs. Their expression in response to injury can vary over time and depends on the type or magnitude of the inflicted damage, showing both upregulation and downregulation at different time points, reflecting a dynamic endogenous response to ON injury. Consistent with these data, our study showed that GH induced significant changes in the expression of NT-3, CNTF, IGF-1, and GDNF. In particular, the injury strongly upregulated NGF, which coincides with several reports showing that ONC injury increases the presence of proNGF in the retina, a form that has proapoptotic effects, contrary to mature NGF.
We found that GH was able to partially mitigate this upregulation both at 24 hours and at 14 days. Because this was an mRNA quantification, we were not able to distinguish between proNGF and NGF, an issue that remains to be elucidated. As for other neurotrophic factors, GH successfully maintained the expression of GDNF and CNTF at sham levels at 24 hours and upregulated GDNF at 14 days. Glial cells are important mediators of protection and degeneration following retinal injury. The early upregulation of GDNF might reflect an endogenous protection response. Nevertheless, the fact that GH influences GDNF expression, along with other glial markers like GFAP, GLAST, and GLUL, suggests that glial cells could be key players in GH's neuroprotective effects in the retina. Similarly, IGF-1, a strong neuroprotective factor with both autocrine and endocrine actions, was upregulated by GH after 24 hours, likely due to direct GHR activation. As for NT-3, an early downregulation of its expression was observed in the ONC group; this downregulation was less significant but still present in the ONC + GH group. However, IHC revealed that, specifically in the RGCL, a strong reduction of NT-3-IR was induced by the injury, and GH was able to partially recover it, indicating a specific effect on the RGCL that was not reflected in the mRNA levels from whole retinal extracts. Additionally, of all the evaluated receptors, TrkB, TrkC, and p75NTR showed changes. GH successfully recovered TrkB expression 2 weeks after ONC but further downregulated TrkC and upregulated p75NTR at the same time point. All these factors are potent regulators of neuronal survival, and their absence promotes cell death in conditions that compromise axonal transport through the ON. GH treatment reversed the detrimental effects induced by ONC on their expression; thus, they might be implicated in mediating the survival effects of GH in the neural retina. Similarly, GH has been reported to upregulate the expression of synaptogenic markers, such as SNAP25 and NRXN1. These proteins are critical for synaptic formation and vesicle fusion in the retina and visual pathway. Twenty-four hours after the lesion, SNAP25, NRXN1, NLGN1, and GAP43 were all downregulated by ONC but upregulated in the ONC + GH group. For all of these genes except GAP43, the effects were transitory and disappeared by 14 days, suggesting that GH may counteract early synaptic degeneration caused by ONC. The protection exerted by GH was nevertheless durable, as GAP43 levels, which were not fully restored at 24 hours, remained elevated at 14 days. Furthermore, the impact of GH treatment on the maintenance of synaptic connections can be observed at the functional level, as confirmed by the full-field ERG data. Müller cell reactive gliosis has been observed in different glaucoma experimental models. It is characterized by an upregulation of GFAP and multiple cytokines, leading to dysregulated expression of the glutamate transporter GLAST and the enzyme glutamine synthetase (GLUL), thus promoting excitotoxicity. We found that ONC injury induced the upregulation of GFAP and the downregulation of GLUL, GLAST, and GRIK4 at 24 hours and 14 days after injury. At the early time point, GH maintained the expression levels of GLAST, GLUL, and GRIK4, but at 14 days these genes showed no changes. As expected, GFAP expression was strongly upregulated by damage after 14 days, and this response was partially attenuated by GH treatment.
The GH-induced attenuation of GFAP was further demonstrated by IHC, in which a mitigation of its IR was observed in comparison with the ONC group at 14 days as well. In an uninjured retina, GFAP is typically expressed by astrocytes, but, after damage, Müller glial cells also become GFAP-positive. ONC injury promotes a dysregulation of specific markers expressed by macroglial cells, such as GLAST and GLUL. Given the significant influence of GH treatment on GFAP and the Müller cell genes GLAST and GLUL, it is likely that GH's neuroprotective effects are at least partly mediated by Müller glia and reactive astrocytes. Notably, GH has been proposed as an important modulator of inflammation and of macrophage and microglia activation, not only in the retina but also in other CNS areas in different models of neural damage. To deepen our understanding of these complex interactions, we are currently exploring how GH reduces glial activation, focusing on the GH receptor and its signaling pathways in RGCs. In addition to the structural characterization of the ON by β-III-tubulin IHC, we explored the effect of GH upon anterograde axonal transport in the ON using CTB labeling. Intravitreal injection of fluorescent CTB allows for the visualization of active axonal transport, which is compromised during ON degeneration. In the section proximal to the retina, the damaged ONs displayed a significantly reduced fluorescence, which correlates with the approximately 20% of surviving RGCs. Conversely, the ONC + GH group exhibited enhanced fluorescence in the proximal section of the ON, reflecting a greater number of surviving RGCs. Upon closer examination, it was evident that the ONC group had almost no axons projecting beyond the injury site. In contrast, the ONC + GH group showed a few axons extending farther. This result is consistent with unpublished data obtained in mice (manuscript in preparation) and suggests that GH promotes the maintenance not only of axonal integrity but also of active anterograde transport, either by promoting axonal survival or by inducing axonal regrowth after damage. GAP43 is known to increase its expression in RGCs following certain types of damage, such as ischemia/reperfusion, as part of an endogenous plasticity and survival response. To determine whether the CTB-labeled axons were newly formed or simply surviving axons, we conducted a colocalization study using the regeneration marker GAP43, because its expression is closely associated with the capacity for axonal regeneration and plasticity of the RGCs. The ONC group displayed no GAP43-positive axons beyond the injury site. Conversely, the GH-treated group showed some GAP43-positive axonal growth cones farther from the injury site, although these did not colocalize with the CTB-labeled axons. Previous reports have also indicated that GH administration upregulates GAP43 expression in the chicken retina under excitotoxic conditions. In line with these observations, GAP43 expression in the mammalian retina was decreased in the ONC group at 24 hours and 14 days after injury, whereas GH upregulated it at both time points, which further confirms GH's effect on this regeneration marker, both in the retina and in the ON. However, further investigation is needed to determine the alternative source of some of the observed regrowing GAP43-IR axons that did not colocalize with CTB-labeled axons.
The regulation of antiapoptotic or proapoptotic members of the Bcl-2 family can reduce RGC loss in different injury models and mitigate axonal damage in the ON. Notably, Bax is considered a master regulator of axonal degeneration, along with other Bcl-2 family members, such as PUMA. In addition to the role of Bcl-xL in promoting neuronal survival, it also inhibits neurotrophin insufficiency-dependent axon degeneration in cell cultures. Our findings indicate that, 24 hours after injury, the Bcl-2/Bax and Bcl-xL/Bad ratios in the GH-treated group favored antiapoptotic over proapoptotic proteins, the opposite of what was observed in the ONC group, in which proapoptotic proteins predominated and Bcl-xL was almost totally depleted. This shift away from proapoptotic proteins under GH treatment correlated with the increased survival rate and better axonal integrity observed at the later time point. The results also suggest that GH inhibits the intrinsic apoptotic pathway or changes in the permeability of the mitochondrial outer membrane. These effects may be mediated through the activation of the PI3K/Akt pathway, as Akt is known to inhibit Bax and Bad, and this pathway is highly activated by GH in the retina after damage. Moreover, our research demonstrated that, 14 days after the injury, a greater number of surviving cells in the RGCL were positive for Bcl-xL in the GH-treated group, and their reactivity was higher compared to those in the ONC group. In glaucoma and other ON and RGC injury models, Bcl-2 and Bcl-xL expression typically decreases, whereas Bax and Bad increase. In our study, GH countered this dynamic after ON injury, improving the Bcl-2/Bax and Bcl-xL/Bad ratios 24 hours after injury, as well as the presence of Bcl-xL in the RGCL at 14 days after damage. Most importantly, the protective effect of GH against ONC-induced apoptosis of RGCs was accompanied by near-normal maintenance of retinal function. As previously shown, ONC disrupted both the A- and B-waves, slowing them and decreasing the B-wave amplitude. Because changes in the magnitude of ERG waves are related more to the number of neurons recruited, whereas changes in implicit time are associated with synaptic connectivity, we interpret the decrease of the B-wave amplitude as a result of the massive loss of RGCs caused by ONC. In turn, the remaining RGCs, the rod and ON-bipolar cells, and the Müller glia, all of which are involved in the B-wave and were not eliminated by the ONC, likely explain why the B-wave did not disappear completely. On the other hand, the downregulation of the presynaptic proteins NRXN1 and SNAP25 and the postsynaptic protein NLGN1, as well as the alteration of glial function and the changes in neurotrophic factors induced by the ONC, possibly participate in the slowdown of the B-wave, because changes in the expression of glial and neurotrophic factors are associated with dampened ERG responses. The slowdown of the A-wave denotes that rod-to-bipolar cell connectivity is affected. We further found that the OP amplitude and implicit time were decreased and increased, respectively, in the lesioned eyes, which indicates that the ON injury affected reciprocal synapses between rod bipolar cells and AII amacrine cells, or between rod bipolar cells and A17 amacrine cells. A loss of the above-mentioned cells cannot be excluded.
In this context, the maintenance of the B-wave amplitude at almost all photic flash intensities, and of the kinetics of the A- and B-waves at values not different from the control, indicates that GH favorably preserved the ERG response by maintaining not only the number of RGCs but also the synaptic connections between rods and bipolar cells, as well as between the input cells of the RGCs (i.e., the ON-bipolar cells and AII amacrine cells) and the RGCs. Maintaining OP kinetics and decreasing the effect of ON injury on OP amplitude specifically indicates a protective effect of GH on synapses between rod bipolar cells and AII amacrine cells and/or between rod bipolar cells and A17 amacrine cells. These interpretations are consistent with the widespread presence of the GH receptor in the neural retina of several species, and particularly in RGCs. Because the ONC did not affect the ERG under photopic conditions, we could not assess the action of GH on the cone and OFF-pathway responses. Nevertheless, because rats are nocturnal, the effects of GH on the rod pathway are the most physiologically relevant observations. Demonstration of the functional protection exerted by GH in the retina requires further studies in diurnal mammals. Our functional study has other limitations: because ERGs were recorded before and after damage, and not longitudinally, we may have missed transient changes in retinal function. In addition, we did not measure the early components of the ERG, the negative and positive scotopic threshold responses, which are used to specifically assess RGC function. However, our functional data demonstrate that GH exerts broader effects on the inner retina than those expected and already shown in the RGCL.

In conclusion, this study has unveiled the potential of GH treatment to mitigate the detrimental effects of ONC injury in the male rat retina. Importantly, this research represents the first evidence of a neuroprotective effect of systemic GH administration in the mammalian retina. Our findings support the notion that GH significantly enhances RGC survival and maintains retinal function, which has important implications for vision recovery following ON injury. The modulation of the Bcl-2 family of proteins and the promotion of axonal integrity in the ON demonstrate that GH exerts neuroprotective and regenerative actions. Additionally, the effect of GH upon neurotrophic factors, synaptogenic markers, and glial-related genes confirms its potential to promote synaptic plasticity and ameliorate detrimental reactive gliosis. As the underlying mechanisms and specific cellular roles of GH are uncovered, these insights may open new avenues for therapeutic approaches in the context of ON injuries, offering alternatives for improved outcomes in vision-related disorders.
Urinary Biomarkers Associated With Pathogenic Pathways Reflecting Histologic Findings in Lupus Nephritis

Systemic lupus erythematosus (SLE) is a common autoimmune disease that affects young female patients and causes a variety of organ disorders. Half of the patients with SLE develop lupus nephritis (LN), and 10% to 15% of patients with LN develop end‐stage renal disease. LN is divided into six classes according to its glomerular lesions, based on the International Society of Nephrology/Renal Pathology Society (ISN/RPS) classification. However, this classification is not precise enough to accurately evaluate and quantify the progression or expansion of lesions, which makes it necessary to add the activity and chronicity indices defined by the National Institutes of Health (NIH) to the ISN/RPS classification. In addition, it is important to evaluate chronic tubulointerstitial lesions, which better reflect renal prognosis. Therefore, renal biopsy is still the gold standard for the definitive diagnosis and classification of LN, and its severity is determined by various pathologic parameters. However, renal biopsy is an invasive procedure that carries a variable risk of bleeding and is not feasible in cases with associated complications and risk factors. Therefore, the development of noninvasive biomarkers predictive of renal histopathological findings is a great unmet need in clinical practice.

The role of urinary biomarkers in diagnosing LN and predicting relapse has been recognized, and many candidate proteins have been reported for this purpose. Recently, proteomic approaches have been used to identify serum and urine proteins as unbiased screening biomarkers in LN. However, all these studies identified urinary biomarkers by comparing patients with active LN and healthy controls. Hence, these studies did not consider the various differences in histologic characteristics between active and chronic lesions in LN. Recently, the Accelerating Medicines Partnership study reported urine biomarkers associated with each renal histologic finding according to the NIH activity and chronicity index. However, biomarkers that can predict the progression of individual lesions remain unidentified. This study aimed to identify the pathogenic signal pathways in LN and to elucidate urinary biomarkers for predicting the presence or severity of histologic findings of LN by precisely evaluating renal histology.

Patients and sample preparation

Consecutive patients with biopsy‐proven class III/IV, III/IV+V, or V LN were recruited from Keio University Hospital. Samples from two cohorts were used: (1) a discovery cohort for screening, including patients with LN (n = 24) and diabetic nephropathy (n = 3) diagnosed by renal biopsy, and (2) a validation cohort for enzyme‐linked immunosorbent assay (ELISA) validation, including patients with LN (n = 24). Serum and urine samples were collected within one week before renal biopsy and treatment intensification. Clean‐catch midstream urine samples were collected in sterile containers and refrigerated within one hour of collection. Blood samples were collected, and the serum was immediately separated by centrifugation. The samples were then aliquoted and stored at −80°C. The clinical data at the time of sample collection were recorded.
Informed consent was obtained from all patients, and the study was approved by the Ethics Committee of our institution (Keio University School Hospital, approval number 20140093).

Aptamer‐based screening

Serum and urine samples obtained from patients in the discovery cohort were analyzed by screening with a slow off‐rate modified DNA aptamer–based capture array (SOMAscan; SomaLogic), a comprehensive high‐throughput proteomic assay that measures 1,305 proteins using an Agilent microarray readout. This assay is based on interactions between the aptamers and proteins in a sample. First, aptamer‐coated streptavidin beads were added to the sample to allow the aptamers to bind to the proteins. Next, the bound proteins were tagged with biotin, and the aptamer–protein complexes were released from the streptavidin beads. These aptamer–protein complexes were then recaptured on a second set of streptavidin beads, and the aptamers were released from the proteins. Finally, the aptamers were hybridized to complementary sequences on a microarray chip and quantified by fluorescence. Concentrations were measured for 1,305 proteins, and the results were expressed as relative fluorescent units (RFU), which are related to the amount of protein in the original sample. These RFU values were standardized by the urinary creatinine concentration.

Histologic scoring system

We evaluated renal histologic findings more quantitatively than the ISN/RPS lesion definitions and classification of renal involvement or the NIH activity and chronicity index by developing an original scoring system. The scoring system included a total of 20 renal histologic findings, comprising 16 glomerular lesions and 4 tubulointerstitial lesions (Supplementary Tables and ). The renal histology was evaluated using a light microscope. Light microscopy specimens were embedded in paraffin, sectioned, and stained with hematoxylin‐eosin, periodic acid‐Schiff, periodic acid‐silver methenamine, and Masson's trichrome reagent. Each glomerular histologic finding was scored using one of two methods. (1) For endocapillary hypercellularity and subendothelial deposits, the percentage of lesions in the total glomerular capillary tuft was calculated in the range of 0% to 100%, and this percentage was defined as the score of one glomerulus; the average score of all glomeruli was used as the histologic score of each case. (2) For the other glomerular histologic findings, the presence or absence of lesions was evaluated in each glomerulus, and the percentage of glomeruli with lesions (0% to 100%) was used as the histologic score of each case. For tubulointerstitial lesions, the percentage of lesions relative to the whole area of the tubulointerstitium was calculated in the range of 0% to 100%, and this percentage was used as the score of each case. Histologic scores were assigned by two skilled raters (KH and SS) supervised by a pathologist (AH), and the mean of the two was used for correlation analysis. This scoring system can quantify the diversity and expansion of renal histologic features in LN more precisely than the ISN/RPS classification or the NIH activity and chronicity index because it includes the same or more items as the existing evaluation systems.
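To make the two scoring rules concrete, the arithmetic can be sketched as follows. This is a minimal illustration with invented glomerulus values; the function and variable names are ours, not part of the study:

```python
import numpy as np

def glomerular_score_method1(lesion_pct_per_glomerulus):
    """Method 1 (endocapillary hypercellularity, subendothelial deposits):
    each glomerulus is scored as the percentage (0-100) of its capillary tuft
    occupied by the lesion; the case score is the mean over all glomeruli."""
    return float(np.mean(lesion_pct_per_glomerulus))

def glomerular_score_method2(lesion_present_per_glomerulus):
    """Method 2 (all other glomerular findings): each glomerulus is scored as
    lesion present (1) or absent (0); the case score is the percentage of
    glomeruli with the lesion."""
    return 100.0 * float(np.mean(lesion_present_per_glomerulus))

# Hypothetical biopsy with 5 scorable glomeruli
endocapillary_pct = [40, 10, 0, 60, 25]   # % of tuft affected per glomerulus
crescent_present = [1, 0, 0, 1, 0]        # cellular/fibrocellular crescent?

print(glomerular_score_method1(endocapillary_pct))  # 27.0
print(glomerular_score_method2(crescent_present))   # 40.0

# Per the Methods, each case score is then the mean of the two raters' scores:
case_score = np.mean([27.0, 31.0])  # 29.0 (second rater's value invented)
```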
Validation studies using ELISA

ELISA tests were performed on candidate urinary proteins that correlated with specific histologic scores in the discovery cohort. The correlation between the ELISA test results and renal histologic scores in the validation cohort was then confirmed. The concentrations of urinary proteins, including calgranulin B (Cloud‐Clone), S100A12 (R&D Systems), high mobility group N1 (HMGN1) (Cloud‐Clone), interleukin‐8 (IL‐8) (R&D Systems), allograft inflammatory factor 1 (AIF‐1) (Cloud‐Clone), superoxide dismutase 1 (SOD1) (RayBiotech), monocyte chemotactic protein 1 (MCP‐1) (R&D Systems), FSTL3 (R&D Systems), and insulin‐like growth factor binding protein 5 (IGFBP‐5) (R&D Systems), were measured using ELISA. Absolute urinary protein levels were determined using standard curves run on each ELISA plate and standardized by the urinary creatinine concentration.

Immunohistochemistry

Immunohistochemical staining of the renal tissues was performed using a Leica BOND MAX Immunostainer (Leica Biosystems) for calgranulin B, MCP‐1, and IGFBP‐5. After deparaffinization, the renal tissues were stained with BOND Epitope Retrieval Solution 1 (AR9961, Leica Biosystems) for calgranulin B and with BOND Epitope Retrieval Solution 2 (AR9640, Leica Biosystems) for MCP‐1 and IGFBP‐5. For immunohistochemical staining, the renal tissues were incubated with 1:2,000 diluted mouse monoclonal anti–calgranulin B (TA804091S, OriGene Technologies), 1:100 diluted mouse monoclonal anti–MCP‐1 (MAB 679, R&D Systems), and 1:50 diluted mouse monoclonal anti–IGFBP‐5 (MAB875, R&D Systems) antibodies. BOND Polymer Refine Detection (DS9800; Leica Biosystems) was used as the secondary antibody.

Data analysis

The JMP software version 15 (SAS Institute) was used for statistical analysis. Group comparisons were performed using the Wilcoxon rank sum test. Receiver operating characteristic (ROC) curve analysis was performed to analyze the sensitivity, specificity, and cutoff values of the biomarkers for renal histopathological findings. From the data of the aptamer‐based screening assay and the histologic scores, a heatmap of the correlation coefficients was created, and cluster analysis was performed using Python 3.9 (Seaborn package). Spearman's rank correlation and Pearson's correlation coefficients were used for the aptamer‐based screening–measured and ELISA‐measured data, respectively.
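The heatmap-plus-clustering step could look roughly like the sketch below. It runs on synthetic stand-in data (the real matrix was 24 patients × 1,305 creatinine-standardized proteins and 20 histologic scores), and all object names are illustrative assumptions rather than the authors' actual script:

```python
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Toy stand-ins: 24 patients x 50 proteins and 24 patients x 20 histologic scores
proteins = pd.DataFrame(rng.lognormal(size=(24, 50)),
                        columns=[f"protein_{i}" for i in range(50)])
histology = pd.DataFrame(rng.uniform(0, 100, size=(24, 20)),
                         columns=[f"lesion_{i}" for i in range(20)])

# Spearman rank correlation of every protein with every histologic score
corr = pd.DataFrame(
    {h: [spearmanr(proteins[p], histology[h])[0] for p in proteins.columns]
     for h in histology.columns},
    index=proteins.columns,
)

# Hierarchically clustered heatmap: Euclidean distance with Ward linkage,
# matching the distance and linkage choices stated in the Methods
grid = sns.clustermap(corr, metric="euclidean", method="ward", cmap="vlag", center=0)
grid.savefig("protein_histology_clustermap.png")
```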
There were two reasons for using Spearman's correlation in the screening phase of this study: (1) the measurements by SOMAscan themselves may not reflect precise concentrations and could show considerable variability in results, and (2) ranking was necessary when extracting specific clusters of urinary proteins during cluster analysis. Euclidean distance was used for the distance matrix, and Ward's method was used for hierarchical clustering. The Enrichr web tool (open source, available from https://amp.pharm.mssm.edu/Enrichr) was used to determine the cellular origin of the proteins that correlated with individual clusters of renal lesions. Ingenuity pathway analysis (IPA; QIAGEN Inc, https://www.qiagen.com/us) was used to ascertain which known pathways were enriched by proteins that were highly correlated with individual clusters of renal lesions. Protein–protein interaction (PPI) analysis was performed to identify interactions between individual clusters of renal lesions and highly correlated proteins. The PPI database STRING (available from https://string-db.org/) was used, and clustering was performed with the k‐means method. All data relevant to the study are included in the article or uploaded as online supplementary information.

Cluster analysis with histologic score and aptamer‐based protein screening

Urine and blood samples from 24 patients with LN (class III/IV = 14, III/IV+V = 6, V = 4) were used in the discovery cohort, and urine samples from 24 patients with LN (class III/IV = 10, III/IV+V = 6, V = 8) were used in the validation cohort. The clinical characteristics and renal histologic scores of the discovery, validation, and diabetic nephropathy groups are presented in Supplementary Tables and , respectively. The LN classification based on the renal pathology score completely matched the ISN/RPS classification, which was assessed by a pathologist. The two raters showed good interrater reliability for histologic assessments in the discovery and validation cohorts (κ values of 0.62 to 0.95 for the presence or absence of the 20 renal histologic findings). We first performed a cluster analysis on the 20 pathology items according to the correlation between each item and the histologic score. Cluster analysis of the 20 renal histologic findings identified five clusters (Figure ). Cluster 1 (extracapillary lesions) included cellular or fibrocellular crescents, fibrinoid necrosis, adhesions, and fibrous crescents. Cluster 2 (endocapillary lesions) included only active lesions within the glomerular tuft, including endocapillary hypercellularity, neutrophil infiltration, subendothelial deposits, and karyorrhexis. Cluster 3 (membranous and mesangial lesions) included basement membrane and mesangial lesions. Cluster 4 (tubulointerstitial lesions) contained all tubulointerstitial lesions but no glomerular lesions. Cluster 5 (other lesions) included two minor lesions: podocyte hypertrophy and collapsed glomeruli. Subsequently, the correlation between the histologic scores and the 1,305 urinary proteins was evaluated via cluster analysis for each of the five histologic clusters. For each protein, we calculated the sum of the ranks of the correlation coefficients with each histologic score included in the histologic cluster. Then the mean ranks of the correlation coefficients of the proteins in each urinary protein subgroup (UG) were compared, and the UGs with the highest ranks were identified.
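A minimal sketch of this rank-based subgroup selection, on invented correlation values, is given below. Note that we implement "highest rank" as rank 1 = strongest correlation, so the candidate UG is the one with the lowest mean rank; all names are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
scores = ["cellular_crescents", "fibrinoid_necrosis", "adhesions", "fibrous_crescents"]
# Toy Spearman coefficients: 50 proteins x the 4 scores of one histologic cluster
corr = pd.DataFrame(rng.uniform(-0.5, 0.9, size=(50, 4)),
                    index=[f"protein_{i}" for i in range(50)], columns=scores)

# Protein subgroups (UGs) from Ward clustering of the correlation profiles
ug = pd.Series(fcluster(linkage(corr.values, method="ward"), t=3, criterion="maxclust"),
               index=corr.index, name="UG")

# Rank proteins within each score (rank 1 = strongest correlation) and sum the
# ranks across the cluster's scores; a low sum means consistently high correlation
rank_sum = corr.rank(ascending=False).sum(axis=1)

# Compare mean ranks per UG; the best-ranked subgroup is the candidate UG
mean_rank = rank_sum.groupby(ug).mean()
print(mean_rank)
print("candidate subgroup: UG", mean_rank.idxmin())
```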
In clusters 1, 2, and 4, UGs (UG1, 2, and 4, respectively) with high correlation coefficients for each histologic score were identified (Figure and ). UG1, 2, and 4 contained 119, 59, and 85 urinary proteins, respectively (Figure ). These UGs shared the same proteins with each other to some extent, although the number of common proteins was less than that of each noncluster overlapping protein between UG1 and UG4 or UG2 and UG4. Notably, UG1 shared more than half of the same proteins with UG2. No urinary protein cluster with high correlation coefficients was extracted in clusters 3 and 5. Hence, we focused on UG1, 2, and 4 in our subsequent analyses. Representative renal histologic findings from clusters 1, 2, and 4 are shown in Supplementary Figure . We also created heatmaps of the correlation coefficients among each protein group (UG1, 2, and 4) and the 20 histologic findings as a supplementary analysis (Supplementary Figure ).

Cell‐type enrichment analysis

Cell‐type enrichment analysis revealed that different cell types were associated with the urinary protein molecules in each subgroup (Figure ). UG1 was characterized by an abundance of proteins derived from vascular endothelial cells and platelets, in addition to monocytes and neutrophils. In general, UG2 proteins were derived from cells involved in innate and acquired immunity, such as monocytes, neutrophils, plasma cells, and plasmacytoid dendritic cells. In contrast, UG4 contained many proteins derived from the mesenchymal stem cell lineages involved in fibrosis pathology, in addition to the expected appearance of monocytes.

IPA

IPA generated common and distinct pathways among the top 10 pathways in each subgroup (Figure ). Leukocyte adhesion–associated signals were activated in all three subgroups. In UG1, high mobility group protein B‐cell receptor CD221 signaling was significantly activated compared to that in the other subgroups. Inflammation‐related pathways such as IL‐3 and hypoxia‐inducible factor 1α signaling were dominant in UG2. In UG4, various kinds of IL‐17–related signals and fibrosis‐related pathways, such as fibrosis signaling pathways and pathology in chronic obstructive pulmonary disease, were prevalent. In addition, the integrated IPA of UG1, 2, and 4 confirmed that signaling pathways related to inflammatory cytokines and granulocyte or monocyte function were highly activated (Supplementary Figure ).

PPI analysis

In UG1, proteins contributing to chemotactic activity, cell growth and survival, and cytoskeletal dynamics were identified. In UG2, associated proteins contributing to chemotactic activity, cell migration and adhesion, and cell survival and differentiation were identified. In UG4, proteins contributing to chemotactic activity, fibrosis, and cellular homeostasis were identified (Figure ).

Identification of urinary biomarker proteins correlated with individual histologic scores

We attempted to identify urinary protein markers specific to the renal histologic findings in each cluster among the proteins included in the three subgroups. Table shows the top 30 proteins of UG1, 2, and 4 with high correlation coefficients between the histologic score and the urinary protein concentration. The top 30 proteins are listed according to the most prevalent renal histologic findings in clusters 1, 2, and 4. The top 20 proteins with high correlation coefficients for all 20 pathologic findings are listed in Supplementary Table . The number of correlated candidate urinary proteins varied according to the renal histology.
For active glomerular lesions resulting mainly from inflammatory mechanisms, urinary proteins with a correlation of ρ > 0.4 were frequently detected, including for cellular or fibrocellular crescents in cluster 1 and endocapillary hypercellularity in cluster 2. Meanwhile, no urinary protein correlated with subendothelial deposits in cluster 2, and no positive correlations with urinary proteins were detected for basement membrane lesions in cluster 3. In the analysis of UG1 and UG2, a comparison with diabetic nephropathy was made to exclude proteins whose urinary levels increase nonspecifically, even in nonnephritis glomerular pathologies. Because the scores of interstitial inflammation and interstitial fibrosis correlated with each other (ρ = 0.59, P < 0.01), we focused on proteins that correlated with only one of the two histologic scores in the analysis of UG4. To increase the detection rate of ELISA, proteins with an average intensity of <1,000 RFU in the 24 cases were excluded.

Under these conditions, the top 15 proteins with high correlation coefficients for the most prevalent renal histologic findings in UG1, 2, and 4 were selected as candidate proteins (Supplementary Table ). Among the urinary proteins in UG1 and UG2, only calgranulin B levels were significantly higher in the LN group than in the diabetic nephropathy group ( P = 0.02). In addition to calgranulin B, S100A12, HMGN1, and IL‐8 were selected in UG1, and AIF‐1, SOD1, S100A12, and HMGN1 were selected in UG2, as candidate proteins whose urinary concentrations did not increase significantly but tended to be higher in patients with LN than in those with diabetic nephropathy. In UG4, MCP‐1 (ρ = 0.72, P < 0.001) and FSTL3 (ρ = 0.69, P < 0.001) were selected as candidate proteins correlated with interstitial inflammation scores. Only IGFBP‐5 was extracted as a candidate protein that correlated solely with interstitial fibrosis scores (ρ = 0.41, P = 0.048) and not with interstitial inflammation scores. No protein correlated only with the interstitial inflammation scores.

The urinary concentrations of these nine candidate proteins were measured using ELISA and reanalyzed for correlation with the histologic scores. The results for S100A12, HMGN1, IL‐8, AIF‐1, SOD1, and FSTL3 were less sensitive than, or did not correlate with, the results of the aptamer‐based screening (Supplementary Table ). High correlations between the ELISA and aptamer‐based screening results were found for calgranulin B, MCP‐1, and IGFBP‐5 (r > 0.7; Figure ). These ELISA‐measured proteins were then analyzed for correlation with the histologic scores. Urinary calgranulin B levels correlated with the histologic scores of cellular or fibrocellular crescents (r = 0.65, P < 0.001) and endocapillary hypercellularity (r = 0.72, P < 0.001), representative of clusters 1 and 2 (Figure ). Urinary calgranulin B levels also correlated with the histologic scores of neutrophil infiltration (r = 0.58, P = 0.003), karyorrhexis (r = 0.68, P < 0.001), and fibrinoid necrosis (r = 0.56, P = 0.005) in cluster 2 (data not shown). Urinary MCP‐1 and IGFBP‐5 levels correlated with the interstitial inflammation (r = 0.65, P < 0.001) and fibrosis scores (r = 0.47, P = 0.018), respectively, in the 24 patients with LN (Figure ). In addition, a stronger correlation was observed for IGFBP‐5 when the three cases of diabetic nephropathy with significant interstitial fibrosis were included (r = 0.83, P < 0.001) (data not shown).
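The assay cross-check behind these r values can be illustrated as follows; the creatinine standardization mirrors the Methods, while the numbers themselves are invented:

```python
import pandas as pd
from scipy.stats import pearsonr

# Invented per-patient values for one protein (illustration only)
df = pd.DataFrame({
    "elisa_ng_ml":  [12.0, 30.5, 8.2, 55.1, 20.3, 41.7],   # absolute ELISA level
    "somascan_rfu": [900, 2100, 650, 3900, 1500, 3100],    # aptamer-assay signal
    "u_creatinine": [0.8, 1.2, 0.6, 1.5, 1.0, 1.3],        # urinary creatinine
})

# Standardize both assays by urinary creatinine, as in the Methods
elisa_std = df["elisa_ng_ml"] / df["u_creatinine"]
soma_std = df["somascan_rfu"] / df["u_creatinine"]

# Pearson correlation between the two creatinine-standardized measurements
r, p = pearsonr(elisa_std, soma_std)
print(f"Pearson r = {r:.2f} (P = {p:.3f})")  # agreement between the two assays
```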
The correlation between each serum protein concentration detected by aptamer‐based screening and the renal histologic score was also analyzed in the discovery cohort; however, no significant correlations were found between serum calgranulin B and cellular or fibrocellular crescents (ρ = 0.26, P = 0.22), endocapillary hypercellularity (ρ = 0.22, P = 0.29), neutrophil infiltration (ρ = 0.07, P = 0.73), karyorrhexis (ρ = 0.03, P = 0.89), or fibrinoid necrosis (ρ = 0.02, P = 0.93); between serum MCP‐1 and interstitial inflammation scores (ρ = 0.23, P = 0.27); or between serum IGFBP‐5 and interstitial fibrosis scores (ρ = 0.25, P = 0.24) (Supplementary File).

Validation of the three urinary proteins associated with individual histologic scores in the validation cohort

The correlation between ELISA‐measured urinary protein concentrations and each histologic score was examined in the validation cohort for calgranulin B, MCP‐1, and IGFBP‐5. Urinary calgranulin B levels correlated with the histologic scores of endocapillary hypercellularity (r = 0.55, P = 0.001) (Figure ). However, the validation cohort had fewer cases of cellular or fibrocellular crescents (Supplementary Table ); therefore, we could not confirm a significant correlation between urinary calgranulin B levels and these histologic scores (r = 0.41, P = 0.18) (Figure ). Urinary MCP‐1 levels correlated with the histologic scores of interstitial inflammation (r = 0.78, P < 0.001) (Figure ). In the eight cases in which urinary IGFBP‐5 levels could be measured, there was a tendency toward correlation with the histologic scores of interstitial fibrosis, but it did not reach significance (r = 0.63, P = 0.09) (Figure ). The correlation coefficients between all histologic scores of clusters 1, 2, and 4 and the ELISA‐measured concentrations of these three urinary proteins in the discovery cohort, validation cohort, and diabetic nephropathy group are shown in a heatmap (Supplementary Figure ). Using the results of the validation cohort, ROC analysis was performed to determine whether urinary calgranulin B, MCP‐1, and IGFBP‐5 levels could discriminate the presence or absence of each histologic finding (Figure ). The three urinary proteins exhibited moderate accuracy: calgranulin B with areas under the curve (AUCs) of 0.78 for cellular or fibrocellular crescents and 0.87 for endocapillary hypercellularity, MCP‐1 with an AUC of 0.84 for interstitial inflammation, and IGFBP‐5 with an AUC of 0.71 for interstitial fibrosis. In addition, it was confirmed in the validation cohort that urinary calgranulin B levels showed a strong correlation with the activity index (r = 0.63, P < 0.001), whereas urinary MCP‐1 levels did not (r = 0.43, P = 0.04). Urinary IGFBP‐5 levels were not significantly correlated with the chronicity index (r = 0.44, P = 0.27) (Figure ).

Immunohistochemical staining of the renal tissues

Immunohistochemical staining of the renal tissues was performed to clarify the localization of calgranulin B, MCP‐1, and IGFBP‐5. Calgranulin B was localized to the cells infiltrating glomeruli with endocapillary hypercellularity and cellular or fibrocellular crescents, but not to the cells in tubulointerstitial lesions. In addition, glomeruli without these active lesions showed little expression of calgranulin B (Figure ). In the renal tissues with high histologic scores for interstitial inflammation, MCP‐1 was localized on the tubular epithelial cells but not on the cells infiltrating tubulointerstitial lesions (Figure ).
IGFBP‐5 was localized on the spindle‐shaped fibroblasts present at sites of interstitial fibrosis and on normal tubular epithelial cells (Figure ). Neither MCP‐1 nor IGFBP‐5 was localized in the glomeruli.

The previously reported proteome analyses aimed at comparing patients with LN and healthy controls, or those with active and inactive LN, to identify biomarkers that could distinguish them from each other. The present study is novel in that, via a proteomic analysis, we identified urinary biomarkers in patients with LN that correlate with a renal histologic scoring system designed to precisely reflect the characteristics and progression of individual active and chronic lesions. In this study, pathway analyses revealed the protein and cell‐to‐cell relationships associated with each renal histologic finding. Neutrophils and monocytes are the dominant immune cell types in LN and have been implicated in its pathogenesis. This study corroborates previous findings by demonstrating that the activation of granulocytes and monocytes is involved in both glomerular and tubulointerstitial lesions.
Furthermore, a stronger association of proteins related to neutrophils and monocytes was observed in active endocapillary and extracapillary lesions (ie, S100A12, calgranulin B, IL‐8, and azurocidin). This finding is consistent with previous reports in LN with active glomerulonephritis. In contrast, an enhancement of various IL‐17–related signals was observed in tubulointerstitial lesions. In lupus model mice, IL‐17 increased the expression of molecules such as MCP‐1 in renal tubular epithelial cells (RTECs), thereby promoting the renal infiltration of macrophages. Furthermore, IL‐17 stimulation of RTECs increased the messenger RNA expression of other chemokines, such as Cxcl1, Cxcl2, and Cxcl8, which attract monocytes and neutrophils. Consequently, under IL‐17 stimulation, RTECs produce mediators that recruit dendritic cells and macrophages, which are crucial sources of transforming growth factor β that promote renal fibrosis, establishing IL‐17 as a key driver of RTEC‐mediated immunopathogenesis in LN. Thus, these findings support the results obtained from the pathway analyses of interstitial inflammation, tubular atrophy, and interstitial fibrosis in this study.

Calgranulin B (S100A9) belongs to the S100 family of proteins and exists as a heterodimer with S100A8 in vivo. S100A8/A9 is mainly expressed in the cytoplasm of neutrophils, monocytes, plasmacytoid dendritic cells, and endothelial cells and plays a proinflammatory role in various autoimmune diseases. Serum and urinary S100A8/A9 concentrations are known to increase in patients with active LN. The present study newly showed a correlation between urinary calgranulin B levels and quantified active intraglomerular lesions in LN. In addition, immunohistochemical staining of the renal tissues from patients with high urinary calgranulin B concentrations showed localization of calgranulin B in cells infiltrating the glomerulus. These results suggest that urinary calgranulin B levels may reflect local renal pathology, underscoring the importance of measuring urinary calgranulin B concentrations.

MCP‐1 (CCL2) is a chemokine secreted by monocytes, macrophages, fibroblasts, and vascular endothelial cells that contributes to monocyte activation and migration via CCR2. MCP‐1 is a useful urinary biomarker for predicting disease activity and renal prognosis in patients with LN. Urinary MCP‐1 has also been reported as a biomarker that can predict inflammatory cell infiltration of the tubulointerstitium in patients with LN. The present study demonstrated a correlation between quantified interstitial inflammation and urinary MCP‐1 concentration, indicating its potential as a detailed predictor of the severity of interstitial inflammation in patients with LN. MCP‐1 was strongly expressed in the tubular epithelial cells of renal tissues in which interstitial inflammation was more prominent, indicating that MCP‐1 expression in the tubular epithelial cells promotes inflammatory cell infiltration into the renal interstitium. However, urinary MCP‐1 levels did not significantly correlate with the severity of active glomerular lesions in the present study. This result reinforces the possibility that urinary MCP‐1 levels specifically predict the severity of interstitial inflammation.

IGFBP‐5 belongs to the IGFBP family and contributes to the regulation of insulin‐like growth factor signaling.
It has been reported to be involved in renal diseases, particularly chronic kidney disease (CKD), after its expression was discovered in the renal interstitium of patients with CKD using single‐cell analysis. IGFBP‐5 has also been reported to promote lung tissue fibrosis and participate in tissue remodeling. We newly report that urinary IGFBP‐5 levels can predict the severity of interstitial fibrosis in the renal tissues. Immunohistochemical staining showed IGFBP‐5 in cells that appeared to be fibroblasts in lesions with interstitial fibrosis, suggesting that it is associated with the pathogenesis of fibrosis.

Some urinary proteins, such as vascular cell adhesion molecule 1 and activated leukocyte cell adhesion molecule, have been reported to correlate with the activity index in patients with active LN. Urinary calgranulin B also predicted the activity index, with as strong a correlation as these proteins, in our study. In contrast, urinary levels of MCP‐1 and IGFBP‐5 did not show a strong correlation with the activity or chronicity index. This is probably explained by the finding that urinary MCP‐1 and IGFBP‐5 were associated with only one of the histologic findings comprising the activity or chronicity index, whereas urinary calgranulin B was associated with the major histologic components of the activity index.

This study had some limitations. First, our scoring system was not clinically validated, although it comprehensively included histologic findings based on the ISN/RPS classification and the NIH index. Second, the validation failed to confirm the significance of the correlation between urinary calgranulin B levels and the histologic scores of cellular or fibrocellular crescents, or between urinary IGFBP‐5 levels and the histologic scores of interstitial fibrosis. The former occurred because fewer patients in the validation cohort presented with cellular or fibrocellular crescents; the latter was due to the small number of cases in which IGFBP‐5 levels could be measured. Further validation using urine samples from a larger cohort is required. Third, we did not analyze urinary proteins associated with the histologic findings included in the clusters of membranous and mesangial lesions and other lesions. Fourth, the clinical usefulness of urinary calgranulin B, MCP‐1, and IGFBP‐5 levels was not demonstrated; a longitudinal study is warranted to investigate the relationship between these three urinary proteins and disease progression or improvement over time. Fifth, the present study could not include sufficient samples of diabetic nephropathy because renal biopsies are not commonly performed in patients with diabetic nephropathy, and the limited samples could affect the accuracy of candidate protein extraction.

In conclusion, various urinary proteins that correlate with certain renal histologic findings may reflect the different pathogeneses involved in each renal lesion. The extraction and identification of these urinary proteins may enhance clinical management and are expected to be useful in precision medicine.

All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be published. Dr Kaneko had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study conception and design: Hiramoto, Saito, Hanaoka, Hashiguchi, Suzuki, Takeuchi, Kaneko.
Acquisition of data: Hiramoto, Saito, Hanaoka, Kikuchi, Fukui, Hashiguchi.

Analysis and interpretation of data: Hiramoto, Saito, Hashiguchi.

Mitsubishi Tanabe Pharma Corporation had no role in the study design; in the collection or interpretation of the data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. Publication of this article was not contingent upon approval by Mitsubishi Tanabe Pharma Corporation.

Appendix S1: Supplementary Information. File S1: The correlation between each of the 1,305 serum protein concentrations detected by aptamer‐based screening and the renal histologic score in the discovery cohort.
Recognition of ophthalmology consultation and fundus examination among individuals with diabetes in Japan: A cross‐sectional study using claims‐questionnaire linked data

INTRODUCTION

Diabetic retinopathy (DR) is a major microvascular complication of diabetes, known to progress to severe conditions such as proliferative DR and diabetic macular edema, and is the leading cause of visual dysfunction and blindness. According to a meta‐analysis, the global number of individuals with DR will increase from 130 million in 2020 to 160.5 million by 2045, making DR control a significant public health challenge. In Japan, the annual incidence of DR in individuals with type 2 diabetes is approximately 4%. Various international guidelines recommend regular DR screening; in Japan, it is recommended at least once a year, or more frequently for individuals with retinopathy. Despite this recommendation, less than half of individuals with diabetes undergo fundus examination regularly. Conversely, more than 95% of individuals with diabetes who visited an ophthalmology department underwent fundus examination, suggesting that the referral process from physicians to ophthalmologists may explain the lower proportion of fundus examinations.

In Japan, regular DR screening among individuals with diabetes depends on the physician's recommendation and on individuals' participation based on their understanding of the recommendation. However, it remains unclear whether the paucity of fundus examinations is attributable to physicians' insufficient explanation of the need for ophthalmology consultations or to individuals' inadequate adherence. We hypothesized that the recognition of a recommendation for ophthalmology consultation influences an individual's knowledge of the recommended frequency of DR screening and that this, in turn, affects participation in fundus examinations (Figure ). Based on this hypothesis, this study aimed to investigate the proportion of individuals who had received ophthalmology recommendations from their healthcare provider and understood the recommended frequency of DR screening, as well as the proportion of individuals who participated in a fundus examination, using a questionnaire survey and medical claims data, to determine the association between individuals' recognition of the recommendations and actual fundus examinations. We also sought to examine the association between consultation with a diabetes specialist and fundus examination.
MATERIALS AND METHODS

2.1 Design, settings, and participants

This cross‐sectional study used the National Health Insurance beneficiaries' questionnaire and claims‐linked data originally collected by Tsukuba City for administrative purposes. Tsukuba City, with a population of approximately 240 000, is located in the suburbs of the Tokyo metropolitan area. It was developed as Japan's largest academic city while retaining a blend of urban and rural elements. The National Health Insurance is a health insurance program administered by municipalities; Method details this insurance system.

This study consists of two processes, each administered by a different organizer: the conduct of the questionnaire (process 1) and the analysis of the questionnaire survey results (process 2) (Figure ). In process 1, Tsukuba City conducted a questionnaire survey from 19 December 2022 to 31 January 2023 to understand the living conditions of the beneficiaries in Tsukuba and their perception of health, and to promote health. In process 2, after receiving anonymized data, our research team analysed the survey results using the questionnaire data individually linked with claims and health checkup data extracted from Tsukuba City's National Health Insurance Registry. This paper mainly reports the results of the analysis in process 2; the details of the questionnaire survey (process 1) are described in Method . The University of Tsukuba Medical Ethics Committee approved our research protocol (No. 1820‐1).

2.1.1 Inclusion and exclusion criteria

As outlined above, the selection of subjects for this study was divided into two parts (Figure ). In process 1, the city selected the questionnaire respondents. Data of beneficiaries with diabetes who were selected for the questionnaire survey were then extracted from insurance claims and health checkup data. The first inclusion criteria were coverage by Tsukuba City's National Health Insurance as of September 2022 and age 20–74 years as of 31 October 2022. From this group, individuals with diabetes were selected according to either of the following conditions: (a) at least one antidiabetic drug prescription (Anatomical Therapeutic Chemical classification: A10A and A10B) in FY2021 or (b) a glycosylated haemoglobin (HbA1c) level of ≥6.5% (48 mmol/mol) in the FY2021 health checkup data with no antidiabetic medication prescribed in the same year. We excluded those with a diagnosis code for gestational diabetes mellitus (O244 and O249) in FY2021. Of the 3450 initially selected beneficiaries with diabetes, 1000 were chosen by stratified random sampling, excluding those no longer insured at the time of sampling (November 2022), to receive the questionnaire by mail. Table details the sampling method.

In process 2, we, as researchers, selected the eligible participants from this set of respondents. Tsukuba City provided our research team with anonymized individual-level data linking claims, health checkups and questionnaire responses. Given that recognition of the need for DR screening presupposes awareness of diabetes, respondents who were unaware of their diabetes, that is, those who gave a 'no' or invalid answer to the question 'Have you ever been told by a doctor that you have diabetes?', were excluded.
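To make this two-step, claims-based selection concrete, the following is a minimal, hypothetical sketch in Python/pandas. The table layouts, column names and toy values are assumptions for illustration only; they do not reflect the actual registry schema.

```python
import pandas as pd

# Toy stand-ins for the FY2021 claims and health checkup extracts.
claims = pd.DataFrame({
    "beneficiary_id": [1, 1, 2, 3, 4],
    "atc_code":       ["A10BA02", "C09AA05", "A10AB01", "M01AE01", "A10BB01"],
    "icd10":          ["E11", "I10", "E10", "M54", "O244"],
})
checkups = pd.DataFrame({
    "beneficiary_id": [2, 3, 5],
    "hba1c_pct":      [7.1, 6.8, 6.9],
})

# (a) At least one antidiabetic prescription (ATC A10A/A10B) in FY2021.
on_meds = set(claims.loc[claims["atc_code"].str.match(r"A10[AB]"), "beneficiary_id"])

# (b) HbA1c >= 6.5% at the FY2021 checkup without an antidiabetic prescription.
high_hba1c = set(checkups.loc[checkups["hba1c_pct"] >= 6.5, "beneficiary_id"])

# Union of the two pathways, then exclusion of gestational diabetes (O244/O249).
gdm = set(claims.loc[claims["icd10"].isin(["O244", "O249"]), "beneficiary_id"])
eligible = sorted((on_meds | (high_hba1c - on_meds)) - gdm)
print(eligible)  # [1, 2, 3, 5]: id 4 is excluded by the O244 code
```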
In the subanalysis, participants without information on their medical facilities were excluded so that the association between a visit to a medical facility with a diabetes specialist and fundus examination could be properly investigated.

2.2 Measurements

2.2.1 Exposures

The exposure in the main analysis was recognition of an ophthalmology consultation recommendation from a healthcare provider. Participants were defined as 'recognized' if they answered 'yes' to the question 'Has your healthcare provider ever recommended that you see an ophthalmologist?' and 'unrecognized' if the answer was 'No' or 'I don't know'. In the subanalysis, the exposure was a visit to a medical facility with diabetes specialists.

2.2.2 Outcomes

In both the main analysis and the subanalysis, the primary outcome was participation in fundus examination, defined by a fundus examination-related medical practice code in the FY2021 claims (code list shown in Table ). The secondary outcome was knowledge of the recommended frequency of DR screening, measured by the question 'What do you think is the standard interval between ophthalmology visits for people with diabetes?' We categorized the responses into three groups: 'At least once every 6 months or once a year', 'Once every 2 years or once every 5 years' and 'After the onset of symptoms or Not sure'. Among these responses, 'once every 6 months or more' or 'once a year' indicated knowledge consistent with the Japanese Clinical Practice Guideline for Diabetes 2019.

2.2.3 Other variables

The following groups were used to describe participants' characteristics: 'Received antidiabetic medication at least once during FY 2021–T1D [type 1 diabetes mellitus]', 'Received antidiabetic medication at least once during FY 2021–T2D [type 2 diabetes mellitus] or other' and 'Not received antidiabetic medication during FY 2021'. For the 'Received antidiabetic medication at least once during FY 2021–T2D or other' group, age categories were used for further stratification. This grouping corresponds to the stratification used in the random sampling of the survey participants; Method details the stratification method. Medical facilities where antidiabetic medication was prescribed, facilities where fundus examination was performed, and facilities with an ophthalmology department were identified from medical claims data; these three data elements were used to describe the characteristics of the medical facilities. To confirm whether the diabetes care facilities were certified by the Japan Diabetes Society (JDS), we contacted the JDS. Information on whether a facility had a diabetes specialist was obtained from data published by the respective medical facilities and the JDS. Table details the use of antidiabetic drugs and the definitions of diabetes types and medical facility characteristics.

2.3 Statistical analysis

First, we described participant selection, participant characteristics (including the three categories based on disease type and prescription), medical facility characteristics, and responses to the questions about recognition of ophthalmology consultation recommendations from healthcare providers, knowledge of the recommended frequency of DR screening, and participation in fundus examination. Each variable was summarized using statistical weights calculated from the number of valid responses (i.e. the inverse of the product of the extraction probability and the response rate). We also evaluated the three categories based on disease type and prescription in a stratified analysis.
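As a minimal sketch of how such weights are formed, consider the following Python example. The strata and counts are invented for illustration and are not the study's actual figures; note that the weight simplifies to the stratum population divided by its number of valid responses.

```python
import pandas as pd

# Invented strata and counts, for illustration only.
strata = pd.DataFrame({
    "stratum":         ["T1D", "T2D_20-64", "T2D_65-74", "no_medication"],
    "population_n":    [60, 1500, 1600, 290],   # eligible beneficiaries per stratum
    "mailed_n":        [60, 400, 400, 140],     # questionnaires mailed
    "valid_responses": [25, 170, 200, 60],      # valid responses returned
})

strata["extraction_prob"] = strata["mailed_n"] / strata["population_n"]
strata["response_rate"]   = strata["valid_responses"] / strata["mailed_n"]

# Weight = 1 / (extraction probability * response rate),
# which simplifies to population_n / valid_responses per stratum.
strata["weight"] = 1.0 / (strata["extraction_prob"] * strata["response_rate"])

# A weighted proportion is then sum(weight * indicator) / sum(weight)
# across respondents, each carrying their stratum's weight.
print(strata[["stratum", "weight"]])
```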
Next, we examined the path from recognition of ophthalmology consultation recommendations from healthcare providers to actual participation in fundus examination, mediated by knowledge of the recommended frequency of DR screening. Specifically, the proportion of participants who underwent fundus examination was compared between those who recognized ophthalmology consultation recommendations and those who did not, and between those who knew the recommended DR screening frequency and those who did not. Additionally, the proportion of participants with knowledge of the recommended DR screening frequency was compared between those who recognized ophthalmology consultation recommendations and those who did not. The chi‐square test was used for these analyses. Moreover, the association between recognition of the DR screening recommendation and actual fundus examination was investigated using a multivariable modified Poisson regression model. The covariates in the regression model were selected according to previous studies and clinical findings; sex, age, a visit to a medical facility with a diabetes specialist, and a visit to a medical facility with an ophthalmology department were adjusted for.

In the additional analyses, the proportions of participants who recognized DR screening recommendations, who knew the frequency of DR screening, and who participated in fundus examinations were compared according to whether or not they had visited a medical facility with diabetes specialists, using chi‐square tests. We also divided the healthcare facilities into four groups according to whether they had a diabetes specialist and whether they had an ophthalmology department, and assessed the proportion of participants who underwent fundus examination in each group. We also calculated the proportion of participants who underwent fundus examination at an attached ophthalmology department, where the facility had one.

To investigate factors potentially influencing fundus examination participation among those who recognized the recommendation for ophthalmology consultation, we conducted additional analyses. Using chi‐square tests, we compared the proportions of participants who underwent fundus examinations according to various characteristics, including age, sex, estimated diabetes duration, cohabitation status, health checkup history, interest in health, perceived financial burden of medical expenses, Charlson comorbidity index (CCI), visits to medical facilities with an ophthalmology department, and visits to medical facilities with diabetes specialists. All statistical analyses were performed using Stata 17.0 (StataCorp, College Station, TX, USA), with a p value < 0.05 indicating statistical significance.

2.4 Data sharing and data accessibility

The datasets generated and analysed in this study are not publicly available. However, further analysis can be conducted in collaboration with the corresponding author upon reasonable request, provided permission is granted by Tsukuba City.
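To make the modified Poisson approach in Section 2.3 concrete, here is a minimal, hypothetical sketch in Python (the study itself used Stata 17.0). The variable names and simulated data are assumptions for illustration; the code shows only the mechanics of obtaining risk ratios from a Poisson GLM with robust standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated respondent-level data; the random values carry no real
# association, so the estimated risk ratios will be near 1.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "fundus_exam":   rng.integers(0, 2, n),  # 1 = underwent fundus examination
    "recognized":    rng.integers(0, 2, n),  # 1 = recognized the recommendation
    "female":        rng.integers(0, 2, n),
    "age":           rng.integers(20, 75, n),
    "specialist":    rng.integers(0, 2, n),  # facility with a diabetes specialist
    "ophthalmology": rng.integers(0, 2, n),  # facility with an ophthalmology dept
})

# Modified Poisson regression: a Poisson GLM on the binary outcome with
# robust (sandwich) standard errors; exponentiated coefficients are risk ratios.
fit = smf.glm(
    "fundus_exam ~ recognized + female + age + specialist + ophthalmology",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

risk_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([risk_ratios.rename("RR"), conf_int], axis=1))
```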
RESULTS

3.1 Participant selection and characteristics

Among the 1000 questionnaires distributed to the randomly selected beneficiaries, 290 yielded valid responses, and these respondents were included in the main analysis. Of these respondents, 260 had available information on their medical facilities and were eligible for the subanalysis (Figure ). In the main analysis group, the mean age was 63.3 years and the mean body mass index was 25.3 kg/m² (excluding an outlier of 2130.4 kg/m²), with males accounting for 57.9%. In addition, 47.6% of participants recognized that their healthcare provider had recommended an eye examination, 72.8% knew the recommended frequency of eye examinations, and 50.5% had participated in a fundus examination (Table ). In the stratified analysis, the proportion of participants who underwent fundus examination was 83.3% in the 'with antidiabetic prescription–T1D' group, 49.4% in the 'with antidiabetic prescription–non‐T1D' group and 66.7% in the 'no antidiabetic prescription' group (Table ).

3.2 Main analysis—Association between the recognition of ophthalmology consultation recommendation, knowledge of the recommended frequency of DR screening, and fundus examination

The proportion of participants undergoing fundus examination was 72.9% in the group that recognized the ophthalmology consultation recommendation versus 30.1% in the group that did not (p < 0.001). The proportion undergoing fundus examination was 63.9% in the group with knowledge of the recommended DR screening frequency versus 21.1% in the group without knowledge (p < 0.001). Lastly, the proportion of participants who knew the recommended frequency of DR screening was 93.4% in the group that recognized the ophthalmology consultation recommendation versus 49.6% in the group that did not (p < 0.001). All of these associations were significant (Figure ). In the multivariable modified Poisson regression model, the risk of fundus examination was higher in those who recognized the DR screening recommendation, even after adjusting for sex, age, visits to a medical facility with a diabetes specialist, visits to a medical facility with an ophthalmology department, household income, atherosclerotic cardiovascular disease events, and estimated diabetes duration (risk ratio [95% confidence interval] 2.36 [1.65–3.38]) (Table ). In sum, a higher proportion of those who recognized the ophthalmology consultation recommendation knew the DR screening frequency, and a higher proportion of those who knew the DR screening frequency underwent fundus examination (Figure ).
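As a back-of-envelope check (an illustration, not an additional analysis), the crude risk ratio implied by the raw proportions above is 0.729/0.301 ≈ 2.42, which is close to the covariate-adjusted estimate of 2.36; covariate adjustment therefore shifted the estimate only modestly.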
3.3 Additional analyses—Difference between presence and absence of medical facilities with diabetes specialists and an attached ophthalmology department

As shown in Figure , screening by an ophthalmologist had been recommended for 65.5% of participants who received antidiabetic prescriptions at medical facilities with diabetes specialists (hereafter, those with diabetes specialists) but only 27.5% of those without diabetes specialists. Additionally, 82.1% of those with diabetes specialists, compared with 60.7% of those without, knew the recommended frequency of DR screening, and 62.7% of those with diabetes specialists versus 35.2% of those without underwent fundus examination.

Moreover, when healthcare facilities with diabetes specialists were divided into groups 1 and 2 according to whether they had an attached ophthalmology department, 91.3% had one (group 1). Similarly, when healthcare facilities without diabetes specialists were divided into groups 3 and 4 according to the same criterion, only 11.1% had an attached ophthalmology department (group 3). In group 1, 24.6% of participants underwent a fundus examination at the ophthalmology department of the same facility, whereas none of those in group 3 underwent fundus examination at the attached ophthalmology department (Figure ).

3.4 Sensitivity analyses—Factors associated with fundus examinations among individuals who recognized the recommendation for ophthalmology consultation

These analyses identified the variables significantly and positively associated with participation in fundus examinations: female sex, CCI ≥3, and visits to a medical facility with an ophthalmology department. The details of the results are shown in Table .
DISCUSSION

The findings of this study indicated that although fewer than half of the individuals recognized that their healthcare providers had recommended an ophthalmology consultation, those who did were more likely to know the recommended frequency of DR screening and to participate in fundus examination than those who did not. Moreover, the risk of undergoing fundus examination was more than twice as high in those who recognized the ophthalmology consultation recommendation. Individuals who visited a medical facility with diabetes specialists were more likely to recognize DR screening recommendations, to know the recommended DR screening frequency, and to undergo fundus examination. To the best of our knowledge, this study is the first in Japan to quantitatively clarify the impact of individuals' recognition of ophthalmology consultation recommendations on participation in fundus examination.

4.1 Comparison with previous studies

The proportion of participants undergoing fundus examination in this study is generally consistent with that in previous studies in Japan, although higher proportions have been reported in the United States, the United Kingdom and Australia. Several reasons have been offered for these higher rates of eye examinations. In the United States, eye examinations are included as a quality indicator of diabetes care in the insurance system; in the United Kingdom, a national DR screening program has made a significant impact; and in Australia, the role of the optometrist as the primary eye care provider in DR screening is established in guidelines. Nakamura et al. and Funatsu et al. reported that 74.2%–85.8% of individuals who visited medical facilities with diabetes specialists received eye examination recommendations from their healthcare providers. Although these were single‐centre studies, our analysis of a group representative of beneficiaries under an insurance scheme showed that only 62.5% of participants visited medical facilities with diabetes specialists. Our results likely reflect the actual state of diabetes care in the Japanese community and can thereby contribute to practical policy for improving the quality of community healthcare. The present study also clarified the relationship between individuals' recognition of ophthalmology consultation recommendations and participation in DR screening, which had not been previously investigated.

4.2 Implications for clinical care

Our study has several implications for diabetes care in Japan. Many individuals with diabetes are unaware of the need for an eye examination; thus, healthcare professionals must encourage them to undergo eye examinations. Interventions using the transtheoretical model are useful for inducing health behaviour change by classifying individuals' recognition into stages of behaviour change and modifying the approach for each stage; such interventions are reportedly effective for managing diabetes and obesity. To increase the proportion of individuals undergoing eye examinations, healthcare providers should first determine where individuals are in the process of changing their behaviour toward eye examinations and then encourage them appropriately. However, this study also showed that 25.6% of participants who recognized the recommendation for an ophthalmology consultation and knew the recommended frequency of eye examinations did not undergo a fundus examination.
In this context, we conducted additional analyses among participants who recognized the recommendation for ophthalmology consultation to investigate the factors associated with fundus examination participation. We identified the variables significantly and positively associated with participation in fundus examinations: female sex, CCI ≥3, and visits to a medical facility with an ophthalmology department. These findings are consistent with previous studies and suggest key targets for improving the flow of DR screening. The Japanese government is conducting a project to evaluate and publicize the quality of medical care, but DR screening is not specified as an indicator; including fundus examination as a quality indicator may help improve the quality of diabetes care in Japan. The project also lacks a direct financial incentive to improve scores; adding financial incentives, such as linking scores to the medical fee schedule, may substantially increase the rate of fundus examination, as in the United States. However, caution should be exercised in introducing such an indicator because it may encourage inappropriate behaviour by medical personnel seeking a high score.

4.3 Implications for medical system

Apart from the efforts of individual medical professionals, constructing a medical system that encourages such efforts is also important. Medical fee payments could be proposed as an incentive for ophthalmology consultation recommendations. In Japan, tools such as the Diabetic Eye Notebook and the Diabetes Coordination Notebook are useful for recommending eye examinations. Both notebooks enable smooth collaboration between physicians and ophthalmologists, who use them to record the latest examination results and share information. In internal medicine, their utilization rate ranges from 18% to 75.8%, implying that further promotion is needed in facilities with low utilization rates; however, use of these notebooks is not currently covered by the medical fee. Establishing a medical system that incentivizes collaboration between medical facilities by promoting these handbooks could be a practical policy recommendation to encourage DR screening.

4.4 Implications for administrative intervention

Several existing measures can provide opportunities for DR screening. In Japan, specific health checkups are conducted for all insured persons and their dependents aged 40 to 74 years, and physicians may consider a fundus examination if individuals have abnormal blood pressure or blood glucose levels. As the rate of fundus examinations under this system remains low at 18.0%, promoting this existing initiative on a national scale would be effective, considering that nationwide DR screening programs have been reported to be effective in other countries. In addition, a medical fee is allowed for family pharmacists' support ('The Family Pharmacists Fee'); consideration should be given to DR screening recommendations by pharmacists within this framework. This study showed that a certain number of those who understood the frequency of eye examinations underwent fundus examinations even in the absence of recommendations for ophthalmology consultations, and providing information on DR screening as part of existing administrative initiatives is expected to have some effect on increasing the rate of fundus examinations.
The subanalysis results showed that individuals who visited medical facilities with diabetes specialists had a higher rate of fundus examinations, and examination of facility characteristics showed that most of these facilities had an attached ophthalmology department (Figure and Figure ). Differences in care between medical facilities with and without diabetes specialists have been noted in previous studies in Japan. Thus, the measures described above should be strengthened, especially for facilities without diabetes specialists. Limited access to medical resources is especially prevalent in developing countries and has been reported to contribute to non‐participation in DR screening; our recommendations for improving access to DR screening may therefore also be applicable in these countries. Screening methods that use remote technology and computer algorithms have been shown to be effective in developing countries and represent a promising solution to the shortage of ophthalmologists in medically underserved areas. Additionally, as financial barriers are associated with non‐participation in DR screening, initiatives to promote eye screening in low‐ and middle‐income countries are likely to be highly important. Reducing copayments is known to be effective in addressing these financial barriers. Determining the cost‐effective frequency of DR screening and implementing administrative interventions to reduce copayment costs for DR screening would also be beneficial.

4.5 Limitations

This study has several limitations. First, while the data on fundus examination were extracted for FY2021, the questionnaire survey confirming the experience of ophthalmology recommendations from healthcare providers (the exposure) was conducted in FY2022; thus, temporality was not maintained, and recall bias remains possible. However, we cannot confirm that conducting the questionnaire survey first would have been appropriate, because the questionnaire itself may induce behavioural change and thereby affect the outcome. Second, the response rate was less than half (45.6%), indicating that our results may not be representative of the entire population of beneficiaries with diabetes in Tsukuba City's National Health Insurance covered by the questionnaire. To address this issue, we stratified respondent characteristics and adjusted for strata with low response rates through weighting. However, as a more fundamental solution, efforts should be directed toward further improving the questionnaire response rate; based on previous studies, additional strategies, such as offering financial incentives or combining paper and electronic questionnaires, should have been considered. Third, we were unable to adjust for certain variables that may be related to ophthalmology consultation recommendations from healthcare providers and to participation in fundus examination, such as educational level, as these data were not available in this study. Fourth, we defined the outcome by extracting fundus examination‐related medical remuneration point codes, so fundus examinations performed for ocular diseases other than DR may also have been included. In clinical practice, DR and other ocular diseases are often evaluated at the same time, and it is difficult to clearly distinguish this misclassification using medical remuneration point codes; this must be considered a limitation of our research method.
Lastly, the survey was designed to investigate individuals' subjective recognition of ophthalmology consultation recommendations from their healthcare providers; thus, we could not distinguish a physician's non‐recommendation from a patient's non‐recognition of the recommendation. Two measures could be taken in future studies to address this misclassification. The first is to survey medical staff to confirm whether recommendations were actually made. Alternatively, medical records could be reviewed to determine whether DR screening recommendations were documented. A combination of these approaches could quantify the discrepancy between actual recommendations and patient perceptions. Even so, measuring the recommendations that patients remember is still useful, as it highlights potential gaps in communication and understanding between healthcare providers and patients.

In conclusion, individuals' recognition that their healthcare providers had recommended that they visit an ophthalmologist was positively associated with knowledge of the recommended frequency of DR screening and with fundus examination. The results indicate that recommendations for ophthalmology consultations may contribute to increasing the rate of fundus examination. The suboptimal response rate to the questionnaire remains a limitation of this study and should be addressed in future research.
All authors have contributed significantly. K.Y., N.I.S. and T.S. were involved in the study's conception, design and conduct, as well as the analysis and interpretation of the results. K.Y., N.I.S. and T.S. wrote the first draft of the manuscript. A.K., T.Y., N.K., K.I. and N.T. participated in the design and discussion and supported the analysis. M.O., K.U. and T.Y. participated in the design of the study and supervised the work. All authors edited, reviewed and approved the final version of the manuscript and agreed to be accountable for the accuracy or integrity of the study. The authors would like to thank Enago for the English language review.
The authors declare no conflicts of interest.
The peer review history for this article is available at https://www.webofscience.com/api/gateway/wos/peer‐review/10.1111/dom.16164 .
Data S1. Supporting Information.
|
Implementing Social Determinants of Health Screening at Community Health Centers: Clinician and Staff Perspectives

Screening for social determinants of health (SDOH) during pediatric office visits is recommended by the American Academy of Pediatrics and the American College of Physicians. SDOH are the social circumstances in which people are born, work, live, and age and include access to health care, food security, financial security, and the physical environment. Problems with SDOH may manifest in primary care office visits as unmet social needs such as food scarcity, hunger, homelessness, and debt, and can lead to detrimental health and developmental outcomes in children. Thus, mitigating children's and families' unmet social needs has the potential to reduce toxic stress and thereby improve health. To date, SDOH screening and referral implemented in pediatric primary care has been found to increase families' receipt of social services. Even with a validated SDOH screener, however, clinicians may struggle to address their patients' unmet social needs. Once a social need is identified, clinicians and health systems need to refer patients and families to nonmedical organizations for additional resources and benefits. Primary care clinicians may not have the training or the staff readily available to help patients navigate these external resources. Our team had previously implemented the WE CARE model, an SDOH screening and referral intervention, in community health centers (CHCs) and pediatric clinics. For this study, we explored how CHC staff responded to the WE CARE model and how they integrated WE CARE activities into daily practice. Using key informant qualitative interviews, we asked pediatric CHC staff and clinicians about their experiences with the WE CARE model, the challenges they faced with the model, and how it affected their clinical practice.
In September 2015, 6 pediatric clinics within CHCs in Boston, MA participated in a type 1 effectiveness-implementation, cluster randomized controlled trial of an SDOH model (the augmented WE CARE screener and referral process). Three of the clinics were randomized to implement the WE CARE model, while the remaining three continued with their standard of care (i.e., no WE CARE or SDOH screening). The study and methods were approved by the Boston University Medical Campus Institutional Review Board.

Briefly, the augmented WE CARE model consisted of three key components: a screener, a referral, and a patient navigator. WE CARE screeners were distributed to parents who presented with a child (ages 0-5 years) for a well-child visit. The WE CARE screener consists of 12 questions designed to identify 6 social needs and determine whether families wanted assistance with a need. The 6 needs include childcare, food, housing, parent education (high school/GED equivalency), parent employment, and utilities (household heat and/or electricity). It takes less than 2 minutes for parents to complete the screener, which is written at a third-grade reading level. The WE CARE screener was adapted from the original 20-question instrument, which had a test-retest reliability of 0.92.

The referral process involved the primary care provider (PCP), who would give parents information about local social services. Clinicians were trained to review the WE CARE screener with parents and print community resource information for those who reported both having a need and wanting help. At some CHCs, office staff, such as medical assistants (MAs), rather than clinicians printed out the information sheets. Resource information was printed directly from the patient's electronic medical record (EMR) using smart phrases specific for each need. For instance, if a clinician used the smart phrase ".WECAREFood" in the After-Visit Summary (AVS) section of the visit note, food resource information would populate into the AVS along with contact information for food pantries. WE CARE screeners and resource sheets were available in English, Haitian-Creole, Portuguese, Spanish, and Vietnamese. Parents could self-refer to services identified on the AVS or ask for further help from a patient navigator.

The patient navigator was an implementation team member trained to assist parents with the process of accessing resources. The navigator was intended to supplement the staff at the CHC sites and was available from one to three days per week at each site. Patients could call a hotline to reach the patient navigator or the clinician could request assistance through the EMR. Of note, the augmented WE CARE model deviated from the previously tested WE CARE model by including a patient navigator and embedding community resource sheets into the EMR. Earlier versions had the physical resource book located in exam rooms and had no navigator. These changes were made due to the requirements of the grant mechanism that funded this study; in addition, the study team believed they would better benefit patients and families.
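To make the EMR step concrete, the sketch below shows one way a need-specific smart-phrase lookup could work. Only the phrase ".WECAREFood" is named in the study; the other phrase names, the resource strings, and the function itself are illustrative assumptions rather than the actual EMR configuration.

```python
# Hypothetical sketch of the smart-phrase expansion described above.
# Only ".WECAREFood" is documented in the study; every other phrase name
# and all resource strings here are illustrative placeholders.

RESOURCE_SHEETS = {
    ".WECAREFood": "Food resources: local food pantries and contact information",
    ".WECAREHousing": "Housing resources: housing assistance programs",
    ".WECAREChildcare": "Childcare resources: subsidized childcare programs",
    ".WECAREUtilities": "Utility resources: help with household heat/electricity",
    ".WECAREEducation": "Education resources: high school/GED equivalency programs",
    ".WECAREEmployment": "Employment resources: job training and placement services",
}

def expand_smart_phrases(avs_text: str) -> str:
    """Expand each WE CARE smart phrase in the After-Visit Summary (AVS)
    into the corresponding community resource information."""
    for phrase, sheet in RESOURCE_SHEETS.items():
        avs_text = avs_text.replace(phrase, sheet)
    return avs_text

# A clinician who types the food smart phrase gets a populated AVS:
print(expand_smart_phrases("Parent reported food need. .WECAREFood"))
```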
Sample and Recruitment

Toward the end of the trial, key informant interviews were solicited from the WE CARE intervention CHC stakeholders in order to identify themes around the integration of the augmented WE CARE model into the workflow of pediatric primary care units. At the start of the clinical trial, three contacts were identified for each site: the pediatric medical director, a clinician, and an MA. The research team contacted the 3 clinical contacts and asked them to identify staff who were involved with the WE CARE implementation. On the recommendation of the contacts, we sent an email blast to all currently employed pediatric clinicians and staff. The research team emailed 17 staff (11 clinicians, 5 MAs, and 1 case manager) of whom 11 agreed to participate.
Data Collection

Study participants were interviewed between September 2018 and February 2019. Semistructured qualitative interview guides were informed by the Promoting Action on Research Implementation in Health Services (PARIHS) framework. The PARIHS framework was designed to help understand how evidence is translated into clinical practice. The framework suggests that successful integration of a new practice into a clinical environment depends on how clinicians and staff respond to the project. The interview questions asked how CHC staff perceived the augmented WE CARE model (evidence), the challenges they faced when integrating WE CARE into everyday clinical practice (context), and whom within their organization championed the model (facilitation). The research team conducted interviews via telephone to accommodate participant schedules. Interviews averaged about 16 minutes. Interviewers audio-recorded the sessions and field notes were made postinterview.
Data Analysis

All interviews were transcribed verbatim. Transcription was performed by the research team (MP and AB) following each interview. All data were stored on a secure server. Interviews were coded deductively in April and May 2019. A codebook was developed by the analysis team (MP, AB, and CH) from the PARIHS model. Each interview was separately coded by AB and CH in March 2019. The analysis team then met and reviewed each coding decision until consensus was achieved. Themes were identified and agreed upon by the research team in June 2019. All coding and analysis were performed in NVivo 12.
We interviewed 11 CHC staff members (7 clinicians, 3 MAs, and 1 case manager) from 3 CHCs involved in the WE CARE trial. All participants of the study had positive perceptions of the augmented WE CARE model, but they also reported significant problems integrating the model into their practices. We identified 4 main themes representing the range of clinician and staff perceptions of how WE CARE affected their practice: (1) benefits of the WE CARE model, (2) prioritizing WE CARE, (3) reliance on a patient navigator, and (4) resource limitations.
Benefits of the WE CARE Model

Clinicians and MAs felt that the design of WE CARE helped them to practice holistic medicine and fulfill the mission of CHCs. Clinicians felt it was "a productive and efficient addition to our services and our environment" (PCP, Site 3). In particular "the resources were patient friendly and used patient friendly language" (PCP, Site 2). Staff reported the screener prompted patients to seek help for needs that the patient may not have known could be met with referrals to local services. "I think it's helpful because some parents do need actual help. A lot of them at the health center, I know they are looking for housing, looking for daycare" (MA, Site 2). Similarly, clinicians and staff viewed the presence of the patient navigator as beneficial. Prior to the intervention, the CHCs did not have a patient navigator embedded in the pediatric unit. The navigator provided CHCs with a part-time, additional team member which clinicians appeared to appreciate. "We always have not had enough staff to serve everybody, but with another hand helping out, that was always a plus" (PCP, Site 1). The CHC staff realized during the intervention that they had little understanding of how to help parents connect to social services. The patient navigator filled a gap in the CHC's staffing by specifically addressing the needs of parents seeking assistance. "When [the patient navigator] first came, that filled a very big void in our clinic just because we were identifying, you know, if we identified people who were in need, we just didn't know how to help them practically" (PCP, Site 2).
Prioritizing WE CARE

Clinicians perceived the WE CARE model as easy to perform and integrate into office visits. Clinicians felt the screening and referral process were easier to implement than other interventions. ". . . there was a little worry that this was a screener that they [the clinic staff] didn't have a lot of experience in dealing with in terms of the responses from patients, but I think that the actual end to end tool along with the resource list has been well thought-out and well tested. They [the staff] didn't find it to be particularly challenging compared to all the other stuff we have to screen and deal within the clinic." (PCP, Site 1)

Clinicians also found that the integration of the resource list with the EMR system made practicing the augmented WE CARE model easier. "You could just print the visit summary and if they had identified a need, they had already put it into the computer, so you were just printing it out" (PCP, Site 2). The only drawback that clinicians reported was that the screener formatting led some parents to complete the form incorrectly or not respond. "What hasn't worked is that the form itself is confusing. Even if this is the language the patient speaks, the patients answer the questions wrong a lot of the time. So, on the left it's kind of uh, well I don't know. Sometimes they read the question then they go to the right to say yes or no instead of going to the left . . . that whole way of setting up the form was not simple for a lot of my patients." (PCP, Site 1)

However, MAs disagreed about the ease of integrating WE CARE into their daily activities. Some found it relatively easy to implement, particularly later in the project. "But for me coming in, it was just something that already existed. It wasn't like I was here before WE CARE so it wasn't a part of our workflow and then it was introduced. It was already established when I started working." (MA, Site 1) Others felt there had not been enough attention paid to training and orienting new staff. "I know a lot of new staff; they don't know about WE CARE. They don't know how WE CARE works. And um, they don't know when you give to patients, a lot of patients, they don't understand how to fill out, and staff doesn't know how to explain for them to fill out the form . . . They need to be retrained." (MA, Site 3)

Confusion about the WE CARE process and materials negatively affected MA workflow. MAs noted that some clinicians had the MAs take on the responsibility of consulting with patients and providing resource sheets because of time constraints. "I think that the way that we do it, giving it to the parents before the provider sees them and being able to ask the medical assistant to make sure that the patient actually gets the information, I think is very good. Because sometimes, you know, I think there was a certain point where the provider was doing it and sometimes they would forget because they were seeing complicated patients, but I think as a medical assistant, being able to just go through the form and print resources out for the patient was very helpful." (MA, Site 1) As a result, the referral protocol was not followed for some parents. "Occasionally, you know, someone would leave without their resources. Some MAs were really good about mailing it to patients, some not really" (PCP, Site 1).
Reliance on Patient Navigator

One of the root causes of clinician and staff workflow confusion may have been the lack of an internal, clinical champion. The project had identified clinical and staff leaders for the implementation. What their actual role in project leadership was is not clear from the interviews. What appears to have happened is that some staff and clinicians considered the part-time, patient navigator to be the internal champion of the project. "Again, I don't know what her [the patient navigator's] exact role was, but I think it would've been helpful if she took more initiative with the program . . . I have a lot of other responsibilities and not a lot of time to handle those responsibilities. So, it would've been nice for her to handle all the WE CARE, um, kind of all the WE CARE, um, kind of oversee while she was here a little bit more." (MA, Site 1) Clinicians also relied heavily on the patient navigator to help with patient needs. One noted their site could not address SDOH questions without the navigator. "When [the patient navigator] weren't on site, linkages [between the navigator and patient] couldn't happen" (PCP, Site 3).
Resource Limitations

Clinician knowledge of the resources available in their communities appears to have grown during the intervention. Some clinicians and MAs noted that some of the resources were not helpful for parents or were not new to parents, which influenced patient experiences with WE CARE. At least one clinician noted that some patients knew they were ineligible for services, and that knowledge affected how patients responded to the resource list. "The advantage of having a resource list be very broad is that we could use the same intervention for a lot of different patients and don't think about eligibility for this or that program. The downside is that, you know, some patients clearly are gonna be eligible for some of the resources and not for the others. So, patients as they look at them sometimes say, 'Oh you know I tried that one, I couldn't do it' or 'Is this really related to me.'" (PCP, Site 1)

CHC staff thought that some of the most needed resources were insufficient, particularly for working parents. "A lot of people had issues with childcare, so they could work, and don't remember those resources being particularly robust" (MA, Site 3). Other clinicians seemed unaware of what the expectations around the referral process should be and how quickly patients could expect assistance. One PCP expressed frustration with the resource delays and how there did not appear to be a way to address them in a timely fashion or in a manner that had clear impact on clinical care. "I think that became more of an administrative thing as opposed to something that definitively helped or made a difference for our parents, to just patient care I guess" (PCP, Site 2).

Staff believed that parents became frustrated with repeated WE CARE screenings because needs were not being met. "Somebody who is enrolled in the WE CARE or has identified needs . . . they still have the same housing needs, for example, and you're printing out the housing forms for them, it's kind of, you're at the same state where you give them the phone number, you give them the first step but it's harder to kind of give them the second, third, fourth steps . . . you give the WE CARE survey again, and at the next visit they're at the same step." (PCP, Site 2)

At least one CHC tried to push past the resource limitations. The clinicians chose not only to connect parents with the WE CARE patient navigator, but also to put families in contact with other case managers who might be able to assist with more complex needs. "And then you know, for the patients who none of the resources work for, or they're unlikely to fit for, we would have to connect them with what we call our case management resources, which is basically our social services resource." (PCP, Site 1)
Using an interview questionnaire informed by the PARIHS framework, we asked clinicians and staff about their perceptions of the augmented WE CARE model, how they integrated it into the context of a busy, urban CHC, and who within the CHC pediatric unit led the intervention. We found evidence that clinicians at CHCs that implemented WE CARE believed the model improved their ability to serve their patients and their communities. At the same time, staff and clinicians reported frustration with the repeated screening of patients and barriers to accessing social services. Prioritization and facilitation of the intervention were complicated by the CHC environment.

Clinicians and MAs both reported that the screener elicited information about unmet social and material needs that would otherwise remain underexplored in office visits. By reviewing the ongoing demands on families, clinicians can become aware of the multiple, nonmedical challenges faced by young children, families, and their communities. While providers reported that the screener formatting caused some confusion, the rest of the model (integration of referral information within EMRs, and addition of a part-time patient navigator) was well received at all sites.

CHC staff perceived repeated screening of patients as frustrating because of the time required, the difficulty of accessing resources, and the inherent challenge of mitigating social needs within a fragile safety net. Unmet social needs such as housing are difficult to address in a timely manner. For example, receipt of permanent housing may take up to 10 years. Other needs, such as diapers and food, may be easier to meet in a short period of time. The current literature on SDOH screeners does not address the complexity and challenges faced by patients trying to access resources from local service agencies. Understanding how long wait lists are and which families are eligible, or at least discussing such issues with families, could reduce patient frustration with SDOH screening and improve patient expectations about the outcome of the process. Health care leaders may need to be briefed on the value of repeated SDOH screening. Integrating SDOH screening into the EMR builds an invaluable record of a patient's struggles to have their needs met. This information could inform community needs and risk assessments and inform clinicians' expectations about the resources available to the community.

Evidence is mixed as to whether the CHC environment complicates the implementation of new care models. Quinonez found that structural barriers within CHCs complicate the implementation process. Kramer suggested CHC provider rigidity and resistance to new practices could be high. In our study, we found that CHC clinicians faced multiple, competing priorities that impeded practice of WE CARE. Some staff experienced confusion about the WE CARE workflow and roles. CHCs may require additional support to introduce SDOH screeners to new staff and to prioritize their use. Patient navigators could help facilitate this process. Since the study investigators were not CHC employees but rather faculty from a nearby academic center, WE CARE was likely viewed more as a research study than a clinical initiative. Having strong clinical champions at the CHCs would have allowed for better implementation and integration of the augmented WE CARE model into routine care. Pediatric medical directors preparing to implement an SDOH screener should identify who among the clinical staff will champion the process.
As has been identified in hospital settings, clinicians are more likely to put clinical priorities before nonclinical interventions. Task shifting to the patient navigator may reflect efforts by clinicians to meet patients' needs as effectively as possible. CHC leaders should be prepared to assign patient navigators or other staff to support the referral process as part of their formal duties. Future research on SDOH screening should investigate how patient navigators and/or case managers address SDOH, interact with clinicians, and how their actions affect patient outcomes.
Limitations

Our study has several limitations that may affect the transferability of findings. We chose to perform post-study, key informant interviews to reduce the risk of biasing the results for the trial. Interviews may have been subject to social desirability bias, as one interviewer (MP) was involved in the daily operations of the WE CARE intervention. However, we sought to interview all pediatric staff at the 3 CHCs. Most participants were unfamiliar with the researchers prior to the interview. The focus of the interviews was the perceptions of CHC staff and primary care clinicians who were involved with implementing and conducting the WE CARE model. Interviewing patient navigators or other office staff was outside the scope of the project. Finally, the location of the study may have influenced responses, as we interviewed staff and clinicians from 3 pediatric clinics at CHCs in Boston. Massachusetts has universal health care, a strong Medicaid system, and relatively robust social services. As a result, the frustrations with social services reported by clinicians and staff may be greater in rural areas or metro regions with fewer resources.
Three years after the implementation of the augmented WE CARE SDOH screener and referral model in CHCs, we found clinicians perceived the intervention as useful to their organization’s mission and their patients. Interviewees, however, also identified organizational and administrative challenges to SDOH screening. Institutions planning to implement an SDOH screener should pilot the new workflow; formalize the workflow before implementation, including defining the roles of MAs, PCPs, and patient navigators; and establish a clinical champion. Setting expectations grounded in local knowledge about resource availability may reduce the reasonable frustration experienced when nonclinical services have long wait lists.
Human papillomavirus vaccine coverage among immigrant adolescents in Alberta: a population-based cohort study

In Canada, foreign-born immigrants are estimated to make up 21.9% of the total population. Immigrants often face numerous barriers to vaccination, including lack of access and language barriers. Existing literature has shown that the immigrant population historically has lower rates of vaccination compared with non-immigrant groups. For human papillomavirus (HPV), several studies have examined knowledge, attitudes and perceptions of the HPV vaccine in immigrant parents. Lack of knowledge regarding HPV disease and vaccine, cultural and/or religious beliefs that the HPV vaccine encourages sexual activity, and lack of provider recommendations were some major factors for immigrant parents who chose not to vaccinate their child(ren). Studies regarding HPV vaccination coverage among immigrants have had inconsistent findings. A study conducted in Denmark, where a free-of-charge HPV routine immunization program is available, found refugee girls had lower odds of receiving the HPV vaccine compared with Danish-born girls; predictors of uptake included the region of origin, time since migration and income status. In contrast, a US study found higher coverage in adolescent immigrants compared with the US-born population, due to differences in vaccination practices by region of origin. It is important to measure HPV vaccination coverage among immigrant adolescents, and factors associated with uptake, to understand where gaps in vaccination lie and to identify targets for improvement in routine immunization programs. Although Canada's biannual Childhood National Immunization Coverage Survey (cNICS) reports the proportion of adolescents who have had one or more HPV vaccine dose by age 14 y, more nuanced data analysis is possible in provinces with complete population-level vaccination data sources. The objectives of this study were to examine the difference in HPV vaccine coverage at age 12 y between international immigrant and non-immigrant adolescents and factors associated with uptake. We sought to assess adherence to the vaccine program as a measure of vaccine acceptability by immigrant families; thus we used three-dose uptake as the outcome of interest, as it reflected adherence to the vaccine program in place in our jurisdiction at the time of the study. However, given that the World Health Organization stated in 2022 that 'an alternative, off-label single-dose schedule can provide a comparable efficacy and durability of protection', we conducted supplementary analysis of coverage for series initiation (one or more doses).
Setting

This study took place in Alberta, a western Canadian province with a population of approximately 4.5 million people, 99% of whom are registered with the publicly funded Alberta Health Care Insurance Plan (AHCIP). The HPV vaccine is licensed for those ≥9 y of age. In Alberta, the routine HPV school-based immunization program was introduced in 2008 for females and 2014 for males. The vaccine was originally delivered as a three-dose series in grade 5, before switching to a two-dose series in grade 6 in 2018. This meant that there was no school-based program for the HPV vaccine in the 2018–2019 school year. The school program was further impacted from 2019 to 2021 due to the coronavirus disease 2019 (COVID-19) pandemic.
Cohort, data sources and coverage assessment

This was a retrospective population-based cohort study utilizing data from 2008 to 2018, during which the HPV vaccine schedule consisted of three doses. Multiyear cohorts were created using linked administrative data held at the Alberta Ministry of Health. The AHCIP included a unique lifetime identifier (ULI) that allows linkage between various databases and also identifies age, biological sex and other sociodemographic characteristics of the students. The Immunization and Adverse Reaction to Immunization (Imm/ARI) database includes data on all publicly funded childhood vaccines administered in the province. The Immigrant Registry was used to identify foreign-born immigrants and refugees who arrived in Alberta from outside of Canada prior to 9 y of age; immigrants who arrived after 9 y of age were excluded, as it was unknown if they received an HPV vaccine prior to arrival in Alberta. Individuals who died or migrated out of Alberta during each respective cohort year, who identified as First Nations (as data are not consistently submitted to the data registries) or who lived in Lloydminster (as a neighbouring province delivers their vaccines) were excluded from the study. Minimal interval dose criteria between doses were applied to determine valid doses for inclusion in the analysis; vaccine doses were included if the time between dose 1 and dose 2 was ≥4 weeks and if the time between dose 2 and dose 3 was ≥12 weeks.
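As a concrete illustration of the minimum-interval rule, here is a minimal sketch in Python; the data layout, the function, and the handling of an invalid dose (skipping it and re-evaluating the next one) are assumptions for illustration, while the thresholds themselves (≥4 weeks between doses 1 and 2, ≥12 weeks between doses 2 and 3) come from the study.

```python
from datetime import date, timedelta

# Minimum gaps from the study: dose 1 -> dose 2, then dose 2 -> dose 3.
MIN_GAPS = [timedelta(weeks=4), timedelta(weeks=12)]

def count_valid_doses(dose_dates: list[date]) -> int:
    """Count HPV doses satisfying the minimum-interval criteria, up to three.
    How an invalid (too-early) dose is handled is not specified in the study;
    here it is skipped and the next dose is checked against the last valid one."""
    dates = sorted(dose_dates)
    valid = dates[:1]  # the first recorded dose is counted as dose 1
    for d in dates[1:]:
        if len(valid) == 3:
            break  # only a three-dose series is assessed
        if d - valid[-1] >= MIN_GAPS[len(valid) - 1]:
            valid.append(d)
    return len(valid)

# A dose given only 2 weeks after dose 1 is invalid; the next dose,
# 6 weeks after dose 1, counts as the valid dose 2.
print(count_valid_doses([date(2016, 3, 1), date(2016, 3, 15), date(2016, 4, 12)]))  # 2
```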
Outcome measure

HPV vaccine coverage was defined as the proportion of eligible adolescents in an annual cohort who received three doses of the vaccine by the age of 12 y. Each cohort was defined as adolescents who turned 12 y of age between 1 January and 31 December.
Exposure variables

Immigrant status was defined as being foreign-born, i.e. first-generation immigrants to Alberta who were born outside of Canada. Biologic sex at birth was categorized as male and female. Neighbourhood income quintiles were categorized based on the 2016 Canadian census, with quintile 1 (Q1) indicating the poorest neighbourhood and Q5 indicating the richest. Place of residence was divided into three categories based on the 2016 census: metro and moderate metro (cities of >500 000 people, which includes Edmonton and Calgary and surrounding areas), urban and moderate urban (urban centres with populations >25 000–<500 000 and surrounding areas) and rural and remote rural (populations <10 000 and outside urban areas). Region of origin among the immigrant population was divided into several categories: North America (other than Canada), South America, Europe, Middle East, East Asia, Southeast Asia, South Asia, Africa and Oceania.
Statistical analysis

We measured how vaccination coverage varied by immigrant status by calculating the proportion of eligible Alberta adolescents who received three doses of HPV vaccine (from 2008 to 2018 for females and from 2014 to 2018 for males) and compared them using confidence intervals (CIs). We also stratified coverage based on immigrant status and compared vaccination coverage by biologic sex, place of residence and income quintile for the years 2014–2018, when the program was available for both biologic sexes. Those with missing data on key sociodemographic variables (biologic sex, postal code) were excluded from the analysis. To explore the impact of the region of origin, we also measured vaccination coverage among immigrant adolescents by region of origin. We used multivariable logistic regression (MLR) to adjust for possible confounders associated with the outcome (receipt of three doses of HPV vaccine). The MLR data analysis was conducted on Alberta adolescents 12 y of age from 2014 to 2018. Variables adjusted for in the multivariable model included biological sex, place of residence, income quintile and cohort year. Before running the multivariable model, we tested for multicollinearity among exposure variables and for plausible interactions. We performed statistical analysis using SAS 9.4 (SAS Institute, Cary, NC, USA), with statistical significance set at p<0.05.
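The study's analysis was done in SAS; as a sketch of how the adjusted model could be fitted in Python with statsmodels, the example below uses an assumed record-level table, column names and file name, but the model terms (immigrant status, place of residence, income quintile, biological sex, cohort year, and an immigrant-by-residence interaction, which the results report) follow the analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed record-level table, one row per 12-year-old in the 2014-2018 cohorts:
#   three_doses (0/1), immigrant (0/1), residence ('metro'/'urban'/'rural'),
#   income_quintile ('Q1'..'Q5'), sex ('F'/'M'), year (2014..2018).
df = pd.read_csv("hpv_cohort_2014_2018.csv")  # hypothetical file name

model = smf.logit(
    "three_doses ~ immigrant * C(residence, Treatment('metro'))"
    " + C(income_quintile, Treatment('Q5')) + C(sex) + C(year)",
    data=df,
).fit()

# Adjusted odds ratios with 95% CIs, the form in which results are reported.
or_table = np.exp(pd.concat([model.params.rename("aOR"), model.conf_int()], axis=1))
print(or_table.round(2))
```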
HPV vaccine coverage After excluding participants with missing data for postal codes, sex and time since migration, the final sample size from 2008 to 2018 was 346 749, of which 31 656 (9.13%) were immigrants, and the final sample size from 2014 to 2018 was 232 293, of which 24 045 (10.35%) were immigrants ( Supplementary Figure S1 ). On average, over the 5 y from 2014 to 2018, when both females and males were eligible to be vaccinated, HPV vaccine coverage (receipt of three doses) was 52.58% (95% CI 52.03 to 53.13) among immigrant adolescents and 47.41% (95% CI 47.24 to 47.59) among non-immigrants (Figure and Supplementary Table S1 ). While overall coverage consistently differed between the two groups, the size of the difference in coverage changed over time. At the start of the routine immunization program for females in 2008, HPV vaccine coverage was relatively similar between immigrants and non-immigrants, with non-significant differences of <2%. An increased discrepancy for HPV vaccine coverage between female immigrants and non-immigrants started to become more evident in the later years, with female immigrants having higher coverage. The same pattern was present in the male group, after the vaccine program was introduced in 2014, with insignificant differences between male immigrants and non-immigrants in the earlier stages of the program before HPV vaccine coverage became higher in immigrants (Figure and Supplementary Table S1 ). There was a lag in vaccine coverage for the first 2 y after the program was introduced for each biological sex group (2008 and 2009 for females and 2014 and 2015 for males) before increasing to the 50–70% range. Similar patterns were observed for HPV vaccine initiation (at least one dose) ( Supplementary Figure S2 and Supplementary Table S2 ). When excluding the first 2 y after program implementation for females (2008 and 2009), the average HPV vaccination coverage was 63.96% among immigrants (95% CI 63.24 to 64.67) and 61.39% among non-immigrants (95% CI 61.16 to 61.61) ( Supplementary Table S3 ). When excluding the first 2 y after program implementation for males (2014 and 2015), the average HPV vaccination coverage was 66.20% (95% CI 65.17 to 67.23%) among immigrants and 62.58% (95% CI 62.62 to 62.95) among non-immigrants ( Supplementary Table S3 ). When the two sexes were combined, the average HPV vaccination coverage was 58.14% (95% CI 57.64 to 58.63) among immigrants and 54.95% (95% CI 54.78 to 55.11) among non-immigrants. HPV vaccine coverage in relation to sociodemographic characteristics Potential sociodemographic characteristics associated with HPV vaccination stratified by immigration status were analysed for the cohorts from 2014 to 2018, when both males and females were included in the program. The proportions of adolescents vaccinated with three doses of HPV vaccine varied by sociodemographic characteristics (Table ). Vaccine coverage was significantly higher in immigrants regardless of biologic sex compared with non-immigrants (p<0.0001). Coverage was also significantly higher in immigrants compared with non-immigrants living in metro (p<0.0001) and urban areas (p<0.0001), but not for immigrants living in rural areas (p=0.087). Among immigrants, vaccine coverage was consistent across income quintiles (range 54.34–56.71%) while coverage increased as income quintiles increased for non-immigrants (48.92–53.39%). 
HPV vaccine coverage

After excluding participants with missing data for postal codes, sex and time since migration, the final sample size from 2008 to 2018 was 346 749, of which 31 656 (9.13%) were immigrants, and the final sample size from 2014 to 2018 was 232 293, of which 24 045 (10.35%) were immigrants ( Supplementary Figure S1 ). On average, over the 5 y from 2014 to 2018, when both females and males were eligible to be vaccinated, HPV vaccine coverage (receipt of three doses) was 52.58% (95% CI 52.03 to 53.13) among immigrant adolescents and 47.41% (95% CI 47.24 to 47.59) among non-immigrants (Figure and Supplementary Table S1 ). While overall coverage consistently differed between the two groups, the size of the difference in coverage changed over time. At the start of the routine immunization program for females in 2008, HPV vaccine coverage was relatively similar between immigrants and non-immigrants, with non-significant differences of <2%. An increased discrepancy for HPV vaccine coverage between female immigrants and non-immigrants started to become more evident in the later years, with female immigrants having higher coverage. The same pattern was present in the male group, after the vaccine program was introduced in 2014, with insignificant differences between male immigrants and non-immigrants in the earlier stages of the program before HPV vaccine coverage became higher in immigrants (Figure and Supplementary Table S1 ). There was a lag in vaccine coverage for the first 2 y after the program was introduced for each biological sex group (2008 and 2009 for females and 2014 and 2015 for males) before increasing to the 50–70% range. Similar patterns were observed for HPV vaccine initiation (at least one dose) ( Supplementary Figure S2 and Supplementary Table S2 ). When excluding the first 2 y after program implementation for females (2008 and 2009), the average HPV vaccination coverage was 63.96% among immigrants (95% CI 63.24 to 64.67) and 61.39% among non-immigrants (95% CI 61.16 to 61.61) ( Supplementary Table S3 ). When excluding the first 2 y after program implementation for males (2014 and 2015), the average HPV vaccination coverage was 66.20% (95% CI 65.17 to 67.23%) among immigrants and 62.58% (95% CI 62.62 to 62.95) among non-immigrants ( Supplementary Table S3 ). When the two sexes were combined, the average HPV vaccination coverage was 58.14% (95% CI 57.64 to 58.63) among immigrants and 54.95% (95% CI 54.78 to 55.11) among non-immigrants.
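The coverage figures above are proportions with 95% CIs. A minimal sketch of the normal-approximation (Wald) interval follows; the paper does not name the interval method it used, and the numerator below is an illustrative back-calculation (about 52.58% of the 24 045 immigrant adolescents), not a reported count.

```python
from math import sqrt

def coverage_ci(vaccinated: int, eligible: int, z: float = 1.96):
    """Point estimate and 95% Wald confidence interval for a coverage proportion."""
    p = vaccinated / eligible
    half_width = z * sqrt(p * (1 - p) / eligible)
    return p, p - half_width, p + half_width

# Illustrative counts only (exact numerators are not reported in the paper):
p, lo, hi = coverage_ci(12643, 24045)
print(f"coverage = {p:.2%} (95% CI {lo:.2%} to {hi:.2%})")
```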
HPV vaccine coverage in relation to sociodemographic characteristics

Potential sociodemographic characteristics associated with HPV vaccination stratified by immigration status were analysed for the cohorts from 2014 to 2018, when both males and females were included in the program. The proportions of adolescents vaccinated with three doses of HPV vaccine varied by sociodemographic characteristics (Table ). Vaccine coverage was significantly higher in immigrants regardless of biologic sex compared with non-immigrants (p<0.0001). Coverage was also significantly higher in immigrants compared with non-immigrants living in metro (p<0.0001) and urban areas (p<0.0001), but not for immigrants living in rural areas (p=0.087). Among immigrants, vaccine coverage was consistent across income quintiles (range 54.34–56.71%) while coverage increased as income quintiles increased for non-immigrants (48.92–53.39%). Multivariable logistic regression analysis showed that immigrant adolescents at age 12 y had 1.10 times the odds of receiving three doses of HPV vaccine compared with non-immigrant adolescents (95% CI 1.07 to 1.14), after controlling for place of residence, income quintile, biological sex and year (Table ). Those living in rural and urban areas had lower odds of receiving three doses of HPV vaccine compared with those living in metro areas (adjusted odds ratio [aOR] 0.68 [95% CI 0.66 to 0.70] and aOR 0.69 [95% CI 0.69 to 0.72], respectively), after controlling for immigrant status, income quintile, biological sex and year. By income quintile, those living in lower income quintiles compared with the richest had lower odds of being vaccinated by age 12 y, after controlling for immigrant status, place of residence, biological sex and year. There was an interaction effect, with immigrants from urban and metro areas having higher odds of being vaccinated compared with non-immigrants from urban and metro areas. However, immigrants from rural areas had lower odds of being vaccinated compared with non-immigrants living in rural areas (aOR 0.86 [95% CI 0.79 to 0.93]).
Exploratory analysis by region of origin

HPV vaccine coverage (three doses) for immigrants between 2014 and 2018 differed by the region of origin (Table ); immigrants from Asian regions had the highest vaccine coverage (68.78% for Southeast, 62.78% for South and 61.77% for East) followed by African immigrants (60.25%). In contrast, immigrants from elsewhere in North America (39.97%) and South America (48.36%) had the lowest vaccine coverage.
Summary of findings

This study examined the HPV vaccination coverage of immigrant adolescents in Alberta compared with non-immigrants. Overall, HPV vaccination coverage was higher in immigrant populations, regardless of biological sex and income quintile.

Interpretation

On average, over the 5 y that both females and males were eligible to be vaccinated (2014–2018), HPV vaccination coverage was 52.58% in immigrants and 47.41% in non-immigrants. In both groups, HPV vaccination coverage fell below the Canadian national target of 90%. Previous studies showed lower HPV vaccination coverage among immigrant populations compared with non-immigrants. However, a study in the USA examined HPV coverage prevalence in foreign-born and US-born adolescents (ages 13–17 y) from 2012 to 2014 and found that male immigrants had higher coverage rates for all HPV vaccine doses and female immigrants had higher coverage rates for two doses or less compared with the US-born population.

While immigrants had higher vaccination coverage when controlling for other sociodemographic factors, rural-residing immigrants had lower coverage compared with rural non-immigrants. Several studies have found that rural residents, regardless of migration status, have lower coverage due to factors such as transportation issues, limited providers and higher costs. Rural-residing immigrants may face further barriers, such as language barriers and unfamiliarity with how to access rural healthcare resources. Rural and non-metro residents are an important group for interventions, as studies have found that they are at greater risk of cervical cancer compared with metro residents.

Vaccination coverage has been consistently higher in females throughout the HPV routine immunization program. Several studies have shown that immigrant parents are less likely to immunize their male child(ren) compared with females, due to misconceptions that HPV does not impact them. In general, males are an important group for targeted interventions, as research has shown that HPV vaccination for males is cost effective and effective in preventing infection not only in males, but in females as well.

In this study, HPV vaccination coverage was highest among Asian immigrants and lowest in North American immigrants from outside of Canada. Region of origin was a possible reason suggested by a previous study for differences in vaccination by migration status. In a US study, parents from Caribbean countries had differing attitudes towards HPV vaccination (support ranged from 30 to 70%). Many had limited knowledge and various misconceptions (transmission dependent on sexual position, HPV causes acquired immunodeficiency syndrome, experimentation/discrimination tool, taboo against premarital sex, not necessary for youth). Asian communities had lower rates of screening but still had vaccination rates similar to those of white girls. Collectively, Asians in our study comprised approximately 40% of our immigrant population. A previous study on vaccination rates among different Asian ethnic immigrant groups found that certain subgroups had higher rates of vaccination for HPV, hepatitis B and influenza. Specifically, the Chinese and Filipino subgroups had higher coverage compared with the non-Hispanic white group. In addition, immigrants from Asian regions other than China, the Philippines and India had significantly higher coverage for the hepatitis B, influenza and shingles vaccines compared with the non-Hispanic white group.
Future directions While the HPV routine immunization program is provided in schools, it should be noted that a catch-up program for males and females 17–26 y of age is now available. The catch-up rate from adolescence to adulthood has been understudied and therefore is an avenue for future research. Strengths and limitations To the best of our knowledge, this is the only study in Canada that has examined HPV vaccination coverage specifically within immigrant adolescents compared with non-immigrants. Utilization of population-based immunization, immigrant and resident databases allowed for a large, robust and complete dataset. Our study has a few limitations. Data were not included post-2018 due to the dose series change (three doses to two doses) and the COVID-19 pandemic. In addition, it was not possible to distinguish between interprovincial migrants to Alberta who were foreign-born immigrants versus Canadian born; foreign-born immigrants who migrated from another province were potentially misclassified as non-immigrants, which might overestimate non-immigrant coverage levels. As this study used provincial data, these findings may not be generalizable to other jurisdictions due to different immigrant characteristics and vaccination programs. Also, it is possible that HPV vaccines received outside of Alberta were not updated in the provincial immunization repository, which may underestimate vaccination coverage in both groups. Finally, our study was unable to present data on First Nations populations, as these data were not reliably included in the provincial immunization repository.
Overall, immigrant children had higher HPV vaccination coverage compared with non-immigrants, which is encouraging. With the immigrant population predicted to increase in Canada, it is important to understand that not all immigrant groups should be treated the same and appropriate, group-specific interventions should be used. Among immigrants, routine immunization promotion strategies should be tailored for those living in rural residences and those who immigrated from elsewhere in North America, Oceania and South America. Future research should focus on vaccination coverage among second-generation immigrants.
Dynamic Associations Between Centers for Disease Control and Prevention Social Media Contents and Epidemic Measures During COVID-19: Infoveillance Study | 3ec263a8-16da-47fe-9c43-49d26d437b5e | 10848128 | Health Communication[mh]

The COVID-19 pandemic caused more than 760 million cases and 6.8 million deaths globally as of April 2023 . Therefore, it is crucial for public health agencies, such as the US Centers for Disease Control and Prevention (CDC), to quickly and effectively disseminate up-to-date and reliable health information to the public to curb the pandemic. Over the past years, social media has been widely used by various public health agencies to make announcements, disseminate information, and deliver guidelines of effective interventions to the public. The CDC is among the early adopters of social media to engage with the public, increase health literacy in the society, and promote healthy behaviors . Moreover, the CDC's social media team has developed the Health Communicator's Social Media Toolkit to efficiently use social media platforms; map health strategies; listen to health concerns from the public; and deliver evidence-based, credible, and timely health communications in multiple formats such as texts, images, and videos. The CDC's digital health communication efforts have been especially established on various social media platforms such as Twitter, Facebook, and Instagram.

Building successful interactions with the public relies on people understanding the content and raising awareness of it. The CDC has been heavily engaging in social media presence . For example, during the COVID-19 pandemic since 2019, it has been responsive and proactive on Twitter to continuously tweet about reliable health-related messages and quickly diffuse public engagement by responding to user comments, retweeting credible sources, and monitoring online conversations in real time. Hence, it is meaningful to recognize the COVID-19 pandemic information disseminated by the CDC on social media, characterize various contents and topics, and evaluate posting patterns with regard to the actual epidemic dynamics. Monitoring the content, topics, and trends will help identify current issues or interests and the levels of interventions. It is critical to evaluate the associations between various COVID-19 content topics tweeted by the CDC and the actual COVID-19 epidemic measures (eg, cases, deaths, testing, and vaccination records). Knowing the underlying associations between the CDC's digital health communication contents on social media and the actual COVID-19 epidemics will help in understanding and evaluating the CDC's tweeting patterns with changes in the epidemic, and will further help in recommending more effective social media communication strategies for public health agencies accordingly.

Infodemiology and infoveillance studies tackle health challenges, generate insights, and predict patterns and trends of diseases using previously neglected online data. Infodemiology, which is the conjunction of "information" and "epidemiology," defined by Gunther Eysenbach, is the field of distribution and determinants of information of a population through the internet or other electronic media . Infoveillance takes surveillance as the primary aim and generates automated analysis from massive online data.
It employs innovative computational approaches to mine and analyze unstructured online text information, such as analyzing patterns and trends, predicting potential outbreaks, and addressing current issues of public health. Unlike traditional epidemiological surveillance systems, which include cohort studies, disease registries, population surveys, and health care records, infoveillance studies discover a wide range of health topics, monitor health issues including outbreaks and pandemics, and forecast epidemiological trends in real time. A large amount of anonymous online data can be obtained in a more timely manner with these approaches than with traditional surveillance systems, and this will help researchers and public health agencies to prepare for and tackle public health emergencies and issues more efficiently and effectively.

Social media platforms have had an impact on community education about COVID-19 and the delivery of various health information about the disease. Many studies have also incorporated the concept of infoveillance by analyzing unstructured textual data obtained from social media. Liu et al collected and analyzed media reports and news articles on COVID-19 to derive topics and useful information. They aimed to investigate the relationship between media reports and the COVID-19 outbreak, and the patterns of health communication on the coronavirus through mass media to the general audience. They obtained media reports and articles related to the pandemic and studied prevalent topics. There have also been prevalent public discussions of attitudes and perspectives on mask-wearing on social media; therefore, it is important for public health agencies to disseminate the supporting evidence and benefits of masking to mitigate the spread of COVID-19. Al-Ramahi et al studied the topics associated with the public discourse against wearing masks in the United States on Twitter. They identified and categorized different topics in their models. These studies all applied infoveillance to investigate the potential impacts of diseases, health behaviors, or interventions on target populations, communities, and the society.

However, mass media and social media are also prone to the spreading of misinformation and conspiracy theories, especially from unreliable sources . Hence, the sources of information obtained from social media are crucial, as misinformation could potentially create bias, mislead public perceptions, and provoke negative emotions. Official accounts of public health agencies are usually sources of unbiased and reliable health information.

Although there have been several studies that collectively explored the topics discussed by the general public on social media during the pandemic, no investigations have been performed so far to identify various topics from health agencies, such as the CDC, during a large health emergency. Furthermore, information discrepancies and delays could occur between topics posted by health agencies and real-time epidemic trends. Such discrepancies could cause confusion among the public on interventions for health emergencies. Therefore, quantifying their associations is important to reduce knowledge gaps. Chen et al studied correlations between the Zika epidemic in 2016 and the CDC's responses on Twitter. They quantified the association between the 2 types of data through multivariate time series analyses and information theory measurements.
The study discovered the CDC's varying degrees of efforts in disseminating health-related information to the public during different phases of the Zika pandemic in 2016. However, no study so far has investigated such dynamic associations between the CDC's COVID-19 content topic tweeting patterns and the actual COVID-19 epidemic metrics. Although these associations are still being investigated, it is imperative to understand the dynamic associations between various content topics on social media and actual epidemic outcome metrics, which will guide health agencies to identify the driving factors between the 2 and help them disseminate helpful knowledge to the public accordingly.

In this study, we aimed to discover the underlying COVID-related topics posted by the CDC during different phases of the COVID-19 pandemic. We also aimed to further quantify and evaluate the dynamic associations between content topics of the pandemic and multiple COVID-19 epidemic metrics. The findings of this study will significantly increase our knowledge about the efficiency of the CDC's health communications during the pandemic and help make further recommendations for the CDC's social media communication strategies with the public in the future.

Data Acquisition and Preprocessing

Using the Twitter academic API (application programming interface) and search query (see search query in ), we retrieved a total of 17,524 English tweets posted by 7 official CDC-affiliated Twitter accounts up to January 15, 2022 (for the detailed acquisition process for CDC tweets, see ). We also acquired the COVID-19 epidemic metric data in the United States from the Johns Hopkins University – Center for Systems Science and Engineering (CSSE) public GitHub repository . Four sets of important COVID-19 time series data were retrieved, including daily cumulative confirmed cases, deaths, testing, and vaccination. The data were all at the US national level.

The 4 sets of original COVID-19 time series data consisted of dates and their cumulative targeted measurements. The case series set included the daily cumulative number of confirmed COVID-19 reported cases, and it had 751 records, ranging from January 22, 2020, to February 10, 2022. The death series set reported the daily cumulative number of confirmed COVID-19 death cases, and it had 908 records, ranging from January 22, 2020, to July 17, 2022. The testing data set reported the daily cumulative number of completed polymerase chain reaction (PCR) tests or other approved nucleic acid amplification tests, and it had 760 records, ranging from January 13, 2020, to February 10, 2022. The vaccination data set included the daily cumulative number of people who received a complete primary series of vaccine doses from the CDC Vaccine Tracker, and it had 428 records, ranging from December 10, 2020, to February 10, 2022.

For consistency in subsequent analyses, all CDC tweet time series and US COVID-19 variable time series were standardized to the same time span in this study, ranging from the start date of reported case data (January 22, 2020) to the end date of CDC tweet collection (January 15, 2022), with a total of 725 records for each data type. Since vaccination data were not available until late 2020, missing values were filled with zeros. In summary, we had 4 time series from 4 different COVID-19 US epidemic metrics and another time series of the number of tweets from all 7 CDC-associated Twitter accounts.
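To make this standardization step concrete, the following is a minimal pandas sketch, assuming each metric has been exported to a two-column CSV with date and cumulative-count fields; the file names and column names (us_cases.csv, cumulative_cases, and so on) are hypothetical placeholders rather than the JHU CSSE repository's actual layout.

```python
import pandas as pd

# Study span: start of reported case data to end of CDC tweet collection (725 days).
STUDY_INDEX = pd.date_range("2020-01-22", "2022-01-15", freq="D")

def load_cumulative(path: str, value_col: str) -> pd.Series:
    """Read a date-indexed cumulative series and align it to the study span."""
    series = (
        pd.read_csv(path, parse_dates=["date"])
        .set_index("date")[value_col]
        .sort_index()
    )
    return series.reindex(STUDY_INDEX)

cases = load_cumulative("us_cases.csv", "cumulative_cases")
deaths = load_cumulative("us_deaths.csv", "cumulative_deaths")
tests = load_cumulative("us_testing.csv", "cumulative_tests")
vaccinated = load_cumulative("us_vaccination.csv", "people_fully_vaccinated")

# Vaccination records only begin in December 2020; zero-fill the earlier dates.
vaccinated = vaccinated.fillna(0)
```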
Natural Language Processing

In order to identify major topics in the CDC's COVID-19 tweets, we performed various natural language processing (NLP) steps. NLP, especially topic modeling, provides granular characterization of textual inputs such as the CDC's COVID-19 communications. Regular expressions were first applied to process tweet texts by removing @mentions, hashtags, special characters, emails, punctuation, URLs, and hyperlinks. Tokenization was performed to break down sentences into individual tokens, which can be individual words or punctuation marks. For example, the sentence "As COVID19 continues to spread, we must remain vigilant" becomes the tokens "As", "COVID19", "continues", "to", "spread", ",", "we", "must", "remain", and "vigilant" after tokenization. Next, lemmatization, a structural transformation where each word or token is turned into its base or dictionary form using its morphological information, was performed. For example, for the words "studies" and "studying," the base form, or lemma, is the same "study." In addition to stop word removal via the Python NLTK library, we created our own list of stop words and removed them from the texts (see the stop words list in ). With help from domain experts, we excluded stop words that did not contribute to topic mapping. N-grams, phrases with n words, were developed with a threshold value of 1 to form phrases from tweets. Phrase-level n-grams were applied here because phrases offer more semantic information than individual words . A higher threshold value results in fewer phrases being formed. The texts were mapped into a dictionary of word representations, which was a list of unique words, and it was then used to create bag-of-words representations of the texts. A term frequency-inverse document frequency (TF-IDF) model was implemented to evaluate the importance and relevancy of the words to a document. It was calculated by multiplying term frequency, which is the relative frequency of a word within a document, with inverse document frequency, which measures how common or rare a word is across a corpus. A higher TF-IDF value indicates that the word is more relevant to the document it is in . Words that were missing from the TF-IDF model or scored lower than the threshold value of 0.005 were excluded. shows the process of data collection and preprocessing, and shows the steps of subsequent NLP and statistical analyses.
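A compact Python sketch of this preprocessing pipeline is shown below. The paper names NLTK; the specific Gensim calls, the custom stop word set, and the raw `tweets` list are our illustrative assumptions, not the authors' code.

```python
import re

from gensim.corpora import Dictionary
from gensim.models import Phrases, TfidfModel
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

# Hypothetical additions to the NLTK list; the study curated its own stop words.
custom_stops = {"covid19", "cdc", "amp"}
stop_words = set(stopwords.words("english")) | custom_stops
lemmatizer = WordNetLemmatizer()

def preprocess(tweet: str) -> list[str]:
    """Strip mentions, hashtags, URLs, and punctuation; tokenize and lemmatize."""
    text = re.sub(r"@\w+|#\w+|https?://\S+|www\.\S+|[^a-z0-9\s]", " ", tweet.lower())
    return [
        lemmatizer.lemmatize(token)
        for token in word_tokenize(text)
        if token not in stop_words and len(token) > 2
    ]

docs = [preprocess(t) for t in tweets]  # `tweets`: assumed list of raw tweet strings
bigrams = Phrases(docs, threshold=1)    # form phrases (n-grams) from co-occurring words
docs = [bigrams[doc] for doc in docs]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# Drop tokens whose TF-IDF relevance falls below the 0.005 threshold.
tfidf = TfidfModel(corpus, id2word=dictionary)
filtered_corpus = []
for bow in corpus:
    weights = dict(tfidf[bow])
    filtered_corpus.append(
        [(tok, cnt) for tok, cnt in bow if weights.get(tok, 0.0) >= 0.005]
    )
```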
Topic Modeling With Latent Dirichlet Allocation

To identify more specific topics from all the COVID-19 tweets posted by the CDC, we performed topic modeling via latent Dirichlet allocation (LDA). LDA automatically generates nonoverlapping clusters of words (ie, clusters of words based on their distributions in their corresponding topics) that represent different topics based on probabilistic distributions across the whole corpus (ie, all CDC tweets in this study). LDA was developed to find latent, hidden topics from a collection of unstructured documents or a corpus with text data. Topic models are probabilistic models that perform at 3 levels of documents: a word, a document, and a corpus consisting of multiple documents. The details of LDA and topic models are provided in . We investigated and compared across 3 to 8 potential topics and determined the optimal number of topics based on both topic model evaluation and domain expert interpretations of the identified topic clusters. Model perplexity and topic coherence scores were calculated as performance metrics of LDA.

Perplexity is a decreasing "held-out log-likelihood" function that assesses LDA performance using a set of training documents. The trained LDA model is then used to test documents (held-out set). The perplexity of a probability model q on how well it predicts a set of samples x_1, x_2, ..., x_N drawn from an unknown probability distribution p is defined as follows :

\mathrm{perplexity}(x_1, \ldots, x_N) = \exp\left(-\frac{1}{N} \sum_{i=1}^{N} \log q(x_i)\right)

An ideal q should have high probabilities q(x_i) for the new data. Perplexity decreases as the likelihood of the words in new data increases. Therefore, lower perplexity indicates better predictability of an LDA model.

Topic coherence assesses the quality of the topics, which is measured as the understandability and semantic similarities between high scoring words (ie, the words that have a high probability of occurring within a particular topic) in topics generated by LDA . We used the UMass coherence score , which accounts for the order of a word appearing among the top words in a topic. It is defined as follows :

C_{\mathrm{UMass}} = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \log \frac{P(w_i, w_j)}{P(w_j)}

where N is the number of top words of a topic within a sliding window, P(w_i) is the probability of the ith word w appearing in the sliding window that moves over a corpus to form documents, and P(w_i, w_j) is the probability of words w_i and w_j appearing together in the sliding window. According to the study from UMass, coherence decreases initially and becomes stationary as the number of topics increases .

Representations of all topics were presented in word-probability pairs for the most relevant words grouped by the topics. Interactive visualizations were produced using the pyLDAvis package in Python 3.7 to examine the topics generated by LDA and their respective associated keywords. A data frame of all dominant key topics was created. The original unprocessed full texts of the CDC tweets, IDs, and posting dates were combined into a data frame along with their corresponding key topic number labels and topic keywords. In addition, the daily percentage of each topic from LDA was calculated for further time series analysis. For instance, vaccine/vaccination is an identified key topic, so the percentage of vaccine-related CDC tweets on each day was calculated for the entire study period to construct the vaccine/vaccination-specific topic time series.

Since LDA is technically an unsupervised clustering method, after the topics or clusters of word distributions from the CDC's tweets were generated using LDA, domain experts were involved to further label and interpret the content of the topics using domain knowledge. We randomly generated 20 sample tweets from each topic using Python for domain experts to examine, analyze, and determine the themes of the topics. For each topic, LDA provided a list of the top keywords associated with that topic, and we selected the top 10 keywords. We examined these keywords and referred to the 20 sample tweets, and then derived a theme or context that encompasses these keywords and the original tweets through further discussions, which was important for understanding the context in which these words were used. The final agreement on the interpretation of LDA-generated topics was reached after multiple iterations and discussions of the above process.
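To make the model-selection step concrete, here is a minimal Gensim sketch (our assumption; the paper names Python and pyLDAvis but not its LDA library) that scores candidate topic counts by perplexity and UMass coherence and then tags each tweet with its dominant topic. Hyperparameters such as passes=10 are illustrative.

```python
from gensim.models import CoherenceModel, LdaModel

# Score candidate models across 3-8 topics, mirroring the comparison in the text.
scores = {}
for k in range(3, 9):
    lda = LdaModel(corpus=filtered_corpus, id2word=dictionary,
                   num_topics=k, passes=10, random_state=0)
    # Gensim reports a per-word likelihood bound; 2**(-bound) is its perplexity estimate.
    perplexity = 2 ** (-lda.log_perplexity(filtered_corpus))
    umass = CoherenceModel(model=lda, corpus=filtered_corpus,
                           dictionary=dictionary, coherence="u_mass").get_coherence()
    scores[k] = {"perplexity": perplexity, "u_mass": umass}

# Refit the chosen 4-topic model and extract each tweet's dominant topic label.
lda4 = LdaModel(corpus=filtered_corpus, id2word=dictionary,
                num_topics=4, passes=10, random_state=0)
dominant_topic = [
    max(lda4.get_document_topics(bow), key=lambda pair: pair[1])[0] if bow else None
    for bow in filtered_corpus
]
```

The dominant-topic labels, joined with posting dates, are what feed the daily per-topic time series described next.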
Multivariate Time Series Analyses Between Identified CDC Tweet Topics and COVID-19 Epidemic Metrics

Data Preparation

Key topic time series data were derived from the previous NLP and LDA processes. We constructed a multivariate data frame with posting dates and the number of tweets for each key topic at a daily resolution. Since LDA identified 4 key topics, a total of 4 CDC key topic time series were developed. There were also 4 US COVID-19 epidemic metric time series: daily cumulative reported cases, cumulative confirmed deaths, cumulative number of completed PCR tests or other approved nucleic acid amplification tests, and cumulative number of people who received a complete primary series of vaccines. These 4 sets of COVID-19 epidemic metric time series were then converted to daily measures via first-order differencing. Multivariate time series analyses were implemented to investigate the associations between the time series of key CDC tweet topics and US COVID-19 epidemic metrics.

Visualizations

Both types of time series, CDC key topics and COVID-19 metrics, were visually inspected in the same plot on double y-axes, with the left y-axis displaying the daily COVID-19 metric and the right y-axis displaying the daily CDC tweet topic count. In addition, each plot was further divided based on COVID-19 phases with different dominant variants: the original, Alpha, Delta, and Omicron variants, with their corresponding starting dates: March 11, 2020; December 29, 2020; June 15, 2021; and November 30, 2021, respectively. This helps further observe and identify dynamic changes of the time series and their associations during different phases of the pandemic.

Cross-Correlation Function

Between 2 time series (also known as signals x and y), the cross-correlation function (CCF) quantifies their levels of similarity (ie, how similar the 2 series are at different times), their associations (ie, how values in one series can provide information about the other series), and when they occur . The CCF takes the sum of the products of the x and y data points at time lag l, defined as follows :

\mathrm{CCF}_{xy}(l) = \frac{\sum_{i=1}^{N-l} (x_i - \bar{x})(y_{i+l} - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}

where N is the number of observations in each time series, and x_i and y_i are the observations at the ith time step in each of the time series. The CCF ranges from −1 to 1, and a larger absolute value of the CCF indicates a greater association shared by the 2 time series at a given time lag l. In this study, each of the 4 CDC tweet topic time series was compared with each of the 4 COVID-19 epidemic metric time series to calculate their respective CCFs. All CCF values were calculated with a maximum lag of 30 days, as we assumed that the real-world epidemic could not influence online discussions for more than a month and vice versa.

Mutual Information

Mutual information (MI) was calculated by computing the entropy of the empirical probability distribution to further quantify the association between each of the 4 key CDC tweet topics and each of the 4 US COVID-19 epidemic metrics. MI measures the amount of mutual dependence or average dependency between 2 random variables X and Y. It is defined as follows :

I(X; Y) = \sum_{i} \sum_{j} p(x_i, y_j) \log \frac{p(x_i, y_j)}{p(x_i)\, p(y_j)}

where x_i and y_j are the ith and jth elements of the variables X and Y, respectively. When applied to time series data, X and Y are 2 individual time series whose observations at each time step form the empirical distributions. Note that MI is a single value instead of a function over lag l as in the CCF. A larger MI value indicates a higher shared mutual dependency between the 2 time series.
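A short numerical sketch of both measures follows. The normalized-correlation and histogram-binning choices here are our assumptions about implementation details the text leaves open; `topic_series` and `daily_cases` stand in for any pair of aligned 725-point arrays.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def ccf(x: np.ndarray, y: np.ndarray, max_lag: int = 30) -> np.ndarray:
    """Normalized cross-correlation of two equal-length series, lags -max_lag..max_lag."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    full = np.correlate(x, y, mode="full")  # center index corresponds to lag 0
    mid = len(full) // 2
    return full[mid - max_lag : mid + max_lag + 1]

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """MI from a binned empirical joint distribution (a single value, no lag)."""
    bx = np.digitize(x, np.histogram_bin_edges(x, bins))
    by = np.digitize(y, np.histogram_bin_edges(y, bins))
    return mutual_info_score(bx, by)

ccf_topic_vs_cases = ccf(topic_series, daily_cases)
mi_topic_vs_cases = mutual_information(topic_series, daily_cases)
```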
Autoregressive Integrated Moving Average With External Variable

Neither the CCF nor the MI differentiates between dependent and independent variables; that is, both formulas are symmetric with regard to the X and Y variables. We therefore further evaluated whether the CDC tweeting topics were influenced by real-world COVID-19 epidemic outcomes. An autoregressive integrated moving average with external variable (ARIMAX) model was constructed to fit each of the 4 CDC topics with each of the 4 COVID-19 epidemic metrics during the entire study period. A univariate autoregressive integrated moving average (ARIMA) model fits and forecasts time series data through the integration of an autoregressive (AR) component and a moving average (MA) component with their respective orders/lags (see for detailed information about the AR model). The ARIMA model consists of both AR(p) and MA(q) terms as well as an order-d differencing term, resulting in the following ARIMA (p, d, q) model :

y'_t = c + \sum_{i=1}^{p} \phi_i y'_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t

or in backward shift operator form:

\left(1 - \sum_{i=1}^{p} \phi_i B^i\right) (1 - B)^d y_t = c + \left(1 + \sum_{j=1}^{q} \theta_j B^j\right) \varepsilon_t

where y'_t denotes the d-times differenced series, B is the backward shift operator, \phi_i and \theta_j are the AR and MA coefficients, and \varepsilon_t is the error term. See for details on the parameters. The ARIMAX model further extends ARIMA to the multivariate time series by incorporating at least one exogenous independent variable x_t. ARIMAX (p, d, q) is specified as follows :

y'_t = c + \beta x_t + \sum_{i=1}^{p} \phi_i y'_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t

or in backward shift operator form :

\left(1 - \sum_{i=1}^{p} \phi_i B^i\right) (1 - B)^d y_t = c + \beta x_t + \left(1 + \sum_{j=1}^{q} \theta_j B^j\right) \varepsilon_t

where \beta x_t contributes the exogenous independent variable that could potentially influence the dependent variable y_t. In this study, ARIMAX was developed to evaluate how real-world epidemic metrics, modeled as exogenous variables, impact CDC tweet topic dynamics as dependent variables. Each of the 4 CDC tweet topics was modeled as a dependent variable (y_t) and each of the 4 COVID-19 epidemic measures was an independent exogenous variable (x_t). The optimal ARIMA and ARIMAX model parameter set (p, d, q) was determined by the R ARIMA model package.

In addition to reporting the values of the ARIMAX model parameter set (p, d, q), the difference in Akaike information criterion (dAIC), root mean square error (RMSE), and mean absolute error (MAE) were also computed to compare different ARIMAX performances. The optimal model was the one with the lowest AIC score. dAIC was computed between 2 models (see for detailed information on AIC). We had an ARIMA model of a single topic time series and an ARIMAX model of that topic time series with an exogenous variable. Negative dAIC values indicated that the ARIMAX model showed improvement in model performance over the ARIMA counterpart that did not include an exogenous variable. The commonly used RMSE and MAE were adopted as performance metrics. They are defined as follows :

\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}, \quad \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left|y_i - \hat{y}_i\right|

where n is the number of data points in a sample y (y_i, where i = 1, 2, …, n) and \hat{y}_i is the corresponding fitted value. RMSE and MAE are the Euclidean distance and Manhattan distance in high-dimensional space, respectively.
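The study selected (p, d, q) with R's ARIMA tooling; as a rough Python equivalent (our substitution, not the authors' code), statsmodels' SARIMAX can fit the same ARIMA/ARIMAX pair and yield dAIC, RMSE, and MAE. The order shown and the inputs `topic_counts` and `daily_vaccinated` (assumed aligned daily series) are illustrative.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_arimax(y, exog=None, order=(2, 1, 3)):
    """Fit ARIMA (exog=None) or ARIMAX; a grid search over orders would mirror auto-selection."""
    model = SARIMAX(y, exog=exog, order=order,
                    enforce_stationarity=False, enforce_invertibility=False)
    return model.fit(disp=False)

arima = fit_arimax(topic_counts)                          # baseline, no exogenous input
arimax = fit_arimax(topic_counts, exog=daily_vaccinated)  # with an epidemic metric

dAIC = arimax.aic - arima.aic  # negative => the exogenous input improves the fit

residuals = topic_counts - arimax.fittedvalues
rmse = float(np.sqrt(np.mean(residuals ** 2)))
mae = float(np.mean(np.abs(residuals)))
```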
Topic Modeling and Content Results

A total of 17,524 English tweets posted by the CDC were retrieved and analyzed. Four key topics were generated via LDA based on evaluation metrics including perplexity and coherence score. These topics were then examined and categorized into themes by domain experts ( with example tweets with their respective topics). The themes of the topics and their top 10 unique associated keywords are presented in .
Topics were plotted as circles and displayed on the left panel; the most relevant terms or associated keywords with their corresponding topics were displayed in frequency bars on the right panel, which showed each term's frequency from each topic across the corpus (ie, all CDC COVID-19 tweets sampled) (see for more detailed information about visualizations in the pyLDAvis package). The size of the circle indicated the prevalence of that topic in the corpus. Visualizations for all topics, displayed in circles on the left panel, and their top 15 corresponding relevant terms or associated keywords, displayed in frequency bars on the right panel, are provided in Figures S1-S5 in .

Based on the LDA visualization results, these 4 identified key topics had the largest distances and minimal dimensional overlap in the reduced 2D plane. From a public health perspective, the CDC's online health communication of COVID-19, the largest health emergency in the 21st century, has been relatively cohesive and comprehensive. Therefore, the 4 key topics identified via LDA were not completely mutually exclusive. In addition, the 4-topic model balanced separation of topics from a computational perspective with clear interpretability from a health perspective. Increasing the number of topics yielded substantial topic overlap, which made it challenging to provide explicit and clear interpretations. illustrates an example of topic 4. A list of associated terms of topic 4 and the overall frequency of the terms in the corpus are displayed in the right panel. The 5 key terms from topic 4 based on overall frequency across all tweets were "booster," "school," "increase," "parent," and "country."

Example tweets from each topic theme.

Topic 1: General vaccination information and education, especially preventing adverse health outcomes of COVID-19

- "Even as the world's attention is focused on #COVID19, this week we are taking time to highlight how #VaccinesWork and to thank the heroes who help develop and deliver lifesaving vaccines. #WorldImmunizationWeek message"
- "CDC's #COVID19 Vaccine Webinar Series is a great place to start learning about a variety of topics around COVID-19 vaccination."
- "The #DeltaVariant of the virus that causes #COVID19 is more than two times as contagious as the original strain. Wear a mask indoors in public, even if vaccinated and in an area of substantial or high transmission. Get vaccinated as soon as you can."

Topic 2: Pediatric intervention, pediatric vaccination information, family safety, and school and community protection

- "Make #handwashing a family activity! Explain to children that handwashing can keep them healthy. Be a good role model—if you wash your hands often, your children are more likely to do the same. #COVID19"
- "Parents: During #COVID19, well-child visits are especially important for children under 2. Schedule your child's routine visit, so the healthcare provider can check your child's development & provide recommended vaccines."
- "It is critically important for our public health to open schools this fall. CDC resources will help parents, teachers and administrators make practical, safety-focused decisions as this school year begins."

Topic 3: Updates on COVID-19 testing, case, and death data, and relevant information of the disease

- "CDC tracks 12 different forecasting models of possible #COVID19 deaths in the US. As of May 11, all forecast an increase in deaths in the coming weeks and a cumulative total exceeding 100,000 by June 1. See national & state forecasts."
- "The latest CDC #COVIDView report shows that the percentage of #COVID19-associated deaths has been on the rise in the United States since October and has now surpassed the highest percentage seen during summer."
- "#COVID19 cases are going up dramatically. This increase is not due to more testing. As the number of cases rise, so does the percentage of tests coming back positive, which shows that COVID-19 is spreading."

Topic 4: Research, study, health care, and community engagement to curb COVID-19

- "Our Nation's medical community has been vigilant and their help in identifying confirmed cases of #COVID19 in the United States to date has been critical to containing the spread of this virus."
- "In a new report using data from Colombia, scientists found that pregnant women with symptomatic #COVID19 were at higher risk of hospitalization & death than nonpregnant women with symptomatic COVID-19. HCPs can inform pregnant women about how to stay safe."
- "A new study finds masking and fewer encounters or less time close to persons with #COVID19 can limit the spread in university settings. #MaskUp when inside indoor public places regardless of vaccination status."

Multivariate Time Series Analysis Results

CCF Results

The time series of CDC tweet topics and COVID-19 metrics were plotted to visually examine patterns and potential associations. A total of 16 time series plots (4 topics × 4 COVID-19 epidemic metrics) were generated (Figures S14-S29 in ). CCFs were computed to quantify the dynamic association between each CDC key topic series and each of the 4 COVID-19 epidemic metrics. Quantitative results are presented in Tables S3-S6 in , and visualizations (Figures S30-S44 in ) illustrate the CCFs between both types of time series. CCF values and plots showed that the CDC's key COVID-19 tweet topic series was not substantially correlated with the confirmed COVID-19 case count series. As an example, there were no specific patterns between topic 2 and daily confirmed COVID-19 cases ( A). COVID-19 confirmed cases and the death time series had very similar dynamic patterns in the United States across the time span ( B). Consequently, they also showed similar CCFs with the CDC key topic series (Figure S45 in ). COVID-19 deaths had no substantial correlations with any of the 4 CDC key topics (Figures S18-S21 in ) based on CCFs. There were also no substantial correlations between any of the 4 key topics and either the COVID-19 testing series or the fully vaccinated rate series; examples show the CCFs between these series and topic 2 ( and ). These results indicated a potential discrepancy between the CDC's health communication focus and the actual COVID-19 epidemic dynamics in the United States during the pandemic.

MI Results

MI values between each CDC tweet topic and each COVID-19 metric were calculated, and they are shown in . Confirmed case counts and topic 4 (research, health care, and community engagement to restrain COVID-19) had the highest MI value (3.21), indicating a strong dependency between COVID-19 cases and topic 4. On the other hand, the vaccination rate and topic 3 had the lowest MI value (0.56), indicating that the 2 series were almost independent. Among all 4 key topics, topic 4 showed the highest MI values (3.21, 3.02, 3.21, and 1.65) with the 4 COVID-19 metrics. Topic 2 (pediatric intervention, family safety, and school and community protection) had consistently lower MI values with the COVID-19 metrics than topic 4.
MI Results

MI values between each CDC tweet topic and each COVID-19 metric were calculated and are shown in . Confirmed case counts and topic 4 (research, health care, and community engagement to restrain COVID-19) had the highest MI value (3.21), indicating a strong dependency between COVID-19 cases and topic 4. On the other hand, the vaccination rate and topic 3 had the lowest MI value (0.56), indicating that the 2 series were almost independent. Among all 4 key topics, topic 4 showed the highest MI values (3.21, 3.02, 3.21, and 1.65) with the 4 COVID-19 metrics. Topic 2 (pediatric intervention, family safety, and school and community protection) had consistently lower MI values with the COVID-19 metrics than topic 4. The MI values of topic 1 (information on COVID-19 vaccination and education on preventing its adverse health outcomes) and topic 3 (updates on COVID-19 testing, case, and death metrics, and relevant information about the disease) were similar across all 4 COVID-19 metrics, although those of topic 1 were slightly higher. Vaccination and educational information on the adverse health outcomes of COVID-19 appeared not to be substantially correlated with the COVID-19 epidemic metrics, including testing, cases, and deaths. We speculated that the CDC considered both vaccination and preventing adverse health outcomes of COVID-19 critical to public health and disseminated these topics regardless of the COVID-19 situation at the time of posting. In addition, MI values between all pairs of CDC topics were calculated (Table S7 in ). The resulting MI values, ranked from largest to smallest, were for topics 2 and 4, topics 3 and 4, topics 1 and 2, topics 2 and 3, topics 1 and 4, and topics 1 and 3. Based on the CDC's COVID-19 tweeting patterns, pediatric intervention and family and community safety were strongly associated with health care research studies and public engagement to curb the spread of COVID-19.
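A simple way to reproduce this kind of dependency measure is to discretize each daily series and compute the mutual information of the binned values, as sketched below. The bin count and the use of scikit-learn's estimator (which reports MI in nats) are assumptions; the paper's exact estimator is not specified here.

```python
# Minimal sketch: mutual information between two daily series after binning.
import numpy as np
from sklearn.metrics import mutual_info_score

def binned_mi(x, y, bins=16):
    """MI (in nats) between two series discretized into equal-width bins."""
    x_bins = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    y_bins = np.digitize(y, np.histogram_bin_edges(y, bins=bins))
    return mutual_info_score(x_bins, y_bins)

rng = np.random.default_rng(1)
topic4_tweets = rng.poisson(6, 365).astype(float)
confirmed_cases = topic4_tweets * 80 + rng.normal(0, 40, 365)  # dependent
vaccination_rate = rng.uniform(0, 1, 365)                      # independent

print("MI(topic 4, cases):      ", round(binned_mi(topic4_tweets, confirmed_cases), 2))
print("MI(topic 4, vaccination):", round(binned_mi(topic4_tweets, vaccination_rate), 2))
```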
ARIMAX Results

ARIMAX performance measures, including the values of the ARIMAX parameters ( p , d , q ), dAIC, RMSE, and MAE, are reported in . As an external input, the vaccination rate time series significantly improved the performance of the original ARIMA models for all CDC key topics (dAIC = −108.15, −69.79, −90.54, and −91.53 for topics 1 to 4, respectively); this was the largest improvement in model performance across all topics with an exogenous variable in the ARIMAX model. The COVID-19 case series improved ARIMA model performance for CDC topics 1 and 3 (dAIC = −104.76 and −1.53, respectively). Including the death or testing series did not substantially improve ARIMA model performance for any of the CDC key topics. ARIMAX models with lower RMSE and MAE values indicated higher accuracy of the time series models. Overall, ARIMAX models for topics 1 and 3 with all COVID-19 metrics delivered the smallest RMSE values (lowest [1.10] for topic 3 with death counts and highest [1.21] for topic 1 with full vaccination records), while those of topic 4 delivered the largest RMSE values (lowest [6.25] with death counts and highest [6.93] with full vaccination records). Similarly, MAE values were lowest for the ARIMAX models of topics 1 and 3 with the epidemic metrics (lowest [0.82] for topic 3 with death counts and highest [0.91] for topic 1 with full vaccination records) and largest for topic 4 (lowest [4.97] with death counts and highest [5.56] with full vaccination records). These ARIMAX performance results showed significant variability between the 2 types of time series (CDC key tweet topics and actual COVID-19 metrics in the United States). We performed an exhaustive search to identify the optimal ARIMAX parameters ( p , d , q ). For example, topic 1 with death counts and with completed testing records had the same parameter set ( p , d , q = 2, 1, 3), indicating that the optimal ARIMAX model for these series needed first-order differencing ( d = 1) to achieve stationarity and minimal AIC values, with an AR time lag of 2 ( p = 2) and an MA time lag of 3 ( q = 3). The topic 1 series with case counts and with complete vaccination records had the same parameter values ( p , d , q = 5, 1, 0), indicating that the model was simply an AR model ( q = 0, with no MA terms) with a time lag of 5 ( p = 5) after first-order differencing ( d = 1). The complete ARIMAX parameters are shown in . All ARIMAX models needed first-order differencing ( d = 1) to be stationary and to minimize AIC values.
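The sketch below mirrors the exhaustive search described above: it grid-searches ARIMA(p, d, q) orders by AIC with and without an exogenous epidemic series, then reports dAIC, RMSE, and MAE for the best models. The grid bounds, synthetic series, and use of statsmodels' SARIMAX are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal sketch: exhaustive (p, d, q) search for ARIMA vs ARIMAX by AIC,
# with dAIC, RMSE, and MAE reported for the best-fitting models.
import itertools
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(2)
topic1_tweets = np.cumsum(rng.normal(0, 1, 300)) + 20  # nonstationary series
vaccination = np.cumsum(rng.normal(0, 1, 300)) + 50    # exogenous metric

def best_by_aic(endog, exog=None, max_p=5, max_q=5, d=1):
    """Fit all (p, d, q) orders on the grid and keep the lowest-AIC model."""
    best = None
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        try:
            res = SARIMAX(endog, exog=exog, order=(p, d, q)).fit(disp=False)
        except Exception:
            continue  # skip orders that fail to converge
        if best is None or res.aic < best.aic:
            best = res
    return best

arima = best_by_aic(topic1_tweets)                     # no exogenous input
arimax = best_by_aic(topic1_tweets, exog=vaccination)  # with epidemic metric

print("dAIC (ARIMAX - ARIMA):", round(arimax.aic - arima.aic, 2))
fitted = arimax.fittedvalues
print("RMSE:", round(float(np.sqrt(mean_squared_error(topic1_tweets, fitted))), 2))
print("MAE: ", round(float(mean_absolute_error(topic1_tweets, fitted)), 2))
```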
Principal Findings

In this study, we systematically investigated and comprehensively identified the CDC's key topics, the COVID-19 epidemic metrics, and the dynamic associations between the 2 types of data series, based on 17,524 COVID-related English tweets from the CDC since January 2020. The LDA topic model was built to characterize and identify the dynamic shifts of topics in the CDC's COVID-19 communication over a period of more than 2 years. For the first time, we were able to identify the following 4 key topics: (1) general vaccination information and education; (2) pediatric intervention, also involving family and school safety; (3) updates on the COVID-19 epidemic situation, such as numbers of cases and deaths; and (4) research, health care, and community engagement to curb the pandemic. Our study took a unique infoveillance approach by identifying potential associations between COVID-19 epidemic outcome metrics in the United States and the CDC's key topic dynamics during different stages of the pandemic. This framework significantly expanded the original infoveillance approach, which generally relied on the number of posts (ie, posting dynamics) without extracting more detailed and meaningful content topics and sentiments from the textual data. Our study was thus able to provide practical and useful health communication strategies for public health agencies to communicate timely and accurate information to the public effectively. It is important to investigate the dynamic associations between the CDC's tweets on COVID-19 and the progression of the pandemic for several reasons:

- Understanding their relationship can reveal how public health messaging impacts public perception and engagement at different stages of a major health emergency.
- A strong association between the CDC's tweets and epidemic measures indicates that public health messaging is effective. Weak associations might indicate that messaging from the CDC to the public over time is not effective; however, this leads us to further explore the influential factors and provide health communication strategies for public health agencies.
- It can also show whether the CDC's messaging on Twitter is proactive or reactive to the actual epidemic, informing strategies for future public health communication.
- It helps public health agencies better allocate resources. For example, if tweets related to educating the public on monitoring COVID-19 symptoms and updating certain metrics lead to an increase in the number of people trying to get tested for COVID-19, then resources could be directed toward opening testing centers and sending free test kits to homes.

Our study is the first of its kind to comprehensively evaluate the impact of online public health communication, especially on Twitter, one of the major social media platforms, during different phases of a large health emergency.
We studied the overall daily volume of COVID-19-related tweets posted by the CDC over time as a baseline, and the volume of tweets was higher in the early phase of the pandemic, indicating a strong effort by the CDC to disseminate important information to the public. We did not observe visually clear patterns of association with the COVID-19 epidemic measures. We further applied novel NLP to address a gap in previous studies, which overlooked the dynamic association between the detailed topics discussed by public health agencies on social media and real-world epidemic metrics. We then examined the dynamic associations between the 4 identified key topics and the 4 COVID-19 epidemic outcome metrics. Among the 4 major topics, topic 1, which covered information on vaccination and adverse health outcomes of COVID-19, had substantially strong associations with death counts and testing records during the Alpha phase (December 29, 2020, to June 14, 2021). We found that during this phase, when the overall volume of vaccination-related CDC tweets was decreasing, the daily vaccination rate (the number of people who received a complete primary series of the COVID-19 vaccine, based on the CDC Vaccine Tracker) was increasing, which aligned with the CDC's effort to emphasize the importance of vaccination to the public on social media. When discussions from the CDC about vaccination increased after the Alpha phase, the vaccination rate started to decrease. The reasons could include, but are not limited to, the following:

- Ineffective messaging from the CDC on social media to the public during later stages of the pandemic.
- Lack of engagement from the public, since not everyone follows or engages with official accounts and might miss or overlook them amidst other content.
- Fatigue from information overload, where frequent data updates on social media platforms can lead to desensitization, making it less likely for users to pay attention over time and act on the information.
- Temporal delays, which create a time lag that can impact the associations between the topics and the real epidemic measures.
- Political factors, such as antivaccination groups.

Therefore, given all these possible influential factors, the CDC could not fully shape the public's responses and actions on getting vaccinated, even though it had been making efforts to share educational information about vaccination. This finding showed that the CDC had been making efforts to emphasize the importance of vaccination on Twitter, but the public response was weak. Thus, it is important to further study the influential factors affecting the CDC's social media strategies. Topic 3, which provided updates on 3 of the COVID-19 measures (testing, cases, and deaths) and their relevant information, aligned better with the case series during the Delta phase (June 15, 2021, to November 29, 2021). It also matched the death series during the declared pandemic phase (original variant: March 11, 2020, to December 28, 2020) and the Delta phase, as classified by the World Health Organization on May 11, 2021. Furthermore, even though topic 3 did not demonstrate a visible association with the testing series, timely communication from the CDC was in fact strongly associated with the testing time series over the entire study period, based on the multivariate time series analysis.
According to these key findings, we suggest that aligning the content topics of health communication from public health agencies with the temporal dynamics of COVID-19 or other emerging public health emergencies (eg, major epidemic outcome metrics) can help provide more timely and relevant information to the public. We therefore recommend that the CDC and other public health agencies monitor epidemic outcome metrics in real time. Health agencies can then post timely updates about the emergency, the most recent findings, and interventions on social media according to the dynamic changes of these outcome metrics. Public health agencies can regain public trust not only by helping the public better understand the complex dynamics of the health emergency, but also by informing the public with evidence-based guidance and recommendations more effectively.

Limitations and Future Work

There are several limitations in this infoveillance study that could be addressed in future work. First, while we focused on probabilistic LDA for topic modeling, there are alternative NLP approaches such as deep learning-based bidirectional encoder representations from transformers (BERT). Hence, we will explore BERT and other state-of-the-art NLP techniques for content topic modeling and sentiment analysis in the future. Second, given the complexity of this study, we will incorporate subthemes to further help contextualize the clusters in future work. Third, the CDC does not have sole power over people's responses and actions over time (eg, getting tested and receiving full vaccine doses), even with consistent effort on Twitter to educate the public and mitigate the pandemic. Other factors could affect the associations between the CDC's messages and the COVID-19 measures:

- Time lags: What is posted might not reflect real-time situations, which can impact the association strength between the posted measures and real-world metrics; thus, we suggest aligning the content topics of health communication with up-to-date epidemic outcome metrics.
- Discrepancies in posting methods: The CDC simplifies the data in their posts to make the information more comprehensible for the audience, which might not align with the detailed epidemic metrics posted by other sources with different interpretations of the reported metrics.
- Variability in the data source: The data open to the public might come from sources and reporting standards that differ from the CDC's protocol, which could weaken potential associations.
- Audience: As a government health agency, the CDC prioritizes certain data for social media to cater to the public for relevancy. For example, posting daily epidemic measures could lead to strong associations with COVID-19 metrics, but association does not mean causality, and we assume that the CDC does not generate tweets with the intention of improving associations of any kind; its priority is to present a variety of reliable information to the public.
- Fatigue from information overload: Frequent data updates on social media can lead to desensitization, making it less likely for users to pay attention and react to the information over time; for example, tweeting about daily epidemic measures decreases the public's attention over time.
- Political and societal factors: for example, antivaccination groups and conspiracy theories about the pandemic.
In addition, it is important to continue to examine the validity of the underlying assumption that the CDC's health communication makes an impact during a pandemic. In this infodemiology study, we focused on the national effects of these tweets. Future studies should further examine geospatial factors and other confounding factors to help understand whether, and how much, the CDC's tweets impact pandemic outcomes. Lastly, public engagement (ie, retweets, likes, replies, etc) with the CDC's health communication is an important indicator of the effectiveness of online health communication efforts. There have been studies that analyzed public sentiments and attitudes toward various health-related topics. However, very few studies have investigated the associations between public sentiment shifts and disease-related metrics. In addition, public sentiments and attitudes are heavily influenced by health agencies' messages and should not be misled by misinformation. Public opinions also influence health practices and interventions, which have a significant impact on actual epidemic outcomes (eg, cases, deaths, vaccination, etc). Thus, it is important to further investigate the underlying association between public health communication topics and actual epidemic measures. These insights can help public health agencies develop better social media strategies to address public concerns at different stages of an emergency. We therefore suggest that examining the dynamics and patterns of public responses to health agencies' original communications can provide valuable insights into public perceptions and attitudes around various issues during the pandemic, such as pharmaceutical interventions (eg, vaccination) and nonpharmaceutical interventions. Detailed content analysis can be applied to explicitly identify public concerns in response to the CDC's health communications. In addition, sentiment analysis can be applied to extract public sentiments (ie, positive, neutral, or negative) toward the CDC's health communications and further help identify public attitudes and reactions to the various content topics that the CDC has communicated. Public attitudes will ultimately determine individual health behavior and decision-making, such as vaccination acceptance and compliance with nonpharmaceutical interventions, which in turn drive the overall epidemic dynamics. It is therefore critical to investigate real-time public engagement with public health agencies' communications, such as retweeting or replying on social media, to better inform health agencies about prioritizing their communications and addressing public concerns about specific content topics.

Conclusions

This study investigated the dynamic associations between the CDC's detailed COVID-19 communication topics on Twitter and epidemic metrics in the United States over almost 2 years of the pandemic. Using LDA topic modeling, we were the first to comprehensively identify and explore the various COVID-related topics tweeted by the federal public health agency during the pandemic. We also collected daily COVID-19 epidemic metrics (confirmed case counts, death counts, completed test records, and fully vaccinated records) and performed various multivariate time series analyses to unravel the temporal patterns and associations with the CDC's COVID-19 communication patterns (ie, we investigated the dynamic associations between the time series of each topic generated by the LDA model and the time series of each epidemic metric).
The results suggested that some topics were strongly associated with certain COVID-19 epidemic metrics, indicating that advanced social media analytics (eg, NLP) could be a valuable tool for effective infoveillance. Based on our findings, we recommend that the CDC, along with other public health agencies, further optimize their health communications on social media platforms by posting content and topics that align with the temporal dynamics of key epidemic metrics. While the CDC had been making efforts to share information on social media platforms to educate the public throughout the pandemic, the public responses to these messages were relatively weak. It is important for future studies to further explore the potential factors that played a role in the effectiveness of the CDC's social media performance. As such, we suggest increasing online health communication on health practices and interventions during high-level epidemic periods with large numbers of cases and deaths. Our findings also highlight the importance of health communication on social media platforms in responding to and tackling future health emergencies and issues.
Experiences of pregnant women and healthcare professionals of participating in a digital antenatal CMV education intervention

Cytomegalovirus (CMV) is a common infection worldwide that is associated with no symptoms or only mild symptoms in most healthy individuals who become infected with it ( ). However, if a woman becomes infected with CMV during pregnancy, it can harm the developing fetus. CMV is the most common cause of congenital infection, and in the UK around 1000 babies are born with congenital CMV (cCMV) infection each year. An estimated 85% of infants are asymptomatic at birth - although some of these infants will go on to develop sequelae later - and 15% have symptoms or signs of CMV at birth, ranging from a single abnormal clinical or laboratory finding to disseminated disease ( ). The long-term effects of cCMV on infants and children are wide-ranging; some children never develop any long-term medical problems, but around a quarter will have life-long consequences, such as sensorineural hearing loss (SNHL), physical or cognitive impairment, or autistic spectrum disorder. cCMV is the commonest non-genetic cause of sensorineural hearing loss and an important cause of neurodisability ( ). Resources for further information about CMV are outlined in .

cCMV presents a significant challenge to families, resulting both from the uncertainty of the outcomes in an individual child and, for some children, the serious and profound consequences of infection ( ). There are also implications for society more broadly: there is a significant cost associated with the acute and long-term management of affected individuals ( ). The frequency of cCMV, and the personal and societal challenges resulting from it, make primary prevention of CMV infection in pregnancy a priority. However, there is currently no licensed vaccine for CMV and no routinely recommended and available treatment in pregnancy for those at risk of passing the infection to their unborn child. Antenatal education about CMV risk reduction measures has, however, been shown to result in behaviour change in pregnant women and to reduce the risk of CMV infection in pregnancy in some studies ( ). This strategy relies upon the provision of accurate information to pregnant women, ideally within a context of antenatal education in which a woman's questions can be answered by their trusted health care professionals (HCPs). Currently, information about CMV is not widely provided as part of antenatal education in the UK, in contrast to other less common infections such as listeria and toxoplasmosis. A recent qualitative study in the UK revealed a lack of knowledge about CMV amongst pregnant women, and participants reported feelings of disappointment and distress that they had not been informed about CMV as part of their routine antenatal care ( ), a finding supported by other studies carried out in other countries ( ; ; ). This frustration is shared by families of children affected by cCMV, who describe receiving little or no information about CMV during pregnancy, and who also report limitations in the knowledge of CMV among HCPs looking after their affected child ( ). Despite the importance placed on antenatal education about CMV by pregnant women and families caring for children with cCMV, a number of studies have highlighted a lack of knowledge about CMV in HCPs ( ; ; ; ).
There are significant pressures on antenatal services that can make it difficult to provide information on the large number of topics which need to be covered. It is therefore necessary to have an educational intervention which can be delivered as part of routine antenatal care and which will empower pregnant women to make decisions about how to reduce the risk of infection in their pregnancy. The success of an antenatal CMV educational intervention will depend on its acceptability to pregnant women and also on how it is received by HCPs, as the way the resource is presented, and the capacity of HCPs to respond to any questions which result, will affect how the resource is used and valued by pregnant women. The Reducing Acquisition of CMV through antenatal Education (RACE-FIT) study was designed to evaluate the feasibility of performing a large-scale randomised controlled trial of an antenatal, digital, educational intervention providing information about how to reduce the risk of CMV infection in pregnancy (Clinicaltrials.gov identifier NCT03511274). The current study was nested within RACE-FIT and aimed to explore the perspectives of participating pregnant women and HCPs towards receiving and providing CMV education, so that barriers and facilitators to incorporating CMV in routine antenatal care can be better understood.

Box 1: CMV Risk Reduction Messages developed with the RACE-FIT study
1. Be the first to share: Try to avoid eating things which have been in a child's mouth and avoid sharing cups and cutlery.
2. Forehead kisses and cuddles: Try to avoid kissing a child on the lips; offer kisses on the forehead and cuddles instead.
3. Wash with care: Clean your hands with soap and water after changing a nappy or wiping a child's nose or mouth.
Design and ethical approval

In Phase 1 of the RACE-FIT study, a film-based educational intervention was developed in partnership with pregnant women and their partners, and the families of children affected by CMV ( , ). In Phase 2, a feasibility study was conducted to understand the practicalities of running a randomised controlled trial comparing the educational intervention with routine care ( ). As part of Phase 2 of RACE-FIT, we also carried out a process evaluation to explore the perspectives of pregnant women and HCPs towards CMV education provision, which is the focus of the current study. This study employed a qualitative design using individual, semi-structured, face-to-face interviews. The study was approved by the NHS Health Research Authority and South-Central Oxford Research Ethics Committee (16/SC/0683). Informed consent was obtained from all participants.

Recruitment

In Phase 2 of RACE-FIT, recruitment took place in a large teaching hospital serving an ethnically diverse area of South-west London. Pregnant women were approached upon attending clinics for their first trimester screenings between September 2018 and September 2019; they were pregnant at the time of participation, seronegative for CMV, and living with a child or children aged less than four years. Pregnant women who had participated in the feasibility study were subsequently invited to take part in a short interview to discuss their experiences of participating in an antenatal CMV educational study. HCPs who were involved in the feasibility study were invited to take part in the process evaluation and reflect on their experiences of delivering antenatal CMV education to pregnant women. All participants were over the age of 18, spoke English to a sufficient level, were willing to sign a consent form, and were available for a video conference or phone interview.

Procedure

Semi-structured interviews were chosen to ensure that core questions were asked of all participants, while providing scope for participants to explore relevant but unanticipated domains of experience and reflection that were important to them. Twenty interviews were conducted (each lasting between 30 and 75 min), audio-recorded, and transcribed. The interview guides were developed collaboratively by the research team and consisted of a short list of topic areas with open-ended questions and prompts, which was frequently annotated and moderated as the study progressed. For pregnant participants, the interviews explored the experiences of women participating in the trial and the factors which facilitate and impede adherence to the suggested behavioural modifications (see Box 2). The interview guide for HCPs focused on understanding the delivery of CMV education from a professional's perspective, including barriers and ways to integrate CMV information into routine care (see Box 3).

Box 2: Interview guide 1, pregnant women
Through this interview we will be asking you some questions about your opinions of the film, your experience of being in the study and how you think it could be improved.
Part 1 – Feedback on CMV educational film
- What are your impressions of the film?
- How informative was the film? Do you feel you learnt something new? If so, what?
- Did the film leave you feeling anxious, or did you feel the knowledge empowered you to protect yourself and your baby?
- Considering the 3 messages in the film, do you remember the messages? Which one most? Did you find them easy to understand?
Did the film motivate you to change your behaviour? What behaviours did you change? Part 2 – Behaviour change How easy was it to change these behaviours? What did you find particularly difficult to change? Did you manage to keep these changes going through the whole of your pregnancy – any stages of pregnancy that were easier or more difficult? Did you feel that any changes in your behaviour had any impact on your older child? Did they notice that you were doing anything different? Part 3 – Involvement of family Did you show the film to your partner, other family members or other pregnant women? Did your partner make any of these changes? Did you speak to your partner about the study? How supportive were other family members and friends about the changes you were trying to incorporate? Have these changes now become normal in your household? What advice would you give to a close friend to help them make these changes? Part 4 – Sharing of CMV information. How do you think we should give these messages to pregnant women? Would you recommend any changes to the film? Did you access information about CMV elsewhere? Alt-text: Unlabelled box Box 3 Interview guide 1, Health Care Professionals We will ask you some questions about your opinions of the film, your experience of the study and how you think it could be improved. Part 1 – Feedback on CMV educational film What are your impressions of the film? How informative is the film? Do you feel you learnt something new? If so what? Do you think the film has made you change your practice? Did you discuss the film with other health care Professionals in your unit? Part 2 – Film influencing practice. Did you discuss the film with other health care Professionals in your unit? What advice would you give pregnant women to help them make these changes? Have these changes now become part of the routine care you give pregnant women? Part 3 – Film feedback and future directions Would you recommend any changes to the film? Do you have any further comments on the film? What are the best ways of the delivering this information to pregnant women in routine clinical care? How can we integrate this information as part of routine clinical care? How can we do this in a way that does not impact on HCPs? Alt-text: Unlabelled box Data analysis Data was collected and analysed using Thematic Analysis ( ). The following six phases were implemented following the steps of Thematic Analysis: (1) familiarization, which was necessary to be able to fully understand the data to be able to identify repeated patterns; (2) initial coding then took place to extract the most important information and features; (3) searching for themes, using the previous coding, data was grouped into themes and sub-themes to reflect the patterns identified; (4) reviewing themes, which took place collaboratively across the research team to reflect upon the themes, alongside the dataset and confirm they provide an intelligible story; (5) defining and naming themes, providing a coherent name of each theme and subtheme to fit with their meaning; (6) producing the report, which was the process of collating the themes and subthemes in a coherent way for this research paper ( ). In the extracted quotes, “(…)” signifies that materials have been omitted. For each quote it is specified whether the participant was in the TAU (treatment as usual) or IG (intervention) group of the randomized controlled trial carried out as part of RACE-FIT, or HCP, as well as a participant number for anonymity.
Participant characteristics
Fifteen pregnant women took part in this study, nine of whom had been allocated to the intervention group of the RACE-FIT study and had therefore received detailed information about CMV, including how it can affect a child, the ways in which CMV is transmitted and the risk-reducing behaviours they could adopt to reduce the risk of CMV acquisition in pregnancy (see Box 1). Six participating pregnant women had been allocated to the TAU group of the RACE-FIT study and therefore did not receive detailed information about CMV, but were aware of the aims of the study and the focus on CMV risk reduction. Five HCPs were also interviewed; all five had been involved with the delivery of the intervention to pregnant women within the RACE-FIT study. All HCPs involved were clinically active midwives or nurses who had some awareness of and involvement with the study, and so had watched the film and seen the immediate reaction of women to it; they were not experts in CMV and were not involved in the design of the educational intervention. provides the socio-demographic characteristics of the pregnant women and HCPs in the sample. Themes and subthemes emerged from the participant interviews; these are outlined in .
Theme 1: Knowledge of CMV and risk reduction
Knowledge about CMV is perceived as important, empowering and reassuring
Pregnant participants in the intervention group expressed surprise that they had not heard of CMV or been told about CMV as part of their antenatal care. They were also pleased to have been provided with awareness and knowledge of CMV as part of the study. Participants in the TAU group did not receive detailed information about CMV but were informed that the study was about CMV. They understood the significance of CMV awareness and viewed the sharing of the information as an important part of antenatal care. Many women felt it was important that they did all they could to reduce the risk of CMV infection to protect their unborn child. Having knowledge about CMV was considered empowering, giving them the information necessary to adapt their behaviour to reduce the risk of CMV infection whilst pregnant.
• "…I remember saying that surely everybody should be given information on that. Just as part of because you were given a lot of information on other things when you're pregnant that, you know, for example chickenpox can be particularly harmful for pregnant women and things like that. So, it's a sort of 'Oh why are we not given information on this?" (Lily, TAU)
• "I was very pleased with being told about it. Because you know, you want to know everything can affect your children. Then it's up to you to decide." (Helena, IG)
Healthcare professionals trusted as a reliable source of information about CMV
Pregnant participants indicated that they would have welcomed a conversation about CMV with an HCP, such as a midwife or general practitioner, to enable them to fully understand the information and the importance of CMV. Women expressed trust in their antenatal care team and considered HCPs to be the most reliable source of information about health-related issues.
"I think the most important thing would be for a health professional to actually tell you when you are pregnant… I guess there is like fake news on social media and things that people get scared about which are actually not scientifically proven. I think if it's part of your plan of care, you would actually listen to it and understand the implications." (Natalie, IG)
Barriers to sharing information about CMV by healthcare professionals
Participating HCPs were supportive of information about CMV being shared with pregnant women and considered this important; however, they identified barriers which discouraged them from routinely sharing information about CMV. A commonly described barrier was the concern of raising anxiety or causing women to feel guilty for not adhering to the risk reduction measures.
"But some just felt they hadn't changed their practice…… Some mums did feel a bit guilty ….we told them about it and we showed them the video, but oh no, they still hadn't managed to do it. If my child gets CMV and not that it might be their fault, but that's sort of a little bit how they felt." (Kiera, HCP, Paediatric nurse)
A significant barrier to discussing CMV with pregnant women was a lack of time to do so as part of routine care, meaning participating HCPs were not confident that they would be able to communicate information about CMV as well as all the information routinely given.
"In routine care, there is so much to talk about. So much information to give at every point in pregnancy and everything that you are talking about can really impact them. Whereas with CMV the number of babies that are affected is actually quite small….if the mums ask about it or if we do have more time or they know of problems like they know of CMV or they have been affected by it then that is when I will bring it up with them." (Kiera, HCP, Paediatric nurse)
Participating HCPs were also concerned about the lack of defined clinical pathways, adequate follow-up, or screening opportunities for pregnant women who were concerned about CMV. Here, the salient concern for midwives was not leaving pregnant women anxious, particularly as they were unaware of where women could find out how to prevent CMV transmission to their foetus or indeed test to see if their child had CMV.
"I think, if we are going to tell women about it, we need there to be a follow-up and midwives need to know where to signpost people and how to refer people if they have got problems because I think again, that will put people off telling women about CMV if they feel there is not a clinical prepare space for them to go down." (Paige, HCP, Midwife)
"I feel if I tell women about CMV and kind of opening a bit of a can of worms, because then they might want to test and there might not be someone to interpret the test…… If it's just I've kind of made the woman anxious about it…It's not like Downs Syndrome screening where we tell them about it and we have screening. It would be me telling the woman and then just kind of leaving her to get on with it." (Paige, HCP, Midwife)
Opportunity for educational film to overcome barriers to sharing information about CMV
The educational film about CMV was designed to be used alongside routine antenatal care. HCPs participating in our study suggested that the short film produced as part of the RACE-FIT project provided a good introduction to CMV and clear guidance for pregnant women on how they might reduce the risk of CMV in pregnancy. Participating HCPs suggested that the film had the potential to empower pregnant women to reduce the risk of acquiring CMV during pregnancy, and felt that they themselves would be more prepared to answer pregnant women's questions about CMV after watching the film.
"It just raised a bit more awareness about CMV amongst us which is good. So just a bit more knowledge and then awareness which would help us kind of direct women if they asked us about CMV or if we chose to speak about it, which has a bit more to talk about." (Shannon, HCP, Midwife)
"To know it from the off like it can be kind of like a virus like CMV they say no–at least give them some information on it. That was quite nice to actually be able to tell them about something that can have an impact on life or their baby's life if they were parents and get the congenital CMV." (Natasha, HCP, Midwife)
Theme 2: Implementation of risk reduction education in antenatal care
Risk reduction rather than prevention
Participating pregnant women and HCPs favoured messages about CMV being framed in a way that encourages women to modify their behaviours to reduce the risk of CMV, rather than to prevent CMV. Pregnant women expressed an inability to completely control the risk of CMV when trying to implement preventive measures, but felt that making small changes to reduce risk was an achievable goal.
"I think the main thing was just around how… you can't possibly avoid all contact with bodily fluids, especially when your child is really ill. Um, so, yeah, you can take precautions, but you cannot… you can't stop them sneezing in your face, so, you know, dribbling on your pillow or, you know. There're certain things that you can't prevent." (Fiona, IG)
Participating HCPs also expressed that a focus on risk reduction, rather than prevention, may reduce anxiety about CMV for pregnant women, providing some reassurance to mothers and empowering them to take some steps towards changing behaviours that expose them to saliva and urine.
"To highlight more that what we are trying to do is just reduce the risk. We can never take it all away, the risk. Every little helps as it were. Even one less kiss or one less share of the spoon helps the risks go down. … just a bit more encouraging to mums that we are just reducing rather than eliminating." (Kiera, HCP, Paediatric nurse)
Balancing parental caring behaviours and risk reduction behaviours
One clear concern for pregnant women was the potential impact of the CMV risk reduction behaviours on their other children. Pregnant participants with children were concerned about finding a balance between reducing the risk of catching CMV and passing it on to their unborn baby, and demonstrating parental love and care towards their older child by kissing the child on the lips and sharing food with them.
"I think she would have been confused about why I wasn't eating with her. I think she would have probably been quite upset if I wasn't kissing her in the same way. But then, maybe that's my perception, because I was pregnant and you are already worried about what's it going to be like when the sibling arrived." (Camilla, TAU)
Pregnant women in the intervention group made active attempts to reduce sharing food with their children, but experienced challenges in sustaining this.
"I definitely did, initially changed and didn't share food. It was hard to do that all the time, because she was only two and so trying to get her to eat she would often share food with me and I didn't want to keep on pushing it away, because we were trying to get her to eat. I did try and I was more aware that I shouldn't be sharing food or eating a bit that she'd bitten or anything like that." (Kate, IG)
In contrast, pregnant women who had not watched the CMV intervention but were aware of the potential risk of sharing food and drink with their children seemed to weigh the risk of contracting CMV against the negative impact of insufficient food intake for their older children.
"I didn't drink from her cup anyway. I still helped her eat her dinner just to get her to eat. You make decisions based on risk, don't you. I would rather she ate." (Camilla, TAU)
Most participants exposed to the intervention found the adaptation of behaviours fairly straightforward and were able to develop a new routine and 'norm'. Incorporating changes into their daily routine and family 'norm' seemed to create new habits, which initiated and maintained the changes in behaviour.
"I didn't find any of them difficult to change, because I kind of developed my own mantra of you have got to do this and keep the unborn safe. It was an internal conversation that I had repeatedly until it became second nature. At first, it was difficult to not kiss my eldest on the lips, because that is just what I was used to and that's all about practice and just family norms… I think the difficulty was just changing my routine… Usually the child doesn't finish your food and you finish their plate because you don't want to waste food …, It was just changing your mindset to incorporate the recommendations…. It's breaking the habit essentially, that is the hardest thing and that would basically remove the tag line…. It wasn't hard to do. It was just once I'd created a memory stamp of where I've got to do this because then it was easy." (Chloe, IG)
Implementing behavioural change as a partnership
Pregnant women in the intervention group stressed the importance of involving partners in implementing behavioural changes. Participants recalled the support they received from their partners and expressed that this support had an important role in reassuring them and helping them to adhere to behavioural changes.
"Pretty supportive… He would just help kind of divert the interaction between him and my son and I if he was upset that I wasn't sharing my drink or snack or whatever it was. He would just help out in those sorts of ways." (Nicole, IG)
Others would have wanted more involvement from their partners, but found it harder to engage them in risk-reducing behaviours.
"I told my partner. That's it. Yes. I think he probably didn't pay much attention to be honest. …, Well, I think, maybe, maybe if I showed him it [the film], he might have paid more attention." (Fiona, IG)
Pregnant women's partners thus have a significant role in reassuring women and facilitating the implementation of, and adherence to, behavioural changes. This may be especially important for establishing a new family 'norm' and routine, and for women's motivation to adopt preventive measures and maintain them throughout pregnancy.
This qualitative study sought to explore the perspectives of participating pregnant women and HCPs towards receiving and providing CMV education in pregnancy, so that barriers and facilitators to incorporating CMV education into routine antenatal care could be better understood. CMV infection is not routinely included as part of antenatal education in the UK; however, pregnant women in our study who were introduced to CMV felt strongly that information about CMV - and ways to reduce the risk of CMV during pregnancy - should be provided to all pregnant women. In contrast, HCPs who were familiar with the CMV antenatal education and had assisted with the trial were largely accepting of CMV education for pregnant women, but expressed some concerns about increasing anxiety in pregnant women, particularly as they felt there was not a clear clinical pathway or screening programme for concerned pregnant women. Pregnant women suggested that information on the required behavioural changes should be presented as risk reduction rather than complete prevention, as this would make the changes more achievable and realistic. Additionally, the support of partners was described as essential to implementing and sustaining change in the family environment. Our findings are in line with previous research highlighting that CMV is not routinely included in antenatal education and that most pregnant women felt frustrated and annoyed that they had not been given the chance to implement changes to reduce the risk of congenital CMV for their unborn child (e.g. ; ; ), and there was unanimous agreement that they wanted information about CMV to be provided to them by HCPs, particularly midwives, as their most trusted resource in antenatal education ( ; ; ). This contrasts with the views expressed by the HCPs who assisted with showing pregnant women the film, namely that midwives often lacked adequate time to provide CMV education within routine care. There is therefore a need for the provision of information about CMV in an accessible and acceptable way that does not require a significant time investment for individual counselling in busy antenatal clinics. Another barrier that healthcare professionals experience in including CMV as part of antenatal education is their own lack of knowledge and awareness of CMV ( ; ). Unsurprisingly, these concerns often translate into an overall lack of self-belief in their own abilities to support pregnant mothers in relation to CMV awareness as well as advice on behavioural changes ( ). In order for midwives to feel equipped and empowered to provide information about CMV to pregnant women, they need to have access to such knowledge themselves. Evidence-based digital antenatal educational films, such as those developed by our RACE-FIT project, or the e-learning training about CMV developed by the Royal College of Midwives (RCM) ( https://www.ilearn.rcm.org.uk/enrol/index.php?id=150 ), have the potential to empower midwives to answer women's questions about CMV with increased confidence. The reluctance of midwives to include CMV in routine antenatal education arises from concerns that this could lead to an increase in anxiety for pregnant women, as there is no routinely recommended treatment for CMV in pregnancy and no licensed vaccine available to prevent CMV.
Similar concerns are also often found in other areas of antenatal care, such as advice relating to weight gain during pregnancy ( ; ), in which midwives show a similar reluctance to have discussions with patients due to concerns about framing the information in a way which is upsetting for their patients, and the overall emotive impact that such knowledge might have. However, our findings show that pregnant women are unanimously keen to be equipped with knowledge about CMV and are motivated to reduce the risks of CMV to their unborn child. Findings like those of ( ) as well as ( ) emphasise the importance of employing psychological health concepts, such as self-efficacy (one's own intrinsic belief that one can successfully carry out a behaviour), in antenatal care. Their results suggested that applying these concepts was an effective method to achieve and maintain successful behavioural change. Often, much responsibility for sharing information is placed upon HCPs, but in line with the concepts of empowerment and self-efficacy, giving women the tools and knowledge to modify behaviours may help them feel more in control to initiate and maintain the changes throughout pregnancy. supports this within CMV research, finding that an increase in self-efficacy led to an increased uptake of CMV risk reduction behaviours. Knowledge is powerful; it allows women to be autonomous and have control over their own CMV risk reduction. Our study has highlighted that behaviour-change messages about CMV should be framed as 'reduction' as opposed to 'prevention', with pregnant women acknowledging that complete prevention is unattainable. This is an important consideration for antenatal care professionals when discussing CMV with pregnant women, especially as pregnant women felt this framing made the measures more realistic and achievable for them. In line with the above, approaching CMV in this way may increase pregnant women's self-efficacy to initiate and maintain these behaviour changes. Additionally, CMV risk reduction measures should be framed using positive messaging, as positively framed messages lead to more positive perceptions of effectiveness and motivate behaviour more than negatively framed messages ( ; ; ). Our study has also highlighted the importance of involving partners and families in antenatal education on CMV, specifically in helping the partner to enact the behaviours required to reduce the risk of CMV themselves and to support and encourage their pregnant partner to do so. Other studies have highlighted that midwives often did not include partners in antenatal conversations, for example about alcohol advice ( ), but research does endorse the involvement of social support and partners ( ; ) for successful behavioural change. As highlighted within this research, pregnant women described how implementing changes as a partnership, such as reducing kissing their child on the lips and both adopting and encouraging 'first to share', made the changes much easier to implement. Although the study was limited to 15 pregnant women and 5 HCPs who had been involved in the trial, it provided rich data highlighting the experiences of participating in a CMV digital antenatal intervention. The aim of qualitative research is not to reach generalisable findings, but to enable a richer understanding of the participants' experiences of the phenomena under investigation.
The lack of ethnic diversity and of a male/paternal perspective, as well as engagement with midwives unfamiliar with CMV, are areas that warrant further investigation.
Congenital CMV (cCMV) is a significant public health challenge, with lifelong implications for affected children and their families. It is therefore vital that the information routinely provided to pregnant women includes discussion of CMV, the most common congenital infection, along with advice about how risks can be reduced, and that midwives receive the training they need to be empowered to provide this aspect of antenatal care. Until such time as we have a licensed vaccine, it is imperative that we take action to reduce the risk of acquiring infection in pregnancy, in order to reduce congenital infection and the associated lifelong consequences of hearing loss and neurodevelopmental delay experienced by around a quarter of infants and children congenitally infected with CMV.
The authors declare that they have no competing interests.
Diagnostic Roles of α-Methylacyl-CoA Racemase (AMACR) Immunohistochemistry in Gastric Dysplasia and Adenocarcinoma
Gastric dysplasia is a neoplastic lesion with no stromal invasion. Gastric dysplasia is classified as low- or high-grade dysplasia based on the degree of cellular abnormality. In a previous study, low- and high-grade dysplasia and adenocarcinoma of the stomach were observed in 1–2%, 0.1–0.2%, and 0.4–0.5% of all participants, respectively. Stomach biopsy samples are the most commonly encountered specimens in daily practice. When adenoma or adenocarcinoma is diagnosed, further treatment is required. Diagnosis can be difficult when only a small amount of the lesion is present in the specimen. Moreover, non-neoplastic changes such as inflammation and regeneration can exhibit cellular atypia and mitotic figures. Thus, differentiating between non-neoplastic changes and dysplasia is crucial in daily practice. Distinguishing between low- and high-grade dysplasia also presents challenges. Furthermore, achieving agreement between pathologists in diagnosing low- and high-grade dysplasia is difficult. While immunohistochemistry may offer assistance, only a few markers have demonstrated a definitive diagnostic value in differentiating gastric lesions.
α-Methylacyl-CoA racemase (AMACR) is a highly specific marker for prostate cancer. AMACR was found to be overexpressed in primary and metastatic prostate cancer and high-grade intraepithelial neoplasms, but not in normal prostate tissue. Described as a cytoplasmic protein, AMACR plays a significant role in the oxidation of branched-chain fatty acids and their derivatives. Jiang et al. investigated the expression of AMACR in various normal tissues and malignant tumors, including the stomach. AMACR is expressed in 25% and 75% of gastric and colorectal adenocarcinomas, respectively. Interestingly, AMACR was also found to be overexpressed in some normal tissues, including hepatocytes, bronchi, gallbladder epithelial cells, and renal tubules. In contrast, other normal human tissues, including the stomach, showed no expression. Well- and moderately differentiated colon cancers are mostly classified as AMACR positive.
Researchers have investigated the role of AMACR in the differentiation of gastric lesions. The expression rate of AMACR varies from 51.5% to 62.9% in gastric adenocarcinoma. Similarly, the expression rate of AMACR in gastric dysplasia ranges from 43.5% to 83.3%. These findings suggest that AMACR alone lacks a significant value in the differential diagnosis of gastric lesions. AMACR expression has also been identified in dysplasia associated with Barrett's esophagus and inflammatory bowel diseases. Notably, there is a difference in AMACR expression between low- and high-grade dysplasia in Barrett's esophagus. Moreover, AMACR expression in the stomach is significantly higher in high-grade dysplasia than in low-grade dysplasia (76.0% vs. 4.5%). This indicates that AMACR can potentially aid in the differential diagnosis of gastric lesions. However, identifying meaningful differences between gastric lesions based on existing studies remains challenging. This study aimed to elucidate the diagnostic role of AMACR immunohistochemistry in gastric dysplasia and adenocarcinoma. In this study, we focused on three main aspects. First, the expression rate of AMACR in various gastric lesions was examined.
Second, we compared the AMACR expression patterns among gastric lesions. Third, the presence of AMACR expression in normal mucosa adjacent to the lesions was analyzed.
2.1. Patients and Tissue Array Methods
The present study included 79 patients who underwent endoscopic submucosal dissection or gastrectomy at the Eulji University Medical Center from 1 April 2021 to 31 August 2023. The diagnoses were low-grade dysplasia (n = 20), high-grade dysplasia (n = 19), and adenocarcinoma (n = 40). We reviewed the pathological records and hematoxylin and eosin (H&E) slides to gather relevant data, including age, sex, tumor size, and diagnosis. The study protocol was reviewed and approved by the Institutional Review Board of the Eulji University Hospital (approval number: UEMC 2023-10-011) on 13 November 2023.
2.2. Immunohistochemical Staining
For immunohistochemistry, 4 μm thick sections were cut from each paraffin block, deparaffinized, and dehydrated. Immunohistochemical staining was conducted following the compact polymer method using a VENTANA benchmark ULTRA autostainer (Ventana Medical Systems, Inc., Tucson, AZ, USA). The sections were then incubated with an anti-AMACR antibody (clone 13H4; Dako, Carpinteria, CA, USA). Visualization was performed using an OPTIVIEW universal 3,3′-diaminobenzidine kit, according to the manufacturer's instructions (Ventana Medical Systems, Inc.). To ensure the specificity of the antibody reaction, negative control staining without the primary antibody was also conducted. All immunostained sections were lightly counterstained with Mayer's hematoxylin.
2.3. Evaluation of Immunohistochemistry
AMACR immunoreactivity was detected in the cytoplasm. The intensity of protein expression in immunohistochemically stained samples was scored from 0 to 3 (0 = negative, 1 = weak, 2 = moderate, and 3 = strong). The percentage of positively stained cells was converted to a proportion score ranging from 0 to 4 (0 = negative; 1 = ≤25%; 2 = 26–50%; 3 = 51–75%; and 4 = 76–100%). The immunoreactive score (IRS) was then calculated by multiplying the staining intensity score by the proportion score of positively stained cells. Based on the IRS, immunohistochemical staining was classified as negative (IRS 0–4) or positive (IRS > 4). The rate of loss of AMACR expression was assessed by calculating the proportion of regions with no AMACR expression relative to the entire lesion. To assess heterogeneity, AMACR expression in a hotspot (area = 0.785 mm²) was evaluated; the hotspot was selected as the area with the highest level of immunoreactivity after scanning the immunostained slide at medium power (×100). In addition, AMACR cytoplasmic expression was subdivided to examine the luminal pattern separately. Two researchers (J.S. Pyo and D.W. Kang) independently evaluated the immunohistochemical results under a light microscope. In cases of discrepancy, the results were reviewed and the two researchers reached a consensus.
2.4. Statistical Analysis
Statistical analyses were conducted using SPSS software version 22.0 (IBM Co., Chicago, IL, USA). The χ² test was used to assess the significance of the correlation between AMACR expression and sex. Correlations between the luminal and cytoplasmic expression patterns of AMACR were evaluated using Fisher's exact test. Comparisons between AMACR expression and age or tumor size were analyzed using a two-tailed Student's t-test.
The negative rates of AMACR expression between gastric lesions were also evaluated using a two-tailed Student's t-test and the Kruskal–Wallis test. The results were considered statistically significant at p < 0.05.
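For illustration, the following minimal Python sketch implements the scoring scheme described in Section 2.3 (intensity score 0–3 multiplied by proportion score 0–4, positive when IRS > 4); the function names and the example values are illustrative and not part of the study.

```python
def proportion_score(percent_positive: float) -> int:
    """Map the percentage of positively stained cells to the 0-4 proportion score."""
    if percent_positive <= 0:
        return 0
    if percent_positive <= 25:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 75:
        return 3
    return 4

def immunoreactive_score(intensity: int, percent_positive: float) -> int:
    """IRS = intensity score (0-3) x proportion score (0-4), yielding 0-12."""
    assert 0 <= intensity <= 3, "intensity score must be 0-3"
    return intensity * proportion_score(percent_positive)

def is_amacr_positive(intensity: int, percent_positive: float) -> bool:
    """Positive when IRS > 4; negative when IRS 0-4 (cut-off used in Section 2.3)."""
    return immunoreactive_score(intensity, percent_positive) > 4

# Example: moderate intensity (2) in 60% of cells -> IRS = 2 * 3 = 6 -> positive
print(is_amacr_positive(2, 60.0))  # True
```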
3.1. AMACR Expression in Gastric Dysplasia and Adenocarcinoma
AMACR immunohistochemical staining was conducted, and representative images are shown in . AMACR expression was observed in 26 of 39 cases of gastric dysplasia (66.7%) and in 17 of 40 cases of gastric adenocarcinoma (42.5%; ). In the gastric dysplasia cases, AMACR expression was evaluated separately for low- and high-grade dysplasia: AMACR expression was noted in 16 of 20 low-grade dysplasia cases (80.0%) and in 10 of 19 high-grade dysplasia cases (52.6%). In addition, AMACR cytoplasmic expression was subdivided to examine the luminal pattern separately. Upon further analysis, the luminal pattern of AMACR expression was found to be more prevalent in low-grade dysplasia than in high-grade dysplasia or gastric adenocarcinoma (p < 0.001; ). The luminal pattern was identified in 14 of the 16 positive cases with low-grade dysplasia (87.5%), but in only 30.0% and 6.3% of high-grade dysplasia and adenocarcinoma cases, respectively. There were significant differences in the AMACR expression patterns between low-grade and high-grade dysplasia and between low-grade dysplasia and adenocarcinoma (p = 0.003 and p < 0.001, respectively). However, there was no significant difference between high-grade dysplasia and adenocarcinoma (p = 0.197) according to Fisher's exact test.
Next, the correlations between AMACR expression and clinicopathological parameters, such as age, sex, and tumor size, were evaluated. In low-grade dysplasia, AMACR expression was significantly correlated with younger age and smaller tumor size (p = 0.041 and 0.025, respectively; ). However, there was no significant correlation between AMACR expression and sex in the low-grade dysplasia group (p = 0.530). In addition, there was no significant correlation between AMACR expression and these clinicopathological parameters in the high-grade dysplasia and adenocarcinoma groups.
Furthermore, AMACR expression in hotspots was assessed. The positivity rate of AMACR expression in the hotspots was higher than that in the overall area across all lesion types. In low-grade dysplasia cases, the AMACR expression rate reached 100% in hotspot areas. However, even in hotspots, the negative rates of AMACR expression in high-grade dysplasia and adenocarcinoma were 37.7% and 35.0%, respectively.
3.2. Loss of AMACR Expression in Gastric Dysplasia and Adenocarcinoma
Next, we investigated the negative rate of AMACR expression in gastric dysplasia and adenocarcinoma to evaluate the heterogeneity of AMACR expression. The negative rates of AMACR expression in the overall lesions were 15.1 ± 23.9%, 49.0 ± 29.9%, and 59.0 ± 32.2% in the low-grade dysplasia, high-grade dysplasia, and gastric adenocarcinoma groups, respectively. The negative rates differed significantly between the three categories in the Kruskal–Wallis test (p < 0.001). The negative rate in low-grade dysplasia was significantly lower than that in high-grade dysplasia and gastric adenocarcinoma (p < 0.001 and p < 0.001, respectively). However, there was no significant difference in the negative rate between high-grade dysplasia and adenocarcinoma (p = 0.256).
3.3. AMACR Expression in Normal Mucosa Adjacent to the Lesion
AMACR expression was evaluated in the normal mucosa adjacent to the lesion.
3.3. AMACR Expression in Normal Mucosa Adjacent to the Lesion AMACR expression was evaluated in the normal mucosa adjacent to the lesion. Detailed analyses were conducted based on the distance from the lesion, categorizing distances as within 2 mm and beyond 2 mm. AMACR expression was observed in 19 of 79 cases within 2 mm (24.1%) and in 14 of 79 cases more than 2 mm away from the lesion (17.7%) . However, no significant correlation was found between AMACR expression in gastric lesions and the adjacent normal mucosa. In addition, AMACR expression in the normal mucosa showed no significant difference with the distance from the lesions.
AMACR has been widely used since it was first reported as a novel molecular marker for prostate carcinoma . The diagnostic and prognostic applications of AMACR in various cancers, including prostate cancer, have been explored; however, its utility remains best established in prostate carcinoma. In daily practice, the differential diagnosis of various gastric lesions, including reactive changes, dysplasia, and adenocarcinoma, is often conducted through the analysis of small biopsy specimens. However, distinguishing among these conditions in such specimens can be challenging. In addition, there are no biomarkers, such as CEA or CA19-9, that can help differentiate adenocarcinoma . Ancillary tests, including immunohistochemistry, can be helpful in the differential diagnosis. Although immunohistochemical markers such as p53 and Ki-67 have been used, the distinction between gastric lesions can sometimes remain ambiguous. Attempts have therefore been made to develop diagnostic markers, including AMACR, for application in daily practice . The present study aimed to evaluate the potential diagnostic role of AMACR immunohistochemistry in gastric lesions. In addition, the heterogeneity of AMACR expression within gastric lesions was analyzed. The results of our study are as follows: (1) Distinct AMACR expression patterns were observed across different types of gastric lesions. (2) Low-grade dysplasia predominantly exhibited a luminal expression pattern of AMACR. (3) AMACR expression demonstrated greater heterogeneity in high-grade dysplasia and adenocarcinoma than in low-grade dysplasia. (4) The rate of AMACR expression loss was significantly lower in low-grade dysplasia than in high-grade dysplasia and adenocarcinoma. The present study evaluated AMACR expression in a hotspot (area = 0.785 mm 2 ) in the normal mucosa adjacent to the lesion. Given the low positivity rate in normal mucosa, a hotspot evaluation was conducted to assess the heterogeneity and extent of expression. This evaluation was based on proximity to the lesion, categorized as within 2 mm and beyond 2 mm. The AMACR-positive rates were 24.1% within 2 mm of the lesions and 17.7% beyond 2 mm. These findings contrast with those of previous reports , which indicated that AMACR was not expressed in normal gastric mucosa. However, Lee found a 4.5% expression rate of AMACR in the non-neoplastic epithelium , and Cho et al. observed focal expression of AMACR in gastric mucosa with intestinal metaplasia in 7.7% of cases . This discrepancy may be attributable to differences in the interpretation of AMACR expression.
Nonetheless, our results provide valuable insights into the understanding of AMACR expression in normal gastric mucosa. Cho et al. defined AMACR expression as focal or diffuse, with focal positivity defined as staining in between 5% and 50% of the lesion . However, if AMACR expression is heterogeneous, it may be difficult to apply Cho’s criteria to small samples . The significance of AMACR expression in small areas is particularly relevant in the analysis of biopsy specimens. By adopting a hotspot evaluation method, our findings offer practical insights for analyzing small biopsied tissues. We also examined AMACR expression across the entire normal tissue surrounding the lesion and found no positivity in the whole sections. Our approach thus differs from that of previous studies because we focused on hotspot evaluation. Conversely, in normal colonic mucosa, AMACR expression was identified in 74.5% of cases , although these studies did not specify positivity criteria based on distribution. Our results suggest that AMACR expression in biopsy samples should be interpreted with caution. Cho et al. did not categorize adenomas based on low- or high-grade dysplasia , and in Lee’s study, differentiation of results by grade was likewise absent . In contrast with our findings, Huang et al. reported that low-grade dysplasia showed weakly positive AMACR expression in only 1 out of 20 cases . The most striking difference from previous results is that AMACR positivity in our study was 80% in low-grade dysplasia. Huang et al. observed positive AMACR expression in high-grade dysplasia in 16 out of 24 cases (64%), aligning closely with our results . Previous studies predominantly focused on the cytoplasmic pattern of AMACR expression. In contrast, our evaluation included both the luminal and cytoplasmic expression patterns, as described above. Notably, we found that the luminal expression pattern is predominant in low-grade dysplasia. Although Cho et al. described the luminal expression pattern, they did not provide specific results . Our findings, highlighting the prevalence of luminal expression in low-grade dysplasia, offer a novel perspective that could facilitate differentiation between low- and high-grade dysplasia. To determine the heterogeneity of AMACR expression in each lesion, we analyzed both the positivity of hotspots and the negativity rate of lesions. In previous studies, AMACR positivity was quantified based on the percentage of positive staining across the whole lesion ; in these reports, the negative rate of AMACR expression was not clear, and the heterogeneity of AMACR expression could not be inferred from the results. This can be a major challenge when interpreting AMACR positivity in biopsy samples. Our approach, in contrast, allowed us to compare hotspot evaluations with overall expression patterns, providing a more nuanced understanding of the AMACR distribution. For low-grade dysplasia, high-grade dysplasia, and adenocarcinoma, there were four (20.0%), two (10.5%), and nine (22.5%) cases, respectively, that were negative on whole-lesion evaluation but positive in hotspots. Through hotspot analysis, the positivity rates were found to be 100.0% in low-grade dysplasia, 62.3% in high-grade dysplasia, and 65.0% in adenocarcinoma. These findings suggest that the diagnostic value of AMACR immunohistochemistry may be limited in biopsy samples, which capture only part of a lesion.
Furthermore, we examined the negative rate of AMACR expression by calculating and comparing averages across the different lesion types. In high-grade dysplasia and adenocarcinoma, the negative rates were notably higher (49.0 ± 29.9% and 59.0 ± 32.2%, respectively) than in low-grade dysplasia (15.1 ± 23.9%). This disparity suggests that loss of AMACR expression is less prevalent in low-grade dysplasia, making it a rare finding in biopsy specimens from such lesions. Conversely, approximately half of high-grade dysplasia or adenocarcinoma cases may not exhibit AMACR expression, highlighting the potential variability in expression patterns between biopsied and resected specimens. Moreover, distinguishing expression patterns, such as the luminal versus the cytoplasmic pattern, is crucial in the diagnostic work-up of biopsied specimens. The negative rate of AMACR expression in colonic adenomas is approximately 25% , whereas another study reported positive results in 63.7% of cases . Brahim et al. analyzed AMACR expression in colonic adenomas and categorized cases into low- and high-grade dysplasia . In their study, five out of six cases of low-grade dysplasia exhibited positive AMACR expression ; however, their analysis included only one case of high-grade dysplasia . Brahim’s report included only a small number of cases because they were related to inflammatory bowel disease. Furthermore, AMACR expression has also been studied in dysplasia associated with Barrett’s esophagus . However, these data are not sufficient to establish the diagnostic value of AMACR expression in gastric dysplasia. Several studies have highlighted the diagnostic significance of AMACR in the stomach . Moreover, the clinicopathological significance and prognostic relevance of AMACR in gastric cancer have also been documented. In Lee’s study, AMACR expression was associated with tumor depth and TNM stage, with a higher positive rate in early gastric cancer than in advanced stages . Because this study included a small number of patients, a large-scale study will be needed for a detailed assessment. Notably, AMACR expression is lowest in Stage IV gastric cancers . Morz et al. demonstrated that AMACR expression in gastric adenocarcinoma is correlated with poor disease-free survival . Lin et al. identified a similar correlation between AMACR expression and poor prognosis in colorectal cancer . Conversely, Shi et al. reported no significant correlation between AMACR expression and prognosis in colorectal cancer . These divergent findings underscore the need for further research to elucidate the prognostic relevance of AMACR in the gastrointestinal tract, including the stomach. In conclusion, AMACR is a useful diagnostic marker for differentiating low-grade dysplasia from high-grade dysplasia and gastric adenocarcinoma. However, AMACR positivity alone has limited diagnostic value in differentiating the various gastric lesions; the luminal versus cytoplasmic expression pattern and the extent of expression loss play crucial roles in this differentiation process.
Revealing the efficacy-toxicity relationship of Fuzi in treating rheumatoid arthritis by systems pharmacology | 89b27e5f-30a1-4425-a178-e8c49244ba0a | 8630009 | Pharmacology[mh] | With the usage history over hundreds of or even thousands of years, many complementary and alternative medicines harbor rich clinical experiences. Because of the high safety and rich clinical experiences, the use of complementary and alternative medicines such as traditional Chinese medicines has increased globally over the past decades. Herbal medicines are an important resource of complementary and alternative medicines, and they play important roles in the current medical system. Reports published by the World Health Organization (WHO) showed that about 75–80% of the world population, especially those in developing countries, rely primarily on herbal medicines for healthcare . In addition, compounds isolated from herbal medicines are important resources of lead compounds in drug discovery area. For example, it is reported that 63% of anticancer small molecular drugs approved by the US Food and Drug Administration (FDA) are directly or indirectly come from herbal medicines . Recently, herbal medicines have played significant roles in fighting against COVID-19 before the wide application of vaccines , . Predictably, herbal medicines will play more and more important roles in fighting against diseases. Even though herbal medicines have been demonstrated to be effective on many diseases, the toxicity caused by herbal medicines has become a global issue. For example, aristolochic acids are a group of herbal compounds that widely exist in the plant genus Aristolochia and Asarum . Since the 1990s, aristolochic acids-induced nephropathy and upper tract urothelial carcinoma have been reported in countries such as Belgium, UK, France, Japan, and China . In 2012, the WHO cancer agency the International Agency for Research on Cancer (IARC) classified aristolochic acids as group 1 human carcinogens . Gradually, the use of aristolochic acids-containing herbal medicines is forbidden in many countries. Except for the herbal medicines that contain aristolochic acids, there are many herbal medicines with strong toxicity that are still used in the clinic. This is because the toxicity of these herbal medicines can be eliminated with careful use. However, the reason an herbal medicine can lead to both efficacy and toxicity is not fully known. Among all herbal medicines, Aconiti Lateralis Radix Praeparata (Fuzi) is the typical herbal medicine with conspicuous efficacy and strong toxicity. Fuzi, also known as Chinese wolfsbane, Chinese aconite, monkshood, Kyeong-Po Buja, and Bushi, is the processed daughter root of Aconitum carmichaeli Debx. As one of the most well-known herbal medicines, Fuzi has been extensively used for over 2 thousand years in clinics to treat diseases . In the clinic, Fuzi is commonly used to treat a wide variety of diseases, such as rheumatoid arthritis, acute myocardial infarction, low blood pressure, coronary heart disease, chronic heart failure, tumors, skin wounds, depression, diarrhea, gastroenteritis, and edema . Although Fuzi has shown wide and promising therapeutic effects, its toxicity has been recognized by ancient people and has attracted widespread attention all over the world in the past decades. From 2001 to 2010, 5000 cases of acute toxicity of Fuzi were recorded in countries such as China, Japan, and Germany . 
From 2004 to 2015 in mainland China, at least 40 cases of fatal Fuzi poisoning with 53 victims were recorded . The main toxicities of Fuzi include cardiotoxicity and neurotoxicity, and the typical symptoms of Fuzi poisoning include arrhythmia, palpitation, hypotension, shock, dizziness, coma, vomiting, and nausea . Although the potent therapeutic effects and strong toxicity of Fuzi are well recognized, it is still not fully understood which compounds are responsible for the efficacy and toxicity, or how the mechanisms by which Fuzi induces efficacy differ from those by which it induces toxicity. Unlike synthesized chemical drugs that contain only one or two chemical compounds, an herbal medicine usually contains hundreds or even thousands of different compounds that act holistically to treat diseases . It is therefore difficult for the traditional wet-experiment-based approach to identify all the bioactive compounds and their mechanisms. Fortunately, with the development of systems biology, systems pharmacology has emerged as an efficient tool for studying the mechanisms of herbal medicines in the context of the complex biological system of the human body . For this reason, it has been extensively and successfully used to study the mechanisms of herbal medicines such as Morinda officinalis . In this work, we adopted a standard systems pharmacology approach to screen the compounds of Fuzi responsible for its efficacy in treating rheumatoid arthritis and for its toxicity, respectively (Fig. ). Meanwhile, the mechanisms of efficacy and toxicity were investigated, and the mechanistic differences between efficacy and toxicity were compared. This work can be helpful for a comprehensive understanding of the efficacy-toxicity relationship of toxic herbal medicines in treating diseases.
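As a preview of the screening step detailed in the Methods below, the following minimal Python sketch filters a compound table by the OB and DL thresholds used in this study; the input file and column names are hypothetical.

```python
# Illustrative sketch of ADME-based compound screening (OB >= 30, DL >= 0.18).
# The input file and column names are hypothetical, not from TCMSP itself.
import pandas as pd

compounds = pd.read_csv("fuzi_compounds.csv")  # columns: name, OB, DL

passed = compounds[(compounds["OB"] >= 30) & (compounds["DL"] >= 0.18)]

# Compounds with reported pharmacological activity are kept even if they
# fail the thresholds, as done in this study.
rescued = {"myristic acid", "mesaconitine"}
kept = compounds[compounds["name"].isin(rescued)]

bioactive = pd.concat([passed, kept]).drop_duplicates(subset="name")
print(f"{len(bioactive)} candidate bioactive compounds retained")
```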
Chemical compounds retrieving and active compounds screening of Fuzi All the compounds and molecular structures of Fuzi were retrieved and downloaded from the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP, http://ibts.hkbu.edu.hk/LSP/tcmsp.php ), a systems pharmacology platform specialized for traditional Chinese medicines . Considering that glycosides can be converted into their aglycones by gut microbiota in the intestinal tract , the corresponding aglycones were also included in the compound library of Fuzi. To screen the potential bioactive compounds of Fuzi, two important parameters associated with drug absorption, distribution, metabolism, and excretion were used: oral bioavailability (OB) and drug-likeness (DL). The threshold for screening bioactive compounds was set as OB ≥ 30 and DL ≥ 0.18 . Because some compounds that did not pass the threshold might show significant pharmacological effects as well, compounds reported to exhibit strong pharmacological effects yet not meeting the OB or DL threshold were also included. Target screening To screen all the targets of the bioactive compounds in Fuzi, a chemometric method and an information integration approach were used. First, the screened bioactive compounds were submitted to various on-line servers and databases, including the Bioinformatics Analysis Tool for Molecular Mechanism of Traditional Chinese Medicine (BATMAN-TCM, http://bionet.ncpsb.org/batman-tcm/index.php/Home/Index/index ) , Similarity Ensemble Approach (SEA, http://sea.bkslab.org ) , TCMSP ( http://ibts.hkbu.edu.hk/LSP/tcmsp.php ) , Therapeutic Targets Database (TTD, http://bidd.nus.edu.sg/group/ttd/ ) , PhID ( http://phid.ditad.org/MetaNet/ ) , and Swiss Target Prediction (STP, http://www.swisstargetprediction.ch/ ) . It is noteworthy that only targets related to Homo sapiens were retained for further analysis. Where applicable, P-values less than 0.05 were used for target screening in the on-line servers and databases. The other parameters for target screening were left at their default settings; for example, a score cutoff of 20 was used for BATMAN-TCM. Targets validation Molecular docking was performed using the crystal structures of ADRA2A, BCHE, CHRM2, and KCNH2 from the Protein Data Bank (PDB IDs: 6KUX, 4BDS, 5ZKC, and 5VA2, respectively). The compound structures of coryneine, denudatine, norcoclaurine, and songorine were downloaded from TCMSP. Energy minimization of the compound structures was performed with Chem3D software, and the structures were then imported into AutoDockTools for adding hydrogens and computing Gasteiger charges. The crystal structures were separated from their original ligands and prepared in AutoDockTools by removing water, adding hydrogens, and computing Gasteiger charges. The virtual docking was implemented in AutoDock Vina. The best docking poses were predicted based on the docked free energy and inhibition constant. The 3D binding models were rendered with PyMOL and the 2D interaction diagrams with LigPlus. Gene ontology and protein–protein interaction analysis To obtain the biological, molecular, and cellular functions of the target genes, gene ontology (GO) analysis covering the biological process (BP), cellular component (CC), and molecular function (MF) categories was performed. The on-line web tool OmicShare was used to carry out the GO analysis ( https://www.omicshare.com/tools/ ).
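The enrichment statistics behind web tools such as OmicShare and DAVID can be illustrated with a short hypergeometric-test sketch in Python; all counts below except the 905 submitted targets are hypothetical.

```python
# Minimal sketch of the hypergeometric test that underlies GO/pathway
# enrichment tools such as OmicShare or DAVID (counts are hypothetical).
from scipy.stats import hypergeom

N = 20000   # background genes
K = 300     # genes annotated to a given GO term / pathway
n = 905     # Fuzi target genes submitted
k = 35      # Fuzi targets annotated to the term (hypothetical)

# P(X >= k): probability of drawing at least k annotated genes by chance.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.3g}")
```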
The protein–protein interaction (PPI) network of the targets was acquired from the STRING database ( https://string-db.org/ , version 11.0). For visualization of the PPI network, the line color indicated the type of interaction evidence, the active interaction sources were restricted to experiments, and disconnected nodes were excluded from the final network. Screening of efficacy and toxicity-related targets and mechanisms The genes and proteins related to rheumatoid arthritis were retrieved from the Comparative Toxicogenomics Database (CTD, http://ctdbase.org/ ) and DrugBank ( https://go.drugbank.com/ ) to obtain the efficacy-related targets , and only genes with direct evidence for rheumatoid arthritis were retained. The efficacy-related targets of Fuzi are those genes that belong to the targets of Fuzi and are directly linked to rheumatoid arthritis. To obtain the mechanism of Fuzi in treating rheumatoid arthritis, the efficacy-related targets of Fuzi were submitted to the web server Database for Annotation, Visualization and Integrated Discovery (DAVID, https://david.ncifcrf.gov/home.jsp ), and only non-disease pathways with P < 0.05 were retained . To obtain the targets and mechanisms underlying the toxicity of Fuzi, all the target genes were submitted to DAVID. Because the main toxicities of Fuzi are cardiotoxicity and neurotoxicity, and ion channels are involved in the onset of toxicity , the enriched pathways calcium signaling, adrenergic signaling in cardiomyocytes, neuroactive ligand-receptor interaction, and dopaminergic synapse were considered the toxic pathways of Fuzi, and all the target genes on those pathways were retained. Network construction To visualize the associations among the bioactive compounds, disease-related potential targets, and toxicity-related targets and pathways, three networks were constructed: a compound-target network for all active compounds and targets of Fuzi, a compound-target-pathway network for the treatment of rheumatoid arthritis by Fuzi, and a compound-target-pathway network for the toxicity of Fuzi. To visualize the compound and mechanism differences between efficacy and toxicity, efficacy/toxicity-compound and efficacy/toxicity-target networks were constructed. Cytoscape (version 3.8.2) was used for the visualization of the networks.
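A minimal Python sketch of the network-construction step is given below; it builds a small compound–target–pathway graph with NetworkX and writes a GraphML file that Cytoscape can import. The edges shown are a tiny illustrative subset, not the full networks of this study.

```python
# Illustrative sketch of building a compound-target-pathway network for import
# into Cytoscape; the edge lists here are small hypothetical examples.
import networkx as nx

compound_target = [("hypaconitine", "ADRA2A"), ("songorine", "KCNH2"),
                   ("myristic acid", "PTGS2")]
target_pathway = [("ADRA2A", "adrenergic signaling in cardiomyocytes"),
                  ("KCNH2", "adrenergic signaling in cardiomyocytes"),
                  ("PTGS2", "TNF signaling pathway")]

g = nx.Graph()
for compound, target in compound_target:
    g.add_node(compound, kind="compound")
    g.add_node(target, kind="target")
    g.add_edge(compound, target)
for target, pathway in target_pathway:
    g.add_node(pathway, kind="pathway")
    g.add_edge(target, pathway)

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
nx.write_graphml(g, "fuzi_network.graphml")  # Cytoscape can import GraphML
```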
Bioactive compounds and putative targets of Fuzi After screening by OB and DL, a total of 22 bioactive compounds were obtained. In addition, compounds that did not meet the OB and DL criteria but were reported to show pharmacological effects, including myristic acid and mesaconitine, were also retained. As a result, a total of 32 bioactive compounds were screened (Table ). These compounds mainly belong to the alkaloids. Target fishing showed that these compounds can act on 905 targets, all of which are listed in Supplementary Table . To help visualize the relationships between the bioactive compounds and their corresponding targets, a compound-target network was constructed (Supplementary Fig. ). Molecular docking To verify the reliability of the screened targets, molecular docking was used to estimate the binding ability between compounds of Fuzi and their targets and to visualize the compound-target binding interactions. The bioactive ingredients of Fuzi exhibited strong binding towards the predicted targets (Fig. ). The results showed that coryneine can bind to amino acid residues Thr118 and Ser200 in ADRA2A by hydrogen bonds (Fig. A). Denudatine can bind to amino acid residue Ser198 in BCHE by hydrogen bonds (Fig. B). Norcoclaurine can bind to amino acid residues Trp155 and Tyr426 in CHRM2 by hydrogen bonds (Fig. C). Songorine can bind to amino acid residues Thr708 and Ala704 in KCNH2 by hydrogen bonds (Fig. D). The binding energies of these four interactions were −6.4, −9.5, −9.0, and −6.6 kcal/mol, respectively, supporting the reliability of the targets in our study. PPI network construction and GO analysis To visualize the properties of the targets, a PPI network was first constructed with the help of the STRING database. The results showed that a total of 902 targets were integrated into the whole PPI network. The number of edges was 2095, and the average node degree was 4.64 (Fig. ). Notably, the PPI network contained several small subnetworks; for example, the three targets SHBG, GABBR1, and GABBR2 formed one such subnetwork. GO analysis was then performed with the help of OmicShare (Fig. ). The results showed that the top 5 biological processes include cellular process, biological regulation, response to stimulus, metabolic process, and regulation of biological process. The top 5 cellular components include cell, cell part, organelle, membrane, and organelle part. The top 5 molecular functions include binding, catalytic activity, molecular transducer activity, transporter activity, and molecular function regulator. Efficacy mechanism of Fuzi To explore the mechanism of Fuzi in treating rheumatoid arthritis, the targets were further screened against DrugBank and CTD. The screened efficacy-associated targets were then subjected to DAVID pathway analysis. As a result, a total of 27 pathways were enriched, including the TNF signaling pathway, serotonergic synapse, arachidonic acid metabolism, adipocytokine signaling pathway, linoleic acid metabolism, PI3K-Akt signaling pathway, and steroid hormone biosynthesis. Many of these pathways have been demonstrated to be involved in rheumatoid arthritis; for example, modulation of arachidonic acid metabolism and of linoleic acid metabolism are regarded as two possible approaches to treating arthritis , . To help visualize the mechanism of Fuzi, a compound-target-pathway network was constructed (Fig. ). This network contains 25 bioactive compounds, 61 targets, and 27 pathways.
It is noteworthy that in this network, one compound can act on multiple targets, and each pathway contains multiple targets. For example, myristic acid (M1) can act on TLR2, UGT2B7, RXRA, PTGS1, PTGS2, PPARG, PPARA, PLA2G1B, PLA2G1A, HSD11B1, GSTM1, GSTK1, EDNRA, and ALOX12. The TNF pathway plays an important role in the development of rheumatoid arthritis and can serve as a target pathway for its treatment . Fuzi can act on multiple targets in the TNF pathway (Fig. ). These results showed that Fuzi exerts its therapeutic effects by influencing multiple targets and pathways, exemplifying the property that herbal medicines act in a holistic way to treat diseases. Toxicity mechanism of Fuzi To fully explore the toxic mechanism of Fuzi, all the targets of Fuzi were subjected to KEGG analysis. Because the main toxicities of Fuzi are cardiotoxicity and neurotoxicity, and disturbance of ion channels such as the voltage-gated Na + channel is responsible for the toxicity , calcium signaling, adrenergic signaling in cardiomyocytes, neuroactive ligand-receptor interaction, and dopaminergic synapse were considered the toxic pathways of Fuzi. To help visualize the comprehensive toxic mechanism of Fuzi, a compound-target-pathway network was constructed (Fig. ). In this network, 32 compounds can act on 187 targets, and these targets map onto the 4 pathways. It should be noted that in this network, one compound can act on multiple targets, and each pathway contains multiple targets. For example, hypaconitine (M23) can act on TBXA2R, SLC6A3, OPRM1, OPRL1, OPRK1, OPRD1, MTNR1B, MLNR, MAOA, HTR1A, HCRTR2, HCRTR1, DRD5, DRD4, DRD3, DRD2, DRD1, CHRNB4, CHRNA3, CHRM4, ADRB3, ADRB2, ADRA2C, ADRA2A, ADRA1D, ADRA1B, ADRA1A, and ADCY5. Similarly, multiple targets can act on the same pathway, such as adrenergic signaling in cardiomyocytes (Fig. ). Notably, disturbance of the flow of ions including Na + , Ca 2+ , and K + is the main cause of the toxicity of Fuzi . Here we showed that Fuzi can act on multiple targets that directly modulate the flow of Na + , Ca 2+ , and K + , which supports the reliability of our study. The compound and target differences for the efficacy and toxicity of Fuzi Since Fuzi is a typical Chinese herbal medicine with salient efficacy and strong toxicity, we then explored the mechanism differences between efficacy and toxicity. First, we compared the targets involved in inducing toxicity with the targets mediating the therapeutic effects on rheumatoid arthritis (Fig. A). The results showed that 51 targets contribute only to the efficacy in treating rheumatoid arthritis, 189 targets are responsible only for the toxicity, and 10 targets are involved in both the toxicity and efficacy of Fuzi. Then, we compared the compounds involved in inducing toxicity with the compounds contributing to the therapeutic effects on rheumatoid arthritis (Fig. B). The results showed that 7 compounds contribute only to the toxicity, the remaining 25 compounds are involved in both the toxicity and efficacy of Fuzi, and no compound is associated only with the efficacy. These results indicate that the compounds contributing to the efficacy of Fuzi are also involved in its toxicity.
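The overlap analysis itself reduces to simple set operations, as the following Python sketch shows; the sets here are tiny hypothetical stand-ins for the full efficacy- and toxicity-related target lists.

```python
# Minimal sketch of the efficacy/toxicity overlap analysis; the sets here are
# tiny hypothetical stand-ins for the full target and compound lists.
efficacy_targets = {"TNF", "IL6", "PTGS2", "ADRA2A"}
toxicity_targets = {"ADRA2A", "KCNH2", "DRD2", "CHRM4"}

shared = efficacy_targets & toxicity_targets
efficacy_only = efficacy_targets - toxicity_targets
toxicity_only = toxicity_targets - efficacy_targets

print(f"efficacy-only: {len(efficacy_only)}, "
      f"toxicity-only: {len(toxicity_only)}, shared: {len(shared)}")
```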
As a typical herbal medicine with strong toxicity and obvious efficacy against rheumatoid arthritis, Fuzi is present in 13.20% of 500 well-known prescriptions used in clinical practice . Although Fuzi has long been a focus of research, the mechanistic relationship between its toxicity and efficacy has remained incompletely understood. Using the efficacy in treating rheumatoid arthritis as an example, we adopted a systems pharmacology approach to screen the bioactive compounds and to identify the potential targets. We found that one compound can act on multiple targets and that different compounds can act on the same target, typifying the property of herbal medicines of acting in a holistic way to treat diseases. We also found that 25 bioactive compounds can act on 61 targets and 27 pathways to treat rheumatoid arthritis, and that 32 bioactive compounds can act on 187 targets and 4 pathways to induce toxicity. Rheumatoid arthritis is a chronic, inflammatory autoimmune disease that causes symmetrical polyarthritis of large and small joints. In this disease, cytokines function in a network of overlapping, synergistic, antagonistic, and inhibitory ways to mediate the development and progression of the disease . Because of the importance of cytokines, efforts have been made to develop cytokine-targeted therapies, such as therapies that target the important proinflammatory cytokines TNF-α, IL-1, IL-6, IL-17, IL-20, IL-21, and IL-23 . In our study, we found that Fuzi can target proinflammatory cytokine-associated genes, including TNF and IL6, to ameliorate rheumatoid arthritis. In addition, Fuzi can target cytokine-associated pathways such as the TNF signaling pathway and the NF-kappa B signaling pathway. These results indicate that modulation of the inflammatory state is one of the main mechanisms by which Fuzi treats rheumatoid arthritis. The toxicity of Fuzi poses a great threat to patients and can even lead to death. Studies have demonstrated that the toxicity of Fuzi derives mainly from diester diterpene alkaloids, including aconitine, mesaconitine, and hypaconitine . In our study, we found that the toxicity of Fuzi was associated not only with diester diterpene alkaloids but with the other compounds as well. This result does not contradict clinical and animal studies, since other diterpene alkaloids such as benzoylmesaconine are toxic as well . We also found that non-toxic compounds can act on the targets of toxic compounds. Taking the adrenergic signaling in cardiomyocytes pathway as an example, delphin_qt (M5), benzoylnapelline (M18), denudatine (M20), and songorine (M30) can act on AKT1, ATP1A1, ATP2B4, BCL2, and other targets in this pathway to induce toxicity. In addition, compounds that are usually believed to be non-toxic are also involved in the toxicity of Fuzi. For example, myristic acid, a compound widely used in the food industry as a flavor ingredient, does not pose a health risk to humans . In our study, myristic acid can act on the hypaconitine targets TRPV1, THRA, MAPK1, and others. However, whether this action leads to an increase in toxicity, a decrease in toxicity, or no change in toxicity is unknown. Further animal studies are needed to distinguish among these three possibilities.
Although the effects of non-toxic compounds on the final toxicity of Fuzi remain unknown, these results add new knowledge to our understanding of the toxicity of herbal medicines: non-toxic compounds can act on the targets of toxic compounds and may therefore influence the toxicity of Fuzi. In the clinic, the major way to avoid the toxicity of Fuzi is processing ( Paozhi ). The main processing method involves boiling Fuzi in water for a long period of time. In this process, the main toxic diester-diterpenoid alkaloids, including aconitine, mesaconitine, and hypaconitine, are first transformed into monoester-diterpenoid alkaloids and finally into unesterified compounds . The unesterified diterpenoid alkaloids show no toxicity in the clinic, while their pharmacological activities are not affected . Although Fuzi must be processed and boiled for a long time before being taken orally, there are many cases of the use of unprocessed Fuzi or of Fuzi that was not boiled long enough . In addition, many victims of Fuzi poisoning had consumed Fuzi in medicinal liquors. As a result, there are many clinical cases in which toxic alkaloids have caused Fuzi poisoning . Therefore, we collected the compounds of both processed and raw Fuzi, including diester-diterpenoid alkaloids and unesterified compounds, to investigate the efficacy-toxicity relationship of Fuzi. We found that non-toxic compounds can act on the targets of toxic compounds and may therefore influence the toxicity. Clinically, processed Fuzi shows almost no toxicity. Our study therefore also indicates that boiling Fuzi in water for a long period of time is a necessary step to avoid the toxicity of Fuzi, and that the non-toxic compounds alone cannot induce toxicity. Some drawbacks of our study should be noted. Although many targets of Fuzi have been validated in the literature, many of the targets identified in this study still need to be validated by in vivo studies. In addition, systems pharmacology has several intrinsic limitations. First, the efficacy and toxicity of a drug are directly related to the dosage, and systems pharmacology currently cannot provide a direct dosage–phenotype relationship. Second, an herbal medicine can contain hundreds or even thousands of different compounds, and it is currently not possible to identify all the compounds present in Fuzi; therefore, not all bioactive compounds and targets could be screened in our study. Third, herbal compounds can be metabolized by the gut microbiota and the liver, and the metabolites can act on different targets than their parent compounds. Even though we predicted the aglycones of the glycosides in our study, not all metabolites can be predicted from the structures of the compounds; therefore, not all bioactive metabolites in the body could be predicted either. Nevertheless, as an efficient new tool for studying the mechanisms of herbal medicines systematically, systems pharmacology enables us to understand the relationships between efficacy and toxicity. In recent years, the gut microbiota has emerged as a new frontier for understanding the development and progression of diseases. Dysbiosis of the gut microbiota, involving species such as Prevotella histicola, is implicated in the pathogenesis of rheumatoid arthritis – . The gut microbiota can synthesize and release a large number of metabolites with anti-inflammatory effects, such as short-chain fatty acids, to ameliorate diseases – .
In our study, some compounds in Fuzi, such as arachic acid, were not classified as bioactive compounds according to our screening criteria. However, arachic acid is reported to be able to modulate the composition of the gut microbiota , . Therefore, compounds omitted from our study may act on the gut microbiota to ameliorate rheumatoid arthritis. In addition to small-molecule compounds, macromolecular compounds, mainly polysaccharides, are also bioactive components of herbal medicines. Polysaccharides can modulate the composition of the gut microbiota and can be metabolized into short-chain fatty acids that modulate the immune system of the host – . Currently, the in silico approach can only screen small molecules, and polysaccharides are usually omitted. Therefore, like other omitted compounds such as arachic acid, the polysaccharides of Fuzi may also be effective against rheumatoid arthritis, and further studies are needed to confirm this hypothesis.
Many toxic herbal medicines are extensively used in the clinic to treat diseases; however, the mechanistic relationships between the toxicity and efficacy of herbal medicines remain unknown. Fuzi is a typical toxic herbal medicine with remarkable clinical efficacy. Using Fuzi in treating rheumatoid arthritis as an example, we demonstrated that the efficacy of Fuzi can be attributed to 25 bioactive compounds that act holistically on 61 targets and 27 pathways, and that the toxicity of Fuzi can be attributed to 32 compounds that act holistically on 187 targets and 4 pathways. In addition, non-toxic compounds such as myristic acid can act on the targets of toxic compounds and may therefore influence the toxicity. However, the effects of non-toxic compounds on the final toxicity of Fuzi remain to be studied further. The results suggest that the removal of toxic compounds by processing and boiling is a necessary procedure to avoid the toxicity of Fuzi.
|
New Perspectives for Whole Genome Amplification in Forensic STR Analysis | 68690707-ff87-47ac-9b28-bc702073b9ed | 9267064 | Forensic Medicine[mh] | This review focuses on the suitability of whole genome amplification (WGA) for forensic DNA profiling that uses current standard technologies. To be able to appreciate both the possible applications and the limitations of WGA in forensic DNA analysis, it will first be necessary to explain the basics of forensic DNA profiling from which the obvious fields of application of WGA will be motivated. Then, the technical principles of WGA will be described in conjunction with a critical discussion of published work applying WGA in forensics. By this means, the common deficiencies of WGA in a forensic context will be established, which finally will lead to the identification of fields of application where improved WGA methods may be promising.
Forensic DNA profiles are based on the analysis of a standardized set of short tandem repeat (STR) loci that are highly polymorphic in the human population. In the European Union, 15 different loci are currently analyzed; in the North American CODIS system, this standard set is complemented by five additional loci . STR loci are characterized by multiple repeat units of few nucleotides that are arranged in a tandem fashion one after the other. With few exceptions, the STR loci analyzed in human forensics have repeat units consisting of four nucleotides . The alleles of the STR loci differ in the number of repeat units, which may amount to several dozens, depending on the STR locus. The allele designations simply represent the numbers of repeat units as related to standard alleles . The number of repeat units corresponds to a length in base pairs and can thus be determined by electrophoresis following PCR using primers that bind in the conserved regions flanking the tandem repeats. Depending on the locus, nine to over forty different alleles can be identified, and the combination of resulting genotypes makes such an STR profile statistically unique within the human population . Technically, the set of forensic STR loci is analyzed using multiplex PCR, and the amplified fragments are sized using capillary electrophoresis (CE) . One of the two primers amplifying each locus is labeled with a fluorophore, allowing for detection. Four (or five) different fluorophores are assigned to the various loci in such a way that fragments of each STR locus can be unequivocally identified based on size and color on the electropherogram. A heterozygous genotype of a particular locus thus will display two peaks of similar height on the electropherogram, whereas a homozygous genotype will display one peak ( ). In modern forensic STR typing, commercial reagent kits are used that have been validated on the commonly used CE devices. Sizing standards and allele standards included in these kits allow for semi-automatic evaluation of electropherograms by software that assigns allele numbers to peaks for each locus. Of particular importance for this process of “allele calling” are threshold settings which preclude allele assignment for peaks that result from analytical noise or from typical technical artifacts . One important type of technical artifacts that typically occur in STR analysis are so-called stutter peaks, which result from the propensity of the repeat units to slip by one or more units during the elongation step of PCR amplification . As a consequence, stutter peaks are seen as small peaks preceding the main peaks and are typically one complete repeat unit shorter than the true alleles ( ). Stutter peaks with sizes that are one unit longer or two or more repeat units different from the main peak sometimes occur as well. The incidence of replication slippage in a particular PCR assay is characteristic for each locus, and thus either a general stutter threshold or locus-specific stutter thresholds are applied . Preclusion of stutter peaks is important because they have the same lengths as expected for true alleles and would lead to wrong interpretations of electropherograms.
3.1. Stochastic Effects and Low Template DNA Modern commercial STR kits are highly sensitive and can establish full profiles from as little as 125 pg of genomic DNA [ , , ]. (A human cell contains 6.6 pg of nuclear DNA). At lower concentrations, single alleles or loci may escape detection. Even running more PCR cycles will not overcome this limit, which shows that it is not just analytical in nature. Rather, the limit reflects the occurrence of stochastic sampling effects . These may have two explanations: First, in a trace with a DNA amount corresponding to only few genomic copies, some DNA loci may be present in unequal abundancies. Second, if in a DNA sample only few genomic copies are present, any fraction to be analyzed may no longer represent the complete genome. As a consequence, alleles will be underrepresented or absent, resulting in typical stochastic effects on electropherograms ( ), such as pronounced peak height imbalances between loci or between the two alleles of one locus (allelic imbalances, AI), or allele peaks being completely missing (allele drop-out, ADO) . Low DNA amounts analytically entailing stochastic effects are referred to as low template DNA (LT DNA) (also termed low copy number DNA, LCN DNA) and warrant more sensitive analytical procedures, which in turn may evoke additional artifacts, such as allele drop-ins (ADI). On electropherograms, ADIs present as peaks that resemble normal allele peaks and may result from the amplification of contaminants (present in the sample, in the equipment or in reagents). Furthermore, they may be due to replication slippage events occurring during early PCR cycles when still only a few template molecules are present, such that resulting stutter fragments may become prominent . To comply with stochastic effects, LCN DNA methods involving lower reaction volumes and more PCR cycles are applied in two or three replicates in order to identify those peaks as reliable that are reproduced by at least two assays . This common way of analysis has been criticized because dividing a sample with already limiting amounts of DNA may even exacerbate the stochastic sampling effects; thus, information could be better obtained by analyzing the complete sample in one assay . A disadvantage of the latter strategy is, however, that stochastic effects cannot be identified as such, and replicate analysis is not possible . In this context, the application of WGA would offer the advantage of generating larger amounts of template that would allow replicate analysis without the risk of eliciting additional stochastic effects due to further dilution of the sample. The differences between WGA and simply increasing the cycle number of an ordinary PCR are not immediately obvious, and if DNA loci are missing right from the start, WGA will not be able to overcome resulting stochastic effects. However, WGA may reduce the risk of generating unspecific amplification products. Similarly to nested PCR protocols , the first rounds of amplification are performed with different PCR primers than used in the actual STR analysis. 3.2. Degraded DNA Most DNA-containing traces have been exposed to the environment; thus, DNA integrity may be compromised by environmental influences, such as humidity, heat, acidic or oxidizing conditions, UV exposure, or enzymatic degradation (reviewed in ). Typically, this results in damaged or fragmented DNA, which precludes PCR amplification of affected DNA loci. 
3.2. Degraded DNA

Most DNA-containing traces have been exposed to the environment; thus, DNA integrity may be compromised by environmental influences such as humidity, heat, acidic or oxidizing conditions, UV exposure, or enzymatic degradation. Typically, this results in damaged or fragmented DNA, which precludes PCR amplification of the affected DNA loci. As a consequence, ADOs may occur and may encompass whole loci (locus drop-out, LDO). As the chance of experiencing damage is proportional to the length of a DNA molecule, DNA degradation typically affects the longer PCR amplicons first, resulting in more pronounced reductions in peak heights and increasing appearances of ADOs and LDOs on the right side of an electropherogram (corresponding to longer DNA fragments). If DNA damage is too severe, all STR loci will be affected, and peaks will remain below the detection threshold. As DNA degradation results in a lower number of amplifiable copies of the STR loci, one strategy might consist in the preamplification of genomic DNA by WGA in order to increase the number of the few copies that are still intact. To this end, the suitability of WGA methods for environmentally exposed DNA traces needs to be evaluated.
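The size dependence of degradation-induced drop-out can be illustrated with a simple survival model: if every base is assumed to be damaged independently with probability p, an amplicon of L bases remains amplifiable with probability (1 - p)^L. Both the model and the parameter values below are simplifying assumptions for illustration only.

```python
# Survival model for degraded DNA: assuming each position is damaged
# independently with probability p, an amplicon of length L remains
# amplifiable with probability (1 - p)**L. The damage rate and the
# amplicon lengths are illustrative assumptions.
p = 0.005  # assumed per-base damage probability

for length in (100, 200, 300, 400):  # typical STR amplicon sizes in bp
    survival = (1 - p) ** length
    print(f"{length} bp amplicon: {survival:.0%} of copies still amplifiable")

# Output: ~61%, ~37%, ~22%, ~13%; longer amplicons (on the right side of
# the electropherogram) drop out first, as described above.
```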
4.1. Overview

WGA methods can be classified as PCR-based or as based on multiple displacement amplification (MDA). In addition, there are methods that do not conform to this distinction, as they either combine both principles or are based on unrelated principles. PCR-based approaches either use mixtures of primers with randomized sequences that can bind to many DNA loci and by this means will, in theory, amplify all parts of the genome (Figure A), or they are based on targeted or random fragmentation of genomic DNA followed by ligation of adaptor oligonucleotides to the fragment ends, which allow for PCR amplification with optimized primers complementary to the adaptor sequences (Figure B). MDA is based on the high-fidelity DNA polymerase phi29, which exhibits high processivity (synthesizing continuous DNA of up to 20 kb) and has strand displacement activity. MDA-based WGA begins with denaturation of the genomic DNA and elongation of the complementary strands after annealing of short primers with randomized sequences (typically hexameric). When the polymerase reaches a double-stranded region that has already been synthesized by another phi29 polymerase acting on the same strand, the preceding strand is displaced, generating a novel single-stranded template for further random-primed elongation. By this means, the template is multiplied in a quasi-exponential fashion by generating arborized DNA template arrays (Figure C).

4.2. Basic WGA Methods and Variations

4.2.1. DOP-PCR

Degenerate oligonucleotide-primed PCR (DOP-PCR) was among the first WGA protocols developed. It is a PCR-based method that uses primers with six random nucleotides embedded between defined short sequences on either end, which are first used for a few PCR cycles at low stringency (to allow amplification of multiple regions of the genome), followed by a larger number of high-stringency amplification cycles for specific enrichment of the products (Figure A). For STR analysis, success rates between 50% and 75% have been reported for DNA amounts lower than 60 pg (12 STR loci tested). The protocol has been modified several times by altering the primer sequences, cycle numbers, and DNA polymerases in order to improve genome coverage and STR typing success rates, leading to improved methods called LL-DOP-PCR, dcDOP-PCR, mDOP-PCR, and iDOP-PCR.

4.2.2. PEP PCR

Differently from DOP-PCR, primer-extension-preamplification PCR (PEP PCR) uses primers of 15 nucleotides that are completely degenerate, and amplification starts at low-stringency annealing temperatures that are continuously raised in the subsequent PCR cycles (Figure A). The method worked for single-cell analysis and was subsequently further optimized in terms of PCR cycle parameters and DNA polymerases to improve genome coverage and the success rate of STR analysis of clinical samples (I-PEP PCR) and, after further modification, of forensic samples (mIPEP PCR).
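The contrast between the two primer designs can be made explicit in a short sketch: DOP-PCR primers fix both ends and randomize six internal positions, whereas PEP PCR primers are fully degenerate 15-mers. The defined flanking sequences used here are placeholders modeled on the classic DOP primer design, not necessarily the published sequences.

```python
# Sketch contrasting the primer pools of DOP-PCR and PEP PCR as described
# above. The defined DOP flanks are placeholders, not verified published
# primer sequences.
import random

BASES = "ACGT"

def dop_primer():
    # defined 5' flank + six random nucleotides + defined 3' flank
    return "CCGACTCGAG" + "".join(random.choice(BASES) for _ in range(6)) + "ATGTGG"

def pep_primer():
    # completely degenerate 15-mer
    return "".join(random.choice(BASES) for _ in range(15))

print(dop_primer(), pep_primer())
print(f"distinct DOP primer species: {4**6:,}")    # 4,096 (degenerate insert only)
print(f"distinct PEP primer species: {4**15:,}")   # 1,073,741,824 (fully degenerate)
```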
4.2.3. Adaptor Ligation-Mediated PCR

Adaptor ligation-mediated PCR methods (Figure B) differ in the way the random DNA fragments are generated. The initial method used the restriction enzyme MseI to introduce cuts into the template DNA, followed by linker annealing and ligation to the generated fragments. The commercial Omniplex and GenomePlex methods are based on chemical fragmentation of the template DNA, whereas another method called PSRG (adaptor-ligation PCR of randomly sheared genomic DNA) generates DNA fragments by hydrodynamic shearing of genomic DNA, followed by fill-in of the resulting overhangs and adaptor ligation.

4.2.4. MDA

Most MDA methods rely on phi29 polymerase in conjunction with random primers. Related methods use, instead of random primers, a primase that is coupled to a DNA polymerase with strand displacement activity. Generally, MDA methods require long, uninterrupted template DNA sequences and tend to underrepresent the ends of template DNA fragments (Figure C). As a means of making MDA suitable for the fragmented DNA often seen in forensic DNA samples, protocols have been proposed that circularize the template DNA fragments, allowing for rolling circle amplification using the MDA principle (Figure D). In the RCA-RCA WGA protocol (developed for DNA from formalin-fixed tissue), DNA fragments generated by restriction digestion are circularized by self-ligation, followed by exonuclease digestion of the remaining linear fragments. A related protocol termed blunt-end ligation-mediated WGA (BL-WGA) has been established for plasma-circulating DNA fragments, the ends of which are first blunted by T4 polymerase and then ligated using T4 ligase, generating circular substrates and concatemers, which are then subjected to phi29-mediated rolling circle amplification and MDA.

4.3. Limitations of WGA in Forensic STR Analysis

4.3.1. A Priori Limitations of the WGA Methods

All WGA methods tend to display some bias in the amplification of genomic DNA loci; thus, the uniformity and completeness of the genome coverage of WGA products are critical parameters when it comes to downstream analysis of multiple DNA loci. In forensic STR analysis of LT DNA, WGA-inherent bias thus bears the risk of generating additional ADOs and AIs on top of the already present stochastic artifacts. Forensic STR analysis imposes two further challenges: impaired integrity of template DNA and the propensity of STR loci for replication slippage during PCR amplification. Generally, PCR-based methods can better deal with low-quality DNA (damaged or fragmented), because unlike MDA, PCR does not rely on long, undisrupted templates; the rolling circle MDA variants, however, are suitable for fragmented DNA templates too. Like normal PCR amplification, the PCR-based protocols are prone to stutter artifacts when analyzing STR loci, whereas replication slippage is less likely to occur in MDA-based methods. The adaptor ligation-mediated PCR methods exhibit a fundamental problem if applied to only a few copies of template DNA. Unlike the random primers used in DOP-PCR, PEP PCR, or MDA, which allow DNA synthesis to be initiated multiple times from various locations without harming the template, the fragmentation of the template DNA is irreversible, and thus any STR amplicon disintegrated during fragmentation (or located within a fragment too long for successful PCR amplification) will inevitably be underrepresented later on. This effect will be particularly apparent in the analysis of single cells, where each STR allele is present only once.
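A small Monte Carlo sketch illustrates this a priori problem of the fragmentation-based methods: any random cut falling inside an STR amplicon destroys it irreversibly. Template length, fragment size, and amplicon length are illustrative assumptions.

```python
# Monte Carlo sketch: probability that an STR amplicon survives random
# fragmentation of its template. All size parameters are illustrative.
import random

def amplicon_survives(template_len, target_start, target_len, mean_frag):
    n_cuts = template_len // mean_frag
    cuts = (random.randrange(template_len) for _ in range(n_cuts))
    # the amplicon is destroyed if any cut falls within it
    return not any(target_start <= c < target_start + target_len for c in cuts)

random.seed(1)
trials = 10_000
survived = sum(
    amplicon_survives(template_len=100_000, target_start=50_000,
                      target_len=300, mean_frag=1_000)
    for _ in range(trials)
)
print(f"intact amplicons: {survived / trials:.0%}")
# Expected: (1 - 300/100_000)**100, roughly 74%. In a single cell there are
# no spare copies to compensate for the molecules lost in this way.
```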
4.3.2. Experimentally Established Performances and Limitations of the WGA Methods

The potential of WGA methods for forensic DNA analysis was recognized early in their development, and the methods have been evaluated in terms of STR analysis. Initial studies, however, did not analyze forensic standard loci, and they did not use forensically relevant DNA samples, such as extracts from typical trace types (saliva, semen, blood) or degraded DNA (artificially degraded or environmentally exposed). Several later studies then tested the various available WGA methods for their potential to improve the STR typing success of problematic DNA samples using contemporary commercial kits for forensic STR analysis. These studies are generally difficult to compare, because they analyze DNA from different sources and in different amounts, and often evaluate STR typing success and sensitivity in different ways. Moreover, the contemporary STR typing kits used differ in their sensitivities, which sometimes fall short of the limits of the STR kits in use today. In this section, the most significant findings are summarized, paying particular attention to the sensitivity of the methods, technical artifacts, and success in typing degraded DNA samples.

Two studies reported sensitivities down to 10 pg of input DNA for iPEP, GenomiPhi MDA, and the commercial adaptor ligation method GenomePlex; however, they noticed the occurrence of AIs and ADOs at these extremely low DNA amounts. One study did not find an improvement for DOP-PCR, MDA, or I-PEP PCR over non-WGA-treated DNA (with LL-DOP-PCR failing completely) and as a consequence developed mIPEP PCR, which was successfully applied to 5 pg of DNA from buccal swabs, to semen stains, to vaginal swabs, and even to fingerprints. However, the occurrence of ADIs and AIs was reported for mIPEP PCR with low DNA amounts, and the method was of little benefit when analyzing environmentally exposed bloodstains, suggesting deficiencies when applied to real-world forensic trace material. A later study obtained partial STR profiles from environmentally exposed bloodstains using mIPEP PCR; however, it reported extra alleles (ADIs) with low amounts of template DNA. In one study, DOP-PCR failed with low DNA amounts, and an improved DOP-PCR method (called iDOP-PCR), while achieving sensitivity down to 15 pg, showed high proportions of ADOs (46%) and ADIs (4%). Likewise, adaptor ligation PCR protocols, while generally improving the sensitivity of STR analysis, resulted in significant AIs, ADOs, and ADIs. On the other hand, adaptor ligation PCR seems best suited for analyzing degraded DNA samples, as shown in two studies comparing STR typing success after the application of PEP PCR, DOP-PCR, adaptor ligation PCR, two MDA protocols, and the rolling circle MDA methods to DNA extracted from heat-treated human muscle samples. With the exception of PEP, all methods failed when analyzing DNA amounts of less than 1 ng, and only PEP and the adaptor ligation method (GenomePlex) improved the typing success for degraded DNA; GenomePlex, however, generated many ADIs and high stutters. Likewise, Uchigasaki et al. (2018) reported improved allele recovery after GenomePlex WGA applied to UV-irradiated human bloodstains; however, the observed peaks differed from those of the control samples. Remarkably, in the studies of Maciejewska et al., the rolling circle MDA methods (initially developed for fragmented DNA) proved less successful than other WGA methods, confirming the findings of two earlier studies. Ambers et al. (2016) modified the DOP-PCR protocol to improve the analysis of ancient and degraded forensic DNA samples.
While their mDOP-PCR protocol improved STR typing success, they noticed the occurrence of artifacts such as ADOs, ADIs, and increased stutter. In a recent study, a workflow was suggested for STR analysis of UV-exposed DNA samples. The workflow incorporated mIPEP PCR, which was shown to improve allele recovery for low amounts of damaged DNA; however, it increased the number of ADIs.

To summarize these studies: WGA methods, when applied to forensic STR typing, generally increase the analytical sensitivity but introduce several new problems. Profiles often display pronounced imbalances and ADOs that affect STR loci and alleles in a non-predictable fashion and are related to WGA-inherent bias. In addition, particularly with the PCR-based methods, high rates of stutters and ADIs are seen. These phenomena render STR profiles from unknown donors hard to interpret and lower the statistical power of the evidence. For example, in the case of an apparently homozygous locus (showing just one peak on the electropherogram), it cannot be decided whether a second allele is actually missing. ADIs and high stutters cannot be distinguished from normal alleles and may thus mislead the interpretation as well. Furthermore, pronounced intra- or inter-locus peak height imbalances hamper the interpretation of mixed profiles, because peak heights no longer reflect the true amount of template DNA. Thus, although additional genotype information can be obtained from LT DNA or damaged DNA by WGA, the generated artifacts may mislead the interpretation of STR profiles, which strongly argues against the use of WGA in forensic casework.
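For orientation, the typing-success figures quoted in this section (allele recovery, ADO, and ADI counts) can be computed by comparing an observed profile against the donor's reference profile, as in the following sketch; the example genotypes are invented.

```python
# Sketch of how allele recovery, drop-outs (ADO), and drop-ins (ADI) can be
# quantified against a reference profile. Example genotypes are invented.

def profile_metrics(reference, observed):
    """Both arguments: dict mapping locus name -> set of allele designations."""
    ref_alleles = sum(len(alleles) for alleles in reference.values())
    dropouts = sum(len(reference[loc] - observed.get(loc, set()))
                   for loc in reference)
    dropins = sum(len(observed.get(loc, set()) - reference[loc])
                  for loc in reference)
    recovery = (ref_alleles - dropouts) / ref_alleles
    return {"allele recovery": f"{recovery:.0%}",
            "ADOs": dropouts, "ADIs": dropins}

reference = {"D3S1358": {15, 17}, "vWA": {16, 18}, "FGA": {21, 24}}
observed  = {"D3S1358": {15, 17}, "vWA": {16},     "FGA": {13, 21, 24}}
print(profile_metrics(reference, observed))
# -> {'allele recovery': '83%', 'ADOs': 1, 'ADIs': 1}
```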
In light of the inability of WGA to significantly improve the STR typing of typical forensic DNA samples, Barber and Foran (in a 2006 study comparing MDA and I-PEP PCR) concluded that “WGA appears to be of limited forensic utility unless the samples are of a very high quality”, which, however, would make the use of WGA unnecessary. Publications in the following years have not led to a substantial revision of that judgement. Remarkably, no study has addressed the susceptibility of WGA methods to PCR inhibitors, i.e., compounds such as heme, humic acid, and denim dyes that are often coextracted with the DNA from traces and impair PCR amplification by various mechanisms. The ability to deal with PCR inhibitors is an important aspect in the developmental validation of forensic PCR assays; however, the WGA methods have never been systematically tested in that respect. Thus, it seems forensic researchers have lost their enthusiasm for applying WGA to casework. This is all the more true nowadays, as modern multiplex STR kits have improved sensitivities down to 60 pg of input DNA, and LCN DNA methods based on them have greatly improved the analysis of trace DNA, removing the need to take the risk of additional WGA-caused bias and artifacts. In the meantime, however, novel WGA protocols have been developed, aiming in particular at sensitivities on the single-cell level while at the same time reducing amplification bias, and several WGA kits optimized for single-cell analysis, particularly for usage in clinical settings, have been commercialized. These novel methods and kits have again sparked interest in the application of WGA in forensic DNA analysis, but so far only a few of them have been tested in a forensic context.

5.1. WGA Methods with Reduced Bias

The ADOs, LDOs, and pronounced AIs reported after WGA-based preamplification were observed at DNA amounts well above the stochastic threshold and thus cannot be fully explained by stochastic sampling effects. Rather, they point towards amplification bias, which is typical of WGA applied to low template DNA concentrations, as random events during the initial amplification become exacerbated owing to the exponential nature of the amplification process. Several WGA protocols have been established that aim at reducing bias by including non-exponential amplification steps. Among those low-bias methods are the multiple annealing and looping-based amplification cycles (MALBAC) method, which uses the Bst polymerase during a first quasi-linear preamplification step preceding the subsequent PCR amplification, and the commercial SurePlex/PicoPLEX kit, which is based on a related principle. The LIANTI (linear amplification via transposon insertion) method is based on transposon-mediated generation of random genomic fragments with terminally attached T7 promoter sites that can be linearly transcribed into RNA capable of self-priming for subsequent DNA synthesis. Recently, the PTA (primary template-directed amplification) method has been published, in which the phi29-polymerase-mediated extension of randomly primed DNA products is limited to short lengths, thereby preventing exponential amplification.

Two studies have been published that tested MALBAC in conjunction with forensic STR typing kits, both analyzing DNA extracted from human peripheral blood. Even with this presumably high-quality DNA, both studies disappointed in terms of forensic STR analysis.
In their 2022 study, Liao et al. noticed improved allele recovery (as compared to non-WGA samples) after the application of MALBAC to DNA amounts of less than 50 pg. However, a high number of ADOs occurred, and profiles displayed many imbalanced STR loci and ADIs. Likewise, a second study, comparing MALBAC WGA with a commercial single-cell MDA kit (Repli-g) and non-WGA-treated DNA, reported a higher number of called alleles after MALBAC and MDA for DNA amounts of less than 50 pg (although less than 50% of alleles were called with either method). However, the percentage of erroneously called STR loci was significantly higher in MALBAC-amplified profiles, and it was not further specified to what extent ADOs or ADIs accounted for the errors. The increased occurrence of ADOs and ADIs after MALBAC as compared to MDA may be due to the Bst polymerase, which was reported to be less sensitive and more prone to stutters than phi29 polymerase when amplifying STR loci. High proportions of ADOs and ADIs were also reported for the methodologically related PicoPLEX kit when applied to single cells. Thus, although the polymerase used in the preamplification step of the PicoPLEX kit has not been disclosed, the two related low-bias methods, MALBAC and PicoPLEX, are most likely not suited for forensic STR analysis. The other low-bias methods, LIANTI and PTA, however, may still hold promise and would be worth testing in a forensic context.
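Why early stochastic events become exacerbated by exponential amplification, as outlined at the beginning of this section, can be illustrated with a toy branching-process simulation: two alleles start at equal copy numbers, and each copy is duplicated per cycle with a fixed efficiency. Cycle number, efficiency, and copy numbers are illustrative assumptions.

```python
# Toy simulation of exponential amplification bias: each copy duplicates
# per cycle with probability `eff`. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)

def amplify(start_copies, cycles=20, eff=0.8):
    copies = start_copies
    for _ in range(cycles):
        copies += rng.binomial(copies, eff)  # stochastic per-cycle duplication
    return copies

for start in (2, 200):  # few vs. many starting copies per allele
    # ratio of two independently amplified alleles of one heterozygous locus
    ratios = [amplify(start) / amplify(start) for _ in range(5)]
    print(f"start={start:3d} copies per allele: peak ratios "
          + ", ".join(f"{r:.2f}" for r in ratios))

# With 2 starting copies, the ratio of the two allele products fluctuates
# strongly (allelic imbalance); with 200 copies it stays close to 1. This is
# why the low-bias methods keep the first amplification rounds non-exponential.
```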
5.2. WGA in the STR Analysis of Single Cells

5.2.1. Micromanipulation of Single Cells and of Bioparticles for Mixture Deconvolution

With the high sensitivity of modern STR typing kits, the occurrence of mixed DNA profiles derived from more than one individual has increased, because minute DNA amounts can now be detected that have been left by other individuals who may not even be related to the actual crimes. STR profiles of such mixtures typically display more than two peaks per STR locus. Even DNA transferred indirectly may become detectable and confound the DNA profile of a perpetrator. There are various ways to deconvolve the peaks on electropherograms of mixed STR profiles (i.e., assign them to individual donors), and modern software-assisted methods have increased the statistical power of the confounded information. To be able to deconvolve the electropherograms of mixed profiles, it is important that the peak heights on the electropherograms reflect the amounts of DNA from the respective donor individuals. Despite sophisticated software tools being available, however, interpretations of mixed STR profiles often remain unsatisfactory, particularly if peaks of the different contributors have similar heights or if stochastic effects confound the information.

As a way of avoiding mixtures right from the start, methods have been suggested that physically deconvolve mixed trace material by isolating bioparticles (such as skin flakes, aggregates of a few cells, or single cells) that contain the genomic DNA of exactly one donor individual. The price to be paid is an extremely low amount of DNA, which necessitates LCN DNA methods entailing stochastic effects, particularly when analyzing replicates. The feasibility of micromanipulating and genotyping single cells from forensic trace material, such as chewing gums, cigarette butts, swabs, touched skin, and fabrics, has been demonstrated in several studies. These studies recovered single cells or small bioparticles containing the DNA of individual donors and by this means established single-donor STR profiles using LCN DNA methods. A high proportion of the single-donor profiles, however, were incomplete and also contained ADIs, and thus replicates of several such profiles had to be combined to establish full profiles. In the studies by Li et al., buccal cells were micromanipulated from trace material, and using low-volume PCR, consensus profiles could be obtained by combining the profiles from five or six single mucosal cells. The microwell slides used are, however, no longer commercially available, and there have been no follow-up reports using this technology in forensics. The study by Farash et al. (2018) described the analysis of micromanipulated cells or cell aggregates from skin deposited on touched materials; in about one third of the samples analyzed, STR profiles attributable to donors were obtained. The study by Ostojic et al. (2021) compared several micromanipulation methods and showed that ten micromanipulated cells were sufficient to compile forensically informative profiles. A study by Huffmann (2021) showed that the application of an improved LCN DNA method to one- to three-cell subsamples of two-person mixtures allowed for the successful compilation of consensus profiles of the contributors, albeit with significant ADO and ADI rates in the individual profiles. Based on these experiments, a suitable strategy for the analysis of complex mixtures using software-assisted mixture analysis has recently been published, showing that analyzing several subsamples consisting of one to two cells can increase the statistical power as compared to analyzing bulk mixtures. However, despite the use of LCN DNA methods, locus-specific drop-out rates were on average 58% for single-cell and 38% for two-cell subsamples. Thus, if single-cell WGA methods were able to deliver on their promise in a forensic context, i.e., to enable the genotyping of single cells, their application might lead to a further improvement by increasing the template DNA of one- or two-cell subsamples to amounts that can be analyzed more reliably with modern forensic STR kits, even allowing for replicate analysis of the subsamples.
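The quoted drop-out rates also indicate what replicate subsampling can achieve: assuming independence between subsamples, a locus with per-subsample drop-out rate d is detected in at least one of k subsamples with probability 1 - d^k. The independence assumption is a simplification.

```python
# Back-of-the-envelope calculation using the locus drop-out rates quoted
# above (58% for one-cell, 38% for two-cell subsamples). Independence
# between subsamples is a simplifying assumption.
for d, label in ((0.58, "1-cell"), (0.38, "2-cell")):
    for k in (1, 3, 5):
        print(f"{label} subsamples, k={k}: locus detected {1 - d**k:.0%} of the time")

# e.g., five 1-cell subsamples: 1 - 0.58**5, about 93%;
# five 2-cell subsamples: about 99%
```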
5.2.2. Forensic STR Analysis of Single Cells

The principal suitability of single-cell WGA methods for forensic STR analysis of single cells was tested in several recent studies. Analyzing single cells has the advantage that stochastic sampling effects are less likely, because complete diploid genomes can be extracted from whole cells and then subjected to WGA. In modern single-cell WGA kits, this is accomplished by carrying out cell lysis, DNA extraction, and WGA in the same tube. In a study from 2018, the low-bias single-cell method PicoPLEX was compared to a commercial single-cell DOP-PCR kit (DOPlify), a single-cell MDA kit (Repli-g), and an adaptor ligation method (Ampli1). In that study, which analyzed genomic DNA from micromanipulated single cells of a human B lymphoblastoid cell line, the PCR-based methods caused the highest numbers of ADOs and LDOs, and PicoPLEX showed many LDOs and ADOs when applied to single cells. Though ADIs were not addressed in that study, another study applying the PicoPLEX kit to DNA from single unfixed or formalin-fixed cells reported an ADI frequency of 11.6%. Thus, PicoPLEX remained unsatisfactory in terms of STR analysis of single cells, whereas the single-cell MDA method Repli-g was promising.

The Repli-g single-cell MDA kit was tested in two further studies for its suitability for forensic STR typing. The study by Maruyama et al. (2020) reported that at least 20 micromanipulated buccal cells were required for successful STR typing, whereas from single cells, most alleles remained undetected. Another study, by Chen et al. (2020), however, demonstrated that Repli-g single-cell WGA indeed allowed for successful STR analysis of single, micromanipulated B-lymphoblastoid cells; most single cells yielded complete profiles. Intra- and inter-locus peak height imbalances, however, were pronounced, but became less so when three or five cells were analyzed. Likewise, stutters were increased as compared to STR profiles from control DNA. In the study by Maruyama et al. (2020), cells were dried on the applicator tips prior to micromanipulation, which may have affected DNA extraction or the integrity of the cell nuclei. As a further variable, the volume of buffer cotransferred by the micromanipulation capillary may have accounted for the differences in STR typing success between the two studies, as this may lead to dilution or a pH change of the extraction buffer.
5.2.3. WGA in the Analysis of Single Sperm Cells

A recently emerging potential forensic application of WGA is the STR analysis of micromanipulated single sperm cells. In rape cases, the DNA from vaginal swabs will in most cases be derived both from sperm cells of the perpetrator and from vaginal cells of the victim, and differential extraction protocols aiming at separating the sperm fraction from the victim DNA fraction often remain unsatisfactory. The analysis of single sperm cells micromanipulated from vaginal swabs can thus be considered a special application of physical mixture deconvolution that might even help in the analysis of traces with low sperm counts and in the clarification of multiple-perpetrator rape cases. Studies addressing single-sperm-cell analysis using conventional forensic STR analysis, however, reported that at least 20 sperm cells are required to establish complete STR profiles. Sperm cells are haploid, and based on statistical considerations, a minimum of nine single sperm cells is required to compile a diploid profile. However, even haploid STR profiles of single sperm cells may already be attributable to individual donors and thus be helpful in the clarification of crime cases. One of the disadvantages of WGA, the occurrence of allelic imbalances, is less troublesome when analyzing single sperm cells, since these are haploid, showing only one allele peak per locus on an electropherogram.

The successful STR analysis of single micromanipulated sperm cells after application of the Repli-g single-cell MDA kit has been demonstrated in a recent study in which individual sperm cells were isolated using an adhesive-coated tungsten needle tip. Consensus profiles were obtained by analyzing two different dilutions of the STR multiplex PCR products following WGA; by this means, the majority of single sperm cells yielded more than 80% of the alleles of the haploid profiles, and several single sperm cells yielded full haploid profiles. Furthermore, gonosomal STR profiles of the single sperm cells were successfully analyzed as well and helped to compile the diploid autosomal STR donor profile from single sperm cells. The study also successfully analyzed single sperm cells from mock vaginal swabs with one or two male contributors, and thus the application of Repli-g MDA was suggested for sexual assault cases (or archival material) with low sperm counts or for multiple-perpetrator rape cases. An advantage over other approaches to single-sperm-cell STR analysis, such as low-volume on-chip PCR, is that apart from the micromanipulation, all steps can be carried out with the standard equipment of forensic laboratories. The WGA enrichment of template DNA would furthermore allow for replicate analysis or the subsequent analysis of additional markers, if required.
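The statistical consideration behind the sperm-cell numbers cited above can be sketched as follows: each haploid sperm carries one of the donor's two alleles at a heterozygous locus with probability 1/2, so among n sperm cells both alleles are observed with probability 1 - 2^(1-n). Locus independence and a fully heterozygous 15-locus profile are simplifying assumptions here, and the published estimate may rest on different ones.

```python
# Probability of recovering the complete diploid profile from n haploid
# sperm cells, assuming independent loci and heterozygosity at all loci.
n_loci = 15  # assumed number of autosomal STR loci

for n in (5, 7, 9, 11):
    per_locus = 1 - 2 ** (1 - n)        # both alleles seen at one locus
    full_profile = per_locus ** n_loci  # both alleles seen at every locus
    print(f"n={n:2d} sperm cells: per-locus {per_locus:.3f}, "
          f"complete diploid profile {full_profile:.2f}")

# Around n = 9, the probability of a complete diploid profile exceeds 0.9,
# consistent with the minimum of nine cells cited above.
```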
The ADOs, LDOs, and pronounced AIs reported after WGA-based preamplification were observed at DNA amounts well above the stochastic threshold, and thus cannot be fully explained by stochastic sampling effects. Rather, they point towards amplification bias, which is typical of WGA applied to low template DNA concentrations, as random events during the initial amplification become exacerbated due to the exponential nature of the amplification process . Several WGA protocols have been established that aim at reducing bias by including non-exponential amplification steps. Among those low-bias methods are the multiple annealing and looping based amplification cycles (MALBAC) method, which uses the Bst polymerase during a first quasi-linear preamplification step, preceding the subsequent PCR amplification , and the commercial SurePlex/PicoPLEX kit that is based on a related principle . The LIANTI (linear amplification via transposon insertion) method is based on transposon-mediated generation of random genomic fragments with terminally attached T7 promoter sites that can be linearly transcribed into RNA capable of self-priming for subsequent DNA synthesis . Recently, the PTA (primary template-directed amplification) method has been published, in which the phi29-polymerase-mediated extension of randomly primed DNA products is limited to short lengths, thereby preventing exponential amplification . Two studies have been published that tested MALBAC in conjunction with forensic STR typing kits, both analyzing DNA extracted from human peripheral blood . Even with this presumably high-quality DNA, both studies disappointed in terms of forensic STR analysis. In their study in 2022, Liao et al. noticed improved allele recovery (as compared to non-WGA samples) after the application of MALBAC to DNA amounts less than 50 pg. However, a high number of ADOs occurred, and profiles displayed many imbalanced STR loci and ADIs . Likewise, a second study comparing MALBAC WGA with a commercial single-cell MDA kit (Repli-g) and non-WGA-treated DNA, reported a higher number of called alleles after MALBAC and MDA for DNA amounts of less than 50 pg (although only less than 50% of alleles were called with either method). However, the percentage of erroneously called STR loci was significantly higher in MALBAC-amplified profiles, but it was not further specified how much ADOs or ADIs may have accounted for the errors. The increased occurrence of ADOs and ADIs after MALBAC as compared to MDA may be due to the Bst polymerase that was reported to be less sensitive and more prone to stutters than phi29 polymerase when amplifying STR loci . High proportions of ADOs and ADIs were also reported for the methodically related PicoPLEX kit when applied to single cells . Thus, although the polymerase used in the pre-amplification step of the PicoPLEX kit has not been disclosed, the two related low-bias methods, MALBAC and PicoPLEX, are most likely not suited for forensic STR analysis. The other low-bias methods, LIANTI and PTA, however, may still hold promise and would be worth testing in a forensic context.
5.2.1. Micromanipulation of Single Cells and of Bioparticles for Mixture Deconvolution With the high sensitivity of modern STR typing kits, the occurrence of mixed DNA profiles derived from more than one individual has increased, because now minute DNA amounts can be detected that have been left by other individuals who may not even be related to the actual crimes . STR profiles of such mixtures typically display more than two peaks per STR locus . Even DNA transferred indirectly may become detectable and confound the DNA profile of a perpetrator . There are various ways to deconvolve the peaks on electropherograms of mixed STR profiles (i.e., assign them to individual donors), and modern software-assisted methods have increased the statistical power of the confounded information . To be able to deconvolve the electropherograms of mixed profiles, it is of importance that on electropherograms the peak heights reflect the amounts of DNA from the respective donor individuals. Despite sophisticated software tools being available, however, interpretations of mixed STR profiles often remain unsatisfactory, particularly if peaks of the different contributors have similar heights or if stochastic effects confound the information . As a way of avoiding mixtures right from the start, methods have been suggested that physically deconvolve mixed trace material by isolating bioparticles (such as skin flakes, and aggregates of a few cells or single cells) that contain the genomic DNA of exactly one donor individual . The price to be paid is an extremely low amount of DNA, which necessitates LCN DNA methods entailing stochastic effects, particularly when analyzing replicates. The feasibility of micromanipulating and genotyping single cells from forensic trace material, such as chewing gums, cigarette butts, swabs, touched skin, and fabrics, has been demonstrated in several studies [ , , , , ]. These studies recovered single cells or small bioparticles containing the DNA from individual donors, and by this means established single-donor STR profiles using LCN DNA methods. A high proportion of single-donor profiles, however, were incomplete and also contained ADIs, and thus replicates of several such profiles had to be combined to establish the full profiles. In the studies by Li et al. , buccal cells were micromanipulated from trace material, and using low-volume PCR, consensus profiles could be obtained by combining the profiles from five or six single mucosal cells. The used microwell slides are, however, no longer commercially available, and there have been no follow-up reports using this technology in forensics. The study by Farash et al. (2018) described the analysis of micromanipulated cells or cell aggregates from skin deposited on touched materials. In about one third of the samples analyzed, STR profiles attributable to donors were obtained. The study by Ostojic et al. (2021) compared several micromanipulation methods and could show that ten micromanipulated cells were sufficient to compile forensically informative profiles. A study by Huffmann (2021) showed that the application of an improved LCN DNA method to 1–3 cell subsamples of two-person mixtures allowed for successful compilation of consensus profiles of the contributors, with significant ADO and ADI rates in the individual profiles, however . 
Based on these experiments, a suitable strategy for the analysis of complex mixtures using software-assisted mixture analysis has recently been published, showing that analyzing several subsamples consisting of one or two cells can increase the statistical power as compared to analyzing bulk mixtures. However, despite using LCN DNA methods, locus-specific drop-out rates were on average 58% for single-cell and 38% for two-cell subsamples. Thus, if single-cell WGA methods were able to deliver on their promise in a forensic context, namely to enable the genotyping of single cells, their application might lead to a further improvement by increasing the template DNA of one- or two-cell subsamples to amounts that can be analyzed more reliably with modern forensic STR kits, even allowing for replicate analysis of the subsamples.
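A back-of-the-envelope calculation illustrates why the replicate analysis that WGA preamplification would enable is attractive. Assuming, optimistically, that drop-out events at a locus are independent across replicates (in practice they may be correlated through the state of the template), the chance of recovering a locus in at least one of k replicates is 1 - d^k, where d is the per-replicate drop-out rate:

```python
def locus_recovery(dropout: float, replicates: int) -> float:
    """P(locus observed in at least one replicate), assuming drop-out is
    independent across replicates (an optimistic simplification)."""
    return 1.0 - dropout ** replicates

if __name__ == "__main__":
    # Average locus drop-out rates reported above for LCN subsamples.
    for label, d in (("single-cell subsample", 0.58),
                     ("two-cell subsample", 0.38)):
        for k in (1, 2, 3):
            print(f"{label}, {k} replicate(s): expect "
                  f"{locus_recovery(d, k):.0%} of loci recovered")
```

Even two replicates would lift expected locus recovery from 42% to about 66% for single-cell subsamples; without WGA, however, the template of a one-cell subsample is consumed by a single analysis, which is precisely the limitation WGA aims to remove.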
5.2.2. Forensic STR Analysis of Single Cells

The principal suitability of single-cell WGA methods for forensic STR analysis of single cells was tested in several recent studies. Analyzing single cells has the advantage that stochastic sampling effects are less likely, because complete diploid genomes can be extracted from whole cells and then subjected to WGA. In modern single-cell WGA kits, this is accomplished by carrying out cell lysis, DNA extraction, and WGA in the same tube. In a study from 2018, the low-bias single-cell method PicoPLEX was compared to a commercial single-cell DOP-PCR kit (DOPlify), a single-cell MDA kit (Repli-g), and an adaptor ligation method (Ampli 1). In that study, which analyzed genomic DNA from micromanipulated single cells of a human B lymphoblastoid cell line, the PCR-based methods caused the highest numbers of ADOs and LDOs, and PicoPLEX showed many LDOs and ADOs when applied to single cells. Though ADIs were not addressed in that study, another study applying the PicoPLEX kit to DNA from single unfixed or formalin-fixed cells reported an ADI frequency of 11.6%. Thus, PicoPLEX remained unsatisfactory in terms of STR analysis of single cells, whereas the single-cell MDA method Repli-g was promising. The Repli-g single-cell MDA kit was tested in two further studies for its suitability for forensic STR typing. The study by Maruyama et al. (2020) reported that at least 20 micromanipulated buccal cells were required for successful STR typing, whereas from single cells most alleles remained undetected. Another study by Chen et al. (2020), however, demonstrated that Repli-g single-cell WGA indeed allowed for successful STR analysis of single, micromanipulated B-lymphoblastoid cells; most single cells yielded complete profiles. Intra- and interlocus peak height imbalances, however, were pronounced, but became less so when analyzing three or five cells. Likewise, stutters were increased as compared to STR profiles from control DNA. In the study by Maruyama et al. (2020), cells were dried on the applicator tips prior to micromanipulation, which may have affected DNA extraction or the integrity of the cell nuclei. As a further variable, the volume of buffer cotransferred by the micromanipulation capillary may have accounted for the differences in STR typing success between the two studies, as this may lead to dilution or a pH change of the extraction buffer.

5.2.3. WGA in the Analysis of Single Sperm Cells

A recently emerging, potential forensic application of WGA is the STR analysis of micromanipulated single sperm cells. In rape cases, the DNA from vaginal swabs will in most cases derive from both sperm cells of the perpetrator and vaginal cells of the victim, and differential extraction protocols aiming at separating the sperm fraction from the victim DNA fraction often remain unsatisfactory. The analysis of single sperm cells micromanipulated from vaginal swabs can thus be considered a special application of physical mixture deconvolution that might even help in the analysis of traces with low sperm counts and in the clarification of multiple-perpetrator rape cases. Studies addressing single-sperm-cell analysis using conventional forensic STR analysis, however, reported that at least 20 sperm cells are required to establish complete STR profiles. Sperm cells are haploid, and based on statistical considerations, a minimum of nine single sperm cells is required to compile a diploid profile. However, even haploid STR profiles of single sperm cells may already be attributable to individual donors and thus be helpful in the clarification of crime cases. One of the disadvantages of WGA, the occurrence of allelic imbalances, is less troublesome when analyzing single sperm cells, since these are haploid and show only one allele peak per locus on an electropherogram. The successful STR analysis of single micromanipulated sperm cells after application of the Repli-g single-cell MDA kit has been demonstrated in a recent study in which individual sperm cells were isolated using an adhesive-coated tungsten needle tip. Consensus profiles were obtained by analyzing two different dilutions of the STR multiplex PCR products following WGA; by this means the majority of single sperm cells yielded more than 80% of the alleles of their haploid profiles, and several single sperm cells yielded full haploid profiles. Furthermore, gonosomal STR profiles of the single sperm cells were successfully analyzed as well and helped to compile the diploid autosomal STR donor profile from single sperm cells. The study also successfully analyzed single sperm cells from mock vaginal swabs with one or two male contributors, and thus the application of Repli-g MDA was suggested for sexual assault cases (or archival material) with low sperm counts or for multiple-perpetrator rape cases. An advantage over other approaches to single-sperm-cell STR analysis, such as low-volume on-chip PCR, would be that, apart from the micromanipulation, all steps can be carried out with the standard equipment of forensic laboratories. The WGA enrichment of template DNA would furthermore allow for replicate analysis or the subsequent analysis of additional markers, if required.
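The statistical consideration behind the nine-sperm figure can be made explicit with a short calculation. Under the simplifying assumptions of error-free typing and independent 50:50 allele segregation, the probability that both parental alleles of a heterozygous locus are seen among n haploid sperm cells is 1 - (1/2)^(n-1); raising this to the number of heterozygous loci (21 is used below purely as an illustrative profile size) gives the chance of a complete diploid profile:

```python
def p_complete_diploid_profile(n_sperm: int, het_loci: int = 21) -> float:
    """P(both parental alleles observed at every heterozygous locus) from
    n haploid sperm cells, assuming error-free typing and independent
    50:50 segregation at each locus (no drop-out, no linkage)."""
    p_locus = 1.0 - 0.5 ** (n_sperm - 1)  # both alleles seen at one locus
    return p_locus ** het_loci

if __name__ == "__main__":
    for n in range(5, 12):
        print(f"{n:>2} sperm cells: P(complete diploid profile) = "
              f"{p_complete_diploid_profile(n):.3f}")
```

With these assumptions the probability first exceeds 90% at n = 9, consistent with the minimum cited above; real-world drop-out would push the required number higher.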
Despite the shortcomings of classical WGA methods in forensic STR analysis, the latest single-cell WGA methods have opened up a new perspective for WGA in mixture deconvolution based on the analysis of single cells or small bioparticles. Methodologically, the successful application of WGA in the analysis of single sperm cells has already been demonstrated and now awaits further validation using forensic real-world samples. Applications of single-cell WGA to other bioparticles or to single cells (or two-cell subsamples) micromanipulated from forensic traces still need to be tested. Furthermore, stimulated by biomedical interests (such as liquid biopsy and preimplantation genetic testing), several commercial single-cell WGA kits have entered the market, and novel low-bias single-cell methods have been developed that may turn out to be useful in a forensic context. Finally, forensic DNA analysis is in the process of implementing high-throughput sequencing methods, allowing for expansion of the sets of markers analyzed and for the analysis of shorter stretches of DNA throughout. In that respect, it should be noted that WGA is not in itself a DNA typing analysis; the actual genotyping of forensic markers is carried out thereafter. Thus, WGA will leave the legal admissibility and the biostatistical properties of a chosen marker set untouched. By ideally amplifying the entirety of the genomic DNA uniformly, WGA methods are open to any particular DNA marker type; however, different WGA methods may be better suited to particular types of markers. It will be interesting to see to what extent WGA methods are compatible with, or even improve, upcoming methods of forensic DNA analysis.
The Roll-out of Child-friendly Fixed-dose Combination TB Formulations in High-TB-Burden Countries: A Case Study of STEP-TB

INTRODUCTION

1.1. Background

Tuberculosis (TB) is an infectious disease caused by the bacterium Mycobacterium tuberculosis. Globally, TB is responsible for a greater number of deaths than any other single infectious disease, and an estimated 80% of the global burden of TB morbidity is attributable to 22 high-burden countries (HBCs), eight of which are in Africa (Nigeria, Ethiopia, South Africa, Kenya, DR Congo, Tanzania, Uganda, and Mozambique). Approximately 10% of TB patients are children, and the World Health Organization (WHO) estimates that more than 1 million children under 15 years of age fall ill with active TB disease each year. Unfortunately, addressing childhood TB has not been a priority in the past in comparison to TB in adults, and TB control in children is further hindered by the fact that accurate diagnosis of TB in children remains a challenge. Children are, however, at increased risk of progression to active TB disease, making accurate diagnosis and prompt treatment initiation crucial in this group. Moreover, TB treatment in children is considered critical to the attainment of the Sustainable Development Goal of ending preventable deaths in children by 2030. In response to evidence that children require specific dosing of TB regimens to optimize treatment effect, the WHO revised its dosing guidelines for the treatment of childhood TB in 2010 and called for "appropriately dosed, quality medicines in a child-friendly format"; regrettably, however, no interest was demonstrated by pharmaceutical companies. Consequently, caregivers and healthcare providers have been obliged to rely on their best judgment in estimating doses, which involves splitting or crushing the bitter-tasting adult pills. This represents a tremendous challenge for both children and caregivers, leading to inaccurate dosing, poor compliance with the regimens, and ultimately poor treatment outcomes. In 2012, Unitaid pledged US$16.7 million to the Global Alliance for TB Drug Development (the TB Alliance), a not-for-profit drug development and delivery organization, to develop pediatric Fixed-Dose Combinations (FDCs) of existing TB drugs. This commitment by Unitaid led to the Speeding Treatments to End Pediatric-TB (STEP-TB) project, which engaged with the pharmaceutical sector to develop the FDCs. The first nationwide product launch and rollout of the new FDCs took place in Kenya in October 2016. As of 2017, more than 1300 Kenyan children had initiated treatment with the new FDCs since their rollout, which at the time represented 21% of pediatric cases of TB in the country. Having recently reached 1 million treatment orders globally, the child-friendly FDCs are now (as of June 2019) available in 93 countries, which together represent three quarters of the global burden of pediatric TB.

1.2. Goals of STEP-TB

The goal of STEP-TB was to generate "improved access to correctly dosed, properly formulated, affordable, high-quality TB medicines for children". A large part of the project was, therefore, the establishment of a sustainable market for these new medications. More specifically, the TB Alliance identified the following target project outcomes for STEP-TB:

• Developing appropriately formulated first-line pediatric TB medicines.
• Making affordable, optimized first-line pediatric TB medicines available globally.
• Reducing or eliminating market barriers to the introduction of pediatric FDCs.
• Engendering increased commitment among countries to adopt the new FDCs.
• Delineating a pathway for the introduction of the FDCs.

1.3. Description of Intervention

Introducing new and improved pediatric TB medicines was the cornerstone of the STEP-TB project. To do this, the TB Alliance partnered with three pharmaceutical companies already manufacturing TB medicines. In December 2015, MacLeods, an India-based pharmaceutical company manufacturing both new products and generics, developed two pediatric FDC pills (using the existing TB drugs Rifampicin, Isoniazid, and Pyrazinamide, formulated for pediatric dosing), and was the only company able to get its medicines to market within the allotted time frame. The new FDCs were ultimately available in the correct pediatric doses and palatable flavors, and were water-soluble for young children unable to swallow pills. In January 2016, the FDCs became globally available through the Stop-TB Partnership's Global Drug Facility for US$15.54 for a 6-month course of treatment. The goals of STEP-TB included not only the production of child-friendly FDCs but also the delineation of a pathway for their effective introduction. Therefore, to raise awareness of the new child-friendly FDCs in advance of their first national rollouts, the TB Alliance launched the "Louder than TB" campaign on World TB Day 2016 to position TB as a critical item on the child health and survival agenda, mobilize demand, and ensure there was a sufficient customer base for the medicines. In October 2016, Kenya became the first country to roll out the new FDCs on a national scale. The TB Alliance and Kenya's National Tuberculosis, Leprosy, and Lung Disease Program worked closely to organize a public launch followed by a sustained outreach campaign in support of the rollout. In India, the STEP-TB project engaged with the national TB program and private providers, with the goal of supporting policy changes to ensure access to treatment. These HBC-focused approaches were designed to maximize the global outreach of the program; they provided information regarding the use of the FDCs, contributed to policy recommendations, and worked to ensure a quick and sustainable uptake of the new medicines.
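For illustration, the simplification the FDCs bring to dosing can be expressed as a single weight-band lookup. The bands below follow the commonly cited WHO interim guidance for the dispersible pediatric FDCs (RHZ 75/50/150 mg for the intensive phase, RH 75/50 mg for the continuation phase); they are reproduced here for illustration only and must be verified against current WHO and national guidelines before any clinical use:

```python
# Illustrative weight-band dosing lookup for the dispersible pediatric
# FDCs. Bands (in whole kilograms) follow commonly cited WHO interim
# guidance; verify against current WHO/national guidelines before use.
WEIGHT_BANDS = [  # (min_kg, max_kg, tablets_per_day)
    (4, 7, 1),
    (8, 11, 2),
    (12, 15, 3),
    (16, 24, 4),
]

def daily_tablets(weight_kg: float) -> int:
    """Return the number of FDC tablets per day for a child's weight;
    children weighing 25 kg or more move to adult formulations."""
    for lo, hi, tablets in WEIGHT_BANDS:
        if lo <= weight_kg <= hi:
            return tablets
    raise ValueError("weight outside the pediatric FDC bands (4-24 kg)")

if __name__ == "__main__":
    for w in (5, 10, 14, 20):
        print(f"{w} kg -> {daily_tablets(w)} tablet(s) per day")
```

Contrast this single lookup with the previous practice of splitting or crushing several differently dosed adult tablets per child per day, and the adherence argument for the FDCs becomes tangible.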
MATERIALS AND METHODS

2.1. Impact Evaluation

The key achievement of the STEP-TB project was incentivizing the market for, and ultimately making available, two new first-line child-friendly FDCs (RH 75/50 mg and RHZ 75/50/150 mg). To assess the impact of the program, we utilize the goals set by STEP-TB itself as a framework for an initial impact evaluation, and we then employ a pediatric TB-specific model to project the impact of near-universal utilization of the new pediatric FDCs on lives saved in Kenya, the first country to have rolled out the regimens.

2.1.1. Framework for impact evaluation

The self-articulated goals of the STEP-TB project were (1) to develop child-friendly, appropriately formulated medicines for the treatment of drug-susceptible TB and (2) to make these formulations available and affordable to children globally. In terms of impact evaluation, the first of these goals raises the question of the quality of the new FDCs, whereas the second relates to availability and affordability. Our initial impact evaluation of STEP-TB is therefore based on these three elements, aiming to determine the quality, availability, and affordability of the STEP-TB FDCs. Other outcomes relevant to the impact evaluation of the program, including more downstream measures of success such as the potential impact of improved adherence and improved treatment outcomes with the new FDCs, could not be assessed in the current paper owing to the lack of patient-level data on these outcomes.

2.1.2. Projection of lives saved under conditions of near-universal availability and utilization of pediatric FDCs in Kenya

The Model for Assessment of Pediatric Interventions for Tuberculosis (MAP-IT) was developed as part of the STEP-TB project. The model allows estimation of lives saved through the modification of different screening, diagnostic, or treatment parameters relating to pediatric TB, compared with baseline values estimated from country-specific data. We therefore used this model to estimate lives saved over the next 5 years if the availability and correct use of the new FDCs were scaled up to near-universal levels (defined by the model developers as 98%) in Kenya, the first country to have rolled out the pediatric FDCs. It is hypothesized that scaling up the availability and correct use of the child-friendly regimens may improve treatment outcomes owing to fewer dosing errors and higher adherence, which are otherwise significant barriers to the successful treatment of pediatric TB. The time frame of the model covers the 5-year period from 2019 to 2024. As we aimed to estimate the impact of scaling up the availability and use of the FDCs to near-universal levels (in public and private sector settings), the only parameters modified from their baseline values in the projection were (1) presumptive treatment for drug-susceptible TB and (2) clinical treatment for confirmed drug-susceptible TB. Further parameters in the model reflect standard screening, immunization, and diagnostic practices in Kenya, which were kept the same in both the baseline and comparator scenarios. Kenya-specific estimates of the likelihood of progression to active disease and of mortality are used to estimate lives saved.
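MAP-IT itself is a published modeling tool, and its internal structure is not reproduced here. The following minimal sketch only illustrates the general shape of the two-scenario comparison described above; every parameter name and value is a placeholder rather than a Kenya estimate or MAP-IT's actual parameterization:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Toy one-year treatment cascade for pediatric TB.
    Placeholder structure only; not the actual MAP-IT model."""
    cases: int            # incident pediatric TB cases per year
    coverage: float       # share of cases reached with the FDCs
    correct_use: float    # share of covered cases dosed correctly
    cfr_treated: float    # case-fatality ratio with correct treatment
    cfr_other: float      # case-fatality ratio otherwise

    def annual_deaths(self) -> float:
        treated = self.cases * self.coverage * self.correct_use
        return (treated * self.cfr_treated
                + (self.cases - treated) * self.cfr_other)

def lives_saved(baseline: Scenario, scale_up: Scenario, years: int = 5) -> float:
    """Difference in cumulative deaths between the two scenarios,
    assuming (for simplicity) constant annual case numbers."""
    return (baseline.annual_deaths() - scale_up.annual_deaths()) * years

if __name__ == "__main__":
    # Illustrative placeholder values only.
    base = Scenario(cases=8000, coverage=0.60, correct_use=0.70,
                    cfr_treated=0.02, cfr_other=0.10)
    scaled = Scenario(cases=8000, coverage=0.98, correct_use=0.98,
                      cfr_treated=0.02, cfr_other=0.10)
    print(f"Projected lives saved over 5 years: {lives_saved(base, scaled):,.0f}")
```

The actual model additionally tracks the screening and diagnostic cascade and, in its "moderate" estimation mode, uses confidence-interval midpoints; the comparison logic, baseline deaths minus scale-up deaths, is the same in spirit.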
CASE STUDY RESULTS

3.1. Impact Evaluation

3.1.1. Quality

Owing to the lack of individual-level data regarding the efficacy of the regimens in terms of improved pediatric TB treatment outcomes, WHO prequalification of the FDCs serves as the quality indicator for the purpose of this impact evaluation. Having first endorsed the FDCs in March 2017, the WHO and UNICEF released a joint statement urging "all national TB programmes to discontinue and replace the previously used medicines for children weighing <25 kg with the child-friendly dispersible TB FDCs as soon as possible". Moreover, the new child-friendly FDCs are now included in the most recent iteration of the WHO List of Essential Medicines for Children, whereas previous FDCs are not. Finally, WHO prequalification of the FDCs was obtained in September 2017, attesting to the quality of the regimens, although concrete measurements of quality (i.e., analyses of treatment success rates or adverse events) are not possible at this time given the absence of data on patient outcomes with the new FDCs.

3.1.2. Availability

Given that the key goals and strategy of the STEP-TB project centered on market incentivization and making the developed FDCs widely available, availability in the 22 target countries, as well as global availability, is an important impact evaluation consideration. At the time of the final STEP-TB evaluation report (May 2017), 20 of the 22 HBCs had strategies for the introduction of the new formulations and for the phaseout of the old formulations, with only Russia and China not planning to introduce the FDCs. (Note that for the period 2016–2020, 30 countries are classified as high-TB-burden countries by the WHO; the STEP-TB project, however, targeted the previously recognized 22 HBCs.) The order and availability status of the FDCs in the 22 target countries and the global uptake of the FDCs are shown in the accompanying tables.

3.1.3. Affordability

The new pediatric FDCs are priced comparably to the previous FDCs, with a full course of treatment costing US$15.54 (price range of previous FDCs: US$13.55–22.00), and further price reductions are expected in the future.

3.1.4. Lives saved in Kenya: an impact projection of pediatric FDCs using the MAP-IT model: model parameters and assumptions

As described in the Methods, the only parameters modified from their baseline values in the projection were (1) presumptive treatment for drug-susceptible TB and (2) clinical treatment for confirmed drug-susceptible TB (as the FDCs are currently only available for the treatment of drug-susceptible TB). Further parameters in the model represented standard screening, immunization, and diagnostic practices in Kenya, which were kept the same in both the baseline and comparator scenarios. These baseline parameters and assumptions, along with the modifications made to the availability (penetration) and utilization values for the FDCs in the comparator scenario versus the baseline, are outlined in the accompanying tables. We use the "moderate" estimation mode recommended by the model developers, which uses the midpoint of the confidence intervals for estimates, thereby taking into account the variability in the available data concerning intervention effects, TB incidence, and mortality rates.

3.1.4.1. Impact projection results

The projected number of pediatric lives saved in Kenya over the 5-year period (2019–2024) under conditions of near-universal availability and utilization of the new FDCs is shown in the accompanying table.
Our projection suggests that if the availability and utilization of the new child-friendly FDCs in Kenya were scaled up from their current levels to near-universal levels (98%) in the private and public sectors, 2660 lives could be saved between 2019 and 2024.

3.2. Financing

The STEP-TB project was launched through an investment of US$16.7 million by Unitaid. Additional support from donors (such as USAID and the Global Fund), commercial partners, policy makers, and NTPs led to better implementation of newer policies, allowed for product-transition planning and better implementation of new registration strategies, and enabled rapid uptake and affordable price negotiations. Manufacturers were also offered financial compensation (amounting to US$1.5 million) as an incentive to meet target deadlines; however, market demand was considered the main factor in engaging manufacturers. In addition, manufacturers were given subsidies to offset manufacturing costs, thereby keeping prices low. Unfortunately, however, as mentioned earlier, only one manufacturer, MacLeods, ultimately entered the market. This is identified by STEP-TB as a major shortcoming, as the involvement of multiple manufacturers was a key aim of the project, and the failure to attain it represents a significant threat to keeping drug prices low in the long term through competition between multiple manufacturers. Apart from the initial financial considerations involved in bringing a product onto the market, the financial challenges of scale-up must also be considered. To achieve scale-up, manufacturers first have an interest in ensuring that there will be a predictable and potentially growing market for the product. One element of STEP-TB's market research strategy was therefore to provide an estimate of the pediatric TB burden, which had hitherto been imprecise. This allowed for more reliable estimation of the market size, which manufacturers were able to use to forecast sales and their Return on Investment (ROI). A second element of successful scale-up is the retention of high-volume countries (those with high demand for the product) in the market, allowing companies to attain ROI. The STEP-TB project facilitated this retention among HBCs through its work with NTPs, providing information on the appropriate use of the FDCs and engaging countries through the awareness campaign launched as part of the project.

3.3. Drivers of Success

Several factors account for the considerable success of the STEP-TB project. First, updated surveillance and modelling studies conducted as part of STEP-TB accurately assessed the magnitude of the childhood TB burden, providing the groundwork to make a case for the introduction of pediatric FDCs. With a more reliable estimate of the burden of pediatric TB, the TB Alliance generated a broad and valuable partner landscape involving academics, governments, non-governmental organizations, policy makers, and, most importantly, the pharmaceutical and manufacturing industries, thus incentivizing the development of pediatric TB formulations and uniting the previously fragmented and stagnant pediatric TB treatment landscape. Also contributing to the success of STEP-TB was the fact that the new formulations were made child-friendly to help ensure that children take the drugs for the entire treatment course. This included the dispersibility of the drugs, providing ease of administration for both medical personnel and children.
Likewise, the improved taste and palatability have the potential to improve treatment adherence among children, which has been a persistent challenge in pediatric TB treatment. Also, in contrast to the previous haphazard dosage estimates, the optimized dosing of the new pediatric FDCs has the potential to markedly improve treatment outcomes. Another significant driver of success was, and continues to be, the affordability of the child-friendly formulations, with a 6-month course costing approximately US$15.54, a price within the range of affordability. Lastly, several strategic elements of the STEP-TB program helped facilitate the national rollout of the FDCs, such as the launch of the "Louder than TB" campaign in Kenya to raise awareness of pediatric TB prior to the rollout of the FDCs, and the planned phaseout of the existing old formulations while waiting for the new FDCs to become available.
DISCUSSION

4.1. Limitations of Program Impact

Despite their inclusion on the WHO List of Essential Medicines for Children, their prequalification by the WHO, and their availability through the Global Drug Facility, regulatory barriers continue to hinder the adoption of the new FDCs, particularly in low-burden countries. In the European Union, for example, child-friendly FDCs are not registered with the necessary regulatory agency (the European Medicines Agency) due to low market incentives for the formulations in this region. The continued lack of incentivization for the introduction of pediatric formulations in low-burden countries, and the barriers this represents for high-risk groups in these countries, are also exemplified by the lack of access to pediatric FDCs in Canadian indigenous communities. Although low-burden countries were not included in STEP-TB's initial 22 target countries, STEP-TB did articulate an overall goal of incentivizing global access to pediatric FDCs, so barriers to access in Europe and other low-burden countries remain a relevant limitation of its impact. STEP-TB's strategy of market incentivization to generate access to child-friendly FDCs has thus had significant impact in HBCs, but it is less successfully applicable to high-risk populations in low-burden countries and therefore represents a failure of the program with regard to paving the way for global availability of child-friendly TB treatment.

4.2. Limitations of Impact Evaluation

Apart from the limitations of actual project impact, the limitations of the impact assessment must also be recognized. There continues to be uncertainty surrounding the accuracy of estimates of the burden of childhood TB due to the difficulty of diagnosis in children, and a historical lack of prioritization of pediatric TB limits the comprehensive assessment of the impact of pediatric TB interventions. The uncertainty in these estimates is also a limitation of the MAP-IT model, and the results of the impact projection should therefore be interpreted keeping in mind the variability of the estimates on which the projection is based. An additional limitation of the model is that, given the lack of individual-level data on the efficacy of the pediatric FDCs, this parameter is not yet accurately reflected in current projections of lives saved, and no causal claims can be made regarding the implementation of the FDCs and improved treatment outcomes. Further barriers to accurately assessing the project's public health impact include the fact that, although information on order volumes is available, high order volumes do not guarantee high coverage or appropriate use; consequently, availability at the NTP level does not necessarily reflect actual access at the patient level. Moreover, given the lack of data on improved adherence to, or improved treatment outcomes with, the new FDCs (vs. custom titration), the ultimate public health impact of the rollout of these new pediatric formulations in terms of improved treatment outcomes and reduced TB-associated mortality could not be directly assessed in this impact evaluation.

4.3. Future Directions and Challenges

The future considerations of the STEP-TB project include continuing to ensure access to the FDCs, reshaping the market for pediatric TB drugs, and reducing barriers to market entry. To achieve these goals, sustainability is key. First, a sustainable market is necessary for ensuring continued affordable access to the products.
Sustainable partnerships are also necessary with multiple stakeholders, including the academic community, clinicians, manufacturers, donor agencies, non-profit organizations, governments, policy makers, and regulatory authorities. Sustainable integration and collaboration with low- and middle-income, high-TB-burden countries is likewise required, as is addressing the neglected position of select high-risk groups in otherwise low-burden countries within the market landscape of pediatric TB formulations. Persistent challenges for STEP-TB to address therefore include maintaining sustained interest from partners, ensuring data transparency, addressing the lack of patient-level data on actual treatment outcomes with the new FDCs (which limits impact assessment), and developing child-friendly formulations to treat drug-resistant forms of TB.
CONCLUSION

This case study provides a descriptive overview of the key strategies of STEP-TB and an assessment of its impact, including a projection of lives saved as a result of scale-up of the FDCs to near-universal availability and utilization in Kenya. Although our projection indicates that near-universal availability and utilization of the new FDCs could reduce pediatric TB-associated mortality in Kenya by 2660 deaths over the next 5 years, the results of this case study are substantially limited by the lack of individual patient-level data on the efficacy of the new pediatric FDCs, which prevents a detailed quantitative analysis of the public health impact of the STEP-TB program. The program successfully incentivized the introduction of pediatric FDCs to HBC markets; however, ongoing challenges include maintaining affordable prices for the FDCs, particularly given the potential for monopoly arising from current production by a sole manufacturer. In addition, there remains a need for mechanisms to incentivize the introduction of the FDCs for high-risk groups in low-burden countries, a need not yet sufficiently addressed by STEP-TB's current market incentivization strategy. Lastly, the development of child-friendly formulations for drug-resistant TB is another remaining challenge.
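To illustrate how the uncertainty caveats raised in the discussion propagate into headline projections such as the 2660 figure, the following toy Monte Carlo sketch draws parameters from plausible ranges. Every range here is a hypothetical placeholder; this does not reproduce the MAP-IT model or its inputs.

```python
# Toy Monte Carlo illustration of uncertainty propagation in a
# lives-saved projection. All parameter ranges are hypothetical
# placeholders; this does NOT reproduce the MAP-IT model.
import random

random.seed(42)

def lives_saved_over_5_years() -> float:
    annual_child_tb_deaths = random.uniform(3000, 9000)  # hypothetical burden range
    extra_coverage = random.uniform(0.2, 0.8)            # hypothetical FDC coverage gain
    mortality_reduction = random.uniform(0.05, 0.30)     # hypothetical per-case effect
    return annual_child_tb_deaths * extra_coverage * mortality_reduction * 5

draws = sorted(lives_saved_over_5_years() for _ in range(10_000))
median = draws[len(draws) // 2]
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"5-year lives saved: median {median:,.0f} "
      f"(95% interval {lo:,.0f} to {hi:,.0f})")
```

The width of the resulting interval, driven entirely by the input ranges, is the point: a single headline number conceals how sensitive such projections are to the underlying burden and efficacy estimates.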
Tumor-Targeting Peptides Search Strategy for the Delivery of Therapeutic and Diagnostic Molecules to Tumor Cells

1. Introduction

Glioblastoma (GBM) is the most common and aggressive form of brain tumor and carries the least favorable prognosis: the average survival of patients with this diagnosis is 15 months. In modern medical practice, standard methods such as surgery, radiation therapy and chemotherapy are used to treat glioblastoma, and in most cases these methods are ineffective. The low efficiency of glioblastoma treatment is often associated with two characteristic features of this tumor: the invasion of tumor cells into the brain parenchyma, which leads to the emergence of secondary tumor foci, and the high heterogeneity of the tumor. A small population of cells with the highly aggressive phenotype characteristic of cancer stem cells (CSCs) makes a particular contribution to the resistance of GBM cells to therapy.

Targeted therapy, based on drugs that specifically affect particular tumor types, can address the low efficiency of current cancer therapies, making it possible to increase the effectiveness of treatment and minimize toxic effects on healthy tissues. The unique properties of cancer cells make it possible to find specific ligands that interact directly with the tumor and enable this targeted approach. Short peptides are currently considered promising agents for the delivery of therapeutic and diagnostic molecules to cancer cells: they have high affinity and specificity for their targets and penetrate cancer cells more efficiently than larger ligands such as antibodies. One promising way to search for tumor-targeting peptides is the screening of phage peptide libraries on tumor cell cultures in vitro and in xenograft models in vivo. This approach can also address the problem of tumor heterogeneity, since screening can reveal tumor-targeting peptides that specifically interact with different populations of tumor cells, including CSCs. Targeting CSCs is especially relevant, since characteristics of these cells such as self-renewal, differentiation into various cell types, invasion of the brain parenchyma and metastasis determine their resistance to chemotherapy and radiotherapy.

Earlier, by screening the phage peptide libraries Ph.D.-7 and Ph.D.-12 (New England Biolabs, Ipswich, MA, USA), we selected bacteriophages displaying tumor-targeting peptides that provide specific binding of phage particles to human glioblastoma U-87 MG cells in vitro and to U-87 MG tumors in a xenograft model in vivo. In this work, the Ph.D.-C7C phage peptide library was screened to obtain tumor-targeting peptides against U-87 MG tumor cells with a cancer stem cell phenotype (CD44+/CD133+), and a comparative analysis was carried out of the biodistribution in mice and the specificity of interaction with U-87 MG tumors of bacteriophages displaying tumor-targeting peptides selected by biopanning of different peptide libraries in different selection systems.

2. Results

2.1. Biopanning of Linear Phage Libraries Ph.D.-12 and Ph.D.-7 on U-87 MG Cells and Tumors

Earlier, in our laboratory, we screened the phage peptide library Ph.D.-7 in vivo on U-87 MG glioblastoma xenografts in immunodeficient mice.
In the course of that work, 102 bacteriophages were selected, and the sequences of 27 displayed peptides selected after the third round were identified and analyzed. Among the selected peptides, the sequence HPSSGSA (clone 92) occurred most frequently, in 25.9% of cases (7 of 27). Additionally, screening of the Ph.D.-12 phage peptide library was performed earlier in vitro on U-87 MG human glioblastoma cells. In that work, 80 bacteriophages were selected; the sequences of 39 displayed peptides selected after the third round and 37 peptides selected after the fifth round were identified and analyzed. After the fifth round, the sequence SWTFGVQFALQH (clone 26) was found in 24.3% of cases (9 of 37).

2.2. Biopanning of the Circular Phage Peptide Library Ph.D.-C7C In Vivo and In Vitro

We carried out in vitro biopanning on cells of the immortalized human glioblastoma cell line U-87 MG using the Ph.D.-C7C library with the same protocol as for the linear libraries. Three rounds of selection were carried out, and the sequences of the displayed peptides providing specific interaction of phage particles with U-87 MG cells were determined by sequencing. After the third round of biopanning, bacteriophages displaying the peptides PVPGSFQ (18C), PTQLHGT (23C), MHTQTPW (19C), TTKSSHS (2C) and ISYLYGR (36C) were selected. The frequencies of occurrence of the peptides PVPGSFQ (18C) and PTQLHGT (23C) were 35% and 15%, respectively. The peptides MHTQTPW (19C), TTKSSHS (2C) and ISYLYGR (36C) each accounted for 10% of the selected pool of bacteriophages.

2.3. Obtaining a Population of CD44+/CD133+ U-87 MG Cells for Selection of Bacteriophages Displaying Peptides Specific to CSCs

To obtain tumor-targeting peptides specific to U-87 MG cancer stem cells (CD44+/CD133+ cells), we screened the cyclic phage peptide library Ph.D.-C7C in vivo. The first two rounds of selection were performed on U-87 MG tumors transplanted subcutaneously into SCID mice. The third round of biopanning was performed on U-87 MG tumors implanted orthotopically into SCID mice. In this round, tumor-bearing mice were injected intravenously with the phage peptide library enriched by the first two rounds; after 24 h of circulation of the library in the body, the animals were euthanized and the tumors were removed. Tumor tissue was homogenized to single cells, and the tumor cells were stained for the markers CD44 and CD133 and sorted by fluorescence-activated cell sorting (FACS). According to the sorting results, 8.9% of cells were positive for CD44 (CD44+), 5.53% were positive for both markers (CD44+/CD133+), and 0.65% were positive for CD133 only (CD44−/CD133+). The cells positive for both markers were then lysed, the phages in the lysate were amplified in Escherichia coli, and the sequences of the inserts were determined by Sanger sequencing. According to the sequencing results, only one clone, displaying the peptide MHTQTPW (No. 19C), bound to cancer cells positive for both markers tested. It should be noted that the MHTQTPW peptide had previously been selected in the biopanning on U-87 MG cells in vitro (data not shown).
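The frequencies reported above are simple proportions of sequenced clones (for example, 7 of 27 clones gives 25.9%). A minimal Python sketch of this tally is shown below; the clone list is hypothetical and is chosen only so that HPSSGSA reproduces the reported 7/27.

```python
# Minimal sketch of tallying peptide frequencies among sequenced clones.
# The clone list is hypothetical; HPSSGSA appears 7 times out of 27
# to reproduce the reported 25.9%.
from collections import Counter

sequenced_peptides = ["HPSSGSA"] * 7 + [f"PEPTIDE{i:02d}" for i in range(20)]

counts = Counter(sequenced_peptides)
total = len(sequenced_peptides)
for peptide, n in counts.most_common(3):
    print(f"{peptide}: {n}/{total} = {n / total:.1%}")
```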
2.4. Analysis of the Binding Specificity of Bacteriophages Displaying Selected Peptides to Human Glioblastoma U-87 MG Cells

We carried out a comparative analysis of the efficiency of binding of bacteriophages displaying tumor-targeting peptides to human glioblastoma U-87 MG cells by fluorescence microscopy. We have previously shown that the peptide displayed by bacteriophage No. 26 ensures the binding and internalization of the phage particle into AS2 astrocytoma cells, but not into human MG1 glioblastoma cells. Fluorescence microscopy was performed on cells incubated with bacteriophages 19C, 36C, 92 and 26, selected from different phage libraries in different screening systems. An M13 phage displaying the peptide YTYDPWLIFPAN, previously selected on MDA-MB-231 cells, was taken as a negative control. No significant differences were found in the efficiency of cell binding among the bacteriophages displaying the studied peptides. Thus, the obtained tumor-targeting peptides are able to provide efficient, specific binding of phage particles to U-87 MG glioblastoma cells.

2.5. Analysis of the Biodistribution and Specificity of Accumulation of Bacteriophages Displaying Selected Tumor-Targeting Peptides in U-87 MG Tumor Tissue

A comparative analysis of the distribution in the body of experimental animals, and of the specificity of accumulation in U-87 MG xenograft tumors, of bacteriophages displaying tumor-targeting peptides was carried out by titration of homogenates of the tumor and of control organs (kidney, liver, lungs and brain) after 4.5 h of circulation of the phage particles in the animal. For the comparative analysis, bacteriophages No. 26 (Ph.D.-12), No. 19C and No. 36C (Ph.D.-C7C), and No. 92 (Ph.D.-7) were selected. A random bacteriophage displaying the peptide YTYDPWLIFPAN was used as a negative control. The titration data showed that bacteriophage No. 92, obtained by screening the phage peptide library Ph.D.-7 in vivo, accumulated to the greatest extent in the tumor tissue as compared to the control organs: the bacteriophage titer in the tumor exceeded that in the kidneys by more than 5.5-fold and that in the brain, liver and lungs by more than 11-fold. Two-way analysis of variance (ANOVA) showed a statistically significant difference (p ≤ 0.0001) in the accumulation of this bacteriophage in the tumor as compared to the control phage and phages No. 26, No. 19C and No. 36C. Bacteriophage No. 26 also specifically accumulated in the tumor tissue, but to a lesser extent than bacteriophage No. 92; its accumulation differed statistically significantly only from that of the control phage (p ≤ 0.001). Bacteriophages selected from the cyclic library Ph.D.-C7C (No. 19C and No. 36C) showed the least accumulation in tumor tissue and other organs.
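Tumor specificity in such biodistribution experiments reduces to ratios of titers recovered from the tumor and from control organs. A minimal sketch of this fold-enrichment calculation follows; the PFU values are hypothetical, chosen only to be consistent with the ratios reported above for phage No. 92.

```python
# Fold-enrichment calculation behind the biodistribution comparison.
# Titers (PFU per organ homogenate) are hypothetical values chosen
# to be consistent with the reported ratios for phage No. 92.
titers_pfu = {
    "tumor": 1.1e5,
    "kidney": 1.9e4,   # tumor/kidney ~ 5.8 (reported: > 5.5)
    "liver": 9.0e3,    # tumor/liver ~ 12  (reported: > 11)
    "lungs": 8.5e3,
    "brain": 7.0e3,
}

tumor_titer = titers_pfu["tumor"]
for organ, titer in titers_pfu.items():
    if organ != "tumor":
        print(f"tumor/{organ} enrichment: {tumor_titer / titer:.1f}x")
```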
3. Discussion

The goal of this study was to develop a strategy for searching for tumor-targeting peptides for the delivery of therapeutic and diagnostic molecules to glioblastoma, a tumor characterized by considerable heterogeneity. Tumor heterogeneity is due in part to a small population of cells with the highly aggressive phenotype characteristic of CSCs. To identify CSCs, the levels of CD24, CD29, CD44, CD133 and ALDH1 are most often examined, and CD44 and CD133 are considered among the most specific CSC markers. CD44, a transmembrane glycoprotein, is considered one of the most important markers of CSCs. As a result of alternative splicing, post-translational modifications and partial cleavage by matrix metalloproteinases, multiple CD44 isoforms can exist in the cell. CD44 acts as a co-receptor for several cell surface receptors (EGFR, Her2, Met6, TGFβRI, TGFβRII, VEGFR-2), thus participating in various signaling pathways (Rho, PI3K/Akt and Ras-Raf-MAPK), including those stimulating growth and cell motility. Another characteristic marker of CSCs, CD133 or prominin-1, is a transmembrane glycoprotein whose structure comprises five transmembrane domains. CD133 is known to be required to maintain the properties of CSCs, and a low level of this marker in glioblastoma cells negatively affects the cells' capacity for self-renewal and neurosphere formation. The expression level of CD133 on cells is usually low but can vary widely: in endometrial cancer, CD133 was immunohistochemically detected in 1.3–62.6% of cells, and in colorectal cancer, CD133 was expressed in 0.3–82.0% of cells. Although CD133 is considered a CSC marker, studies of it as a marker of glioblastoma CSCs remain controversial. Despite the unclear physiological function of CD133 in the pathogenesis of gliomas, mechanisms in which this receptor is involved have been discovered; it has been shown that expression of this receptor increases under hypoxia, as a result of which cells with a CD133-negative phenotype acquire a CD133+ phenotype. Thus, at present, CSCs are considered the most promising targets in the search for specific therapeutic and diagnostic molecules. Combination therapy, including standard cytotoxic drugs capable of destroying the main tumor mass together with drugs targeting CSCs, can significantly increase the effectiveness of anticancer therapy and improve patient survival.
In this work, in order to develop a strategy for obtaining tumor-targeting peptides to glioblastoma, we conducted a comparative analysis of the binding efficiency of peptides selected by screening the linear and cyclic phage peptide libraries Ph.D.-7, Ph.D.-12 and Ph.D.-C7C in different selection systems (in vitro and in vivo). We also used a cyclic phage library, characterized by the fact that the peptides displayed on the surface protein p3 have a circular structure owing to the formation of a disulfide bridge between the cysteines flanking the insert. Cyclic peptides are believed to be much less susceptible to proteolysis and often exhibit increased biological activity due to their conformational rigidity. The studies showed that all the selected tumor-targeting peptides obtained from the various peptide libraries, both in vitro and in vivo, are able to provide efficient, specific binding of phage particles to non-enriched U-87 MG glioblastoma cells. Indeed, immunocytochemistry showed that almost all cells in the non-enriched U-87 MG population were stained. In the image for phage 19C, which was recovered after lysis of the enriched (CD44+/CD133+) cells and their further amplification, not all cells were stained. One possible explanation is that this peptide (19C) binds to receptors on the stem cell surface that are not present on all cells in the general population, most likely to CD44 only, because according to the cytometry data the CD44+/CD133+ population comprises only 5.53% of cells. In addition, phages No. 26, 92 and 36C were found in the screening on unenriched U-87 MG cells. Another possibility is that, after the CD44+/CD133+ cells were obtained by sorting, the CSCs generated differentiated progeny that lost the markers of stemness.

The highest specificity of binding to the U-87 MG xenograft in vivo, as compared to control organs, was provided by linear tumor-targeting peptides obtained by screening the Ph.D.-7 phage peptide library on the U-87 MG xenograft. Despite the considerable stability under physiological conditions and the conformational rigidity that often underlies the high biological activity of cyclic peptides, the specificity of interaction with the U-87 MG xenograft of bacteriophages displaying cyclic peptides selected on the population of glioblastoma cells expressing CSC markers turned out to be lower than that of bacteriophages displaying linear peptides. Certain linear peptides are believed to have a conformation recognized by target receptors without the need for cyclization. In addition, a linear peptide conformation can provide more efficient penetration into the cell than a cyclic one, since a large free energy is required for penetration into the cell. Furthermore, when studying the distribution and binding of phage particles to a tumor xenograft, it must be taken into account that the number of CD44+/CD133+ cells inside the xenograft is small. Besides U-87 MG cells, the tumor contains endothelial and stromal cells, so CSCs are present in only small quantities in the tumor tissue, which explains the absence of significant differences between the binding of the control phage and bacteriophage No. 19C to the U-87 MG xenograft. Thus, the strategy of searching for peptides on a cell population enriched using specific markers (CD44+/CD133+) encountered obstacles in further experiments.
Thus, according to the totality of the obtained data, the most effective strategy for obtaining tumor-targeting peptides that provide targeted delivery of diagnostic agents and therapeutic drugs to human glioblastoma tumors is to screen linear phage peptide libraries on glioblastoma tumors in vivo.

4. Materials and Methods

4.1. Cell Cultures

The cancer cell line U-87 MG was obtained from the Russian cell culture collection (Russian Branch of the ETCS, St. Petersburg, Russia). U-87 MG cells were cultivated in alpha-MEM (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS) (Sigma, St. Louis, MO, USA), 1 mM L-glutamine, 250 mg/mL amphotericin B and 100 U/mL penicillin/streptomycin. Cells were grown in a humidified 5% CO2 atmosphere at 37 °C and were passaged with TrypLE Express Enzyme (Thermo Fisher Scientific, USA) every 3–4 days.

4.2. Animals

Female SCID hairless outbred (SHO-Prkdc scid Hrhr) mice aged 6–8 weeks were obtained from the SPF vivarium of the ICG SB RAS (Novosibirsk, Russia). Mice were housed in individually ventilated cages (Animal Care Systems, Centennial, CO, USA) in groups of one to four animals per cage with ad libitum food (ssniff Spezialdiäten GmbH, Soest, Germany) and water. Mice were kept in the same room within a specific pathogen-free animal facility with a regular 14/10 h light/dark cycle (lights on at 02:00 h) at a constant room temperature of 22 ± 2 °C and a relative humidity of approximately 45 ± 15%.

4.3. In Vivo and In Vitro Biopanning

Biopanning of the phage peptide library (Ph.D.-C7C, New England Biolabs, Ipswich, MA, USA) on U-87 MG glioblastoma cells in vitro was performed as described previously, with some modifications, as follows. Cells that had reached 100% confluence were washed with 4 mL of PBS, then 400 μL of 10 mM EDTA was added to detach the cells from the surface and incubated for 4 min at 37 °C. Then 1 mL of complete growth medium was added and the cell suspension was transferred into a 15 mL Falcon tube. The cells were centrifuged for 3 min at 1000 rpm, the supernatant was removed, the cells were resuspended in 4 mL of PBS, and the centrifugation was repeated. The cells were resuspended in 4 mL of blocking buffer (5% BSA/PBS), incubated for 10 min at 37 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, and the cells were washed with 4 mL of PBS and pelleted by centrifugation (3 min, 1000 rpm). The supernatant was removed, and the cells were incubated with 3 mL of a phage peptide library depleted by negative selection for 1 h at 4 °C and centrifuged for 3 min at 1000 rpm. The supernatant was removed, and the cell pellet was washed three times with 4 mL of PBS and centrifuged for 3 min at 1000 rpm. The cells were resuspended in 4 mL of growth medium warmed to 37 °C to provide conditions for the internalization of bacteriophages into cells, incubated for 15 min at 37 °C and centrifuged for 3 min at 1000 rpm. The cells were then washed three times with 4 mL of PBS. To remove non-internalized bacteriophages, 400 μL of TrypLE Express was added to the cell pellet and incubated for 2 min at 37 °C; 1 mL of complete growth medium was then added, and the cells were centrifuged for 3 min at 1000 rpm. The supernatant was removed, the cells were washed with 4 mL of PBS, and the centrifugation was repeated. The cells were then lysed with 1 mL of mQ water for 20 min at room temperature. The cell lysate was centrifuged for 5 min at 14,000 rpm, the supernatant was collected, and the phage suspension (1 mL) was amplified.
The amplified population of phage particles was used for the subsequent rounds of selection. For in vivo screening, we used the previously described methods, as follows. SCID mice bearing subcutaneous or orthotopic U-87 MG glioblastoma xenografts were injected into the tail vein with 300 μL of a phage peptide library at a concentration of 2 × 10^11 PFU/mL, diluted in saline. The circulation time of the phage library in the bloodstream was 5 min for mice with subcutaneous U-87 MG xenografts and 24 h for mice with orthotopic U-87 MG xenografts. After the screening time had elapsed, the mouse was sacrificed by cervical dislocation, the chest was opened, and 15 mL of saline was perfused through the heart to remove bacteriophages not bound to the tumor from the bloodstream. The tumor was removed, washed in saline and homogenized in 1 mL of PBS containing 1 mM PMSF. The tumor tissue homogenate was centrifuged for 10 min at 10,000 rpm. The pellet was resuspended in 1 mL of blocking buffer (1% BSA), after which the centrifugation was repeated under the same conditions. To elute the bacteriophages bound to the tumor, the pellet was resuspended in 1 mL of a liquid culture of E. coli ER2738 in mid-log phase (optical density 0.3 at OD600) and incubated for 30 min at 37 °C at 170 rpm. The eluate of phage particles was centrifuged for 5 min at 10,000 rpm. The supernatant was transferred to separate tubes, and the enriched phage library was amplified for subsequent rounds of selection. Manipulations on U-87 MG glioblastoma xenografts and monitoring of tumor growth were carried out by the staff of the SPF vivarium of the ICG SB RAS. After the third round of selection, phage particles were titrated to obtain individual phage colonies, which were used for DNA isolation according to the manufacturer's protocol for the phage display peptide library. The sequencing reaction products were analyzed on an ABI 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) at the Genomics Core Facility of the SB RAS using the sequencing primer -96III (5′-CCC TCA TAG TTA GCG TAA CG-3′).

4.4. Tumor Preparation for Cell Sorting

Mice bearing orthotopic U-87 MG glioblastoma xenografts were injected into the tail vein with a peptide library enriched by in vivo biopanning (2 × 10^11 PFU/mL of phage particles in 500 μL of saline). After 24 h, the mouse was sacrificed by cervical dislocation and the tumor was removed. The tumor was washed twice with PBS containing 10% penicillin-streptomycin (Sigma-Aldrich, St. Louis, MO, USA), after which it was minced with a scalpel in a Petri dish, transferred into a Falcon tube containing 3 mL of trypsin and incubated in a water bath at 37 °C for 10 min to dissociate the cells. To inactivate the trypsin, 3 mL of soybean trypsin inhibitor (Sigma-Aldrich, USA) was added to the cell suspension, after which the cells were centrifuged for 10 min at 800 rpm. The cell pellet was resuspended in NSC medium for neural stem cells (Sigma-Aldrich) until a homogeneous cell suspension was formed. Undissociated pieces of tumor tissue were removed and additionally homogenized. Then, 10 mL of NSC medium was added to the cell suspension, which was filtered through a 40 μm filter and centrifuged for 10 min at 800 rpm. The cells were resuspended in 1 mL of NSC medium and incubated for 2 h at 37 °C to restore the proteomic profile of the cells.
4.5. Cell Sorting

After incubation in NSC medium, the cells were incubated in 500 μL of blocking buffer containing 10% FBS for 10 min. The cells were then washed with 500 μL of PBS and incubated for 45 min on ice with FITC-labeled primary antibodies against CD44 (Abcam, Cambridge, UK) and Alexa Fluor 647-labeled primary antibodies against CD133 (Abcam), both diluted in 1% FBS in PBS, in 200 μL. The cells were washed twice with 500 μL of PBS, resuspended in 500 μL of PBS containing 4 μg/mL gentamicin (Thermo Fisher Scientific, Waltham, MA, USA) and passed through a strainer (BD Biosciences, Franklin Lakes, NJ, USA) into flow cytometry tubes (BD Biosciences). The analysis and sorting of cells were carried out on a SONY SH800S Cell Sorter (Sony Biotechnology, San Jose, CA, USA).

4.6. Immunocytochemistry

U-87 MG cells were grown on BD Falcon culture slides to 80–90% confluence and washed twice with PBS, and 100 μL of the selected phage clone (2 × 10^10 PFU/mL) in PBS-BSA Ca/Mg buffer (0.1% BSA, 1 mM CaCl2, 10 mM MgCl2·6H2O) was added. Cells were incubated with the bacteriophage clone for 2 h at 37 °C, with subsequent treatment according to the previously described technique with slight modifications, as follows. After the incubation at 37 °C, the cells were washed three times with 500 μL of buffer (100 mM glycine, 0.5 M NaCl, pH 2.5) at room temperature, fixed with 200 μL of cold 4% formaldehyde for 10 min and washed twice with PBS. Then, 200 μL of 0.2% Triton X-100 was added for 10 min to permeabilize the cells, after which the cells were washed twice with 500 μL of PBS. Next, the cells were incubated with 200 μL of mouse anti-M13 bacteriophage coat protein g8p antibodies (Abcam) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL of cold 1% BSA/PBS buffer. The cells were then incubated with 200 μL of Alexa Fluor 647-conjugated secondary antibodies (Abcam, UK) diluted in 1% BSA/PBS buffer (1:200) for 45 min at 4 °C and washed four times with 500 μL of cold 1% BSA/PBS buffer. The cells were then stained with DAPI (Thermo Fisher Scientific) and analyzed by fluorescence microscopy on an Axioskop 2 Plus microscope (Zeiss, Oberkochen, Germany) at the Center for Microscopic Analysis of Biological Objects of the SB RAS (Novosibirsk, Russia).

4.7. Analysis of the Specificity of Accumulation of Bacteriophages Displaying Selected Peptides in the U-87 MG Glioblastoma Xenograft

Mice with a subcutaneously transplanted tumor were injected into the tail vein with 500 μL of bacteriophage (2 × 10^9 PFU/mL) diluted in physiological saline. After 4.5 h of circulation of the phage particles in the body, the mouse was sacrificed by cervical dislocation and perfused through the left ventricle of the heart with 15 mL of saline. The tumor and control organs (liver, kidney, lungs and brain) were then removed, washed in PBS and homogenized in 1 mL of PBS containing 1 mM PMSF (Sigma-Aldrich). The homogenates of tumor tissue and control organs were centrifuged for 20 min at 10,000× g at room temperature to elute the bound bacteriophages, and the pellets were resuspended. The resulting suspensions of phage particles were titrated on LB agar medium supplemented with 1 mg/mL X-Gal and 1.25 mg/mL IPTG.

4.8. Statistical Analysis

Two-way ANOVA was used for comparisons of more than two sets of data. Differences were considered significant if the p-value was <0.05. Nucleotide sequences of the inserts encoding the peptides were analyzed using MEGA X software.
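The titration steps in Sections 4.3 and 4.7 rest on the standard plate-count arithmetic: titer (PFU/mL) = plaques counted × dilution factor / volume plated. A minimal sketch is given below; the plaque count, dilution and plated volume are hypothetical examples, not data from this study.

```python
# Standard phage titer arithmetic underlying the plate titration steps:
# titer (PFU/mL) = plaques counted * dilution factor / volume plated (mL).
# The example numbers are hypothetical.
def titer_pfu_per_ml(plaques: int, dilution_factor: float,
                     plated_volume_ml: float) -> float:
    return plaques * dilution_factor / plated_volume_ml

# e.g. 43 plaques from the 10^-6 dilution with 10 uL plated:
print(f"{titer_pfu_per_ml(43, 1e6, 0.01):.2e} PFU/mL")  # 4.30e+09 PFU/mL
```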
Genomic characterization of foodborne Salmonella and Escherichia coli

Introduction

Milk and meat are essential protein sources that constitute a significant and nutrient-rich component of human diets. However, their consumption is often associated with foodborne infections, particularly those caused by Salmonella and Escherichia coli. Salmonellosis is under-reported in Ghana, and only a few studies have investigated the plausible role of contaminated milk, meat, meat products, handlers' hands and associated surfaces such as knives, tables and aprons in facilitating its transmission. Knowledge of food safety practices among key food handlers in Ghana has recently been reported to be suboptimal, as have food safety infrastructure and regulatory enforcement. There have been a few reports of meat samples contaminated with E. coli in Ghana. Similarly, E. coli has been recovered from milk, milking utensils, the faeces of lactating cows and milkers' hands. Some of these E. coli strains harbour both virulence and antimicrobial resistance genes (ARGs), raising public health concerns; however, few of these strains have been thoroughly characterized. Finding Salmonella and E. coli in food has public health and food safety implications because these organisms are invariably of faecal origin.

Globally, in addition to contamination risks, Salmonella and E. coli sourced from milk and meat increasingly exhibit resistance to different classes of antibiotics, commonly mediated by mobile elements. The prevalence of multidrug-resistant (MDR) Salmonella and E. coli is also on the rise in clinical infections. Notably, resistance to extended-spectrum beta-lactams, trimethoprim/sulfamethoxazole, chloramphenicol and ciprofloxacin has been reported in both Salmonella and E. coli isolates from milk and retail meats in Ghana, often associated with plasmids that could mediate their spread.

Various methods, including serotyping, antibiotic profiling, pulsed-field gel electrophoresis and whole genome sequencing, have been employed to elucidate the phenotypic and genotypic attributes of foodborne pathogens and to determine their interrelationships and connections to pandemic clones of interest. Next-generation sequencing (NGS) technology, the most versatile and informative approach, has gained recent prominence. NGS is now used by PulseNet to categorize foodborne diseases, enabling nuanced epidemiological investigations. Genomic analysis enables the identification of virulence factors, antibiotic resistance genes and serotypes, and can also provide enhanced information on strain inter-relatedness, thereby enabling source attribution. Data on the genomic characterization of Salmonella and E. coli from milk and meat are scarce in low- and middle-income countries, including Ghana. In light of this gap, this study aimed to characterize the resistance, virulence and plasmid profiles of Salmonella and E. coli previously isolated from different fresh retail meats, milk and associated samples (handlers' hand swabs, tables, knives and faecal samples) in the Saboba District and Bolgatanga Municipality of Ghana.

Materials and methods

Strains

A total of 33 bacterial isolates (14 Salmonella and 19 E. coli) previously isolated from various fresh and ready-to-eat meats, meat sellers' tables, milk, milk-collecting utensils, milkers' hands and the faeces of lactating cows were characterized in this study.
The isolates originated from markets and farms in the Bolgatanga Municipality and Saboba District in Northern Ghana and were cryopreserved in 50% glycerol in Luria broth at -80°C.

Ethical considerations

All isolates were recovered in earlier studies from vended food or at informal food vending premises, including milk cow droppings. The study design and sampling were approved by the Department of Veterinary Science, UDS. No other permissions were obtained or deemed necessary by the department. No humans or animals were used in the research, and therefore ethical approval was deemed not required.

Salmonella and E. coli identification

Salmonella isolates were initially confirmed using a latex agglutination kit for Salmonella (Oxoid Limited, Basingstoke, UK) and by PCR targeting the invA gene as described by Rahn et al. (1992), using the PCR oligonucleotides invA139f GTGAAATTATCGCCACGTTCGGGCAA and invA141r TCATCGCACCGTCAAGGAACC. PCR was performed using PuReTaq Ready-To-Go PCR Beads (illustra). The PCR cycle used an initial denaturation at 95°C for two minutes, followed by 35 cycles of denaturation at 95°C for 30 seconds, annealing at 55°C for 30 seconds and extension at 72°C for two minutes, then a terminal extension at 72°C for five minutes. Visualization of the 284 bp amplicon was accomplished after electrophoresis on 1.5% (w/v) agarose gels stained with GelRed (Biotium), using a UVP GelMax transilluminator and imager. Salmonella isolates positive for invA, and the E. coli isolates, were biotyped with the Gram-negative (GN) test kit (Ref: 21341) on a VITEK 2 system (version 2.0, bioMérieux, Marcy-l'Étoile, France) according to the manufacturer's instructions.
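As a simple illustration of the primer sanity checks that can precede such a PCR assay, the sketch below computes the length, GC content and an approximate Wallace-rule melting temperature for the invA primers quoted above. The Wallace rule (Tm = 2(A+T) + 4(G+C)) is only a rough approximation, especially for primers of this length, and this check is illustrative rather than part of the published protocol.

```python
# Basic sanity checks on the invA primers quoted above: length,
# GC content, and a rough Wallace-rule melting temperature
# (Tm = 2*(A+T) + 4*(G+C); an approximation only).
primers = {
    "invA139f": "GTGAAATTATCGCCACGTTCGGGCAA",
    "invA141r": "TCATCGCACCGTCAAGGAACC",
}

for name, seq in primers.items():
    gc = seq.count("G") + seq.count("C")
    at = len(seq) - gc
    tm = 2 * at + 4 * gc
    print(f"{name}: {len(seq)} nt, GC {gc / len(seq):.0%}, Tm ~{tm} C")
```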
AMRFinderPlus v3.10.24 and its associated database (version 2022-04-04.1) were used to predict the antimicrobial resistance genes carried by the isolates and the drug classes to which they probably conferred resistance. Using ARIBA and the virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/ ), we also identified the virulence genes present in the isolates.

Single nucleotide polymorphism (SNP) calling and phylogenetic analysis

For phylogenetic analysis, reference sequences for the Salmonella and E. coli genomes were objectively selected from the National Center for Biotechnology Information Reference Sequence (RefSeq) database ( https://www.ncbi.nlm.nih.gov/refseq/ ) using BactinspectorMax v0.1.3 ( https://gitlab.com/antunderwood/bactinspector ). The selected references were the S. enterica subsp. enterica serovar Fresno strain (assembly accession: GCF_003590695.1) and the E. coli O25b:H4-ST131 strain (assembly accession: GCF_000285655.3). The sequence reads for each species were then mapped to the chromosome of the reference using BWA (v0.7.17), and variants were called and filtered using bcftools (v1.9) as implemented in the GHRU SNP phylogeny pipeline ( https://gitlab.com/cgps/ghru/pipelines/snp_phylogeny ). Variant positions were concatenated into a pseudoalignment and used to generate a maximum likelihood tree using iqtree (v1.6.8). SNP distances between genome pairs were calculated using snp-dists v0.8.2 ( https://github.com/tseemann/snp-dists ).
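To make concrete what snp-dists computes from the pseudoalignment, here is a small, self-contained Python sketch of pairwise SNP distance counting. It is an illustrative re-implementation under simplifying assumptions (equal-length aligned sequences; gaps and ambiguous bases ignored, as snp-dists does by default), not the tool itself, and the toy sequences are fabricated stand-ins.

```python
from itertools import combinations

def snp_distance(a: str, b: str) -> int:
    """Count positions where two aligned sequences differ, counting only
    unambiguous A/C/G/T differences (snp-dists' default behaviour)."""
    valid = set("ACGT")
    return sum(
        1
        for x, y in zip(a.upper(), b.upper())
        if x in valid and y in valid and x != y
    )

def pairwise_matrix(seqs: dict[str, str]) -> dict[tuple[str, str], int]:
    """All pairwise SNP distances for a {name: aligned_sequence} dict."""
    return {
        (n1, n2): snp_distance(seqs[n1], seqs[n2])
        for n1, n2 in combinations(seqs, 2)
    }

# Toy pseudoalignment: the two S. Orleans isolates in this study were
# identical (0 SNPs); these example sequences are fabricated stand-ins.
aln = {"isolate_A": "ACGTACGT", "isolate_B": "ACGTACGA", "isolate_C": "ACGTACGT"}
print(pairwise_matrix(aln))
# {('isolate_A', 'isolate_B'): 1, ('isolate_A', 'isolate_C'): 0,
#  ('isolate_B', 'isolate_C'): 1}
```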
Salmonella serotypes, sequence types (STs), virulence factors and phylogeny

We used SISTR software to predict the serovars of the 14 Salmonella strains characterized in this study from whole genome sequence reads, which are deposited in the European Nucleotide Archive under the study accession PRJEB58695. The most common serotype was Fresno (n = 6), followed by Give (n = 3), Orleans (n = 2), Plymouth (n = 1), Agona (n = 1) and Infantis (n = 1). The S. Fresno and S. Orleans isolates were from previously unreported sequence types, now designated ST10742 and ST10465 respectively. All the isolates from ready-to-eat pork, mutton and chicken belonged to serovar Fresno. S. Fresno isolates were also recovered from a meat vendor's knife, as was S. Orleans. The three milk isolates, all from Saboba, belonged to the serovars Plymouth (ST565), Give (ST516) and Agona (ST13). Two more ST516 S. Give isolates were recovered from the faeces of a milking cow and from a milking utensil.

Phylogenetic analysis of the 14 Salmonella isolates from this study showed that all S. Fresno isolates, irrespective of source, clustered together and differed by < 3 SNPs. The two S. Orleans isolates were identical (0 SNPs), and the three S. Give isolates were also identical; the latter originated from milk, the faeces of a milking cow and a milking utensil, also in Saboba. All Salmonella isolates harboured curli (csg) genes as well as the bcf, fim and ste fimbrial operons, and ten of them, representing all serovars except S. Give and S. Plymouth, carried long polar fimbriae (lpf) genes. The S. Infantis and S. Agona strains carried ratB and shdA. Type III secretion system effector genes such as inv, org, prg, sif, spa, sse, ssa and sop were detected in all the isolates, while avr was present in 57.1% (8/14) of the isolates and only one isolate harboured gogB. Four of the isolates also encoded the cytolethal distending toxin gene, cdtB.

Plasmid replicons and ARG profiles of Salmonella

The Salmonella isolates were largely pan-sensitive, but genes conferring resistance to fosfomycin (fosA7.2) and tetracycline (tet(A)) were detected in one and three isolates respectively. Both S. Orleans isolates and one of the S. Fresno isolates, from ready-to-eat mutton, carried tet(A), along with an IncI1-I(Gamma) plasmid replicon that was also seen in six other tet(A)-negative strains. Interestingly, the IncI1-I(Gamma) plasmid replicon was detected in all isolates from Bolgatanga Municipality, irrespective of serovar, and no isolate from Saboba district harboured this plasmid. The fosfomycin resistance gene was found in the S. Agona genome, in which no plasmid replicons were detected.

E. coli serotypes, sequence types (STs), virulence factors and phylogeny

A total of 19 E. coli isolates were identified. Serotyping with ECtyper revealed that the most common serotypes among the E. coli isolates were O-:H7 (n = 2), O138:H48 (n = 2), O6:H16 (n = 2) and O8/O160:H16 (n = 2). A number of these serovars and STs are associated with pathogenicity, notably O6:H16, as well as O8/O160:H16 and O77/O17/O44/O106/O73:H18 (ST394). The strains belonging to these lineages lacked the defining virulence genes of the respective pathotypes but did contain accessory virulence genes. Irrespective of whether they belonged to a lineage commonly associated with virulence, most of the isolates contained a range of adhesins and iron utilization genes. Genes encoding the E.
coli extracellular protein (ECP) export pathway (ecp/yag), ompA and the type I fimbriae-encoding operon fim, which are seen in most E. coli genomes, were present in 94.7% (18/19) of the E. coli isolates. The fimbriae-encoding gene f17d, often seen in enterotoxigenic E. coli recovered from animals, was present in the two O8/O160:H16 isolates and an O-:H7 isolate.

Phylogenetic analysis of the 19 E. coli isolates from this study was performed against a reference genome (NZ_HG941718.1) based on SNPs. The range of isolates was broader than with Salmonella, but closely related pairs of isolates belonging to the same serovar and ST were found in three instances. Very similar (2347 SNPs) O6:H16 isolates were recovered from different food preparation table samples in Bolgatanga. One of the two isolates from milk in Saboba belonged to ST2165 and was identical (0 SNPs) to a Saboba ST2165 utensil isolate. The two ST4 isolates from fresh beef and a cow milker's hand differed by 2347 SNPs and are unlikely to be connected.

Plasmid replicons and ARG profiles of E. coli

Antimicrobial resistance determinants present in the E. coli isolates include those encoding resistance to aminoglycosides (aph(3'')-Ib, aph(6)-Id), beta-lactams (blaLAP-2, blaTEM-1), fosfomycin (fosA7.5), quinolones (qnrB19, qnrS1), sulfonamides (sul2), tetracyclines (tet(A), tet(B)) and trimethoprim (dfrA14). Eight isolates carried at least three antimicrobial resistance genes (ARGs) conferring resistance to different classes of antibiotics, three isolates carried one ARG each, and eight isolates had no ARGs. Six strains carried the genes aph(3'')-Ib, aph(6)-Id, blaTEM-1, dfrA14, sul2, qnrS1 and tet(A). The dfrA14-qnrS1-tet(A) resistance gene combination has previously been reported from Nigeria as part of a transposon transmitted on an IncX plasmid. In this study, however, IncX replicons were not detected. The most common plasmid replicon type detected among the E. coli isolates was pO111 (n = 6), originally described in an E. coli virulence plasmid and found in the aforementioned strains belonging to virulence-associated lineages. The other plasmid replicon types detected were IncY (n = 5), IncFII (n = 2), IncFIA(HI1) (n = 2), Col(pHAD28) (n = 2), IncFIB(pB171) (n = 1), IncR (n = 1), IncHI1A (n = 1) and IncI1-I(Gamma) (n = 1), the last of which was common among the Salmonella isolates. All multidrug-resistant E. coli strains in this study encode pO111 or IncY replicons.
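The multidrug-resistance call used above (ARGs spanning three or more antibiotic classes) can be expressed as a short Python check. The gene-to-class mapping below is a hand-built illustration restricted to genes named in this study, not a parser of AMRFinderPlus output.

```python
# Hand-built mapping restricted to ARGs reported in this study.
GENE_CLASS = {
    "aph(3'')-Ib": "aminoglycoside",
    "aph(6)-Id": "aminoglycoside",
    "blaLAP-2": "beta-lactam",
    "blaTEM-1": "beta-lactam",
    "fosA7.5": "fosfomycin",
    "qnrB19": "quinolone",
    "qnrS1": "quinolone",
    "sul2": "sulfonamide",
    "tet(A)": "tetracycline",
    "tet(B)": "tetracycline",
    "dfrA14": "trimethoprim",
}

def is_multidrug_resistant(genes: list[str]) -> bool:
    """An isolate is called MDR here when its ARGs span >= 3 classes."""
    classes = {GENE_CLASS[g] for g in genes if g in GENE_CLASS}
    return len(classes) >= 3

# One of the seven-gene profiles reported among the E. coli isolates:
profile = ["aph(3'')-Ib", "aph(6)-Id", "blaTEM-1", "dfrA14", "sul2", "qnrS1", "tet(A)"]
print(is_multidrug_resistant(profile))  # True: seven genes across six classes
```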
Discussion

Salmonella and E. coli are the main causes of bacterial foodborne illnesses in Ghana. Retail meat and milk, along with their products, are recognized as primary sources of foodborne salmonellosis and E. coli infection. Post-cooking handling practices, exposure at points of sale, and suboptimal meat storage conditions collectively contribute to an increased presence of both pathogenic and spoilage bacteria in ready-to-eat (RTE) meat. Within Ghana, food safety has been inadequately studied in the northern region. In this study, we characterized the genomes of 14 Salmonella and 19 E. coli isolates previously recovered in the Saboba district and Bolgatanga Municipality of northern Ghana.

The Salmonella serovars Fresno, Plymouth, Infantis, Give and Orleans identified in this study are yet to be reported in Ghana, although Guinee et al. (1961) isolated S. Agona from cattle in Ghana, and S. Give (ST524, different from the ST516 found in this study) has been reported from beef in Nigeria. Isolation of S. Infantis from retail poultry meat has also been reported in Ecuador, Belgium and Italy, and all the serovars detected in this study have been implicated in human infections. The identification in meat and milk products of Salmonella serovars not previously documented in the country highlights the need for heightened surveillance and preventive measures to curb the spread of foodborne pathogens and reduce the risk of associated illnesses.

Unlike Salmonella, not all E. coli are potential pathogens. However, E. coli serves as a marker for faecal contamination and therefore for the potential presence of other pathogens. The predominant E. coli STs (ST4, ST10, ST219, ST2522) detected in our study have previously been isolated from food animals and have been associated with pathogenicity. While none of the E. coli isolates carried genes encoding the ETEC heat-stable or heat-labile enterotoxins, the f17d fimbrial genes present in four of the E. coli genomes encode colonization factors commonly associated with cattle and other ruminant isolates. In E.
coli, F17 fimbriae are associated with pathogenic strains recovered from diarrhoea and septicaemia outbreaks in calves, lambs and humans. Additionally, two ST4 E. coli isolates from this study that do not harbour f17d fimbrial genes belong to serovar O6:H16, one of the most widely disseminated lineages of human enterotoxigenic E. coli (ETEC). O6:H16 ETEC cause outbreaks, often associated with food and/or inadequate handwashing. ETEC, by definition, produce plasmid-encoded heat-labile and/or heat-stable toxins, which were not present in the genomes of the isolates from this study. However, the serovars O8/O160:H16 and O77/O17/O44/O106/O73:H18 belong to a previously described enteroaggregative E. coli (EAEC) lineage, and the isolates from this study belonging to these serovars possessed no EAEC accessory genes. These strains are from virulent lineages but lack key virulence genes that are plasmid-encoded, which could mean that the plasmids were lost in the food chain or during isolation but could be reacquired. Nevertheless, the presence of these strains in food could increase the risk of foodborne illness.

While strains belonging to virulence-associated lineages lacked key plasmid-encoded virulence genes, several plasmid replicons were detected in the isolate genomes. According to McMillan et al. (2019), the plasmid replicons ColE, IncI1, IncF and IncX are commonly detected in Salmonella from food animals in the US. In this study, the IncI1 replicon was predominant: nine of the thirteen Salmonella strains harboured the IncI1 plasmid replicon, of which three harboured the tet(A) gene. This is likely to be an instance of a successful mobile element with extraordinary local reach, a few of which have been reported from West Africa, including Ghana, in the past. The IncI1-I(Gamma) plasmid replicon was detected in all Salmonella isolates from Bolgatanga Municipality, spanning three different serovars, and in none of the Saboba district isolates. However, one E. coli isolate from Saboba, belonging to the recently reported ST8274, did carry this replicon. The plasmid behind this replicon should be better characterized and kept under surveillance, because numerous articles have reported the association of IncI1 plasmids in Salmonella with multiple ARGs, such as tetB, tetAR, blaCMY-2, blaTEM-1, aac3VIa, aphA, aadA, sul1, blaCTXM-1, strA, strB, cmlA, floR, blaSHV-12, blaOXA-2 and fosA3. As our sequences were generated with short reads only, the first step would be to generate long-read sequence data that could fully assemble the plasmid and make it possible to identify the genetic factors supporting its success.

Among the E. coli isolates, pO111 was the most common plasmid replicon. A previous study by Balbuena-Alonso et al. (2022) revealed that pO111 is usually associated with extended-spectrum beta-lactamase genes and is very common in food and clinical isolates. In this study, all isolates carrying pO111 harbour at least one beta-lactamase gene. Likewise, all the pO111-bearing isolates carried ARGs that confer resistance to at least 4 classes of antibiotics. Altogether, these data demonstrate a concerning reservoir of resistance genes in these foodborne bacteria.

This study has characterized the genomes of Salmonella and E. coli from milk, meat and their associated utensils.
The diverse serovars and virulence genes detected in the Salmonella strains indicate potential pathogenicity. Although not all E. coli strains are pathogenic, their presence serves as an indicator of faecal contamination, suggesting the potential presence of other harmful pathogens. EAEC is a well-known cause of diarrhoeal disease, particularly in children and immunocompromised individuals, making the presence of strains from EAEC-associated lineages in food a serious concern. While antimicrobial resistance was not common among the Salmonella strains, most of the E. coli strains had at least one resistance gene, and almost half were multidrug resistant and carried mobile elements. Moreover, there have been recent reports of resistant Salmonella and E. coli from meat and milk elsewhere in West Africa. A recent scoping review reported weak enforcement of food safety regulations, as well as a lack of the infrastructure, knowledge and skills needed to implement these regulations. Food contaminated with Salmonella and E. coli can serve as a vehicle for their transmission, posing a significant public health risk. We recommend that food safety regulations be strengthened in northern Ghana and, by extension, West Africa. It is also important to increase awareness among consumers so that food is handled in such a way as to prevent pathogen transmission. There is an additional need for continuous surveillance and preventive measures to stop the spread of foodborne pathogens and reduce the risk of associated illnesses in Ghana.

S1 Table. Accession numbers for genomes generated in this study. (XLSX)

S2 Table. Novel Salmonella allelic profile and assigned ST. (XLSX)

S3 Table. Salmonella strain metadata including serotype, ST, plasmid replicon, AMR and virulence profile. (XLSX)

S4 Table. E. coli strain metadata including serotype, ST, plasmid replicon, AMR and virulence profile. (XLSX)
A Digital Behavior Change Intervention for Health Promotion for Adults in Midlife: Protocol for a Multidimensional Assessment Study

Background

Today, expert consensus recommends that people strengthen disease prevention actions from the age of 40 years to avoid loss of independence due to the accumulation of chronic diseases. Different studies show a correlation between the number of healthy behaviors (physical activity, diet, smoking cessation, and reduced alcohol consumption) and healthy aging. Santé publique France has been working on the planning stages of a social marketing scheme that includes a digital behavior change intervention. The digital intervention was designed in tandem with its assessment protocol in the hope that engineering feedback would improve its applicability.

An overview of the literature on the assessment of digital tools for health promotion and disease prevention found proven evidence for the following: (1) the added value of a multidimensional assessment of a digital intervention; (2) the challenge of distinguishing between effect measurement and implementation measurement, since "a crucial implication of explicitly recognizing the distinction between engagement with the technological and behavioral aspects of the intervention is that intervention usage alone cannot be taken as a valid indicator of engagement"; (3) the importance of being able to qualify the maintenance of a target behavior over time; and (4) the absence, to our knowledge, of a mixed quantitative and qualitative assessment protocol. It was precisely this gap that prompted the drafting of this assessment protocol for a nonclinical intervention.

We explored the literature on assessments in the fields of medicine and medical informatics as a basis for consolidating the following methodological choices: (1) the framework stages used to develop an assessment protocol (preliminary diagram, study design, operationalization of the methods, project schedule, execution, and conclusion); (2) the lesson that an evaluable result consists of the internet user's loyalty to the logic models used, and not the loyalty necessary for a program to be effective ("the distinction in digital health evaluation from traditional evaluation is that there is not always a need to evaluate health outcomes as direct effects of the digital health intervention"); (3) the decision to document the initial impact of an intervention as well as its additional impact compared to existing digital interventions by Santé publique France; and (4) the decision to take into account the unexpected effects of health IT.

Digital Intervention for Behavior Change in Midlife

Based on a holistic and person-centered approach, the digital intervention provides information on the main risk factors for health, taking into account the barriers to and drivers for adopting healthy behaviors as well as the specific living conditions and environments of those aged 40-55 years. The intervention draws on the quantified self to support behavior transformation. Its design is explained in a separate study (under review) that illustrates the complementary nature of the theories used in relation to the targeted behavior changes.
To become familiar with the user and guide them toward behavior changes, initial access to the site requires them to fill out a questionnaire on their lifestyle habits, which generates personalized feedback according to a traffic light system in order to introduce recommendations for protective behaviors. At this stage, the user has the option of downloading their report, with an overview of the feedback in the form of a table. The next click opens a feed page with action cards and articles that the user can "like," save to their account, and use to navigate further around the site. The personal account is designed as a self-coaching tool intended to support motivation, increase the power to act, and help the user understand health as an interaction between several health determinants applying to all life areas. As no gold standard questionnaire on lifestyle habits is available, the present one is a concatenation of different examples taken from the literature, pretested with a target group of midlife adults during a qualitative study. The personalized space for registered users was leveraged to form the basis of the assessment, as the users' actions can be tracked via the content management system. The assessment can then be carried out continuously or in waves.

Conceptual Framework: Intervention Model

The first step was to identify the causes of the problem, shown in the "causal model of the problem"; the second was to translate them into an objective in the "theoretical logic" part; and the last was to deduce the output and intervention objectives (operational logic model), leading to 3 evaluable working hypotheses. The protocol aims to assess the impact of the website based on the small actions it triggers among users across the different health determinants. Specifically, it is intended to evaluate the website's performance in terms of the following objectives: (1) engaging a specific population, (2) triggering behavior change, (3) raising awareness about a multifactorial approach to health, and (4) encouraging user interaction with the website's resources.

This paper describes the methods, and their relevant limits, for constructing an assessment protocol for digital interventions. It questions the value of digital self-assessment and the time frame necessary to evaluate the adoption of healthy lifestyles, as no expert consensus is available on this topic. Finally, it explores how behavior change models can strengthen the effect measurement of an assessment protocol.
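To illustrate the traffic-light feedback mechanism described above, here is a minimal Python sketch. The scoring scale, thresholds, and determinant scores are invented placeholders; the actual questionnaire items and cut-offs belong to the intervention and are not published here.

```python
from enum import Enum

class Light(Enum):
    GREEN = "behavior in line with recommendations"
    AMBER = "room for improvement"
    RED = "behavior far from recommendations"

# Hypothetical cut-offs on a 0-10 adherence score per determinant;
# the real questionnaire uses its own items and thresholds.
def traffic_light(score: float) -> Light:
    if score >= 7:
        return Light.GREEN
    if score >= 4:
        return Light.AMBER
    return Light.RED

def feedback(scores: dict[str, float]) -> dict[str, Light]:
    """Map each health determinant's score to a traffic-light color."""
    return {determinant: traffic_light(s) for determinant, s in scores.items()}

# Example user across the 8 determinants targeted by the intervention.
user = {"diet": 6.5, "physical activity": 3.0, "smoking": 9.0, "alcohol": 8.0,
        "stress": 5.0, "sleep": 4.5, "cognitive health": 7.5,
        "environmental health": 6.0}
for d, light in feedback(user).items():
    print(f"{d}: {light.name} - {light.value}")
```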
Objectives of the Digital Intervention

The effect of the intervention on protective behaviors in midlife is communicated through 8 health determinants: diet, physical activity, smoking, alcohol, stress, sleep, cognitive health, and environmental health. The digital intervention is intended to help people aged 40-55 years, and in particular socioeconomically disadvantaged people, to (1) adopt multifactorial preventive actions in their daily lives; (2) increase their knowledge about lesser-known determinants (stress, sleep, cognitive health, and environmental health); (3) support dialogue with health care professionals; and (4) develop psychosocial skills, especially the ability to resist social pressure.

Theory of Assessment of a Digital Intervention

The 3-pronged approach of "perceive, prepare, act," drawn from existing digital behavior change interventions, correlates with the functionalities of the website (questionnaire, actions, personalized space), which were designed using the behavior change techniques of the capability, opportunity, motivation-behavior (COM-B) model. A set of indicators can be used to answer the assessment questions. The mechanisms and factors that influence the choice of one or more actions, and that contribute to whether they are adopted, are shown in "1. goals and planning (perceive, prepare, and act)" as well as in the "prepare" column when a user likes one or more actions. The influence of the personalized space on the adoption of actions (preferably multifactorial) and on the self-assessment of lifestyle habits can be understood on the basis of the items listed in the "act" column. The factors likely to influence target users' perception of their chosen health-promoting action are reflected in the fact that the questionnaire is repeated (2.4) and that behaviors are practiced, repeated, and changed (8.1 to 8.4). The typology of a target user, as described earlier in the objectives, can be combined with the indicators of the perception, preparation, and action stages to complete an assessment in advance.

Mixed Assessment Protocol

Three evaluative questions emerge concerning the personalized account and, by extension, the website: What mechanisms and factors influence the choice of one or more actions and contribute to the user adopting them? What influence does the personalized space have on the adoption of actions (preferably multifactorial) and on the self-assessment of lifestyle habits? What factors are likely to influence target users' perception of their chosen health-promoting action?

Our protocol, combining quantitative and qualitative assessment, is based on data collected from the personalized space, which was designed with the objective of "outsourced self-regulation," supplemented by additional questionnaires and individual interviews. The mixed assessment evaluates behavior changes made at different time points in the data collection process rather than the increase in quality of life and disability-free life expectancy. As mentioned earlier, the evaluable result is the user's loyalty to the logic models used and not the loyalty necessary for a program to be effective. Kelders et al described the typical components of such interventions through an analysis of 83 digital interventions: modular, updated once a week, using persuasive technologies, and offering the potential to interact with the communicator and peers.

The features of an assessment protocol are as follows. Before the digital intervention is launched, it supports the design and modeling of the intervention.
Once launched, (1) it checks whether the users of the personalized space are between the ages of 40 and 55 years, whether they are socioeconomically deprived, and whether they have a low level of literacy; (2) it creates typologies of registered users; (3) it measures the effects (ie, changes in the behavior of registered users) through evaluable criteria and indicators such as adopting and maintaining a new healthy behavior, increased knowledge, improved psychosocial skills, and improved health variables; and (4) it continually improves the website and personalized space to support the desire to change behavior in midlife.

Recording unexpected effects sheds light on the adjustments needed to continually improve the intervention. Several hypotheses for these have been formulated: (1) the questionnaire does not engage users or is never repeated; (2) the initial request does not correspond to the determinant that the user is "coached" on in their personalized space; (3) a highly disparate choice of actions makes it difficult or even impossible to implement them (no actions are adopted); and (4) actions are liked without a time objective being set.

Assessment Objectives

As detailed below under Assessment Population, the individuals included in the assessment are split into 2 groups. The 7 measurement objectives presented below apply to both groups.

Objective 1: To assess whether the user's profile matches the purpose of the site, namely, to reach socioeconomically disadvantaged people with a low level of health literacy and aged between 40 and 55 years, at T0.

Objective 2: To record lifestyle habits that deviate to some extent from public health recommendations at T0, T1, and T2.

Objective 3: To record liked actions and articles while distinguishing actions in category A (change in behavior: diet, physical activity, smoking, and alcohol; an additional contribution compared to other Santé publique France social marketing schemes) from those in category B (greater knowledge: sleep, stress, cognitive health, and environmental health; an initial contribution given the absence of other Santé publique France resources). The assumption made is that the user chooses actions for category A and article pages for category B. Data are collected at T0, T1, and T2.

Objective 4: To assess willingness to change behavior at T0.

Objective 5: To assess the evolution of the behavior change from T0 to T1 and from T1 to T2, and to assess the frequency and routine nature of actions at T1 and T2.

Objective 6: To assess re-engagement at T2.

Objective 7: To assess lapsed connection to the personalized space before T1 and before T2.

Assessment Time Frame

A digital behavior change intervention consists of several stages with a total average duration of approximately 10 weeks, although there is no consensus among experts over the time frame. Engagement with the digital intervention involves registering to create an account with a personalized space and then signaling preparation for behavior change (phase 1), followed by the adoption of 1 or 2 actions (phase 2) and a phase of lapsed activity on the site (phase 3). Re-engagement with the intervention (phase 4) is prompted by the need to solve a problem, renew motivation, identify a new action, and so on. The process is split into 3 evaluable phases, T0, T1, and T2 (respectively phases 1, 2, and 4 according to Yardley et al).
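A small helper shows how the three assessment waves can be derived from the account creation date, using the intervals stated above (T1 at 3 weeks and T2 at 10 weeks after T0). The function name is hypothetical; only the intervals come from the protocol.

```python
from datetime import date, timedelta

T1_OFFSET = timedelta(weeks=3)   # preparation/adoption check
T2_OFFSET = timedelta(weeks=10)  # re-engagement check

def assessment_waves(t0: date) -> dict[str, date]:
    """Compute the T0/T1/T2 wave dates from the account creation date."""
    return {"T0": t0, "T1": t0 + T1_OFFSET, "T2": t0 + T2_OFFSET}

print(assessment_waves(date(2024, 1, 8)))
# {'T0': datetime.date(2024, 1, 8), 'T1': datetime.date(2024, 1, 29),
#  'T2': datetime.date(2024, 3, 18)}
```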
It is based on the assessment work of the VERB™ campaign, the lessons learned on health information-seeking behaviors, the theory of small actions, and digital behavior change interventions. The interaction between perceiving, preparing, and acting can be repeated randomly across the 3 assessment intervals. Furthermore, it is particularly important to determine (1) at T1, changes related to subjective norms, beliefs, self-efficacy, and perceived control of behavior; (2) at T2, the level of empowerment, degree of satisfaction, activities of daily living, and self-reported health outcomes; and (3) between T0 and T1 and then between T1 and T2, the 4 reasons for lapsing (forgetting, having a technical problem, permanently giving up on self-quantification, and suspending usage), which do not necessarily mean that the adopted action has been abandoned.

Assessment Methods From T0 to T2

The assessment of the digital intervention at T0, T1, and T2 is intended to be explanatory, combining a quantitative and a qualitative approach based on recording, for both groups of users: (1) log-in data for the site and the user account with personalized space; (2) data relating to specific and identifiable behavior changes, by monitoring registered users from T0 to T2 via the content management system; (3) verbatim statements from users for classification into user profiles; and (4) information about capabilities, opportunities, and motivations via semistructured individual interviews with a sample of users.

Self-Assessment at T0, T1, and T2

The "lifestyle habits" questionnaire is the basis of the initial self-assessment at T0; it is repeated at T1 to visualize the changes that have taken place and again at T2 to identify developments. Other suggested tools at T1 are the Self-Report Habit Index, to assess the power of the "frequency" factor for the action performed most often, and the "small actions" assessment questionnaire. At T2, the Self-Report Behavior Habit Index is intended to show whether the behavior has become routine, supplemented by the "small actions" assessment questionnaire. To further support the objectives mentioned earlier, an automatic assessment at T0, T1, and T2 retrieves the information provided and the actions carried out by the user.

The aim of the semistructured individual interviews at T1 and T2, sampled by profile type within groups 1 and 2, is to reveal the impact of capabilities, opportunities, and motivations on behavior change by combining the COM-B methodology, the theoretical domains framework, and tiny habits theory. From a human-machine interaction perspective, it is very difficult to determine whether the choice of an action is based on conscious or unconscious motivation. The assessment of lapsed connection to the personalized space before T1 and before T2 will be carried out via a questionnaire sent by email to the users concerned. The objective is to identify the reasons for the lack of use (with the aim of continually developing the personalized space) and the number of actions maintained without logging in.
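As an illustration of how habit strength could be scored at T1 and T2, the sketch below averages Self-Report Habit Index items. It assumes the commonly used 12-item version rated on a 7-point agreement scale; the protocol itself does not specify the variant, so treat the item count, scale, and example action as assumptions.

```python
def srhi_score(item_ratings: list[int]) -> float:
    """Mean of Self-Report Habit Index items; higher means the behavior
    is more habitual. Assumes a 7-point scale (1 = strongly disagree,
    7 = strongly agree) across 12 items."""
    if not all(1 <= r <= 7 for r in item_ratings):
        raise ValueError("each rating must be on the 1-7 scale")
    return sum(item_ratings) / len(item_ratings)

# A user whose chosen action ("10-minute daily walk", hypothetical)
# has started to feel automatic by T2:
t1 = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 3, 4]
t2 = [6, 5, 6, 6, 5, 6, 5, 6, 6, 5, 6, 6]
print(f"T1 habit strength: {srhi_score(t1):.2f}")  # 3.00
print(f"T2 habit strength: {srhi_score(t2):.2f}")  # 5.67
```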
Objectives 2, 4, and 5 make it possible to assess any unexpected effects: (1) the questionnaire does not engage the user or is never repeated, meaning that the user cannot view their progress in the personalized space (objective 2); (2) actions are liked, but no goal is set (objective 4); (3) a highly disparate choice of actions makes it difficult or even impossible to implement them, so no actions are adopted (objective 5, criterion 1); and (4) the initial request differs from the action that the user is "coached" on in the personalized space (objective 5, criterion 2).

Assessment Population

The internet users included in the assessment will be between the ages of 40 and 55 years, will have registered to create an account on the website with a personalized space, and will have carried out actions in their space during the 3 assessment stages: T0 (date of personal account creation), T1 (3 weeks after creation), and T2 (10 weeks after creation). Users will be divided into 2 groups: group 1 will include socioeconomically deprived people and group 2 all other users. Each group will then be subdivided based on the "motivations," "capabilities," and "opportunities" expressed. By characterizing users into these 2 socioeconomic groups, the diversity of behaviors can be questioned and corrections can be made to support group 1. Classification into group 1 will be based on 2 conditions: belonging to the lower socioprofessional categories and having a level of health literacy below 3.39 on domain 8 of the French Health Literacy Questionnaire.

When people create their personal account, in accordance with the General Data Protection Regulation in force in Europe, registered users will need to consent to the use of their quantitative and qualitative data for study purposes and agree to be contacted as part of the assessment. No sensitive medical data will be recorded, and the data from the content management system will be separated from the information collected through the personalized space. The digital security officer at Santé publique France verified the compliance of this data management approach with French data protection regulations (Commission nationale de l'informatique et des libertés). The protocol currently allows testing in a given context, for example on a regional scale.
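The two-condition rule for group 1 stated above translates directly into code. Here is a minimal sketch, assuming a boolean flag for socioprofessional category and the Health Literacy Questionnaire domain 8 score as reported by the site's questionnaire; the field names are invented for illustration.

```python
from dataclasses import dataclass

HLQ_DOMAIN8_CUTOFF = 3.39  # threshold stated in the protocol

@dataclass
class RegisteredUser:
    age: int
    lower_socioprofessional_category: bool  # hypothetical flag
    hlq_domain8_score: float  # French Health Literacy Questionnaire, domain 8

def eligible(user: RegisteredUser) -> bool:
    """Inclusion requires being aged 40-55 at account creation."""
    return 40 <= user.age <= 55

def assessment_group(user: RegisteredUser) -> int:
    """Group 1: lower socioprofessional category AND low health literacy;
    group 2: all other included users."""
    if (user.lower_socioprofessional_category
            and user.hlq_domain8_score < HLQ_DOMAIN8_CUTOFF):
        return 1
    return 2

u = RegisteredUser(age=47, lower_socioprofessional_category=True,
                   hlq_domain8_score=2.9)
print(eligible(u), assessment_group(u))  # True 1
```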
The effect of the intervention on protective behaviors in midlife is communicated through 8 health determinants: diet, physical activity, smoking, alcohol, stress, sleep, cognitive health, and environmental health. The digital intervention is intended to help people aged 40-55 years, and in particular socioeconomically disadvantaged people, to (1) adopt multifactorial preventive actions in their daily lives; (2) increase their knowledge about lesser-known determinants (stress, sleep, cognitive health, and environmental health); (3) support dialogue with health care professionals; and (4) develop psychosocial skills, especially the ability to resist social pressure.

Theory of Assessment of a Digital Intervention

The 3-pronged approach of “perceive, prepare, act,” resulting from existing digital behavior change interventions, correlates with the functionalities of the website—questionnaire, actions, personalized space—designed using the behavior change techniques of capacities, opportunities, motivations-behavior (COM-B). The accompanying table shows the indicators that can be used to answer the assessment questions. The mechanisms and factors that influence the choice of one or more actions and contribute to whether they are adopted are shown in “1. goals and planning—perceive, prepare, and act,” as well as in the column “prepare” when a user likes one or more actions. The influence of the personalized space on the adoption of actions (preferably multifactorial) and on the self-assessment of lifestyle habits can be understood on the basis of the items listed in the “act” column. The factors likely to influence target users’ perception of their chosen health-promoting action are reflected in the fact that the questionnaire is repeated (2.4) and that behaviors are practiced, repeated, and changed (from 8.1 to 8.4). The typology of a target user, as described earlier in the objectives, can be combined with the indicators of the perception, preparation, and action stages to complete an assessment in advance.

Mixed Assessment Protocol

Three evaluative questions emerge concerning the personalized account and, by extension, the website. What mechanisms and factors influence the choice of one or more actions and contribute to the user adopting them? What influence does the personalized space have on the adoption of actions that are preferably multifactorial and on the self-assessment of lifestyle habits? What factors are likely to influence target users’ perception of their chosen health-promoting action? Our protocol combining quantitative and qualitative assessment is based on data collected from the personalized space, which was designed with the objective of “outsourced self-regulation,” supplemented by additional questionnaires and individual interviews. The mixed assessment evaluates behavior changes made at different time points in the data collection process rather than the increase in quality of life and disability-free life expectancy. As mentioned earlier, the evaluable result is the user’s loyalty to the logic models used and not the loyalty necessary for a program to be effective. Kelders et al described the typical components through an analysis of 83 digital interventions: modular, updated once a week, use of persuasive technologies, and potential to interact with the communicator and peers. The features of an assessment protocol are as follows. Before the digital intervention is launched, it supports the design and modeling of the digital intervention. Once launched, (1) it checks whether the users of the personalized space are between the ages of 40 and 55 years, whether they are socioeconomically deprived, and whether they have a low level of literacy; (2) it creates typologies of registered users; (3) it measures the effects (ie, changes in the behavior of registered users) through evaluable criteria and indicators such as adopting and maintaining a new healthy behavior, increased knowledge, improved psychosocial skills, and improved health variables; and (4) it continually improves the website and personalized space to support the desire to change behavior in midlife. Recording unexpected effects sheds light on the adjustments needed in order to continually improve the intervention. Several hypotheses for these have been formulated: (1) the questionnaire does not engage users or it is never repeated; (2) the initial request does not correspond to the determinant that the user is “coached” on in their personalized space; (3) a highly disparate choice of actions makes it difficult or even impossible to implement them (no actions are adopted); and (4) actions are liked without a time objective being set.

Assessment Objectives

As stated above, the intention was to split the individuals included in the assessment into 2 groups. The 7 measurement objectives presented below apply to both groups. A detailed description of the objectives is given in the accompanying material.
Objective 1: To assess whether the user’s profile matches the purpose of the site, namely, to reach socioeconomically disadvantaged people with a low level of health literacy and aged between 40 and 55 years at T0.
Objective 2: To record lifestyle habits that deviate to some extent from public health recommendations at T0, T1, and T2.
Objective 3: To record liked actions and articles while distinguishing actions in category A (change in behavior: diet, physical activity, smoking, and alcohol—an additional contribution compared to other Santé publique France social marketing schemes) from those in category B (greater knowledge: sleep, stress, cognitive health, and environmental health—an initial contribution given the absence of other Santé publique France resources). The assumption made is that the user chooses actions for category A and study pages for category B. Data are collected at T0, T1, and T2.
Objective 4: To assess willingness to change behavior at T0.
Objective 5: To assess the evolution of the behavior change between T0 and T1 and between T1 and T2; to assess the frequency and routine nature of actions at T1 and T2.
Objective 6: To assess re-engagement at T2.
Objective 7: To assess lapsed connection to the personalized space before T1 and before T2.

Assessment Time Frame

A digital behavior change intervention consists of several stages with a total average duration of approximately 10 weeks, although there is no consensus between experts over the time frame. Engagement with the digital intervention involves registering to create an account with a personalized space and then signaling preparation for behavior change (phase 1), followed by the adoption of 1 or 2 actions (phase 2), and a phase of lapsed activity on the site (phase 3). Reengagement with the intervention (phase 4) is prompted by the need to solve a problem, renew motivation, identify a new action, and so on. The assessment is split into 3 evaluable phases—T0, T1, and T2 (respectively phases 1, 2, and 4 according to Yardley et al). The expected results and collection methods are presented in the accompanying table. It is based on the assessment work of the VERB™ campaign (in normal type), the lessons learned on health information-seeking behaviors (in italics), the theory of small actions (in bold), and digital behavior change interventions (in bold and italics). The table also presents the interaction between perceiving, preparing, and acting, which can be repeated randomly at the 3 assessment intervals. Furthermore, it is particularly important to determine (1) at T1, changes related to subjective norms, beliefs, self-efficacy, and perceived control of behavior; (2) at T2, the level of empowerment, degree of satisfaction, activities of daily living, and self-reported health outcomes; and (3) between T0 and T1 and then between T1 and T2, the 4 reasons for lapsing—forgetting, having a technical problem, permanently giving up on self-quantification, and suspending usage—which do not necessarily mean that the adopted action has been abandoned.
As detailed above, the assessment of the digital intervention at T0, T1, and T2 is intended to be explanatory, combining a quantitative and qualitative approach based on recording, for both groups of users: (1) log-in data for the site and the user account with personalized space; (2) data relating to specific and identifiable behavior changes, collected by monitoring registered users from T0 to T2 via the content management system; (3) verbatim statements from users for classification into user profiles; and (4) information about capabilities, opportunities, and motivations gathered via semistructured individual interviews with a sample of users.
The “lifestyle habits” questionnaire is the basis of the initial self-assessment at T0, then again at T1 to visualize the changes that have taken place, and finally at T2 to identify developments. Other suggested tools at T1 are the Self-Report Habit Index, to assess the power of the “frequency” factor for the action performed most often, and the “small actions” assessment questionnaire. At T2, the Self-Report Behavior Habit Index is intended to show whether the behavior has become routine, supplemented by the “small actions” assessment questionnaire. To further support the objectives mentioned earlier, an automatic assessment at T0, T1, and T2 retrieves the information provided and the actions carried out by the user. The aim of the semistructured individual interviews at T1 and T2, sampled by profile type within groups 1 and 2, is to reveal the impact of capabilities, opportunities, and motivations on behavior change by combining the methodology of COM-B, the theoretical domains framework, and the tiny habits theory. From a human-machine interaction perspective, it is very difficult to determine whether the choice of an action is based on conscious or unconscious motivation. The assessment of lapsed connection to the personalized space before T1 and before T2 will be carried out via a questionnaire sent by email to the concerned users. The objective is to identify the reasons for the lack of use (with the aim of continually developing the personalized space) and the number of actions maintained without logging in. Objectives 2, 4, and 5 make it possible to assess any unexpected effects: (1) the questionnaire does not engage the user or is never repeated (meaning that the user cannot view their progress in the personalized space; objective 2); (2) actions are liked, but no goal is set (objective 4); (3) a highly disparate choice of actions makes it difficult or even impossible to implement them (no actions are adopted; objective 5, criterion 1); and (4) the initial request differs from the action that the user is “coached” on in the personalized space (objective 5, criterion 2).
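Because several instruments are combined at each time point, the collection schedule can be made concrete with a small configuration sketch. The following Python snippet is a minimal illustration only, assuming hypothetical instrument identifiers; it encodes the T0/T1/T2 schedule and lapse checks described above and is not part of the project's actual implementation.

from dataclasses import dataclass

@dataclass
class TimePoint:
    """One evaluable phase of the assessment."""
    label: str
    weeks_after_signup: int  # T0 = account creation; T1 = +3 weeks; T2 = +10 weeks
    instruments: list

# Hypothetical encoding of the data collection schedule described in the text.
SCHEDULE = [
    TimePoint("T0", 0, ["lifestyle_habits_questionnaire",
                        "automatic_log_retrieval"]),
    TimePoint("T1", 3, ["lifestyle_habits_questionnaire",
                        "self_report_habit_index",
                        "small_actions_questionnaire",
                        "semistructured_interview",
                        "automatic_log_retrieval"]),
    TimePoint("T2", 10, ["lifestyle_habits_questionnaire",
                         "self_report_behavior_habit_index",
                         "small_actions_questionnaire",
                         "semistructured_interview",
                         "automatic_log_retrieval"]),
]

# Lapsed-connection questionnaires are emailed before T1 and before T2.
LAPSE_CHECKS_BEFORE = ("T1", "T2")

for tp in SCHEDULE:
    print(f"{tp.label} (+{tp.weeks_after_signup} wk): {', '.join(tp.instruments)}")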
The internet users included in the assessment will be between the ages of 40 and 55 years, will have registered to create an account on the website with a personalized space, and will have carried out actions in their space during the 3 assessment stages: T0 (date of personal account creation), T1 (3 weeks after creation), and T2 (10 weeks after creation). Users will be divided into 2 groups. Group 1 will include socioeconomically deprived people and group 2 all other users. Each group will then be subdivided based on the “motivations,” “capabilities,” and “opportunities” expressed. By characterizing users into these 2 socioeconomic groups, the diversity of behaviors can be questioned, and corrections can be made to support group 1. Classification into group 1 will be based on 2 conditions: belonging to the lower socioprofessional categories and having a level of health literacy below 3.39 on domain 8 of the French Health Literacy Questionnaire. When people create their personal account, in accordance with the General Data Protection Regulation in force in Europe, registered users will need to consent to the use of their quantitative and qualitative data for study purposes and agree to be contacted as part of the assessment. No sensitive medical data will be recorded, and the data from the content management system will be separated from the information collected through the personalized space. The digital security officer at Santé publique France verified the compliance of this data management approach with French data protection regulations (Commission nationale de l'informatique et des libertés). The protocol currently allows testing in a given context, for example on a regional scale.
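As a concrete reading of the two inclusion conditions above, the sketch below shows how group assignment could be expressed in code. It is a minimal illustration under stated assumptions: the field names and the set of lower socioprofessional categories are hypothetical, while the 3.39 cut-off on domain 8 of the French Health Literacy Questionnaire and the 40-55 years age window are those given in the text.

# Hypothetical category labels; the protocol only specifies "lower socioprofessional categories".
LOWER_SOCIOPROFESSIONAL = {"manual_worker", "service_worker", "unemployed"}
HLQ_DOMAIN8_CUTOFF = 3.39  # threshold stated in the protocol

def eligible(age: int) -> bool:
    """Users must be aged 40-55 years at account creation (T0)."""
    return 40 <= age <= 55

def assign_group(socioprofessional_category: str, hlq_domain8_score: float) -> int:
    """Group 1: socioeconomically deprived users (both conditions met); group 2: all others."""
    deprived = (socioprofessional_category in LOWER_SOCIOPROFESSIONAL
                and hlq_domain8_score < HLQ_DOMAIN8_CUTOFF)
    return 1 if deprived else 2

# Example: a 47-year-old service worker scoring 3.1 on HLQ domain 8 falls into group 1.
assert eligible(47)
assert assign_group("service_worker", 3.1) == 1
assert assign_group("manager", 3.1) == 2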
This first version of the protocol responds to the objective of creating a multidimensional assessment of a digital intervention, based on the premise that, over a given timeline, interactions with users aged 40-55 years can reveal their capabilities, opportunities, and motivations to adopt healthy lifestyles. The assessment protocol, based on the interactions of users with the personalized space of the digital behavior change intervention, includes the evaluation of the following: (1) increased capability, opportunity, and motivation to adopt a healthy lifestyle through one or more actions; (2) improved access to information that is easy to translate into actions and to continue; and (3) at least 2 actions adopted in everyday life. However, the protocol cannot evaluate improved health promotion and disease prevention dialogue with adults in midlife in different settings or assess changes in social norms. As the construction of the website is currently delayed, no recruitment or effects analysis of the protocol could take place. The creation of a steering committee was abandoned.
Expected Findings

As mentioned above, the protocol assesses the impact of the website based on the small behavior changes that it triggers among users in relation to different health determinants. The protocol has 4 aims: (1) engaging a specific population, (2) triggering behavior change, (3) raising awareness about a multifactorial approach to health, and (4) encouraging user interaction with the website’s resources. The research questions the time frame necessary to evaluate the adoption of healthy lifestyles. It focuses on how the use of behavior change models (COM-B), combined with the techniques of digital behavior change interventions, can strengthen the effect measurement of an assessment protocol. The assessment protocol is based on typical digital functionalities such as a user account, self-evaluation of healthy lifestyles (questionnaire), and feedback to engage people with behavior change. It fosters a continuous short-term evaluation of digital behavior change interventions.

Main Results

This appears to be the first assessment protocol for digital health promotion interventions. It documents the potential of the digital intervention in various respects, supporting it on the basis of the chosen models that led to the design of the personalized space and contributing to its continued development both in terms of its technical features and its written content. The mixed assessment method delivers a granular analysis that sheds light on the effectiveness and even the efficiency of the website through its personalized space. To our knowledge, our assessment protocol for a digital personalized space, designed with the aim of changing health promotion and disease prevention behaviors, is the first of its kind in the sense that it goes beyond the measurement of implementation and expressly targets the measurement of effects. According to literature reviews, the effects in question will be behavioral change, greater knowledge, improved psychosocial skills, development of a support network, and improved health variables. The protocol cannot be likened to assessments in investigational designs such as randomized controlled trials, which have been dismissed by some experts as unsuitable due to the complexity of health promotion interventions. The open design is considered effective “for the institutions that set it up and its flexibility matches the characteristics of health promotion interventions.”

Limitations

The breadth of the mixed assessment may make the process of interpreting the lessons learned more complex if the power of each item of “collectible” information proves to be insufficient. The absence of an expert consensus on the duration necessary for behavior change to occur throws into question the time frame of 70 days. The main weakness of the protocol relates to the lack of real application, given that the launch of the website is delayed.

Conclusions

Drafting an assessment protocol is a significant aid in the design of a digital intervention. It makes it possible to consolidate the choice of hypotheses for constructing the logic models used and the objectives targeted. A protocol helps to steer the digital intervention toward action and regularly checks that it meets the needs of its target audience. The assessment protocol meets the SMART (specific, measurable, achievable, relevant, and time-bound) criteria. The research presented here will impact digital interventions in health promotion and disease prevention. As the protocol demonstrates, both the implementation and the effects can be assessed. Health promotion and disease prevention stakeholders may prefer an assessment of the program, but this is rarely carried out. Without assessments, a digital intervention can claim to be “evidence-inspired”; with assessments, it is closer to “evidence-based.”
The use of the objective structured clinical examination to evaluate paediatric cardiopulmonary resuscitation skills in medical students and measures to improve training | 6dbcea24-56fa-4bb5-ba04-4fb700d671a1 | 11468371 | Pediatrics[mh] | The Objective Structured Clinical Examination (OSCE) is a skills evaluation method proposed by Harden in 1975. It is performed by observing performance at several structured stations that simulate clinical situations, with evaluation carried out by means of an objective checklist. The OSCE enables the evaluation of three levels of the Miller pyramid (knows, knows how, and shows how) for different skills (history taking, physical examination, technical skills, communication, clinical judgement, diagnostic test planning, therapeutic planning, healthcare education, drawing up reports, interprofessional relations, and ethical and legal aspects). The OSCE uses different evaluation methods with standardized patients, manikins, and computer or online simulators. The OSCE is included in several medical schools to evaluate clinical skills; it replaced the examination with an actual patient and complemented the written examination that evaluated knowledge. The OSCE has been proven to have suitable objectivity and reliability in evaluating clinical and non-clinical skills at both undergraduate and postgraduate level in the healthcare professions and in comparing different teaching methods. The curriculum of many medical schools includes cardiopulmonary resuscitation (CPR) training with highly varied theoretical and practical programmes. Most curricula include adult CPR training, and some also include paediatric CPR training. Many OSCEs include life support stations, generally for adults, performed with manikins; these enable the evaluation of technical CPR skills in adults. However, few studies have analysed the usefulness of the OSCE to evaluate the skills of medical students and paediatric residents in Paediatric Basic Life Support (PBLS) and neonatal CPR, respectively. Our main hypothesis is that the OSCE is a valid instrument to evaluate PBLS skills in medical students and to compare different training methods and training program changes. The aims of this study were, first, to evaluate the skills of medical students in PBLS in an OSCE. The second aim was to compare the PBLS skills of students from two hospitals who received different training in paediatric CPR. The third aim was to evaluate the usefulness of the OSCE to analyse the effects of CPR program improvements on the skills attained by medical students in PBLS.

Study design
A comparative, prospective, observational study was performed with a three-phase intervention.

Setting
The study was performed at the Hospital General Universitario Gregorio Marañón (HGM) and Hospital Clínico Universitario San Carlos (HCSC) of the Complutense University of Madrid, which is a public university. The study was carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans. The study was approved by the local ethics committee (Proyecto Innova Docentia 332/2020). Students and teachers signed informed consent forms to take part in the OSCE and for the study. The same core curriculum programme, including PBLS, is taught in all hospitals of the Complutense University in the 5th year of the six years of the medical degree. However, the Pediatrics theoretical part is taught independently in each hospital, and the practical PBLS training differs between the hospitals. In HCSC a PBLS seminar lasting two and a half hours is held. Meanwhile, in the HGM a structured theoretical-practical in-person course on Pediatric Intermediate Life Support (PILS) lasting 8 h is taught. The PILS course includes training in PBLS, ventilation, vascular access, and intermediate CPR teamwork, and it is accredited by the Spanish Group for Paediatric and Neonatal CPR (SGPNCPR). Students have CPR recommendations and classes available on the paediatrics online campus throughout the course. At the end of the fifth year of medicine, an OSCE test with five stations is held jointly for the hospitals HGM and HCSC. The OSCE PBLS station is held three months after the PLS training. The study was performed over three phases.
1º. PBLS skills were evaluated in the OSCE for 2022. The results were analysed and a comparison was made between the two hospitals.
2º. After the analysis of results, corrective measures were set out for CPR training in both hospitals in 2023.
3º. PBLS skills were evaluated in the OSCE for 2023. The results were analysed and a comparison was made between the two hospitals and between 2022 and 2023.

Participants and study size
All students from the hospitals HGM and HCSC who underwent the OSCE in 2022 and 2023 were included in the study. Two similar but not identical cases of paediatric cardiac arrest (CA) were used for the evaluation of PLS skills in the two years, in order to prevent information being passed from one year to the next. The first year was a CA following trauma in a breastfeeding infant, and the second year a CA after intoxication in a child. Each student had seven minutes to act:
1º. Read the case study and instructions outside the room.
2º. Come in and question the teacher acting as the parent of the child in CA.
3º. Perform basic PBLS.
4º. Explain what happened to the emergency services personnel and the child's parent.
After the student's performance, the evaluator performed a brief analysis with the student to strengthen the positive aspects and correct mistakes. There were 14 evaluators, one in each station. The evaluators were paediatricians and nurses accredited as paediatric CPR instructors by the SGPNCPR who received training on how the OSCE works. They were randomly distributed into the different PBLS stations and did not know the hospital to which students belonged. They scored each item in a computerized database.

Variables
A checklist was prepared according to the SGPNCPR basic CPR evaluation criteria (Table ). The same items were evaluated in both cases (clinical history, clinical examination, technical skills, communication skills, and interprofessional relationships). Each item was evaluated as suitable (5 points) or unsuitable (0 points), in accordance with the criteria that would have been effective in a CA situation. The items for ordered CPR steps and overall evaluation of CPR efficacy had a greater weight (20 points) than the rest. The total maximum score was 100 points (Table ). According to the SGPNCPR criteria, it was considered that basic CPR skills were adequate if the student attained a total score greater than 70. Moreover, the checklist included an evaluation of the overall effectiveness of the CPR, deciding, just as for CPR courses, whether the student's global CPR performance would have been sufficient to attain the patient's recovery or maintenance until the emergency services arrived. After evaluating the results of the first year, several measures were set out to improve the training and address the aspects that led to worse outcomes (for example, practising calling for help and opening the airway with the head tilt-chin lift manoeuvre). In the HCSC an in-person theoretical course was given (the previous year, students only had to review the theoretical documentation on the online platform). Moreover, the duration of the practical classes was increased, and a theoretical evaluation was included before and after the seminar, as in HGM.

Statistical methods
An anonymous database was prepared. This included the hospital of origin and the score obtained for each item. The statistical study was performed using SPSS v 29.0 for OS X (IBM, Armonk, NY, USA). Continuous variables are shown as means and standard deviations (mean ± SD). Categorical variables are shown as counts relative to the total (n/N) and percentages. The Kolmogorov–Smirnov test was used to check whether continuous variables followed a normal distribution. The Student t-test and the Mann-Whitney test were used to compare means, and the chi-squared test was used to compare proportions. A P < 0.05 value was deemed statistically significant.
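To make the scoring rule and the choice of statistical tests concrete, here is a brief sketch in Python using SciPy rather than SPSS. It is illustrative only: the split into 12 five-point items plus the 2 twenty-point items is an assumption consistent with the stated 100-point maximum, and the per-student totals are hypothetical.

from scipy import stats

# Checklist scoring: binary items worth 5 points each, plus 2 weighted items
# (ordered CPR steps, overall CPR efficacy) worth 20 points each.
# Assuming 12 five-point items gives the stated maximum: 12*5 + 2*20 = 100.
ITEM_WEIGHTS = [5] * 12 + [20, 20]

def score_checklist(item_results):
    """Each item is scored as suitable (full weight) or unsuitable (0 points)."""
    return sum(w for w, ok in zip(ITEM_WEIGHTS, item_results) if ok)

def adequate(total):
    """Per the SGPNCPR criteria, basic CPR skills are adequate above 70 points."""
    return total > 70

# Hypothetical per-student totals for the two hospitals.
scores_hgm = [85, 90, 70, 95, 100, 80]
scores_hcsc = [70, 75, 65, 90, 80, 60]

# A Kolmogorov-Smirnov normality check decides between t-test and Mann-Whitney.
normal = all(stats.kstest(stats.zscore(s), "norm").pvalue > 0.05
             for s in (scores_hgm, scores_hcsc))
test = stats.ttest_ind if normal else stats.mannwhitneyu
print(test(scores_hgm, scores_hcsc))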
OSCE 2022 results
The results of the PBLS station in 2022 are shown in Table . 210 students took part, and the mean score was 77.8 ± 19.8; 77.6% of students attained an overall score equal to or higher than 70. In 79.4% of students the effectiveness of the CPR was suitable. Less than 70% of students performed the first steps of CPR correctly: verified whether the situation was safe (66.7%), detected unconsciousness (45.7%), requested help (45.2%), and opened the airway (38.6%). The score of HGM students (82.4 ± 26.6) was significantly higher than that of HCSC students (72.9 ± 21.7), P < 0.001 (Fig. ). In addition, the percentage of students with a score greater than 70 was also significantly higher in HGM than in HCSC (84.3% vs 70.6%, P = 0.018). Adequate CPR was performed by 86.1% of HGM students versus 71.6% of HCSC students (P = 0.010). Also, for each manoeuvre except information, the score was greater in HGM students (Table ).

OSCE 2023 results
The results from the PBLS station in 2023 are shown in Table . 182 students took part in the OSCE. The mean score was 89.5 ± 15.9, and 91.2% of students attained a score higher than 70. In 79.4% of students the effectiveness of the CPR was suitable. All items were performed correctly by over 75% of students. Opening the airway was the manoeuvre with the worst results (76.9%). There were no statistically significant differences in mean score between HGM students (91.5 ± 15.3) and HCSC students (87.8 ± 16.4) (P = 0.121) (Fig. ). However, CPR was suitable in a statistically significantly higher percentage of HGM students than HCSC students (90.4% vs 79.8%) (P = 0.049). The percentage of students who correctly performed the manoeuvres was similar in both hospitals, except for the detection of unconsciousness (number 2): 95.2% of HGM students vs 84.8% of HCSC students (Table ).

Comparison between 2022 and 2023
Table compares the results of the evaluation in 2022 and 2023. The mean score was significantly higher in 2023 than in 2022 (89.5 ± 15.9 compared to 77.8 ± 19.8), P = 0.004 (Fig. ). The percentage of students in whom the effectiveness of CPR was adequate was also higher in 2023 (84.6% vs 79%); however, the differences did not attain statistical significance, P = 0.156. The percentage of students who correctly performed each manoeuvre was significantly higher in 2023, except for information, which was greater in 2022 but was performed correctly by more than 90% of the students in both years (Table ). Tables and compare the scores between 2022 and 2023 for each hospital. In both hospitals the overall score was higher in 2023. The percentage of students in whom the overall effectiveness of CPR was adequate was also higher in 2023, but the differences did not attain statistical significance in either hospital (Tables and ). The percentage of students who exceeded a score of 70 was greater in 2023 (91.2%) than in 2022 (77.6%), P < 0.001. This was also the case in each hospital: HCSC 70.6% in 2022 and 88.9% in 2023 (P < 0.001); and HGM 84.3% in 2022 and 94% in 2023 (P = 0.037). For HCSC, the percentage of students who correctly performed most manoeuvres was significantly higher in 2023 (Table ). In the case of the HGM, the percentage of students who correctly performed each of the manoeuvres in 2023 was also higher than in 2022, but the differences were only significant for the detection of unconsciousness, shouting for help, and opening the airway.
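As a worked check of the headline pass-rate comparison, the counts implied by the reported figures (77.6% of 210 students in 2022, 91.2% of 182 students in 2023) can be fed to a standard chi-squared test. The rounded counts are a reader's reconstruction, not the authors' data.

from scipy.stats import chi2_contingency

# Pass/fail counts reconstructed by rounding the reported pass rates.
table = [[163, 210 - 163],   # 2022: ~77.6% of 210 scored above 70
         [166, 182 - 166]]   # 2023: ~91.2% of 182 scored above 70

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.5f}")  # p is well below 0.001, matching the reported P < 0.001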
Our study shows that the OSCE is a good method for assessing PBLS skills in medical students and for detecting the CPR manoeuvres with which they have most difficulty. These results suggest that the OSCE could be an appropriate method for monitoring and reinforcing CPR teaching. Furthermore, our study showed that three months after training, 10% of medical students are unable to perform adequate PBLS. With prior preparation, the OSCE is an objective, fast, reproducible, and simple evaluation method. It has been suggested that the stress of the OSCE examination may lower students' performance so that it does not properly reflect their skills. However, the stress undergone in a real CA situation is greater, so the stress of the test may even increase its utility for evaluation at the CPR station. Some authors have shown that prior preparation for the OSCE and training with simulated clinical situations reduce stress and improve performance. Our results showed that the most probable cause of the differences in results between the two hospitals was the difference in the theoretical and practical CPR teaching programs (PBLS with 2.5 h versus PILS with 8 h). Thus, when the HCSC program was reinforced the differences diminished, but CPR training by means of a structured PILS course kept producing better results than training in PBLS alone. There is no clear consensus on the level of PLS training that medical students should receive, although in most universities paediatric training is only a complementary part of general CPR training. Training in PILS requires more time, more resources, and more teaching. However, it is very well evaluated by students and attains a higher level of training. In our experience, PILS training in medical students is feasible and attains better skills. On the other hand, our study reveals that in the OSCE evaluation three months after training, 10% of medical students do not manage to perform proper PBLS.
These data coincide with those found by other authors, revealing that practical CPR skills decay quickly if they are not kept up to date and that, regardless of the level of training taught, it is essential to undertake refresher and maintenance activities. An OSCE performed several times during the undergraduate medical degree may serve to verify the results of refresher training activities.

Evaluation of improvement measures
The OSCE enables the evaluation of the efficacy of improvement measures in CPR teaching, as occurred in our study. Our results reveal that improvement activities in CPR training, such as increasing practical exercise time and strengthening the skills that students are worse at learning or tend to forget, attain a significant improvement in skills. Therefore, the OSCE may be used not only for the evaluation of student skills but also to evaluate the training model. However, the OSCE should not only be an evaluation instrument; it should also have a training function. For this reason we include a succinct evaluation with the student at the end of the training to correct and strengthen knowledge.

Limitations
First, one possible limitation of the OSCE evaluation is the individual variability between evaluators. Ensuring homogeneity of evaluators' criteria is not easy. Four evaluators acted during both years while the remainder were different, and the existence of bias cannot be ruled out. However, all evaluators were paediatric CPR instructors accredited by the SGPNCPR and received specific training in the OSCE evaluation, which reduces the bias of individual assessment. To limit individual variability, some authors have proposed having two evaluators in each station, although this means a significant number of evaluators, especially when the OSCE is held simultaneously for many students. Yeates has devised a method, "Video-based Examiner Score Comparison and Adjustment" (VESCA), that includes video recording of the performance and its evaluation by several evaluators. This may reduce individual variability, although it also means more work for evaluators. Another limitation was that the participants were not the same in 2022 and 2023. We cannot exclude that the better results in 2023 were due to the students of that year being better than those of the previous year rather than to the effect of changes in teaching, but the number of students studied makes this hypothesis unlikely, since selection of students to enter the Complutense University Medical School is carried out by the score achieved in a national exam, and the criteria did not change in those years. On the other hand, the case studies set out in the stations in the two years were different, and this could in part account for the differences in results. Some manoeuvres, such as opening the airway, may be a little more complicated in children with trauma, but the remainder are the same. However, CPR manoeuvres in children are no more complicated than in the breastfeeding infant. The OSCE evaluation provided a score, but it is unclear whether this score corresponds to true competence in delivering CPR in a clinical setting. The checklist system used in the OSCE evaluation has the disadvantage that it only classifies the action in each item as suitable or unsuitable and does not enable greater discrimination between different degrees of compliance. This was the scoring system for the entire OSCE and could not be changed for the PBLS station.
In our opinion, a scoring system with five levels (e.g., very good: 5 points, good: 4 points, sufficient: 3 points, poor: 2 points, very poor: 1 point, not performed: 0 points), which is the one the SGPNCPR recommends for PLS courses, helps to better discriminate students' skills, although it requires more time and may create greater discrepancies among the evaluators. Other authors propose a blend of checklists and evaluation scales, mainly to evaluate complex skills. Finally, in our study we did not perform a long-term evaluation to see whether PBLS skills are maintained over time, although, as discussed, various studies have revealed that without refresher courses these skills gradually decline, which strengthens the importance of undertaking periodic refresher courses.
The OSCE successfully identified differences in the performance of CPR skills between medical student populations exposed to different training programs, as well as score improvement following training program modifications.
The disposability and inclusion of Brown bodies | bcf04a6e-3402-4f88-92f0-f69838b95969 | 11775433 | Forensic Medicine[mh] | INTRODUCTION
As contemporary bioanthropologists struggle to reimagine a postcolonial science and practice, the bioethics of curation and research with illegally or questionably acquired human archeological remains has been at the forefront of discourse (Halcrow et al., ; Joyce, ; Stantis et al., ). However, the ethical responsibility of scientists working with historic anatomical collections has received comparatively less attention (Geller, ; Watkins, ). In recent years, there has been increasing acknowledgment of the use of skeletal human remains held in public and private institutions for education and research that were acquired without contemporary standards of informed consent. In the U.S. the greatest attention has focused on institutional holdings of Native American ancestors that were taken from archeological sites during the 19th to mid-20th century against the cultural beliefs of Indigenous people. The history of these collections is set in the scientific racism that followed the genocide and forced movement of Indigenous peoples during the founding of the country, a fact made only more iniquitous by the resistance and/or slow pace of institutions in returning these ancestral remains, despite the passing of the federal repatriation law, the Native American Graves Protection and Repatriation Act (NAGPRA), in 1990 (Jaffe et al., ). Public attention has also been captured by recent calls for the ethical treatment of contemporary skeletal remains from forensic contexts, for example the remains of two children who were victims of Philadelphia's MOVE bombing and were held and used for teaching in the University of Pennsylvania's Museum of Archaeology and Anthropology (Dickey, ), or from high-profile museum exhibits, such as the remains of Charles Byrne, who had been referred to as the "Irish Giant" and whose remains were recently removed from display at the Hunterian Museum at the Royal College of Surgeons of England (Moses, ). In comparison, almost no attention has been paid to the ethics of working with human remains historically sourced from South Asia (India), which are the most ubiquitous anatomical collections used globally (Agarwal, ). Human skeletons from India were the primary global source of human bone for almost 200 years, with the majority obtained through questionable consent, illegal theft, and/or murder, and with an estimated peak of 60,000 skeletons per year exported prior to the ban in 1985 (Carney, ; Fineman, ). In this article, I give a contextualization of the collection of human skeletons, with particular focus on the unique history and nature of anatomical teaching collections, and why they matter at this moment. I then delve into the historical background of how the exportation of skeletons from India came to be and how the unique commodification of Indian bodies dominated the global market. I also discuss how India continues to allow a substantial legal market for the collection of skeletal remains from unclaimed bodies for anatomical study domestically, and what practical solutions exist for the ethical treatment of existing historical skeletal collections. The global historical skeletal collections from India are central not only to wider conversations of bioethics but also to those of colonialism, white supremacy, and scientific racism in the field of anthropology.
THE CONTEXT: LEGACIES OF COLLECTION, REPATRIATION, AND USE OF HUMAN SKELETONS Before delving into the unique history of anatomical skeletons from South Asia or exploring our responsibility for their care, I wish to contextualize the collecting of human remains historically. In doing so I emphasize the differing contexts of collection and why the inclusion of voices from biological anthropologists who are persons of color is vitally important, but also why in practice it has been difficult for these scholars to speak up. While there are large collections of both ancient and modern human skeletal remains housed in institutions around the world, the histories of these collections vary greatly. The histories of how these collections were created and used expose the similarities in their colonial and racist foundations, but they also draw attention to who has built and upheld the scientific knowledge in our field, who has been excluded, and why (Blakey, , ; de la Cova, ; Watkins, , ). Collections of human remains also vary in the degree of information known about the individuals themselves and/or the communities they once lived in. The two fields of anthropology and anatomy amassed most of the institutional collections. The field of human anatomy was founded on the procurement of local bodies for dissection and medical education, with a dark history of grave robbing and other questionable methods of acquiring bodies in both the US and Europe (Halperin, ; MacDonald, ; Richardson, ; Sappol, ). Anatomy Acts in various countries curbed these methods of acquisition but refocused collection on the bodies of other marginalized communities (Richardson, ; Sappol, ). Many of the bodies used for medical dissection education were kept as skeletal remains for long-term educational use (de la Cova, ; Nystrom, , ). At the same time, the field of biological anthropology was founded in the history of scientific racism, colonial expansion, and genocide, with its knowledge production generated by the collection and study of remains from almost exclusively marginalized populations (both archeological and anatomical) for the study of human racial variation (Blakey, , ; Colwell, ; Geller, , ; Lans, ; Platt, ; Redman, ; TallBear, ). While all human remains, regardless of acquisition, demand ethical treatment, their unique histories are critical in determining contemporary decisions on continued curation, use, or appropriate repatriation. Further, while respect for the dead is universal in human communities, cultural and religious beliefs about skeletal remains vary globally. In the case of Native American, First Nations, Indigenous, or African American ancestors that were stolen without permission from grave sites, the moral imperative for their repatriation is clear. Repatriation of Native American ancestors has been facilitated by the NAGPRA law and subsequent state legislation, such as CalNAGPRA, which were the result of the activism and decades of tireless work of Native American communities and scholars (Colwell, ; Cryne, ; Daehnke & Lonetree, ; Fine-Dare, ; Nash & Colwell, ).
In these cases acquisition history along with oral history can help to establish affiliation for ancestors to be returned to their communities, but there have been challenges in the implementation of NAGPRA with respect to cultural affiliation and its integration into archeological practice (Anderson & Atalay, ; Atalay, ; Ayau & Tengan, ; Bergeron, ; Colwell & Nash, ; Colwell-Chanthaphonh et al., ; Colwell-Chanthaphonh & Powell, ; Cryne, ; Fforde et al., ; Halcrow et al., ; Hudetz, ; Kakaliouras, ; Lippert & Sholts, ; McKeown, ; Montgomery & Supernant, ; Nash & Colwell, ). For African American ancestors (Blakey, ; de la Cova, ; Dunnavant et al., ) and Indigenous ancestral remains internationally (Aird, ; Aranui, ; Barbosa, ; Gabriel & Dahl, ; Matsushima, ; Parsons & Segobye, ; Schanche, ; Stantis et al., ; Tapsell, ) the limited protection laws have created unique challenges and struggles for ethical treatment. In the case of individual skeletal remains, records of identification are key for communities and Nations in seeking repatriation of specific individuals, such as the case of the remains of Saartjie (Sarah) Baartman, which were kept in a Paris museum until 2002 (Daley, ). However, institutional legacy collections that were obtained with historical permission will require more complicated pathways to contemporary ethical treatment and curation. For example, international archeological material or historical anatomical material that was collected and curated with historic permission continues to be housed and studied in various institutions. In the case of anatomical remains, recent in-depth contextual studies that utilize both archival and biological data have been instrumental in bringing to light the histories of social inequality and violence that led to the formation of many of these collections, and in providing insight into the lives and living conditions of the people themselves (Austin et al., ; de la Cova, , , ; Geller & Stojanowksi, ; Lans, ; Watkins, ; Watkins & Muller, ). Most historic skeletal collections were created to conduct studies of human variation and racial categorization, such as early phrenological studies, were later taken as part of archeological excavations seeking knowledge of culture history, and/or were used as pathological case studies in medicine. In these cases, institutions made a point to record and keep information, such as individual names, demographics, medical records, geographic burial location, or racial/ethnic categorization. What is less widely known is that some human remains were kept without any recorded details of who they were or where they were taken from. These remains were kept solely to be used as individual elements for the study of human skeletal anatomy or landmarks, essentially as bodily maps of bony structure and variation. Many of the collections of unknown skeletal remains are found in medical or anatomical schools, often historically kept after dissection as teaching tools without much information other than a record of the skeletal element. While anatomy departments now use stringent guidelines of informed consent for the acquisition of bodies for educational purposes, these historical collections remain in storage and often continue to be used in many departments/museums across the world (AAA, ; Champney et al., ). It is important to note that in some institutions, particularly in the US, archeological remains were also kept solely to teach skeletal anatomy, primarily in anthropology departments.
These were kept as unprovenanced remains or simply not kept with any recorded information on acquisition. There is no record of a professional protocol (at least that this author has found) as to why archeological remains would be kept without any information on acquisition. However, with different standards of ethical practice and little legal protection for Native American ancestral remains historically, archeological remains would have seemed a ubiquitous source for early anatomists and anthropologists to use for basic osteological training. After World War II, human skeletons exported from South Asia became the primary skeletons used to teach skeletal anatomy in both anatomy and anthropology departments. Collections of human remains from South Asia, the focus of this article, still fill anatomical and anthropological institutions globally and are the largest class of human remains used today without any additional recorded information. These South Asian exported skeletal remains are unlike other skeletal collections in their sheer global number and their unique British colonial history, as will be discussed. However, an important part of the story of unprovenanced skeletal collections is not only how and why they were collected in the first place, but why so many continued to be held and studied long after professional acquisition practices changed (both in anthropology and anatomy). The truth is that, until very recently, these collections were not deemed particularly uncomfortable to have by most anatomists or anthropologists. Although the 1990 federal NAGPRA law provided the first protections to stop the acquisition of new ancestral remains and begin repatriation of Native American ancestors already held in institutions, and India banned the export of skeletons in 1985, the use of existing unprovenanced collections to train subsequent generations of scholars largely continued. The change to the practice of using unprovenanced archeological remains that are potentially Native American ancestors has begun to occur due to updates to state and/or federal NAGPRA regulations. For example, the use of a large legacy teaching collection of unprovenanced archeological remains for osteological training at UC Berkeley was only stopped after a change to institutional and state policy (Hudetz & Brewer, ). Similarly, African American scholars have recently called for a stop to the use and collection of African American remains (Dunnavant et al., ). Ethical concerns for the treatment of unprovenanced human remains and calls for changes in professional practice have not typically been raised by academic scholars, particularly in the fields of biological anthropology and human anatomy where there has historically been a lack of diverse representation. As such, scholars of color are disproportionately burdened with speaking out about the ethics of skeletal collections. This is not because of a desire for culture canceling or religious beliefs, the red herring that some ignorant critics have cited as the reason for reevaluating teaching collections (Weiss & Springer, ). It is because scholars of color identify with the disenfranchised, the marginalized, the enslaved, and the murdered that make up our skeletal collections. For people of color, they are extended kin.
It is important to also recognize that scholarly Brown voices have been dismissed in order to continue to uphold the racist science in biological anthropology (Blakey, ; McLean, ; Watkins, ). This is part of why it is so hard for scholars of color to speak up and for the field to move forward. Many scholars of color, like me, have long been uncomfortable with the unacknowledged histories of legacy skeletal collections that were formative in our training, particularly collections without any legal protections that limited their use. But as a minority, we did not find the space where we could overtly and safely interrogate the scientific foundation of our field, question the practice of our mentors, or challenge the senior producers of knowledge that were almost exclusively White. It was several years into my undergraduate anthropology training when I first learned that nearly all the skeletal teaching collections on my campus came from South Asia. I had just assumed the skeletons we used in our education were from the same consented donors used for medical dissection. In a crowded room filled with eager students, in front of dozens of splayed-out disarticulated skeletons, the instructor shared in passing that they were all from India. While no one gave it a second thought, I vividly remember not only the many questions I had as to why they were from India but also my stark realization that I had more in common with my study skeleton than with my peers. I kept this unacknowledged fact to myself, with guilt, for many years. This context clearly sets my positionality in the current research as a South Asian Canadian/American and a bioarchaeologist. But I share this not as a disclaimer or an apology, but to underscore why my particular perspective on the historicity of skeletal collections from South Asia is uniquely important, just as the perspectives of anthropological scholars of color have been on Native American (Bader & Malhi, ; Lippert, ) or African American ancestors (Blakey, , ; Lans, , ; Watkins, ). Lastly, while I discuss what might be future avenues for curation, continued use, or repatriation of South Asian collections, my primary goal here is to begin to return the humanity to those that have been systematically stripped of their very identity and made into anatomical objects. NECROPOLITICS AND COLONIAL INDIA The British rule of India began in 1858, lasting until the Independence of India and Pakistan in 1947, although colonization began as early as the 1600s with early trade and the consolidation of the British East India Company (called the Company) on the Indian subcontinent in 1773 (Metcalf & Metcalf, ). The British colonization of India obviously had profound effects on the subcontinent, bringing technology, industrialization, and economic development such as the railroad. But the benefits were primarily reaped by the Company and later the British Empire, shattering the economy of India and inflicting violence and suffering on the Indian people through famine, illness, and death (Mallik, ; Rahman et al., ; Sen, ; Tharoor, ). The British Empire also implemented a Western English educational system for health and medicine, and this set the stage for India to become the largest producer of anatomical skeletons. As medical education grew in Europe and North America in the late 18th and 19th centuries, so too grew the need for bodies to be used for anatomical training/dissection.
With a shortage of bodies, the practice of grave robbing and the murder of the poor and/or racially disenfranchised was widespread (Halperin, ; MacDonald, ; Sappol, ). The well-known case of Burke and Hare, who murdered individuals and sold their bodies to the anatomist Robert Knox, led to public outrage and riots, and eventually to the 1832 Anatomy Act in Great Britain. While the 1832 Act reduced grave robbing and allowed the first provisions for willed bodies, it also led to the legal use of the "unclaimed," who were historically prisoners, psychiatric patients, the poor, and the destitute (Richardson, ; Sappol, ). But demand for cadaveric bodies in dissection rooms continued, and as such Britain looked to its colonies for bodies, particularly as it began medical education in India. Prior to British colonization, India followed indigenous systems of medicine, primarily known as Ayurvedic medicine (Ayurveda) (Hajar, ). Western medicine was brought to India in the late 17th century to serve the troops and employees of the Company and then later the military and civilian British Empire, which eventually increased the need for locally trained medical assistants and staff (Anshu & Supe, ). Lord Bentinck, the then Governor-General of India, established the first medical college and hospital in Calcutta, West Bengal, in 1833 (Arnold, ; Jacob, ; Figure ). When establishing Western medicine in India, the British used and adapted the Indian caste system to aid in the procurement and preparation of bodies, relying on the use of community members called Doms in the medical colleges. Doms, particularly in Bengal and Bihar, represent one of the most widespread and lowest of all castes in India. Historically, they fulfilled tasks that are considered particularly defiling, like removing animal carcasses and carrying and tending to the human dead in burning cremation grounds (Arnold, ). Today, both Doms and Aghoris, members of one of the most extreme and controversial sects of Hindu holy men called sadhus, are popularly exoticized Eastern practitioners of death rituals. In fact, the city of Varanasi (formerly known as Benares) is a well-known tourist destination to visit cremation grounds and bear witness to their death rituals. For Westerners, the fact that Indian, specifically Hindu, rituals of death involve such elaborate handling of the dead might make their participation in anatomical dissection in colonial medical schools seem like a normal extension of practice. But prior to colonization, anatomical knowledge in ancient India was derived principally from animal sacrifice, chance observations of naturally macerated human bodies, or the examination of patients during treatment (Arnold, ; Jacob, ; Loukas et al., ). Although there were Vedic understandings and explorations of the body and anatomy, overt dissection of the body was not known (Arnold, ; Jacob, ). Western medicine established scientific thought and practice in India but did so by marginalizing and then replacing Indigenous systems of the body and wellbeing (Bhattacharya, ; Kumar, ). Doms were actually forced into the service of human dissections, and they had a deep loathing and suspicion of cutting up bodies (Arnold, ). The practice of medical dissection and collection of human remains was a key component of colonial hegemony, establishing not only the education of anatomy and Western perceptions of the body but also using Indian bodies (both the dead and the use of Doms to procure bodies) as a site of colonizing power (Arnold, ; Bhattacharya, ).
Arnold describes the corporeality of colonialism in India as it stood specifically in the space of medicine, using the body as a site for control, authority, and legitimacy. Arnold reminds us that "Bodies were being counted and categorized, they were being disciplined, discoursed upon, and dissected, in India much as they were in Britain, France, or the United States at the time" (Arnold, , p. 9). The unparalleled opportunity provided by the establishment of anatomy in colonial India cannot be overemphasized. The Calcutta Medical College was unique in medical education worldwide with its virtually unlimited supply of cadavers (Gorman, ). With such easy access to bodies for dissection, the College established itself as a leader in anatomical education and Western medicine (Gorman, ), demonstrated by the account that in the 8 years between 1837 and 1844 some 3500 bodies were dissected at the Calcutta Medical College (Webb, ). Allan Webb, a surgeon in the Bengal Army who later became a professor of anatomy at the Calcutta Medical College, built a pathological museum of physical specimens (Gorman, ). He wrote in detail about the acquisition of specimens in the Pathologica Indica, detailing with zeal Indian patients suffering or dying from ghastly and unique pathological presentations (Webb, ). He noted how the dissections supplied the museum with its specimens and produced the skeletons that would replace teaching skeletons derived from Europe (Webb, ). He specifically wrote that skeletons from Europe would "be done away with entirely" (Webb, , p. x). By the 1850s the Calcutta Medical College was processing 900 skeletons a year for shipment abroad (Carney, ), and this number would rapidly increase in the following decades, fueled by periods of pandemic and famine. The export of anatomical specimens from Calcutta particularly expanded during World War II and following the Bengal Famine (Banerjie, ). The famine in the Bengal province of British India (what is now Bangladesh, West Bengal, and eastern India) claimed the lives of up to 3 million people through starvation and disease, causing the destruction of the agrarian economy and communities (Greenhough, ). The Bengal famine is regarded not as the result of serious drought, but is instead largely blamed on Churchill-era British World War II geopolitical calculations related to the Japanese occupation of Burma (Mallik, ; Sen, ). Specifically, racist British policies continued to prioritize exports from India, leading to a rice shortage and famine, and resulting in the hunger and death of millions (Mallik, ). The suffering and loss of Brown bodies was callously disregarded even in the U.S. at the time (Figure ). For example, the capitalization on famine bodies is chronicled in a 1943 issue of Life Magazine in an article that highlights the success of the American anatomical preparation house Clay Adams, which used imported skeletons from victims of the Indian famine (Litten, ). Traders were well known in Calcutta for supplying skeletal material to anatomical preparation houses in the UK and the US in the 1930s and the decades following the famines (Banerjie, ; Litten, ). Foucault termed the social and political power used to achieve control over bodies and people's lives biopower. An extension of biopower is necropolitics, developed by the theorist Mbembe, which expands on how socio-political power can be used to dictate not just how others live but also how others die and/or live suspended in precarious conditions.
Both concepts aid in understanding how the practice of medicine and anatomy and the slow death from starvation in colonial India intertwined with the exertion of authority and control over Indian bodies. Together they represented an arc of power over the Indian body during life and death, one that would eventually be exploited in the form of skeletonized bodies. THE GLEAMING WHITE STUDY SKELETON Following the independence of India in 1947, the export of Indian skeletons continued. The demand for human skeletons continued to grow exponentially with Western medical schools and students, who customarily purchased, along with their stethoscope, their own "study skeleton," typically a boxed disarticulated skeleton for under $100. Increasing concerns by human rights groups about unethical practices in the bone trade led to calls to stop the exportation several times, but many international and national groups lobbied against a ban (Andrabi, ; Banerjie, ; Carney, ). Anatomical preparation houses in the United States and United Kingdom had a very lucrative business importing skeletons from India through middlemen and completing the final preparation, boxing, and labeling in house. For these anatomical supply companies, many of which began during colonial rule, such as Clay Adams (New York) and Adam Rouilly & Co. (London), and later Biocraft (Chicago), business was wholly dependent on the exportation of bodies from India (Figure ). It was only in 1985, after a bone trader was arrested exporting the skeletons of 1500 children, eliciting nationwide panic that children were being kidnapped and/or killed for their skeletons, that the Supreme Court of India finally banned the export of human bones, and other tissues, under the national Import/Export Control Act (Carney, ). European and North American anatomical exporters attempted to have the ban lifted without success, and the availability and pricing of teaching skeletons quickly adjusted accordingly. It is estimated that just prior to the ban in 1985, Calcutta exporters traded almost 1.5 million worth of skeletons, with some estimates as high as 5 or 6 million (Fineman, ; Stephan et al., ). The Chicago Tribune estimated that 60,000 skulls alone were shipped from Calcutta (Carney, ). If we use a conservative estimate of approximately 40 years of exports of similar size from the time of Independence to the ban in 1985, that could mean approximately 2.4 million Indian skeletons exported outside of India. This does not account for skeletons/skulls collected for pathological or phrenological studies in the 100 years prior to Independence, which are well documented in museum collections, particularly in the UK and former colonies, or the use of skeletons within Indian medical colleges (Cohen, ; Stephan et al., ). It is no surprise that most Western biological anthropologists and osteologists have encountered and likely used an anatomical teaching skeleton from India. How and why India continued the exportation of skeletal bodies after Independence is complex. Certainly, domestic Indian traders served as adept middlemen for what began as accommodation and participation, and eventually became lucrative appropriation. But we should not forget the necropolitical and racist origins of the production, demand, and business of the exportation of bodies. The British left a gutted postindependence economy, and the bone trade was left to grow within pockets of abject poverty.
Since India legally allowed the trade to continue until the 1985 ban, there is a tendency to discount the ethical concerns of using South Asian anatomical skeletal specimens, to misunderstand the necropolitical power dynamics that were foundational to the trade industry, or to blame the bone trade on the Indian social structure of the caste system (Cornwall et al., ; Jones, ; Stephan et al., ). Some anatomists have suggested that it was simply "convenient" for India to export skeletons, as in a recent paper by Stephan et al. ( , p. 73) that notes that "India was largely Hindu so the body counted for much less after the soul was thought to have departed" (because the body was replaced with each reincarnation). This demonstrates not only a lack of acknowledgment of the colonial racist foundation of these collections, but also the ignorance of Hindu religious beliefs and funerary customs that persists today. Traditional Hindu beliefs of life and death in fact place great significance on the release of the soul only through the breaking of the skull on the final embers of the cremation pyre (Parry, ). No South Asian person or family member would likely have voluntarily consented, absent financial precarity or duress, to being made into an anatomical study skeleton instead of receiving burial or cremation. Detailed accounts of the business just prior to and following the 1985 ban noted intact bodies being taken from the Ganges River from impoverished families that were not able to conduct cremation. However, several accounts also note the widespread robbing of skeletons from cemeteries in Bengal before and after the ban, and/or the purchase of, or agreement to take, bodies prior to death from families that had no resources for burial or cremation (Carney, ). While the assumption has always been that anatomical skeletons were taken from Hindus, it is key to note that the heart of the bone trading industry, as exposed widely by Carney, was in West Bengal, which also has a large Muslim community. For individuals that were Hindu, there is no evidence that only the bodies of the lowest caste members were taken. The anatomical dissection and bone trade originated by the British targeted geographic areas in Calcutta and at first was certainly centered on Hindu bodies (Webb, ), though postindependence at least some Muslim burials were also targeted (Carney, ). While the caste system in India was manipulated by the British to procure bodies in the late 19th century, the later peak of the 20th century bone trade market flourished in the far more complicated intersection of indebtedness, religious marginality, and socioeconomic inequality that is not explained by the caste system alone. This continues to be the case in other contemporary predatory markets, such as the organ trade in India today (Cohen, , , ). When early skeletons were first imported to Western countries it was not kept secret where they were from. While there was disregard of the dire conditions of the communities living and dying in India, accounts such as the article in Life magazine (Litten, ) did not hide where anatomical bodies were coming from. This indifference to the extreme conditions of poverty or violence that people suffered prior to their death, and to the ethics of the capitalization of these conditions for the benefit of scientific research and education, seems striking, but it is not unlike the indifference toward the violent histories of Black and Indigenous skeletal bodies across the Western world discussed earlier.
At some point, attention stopped being paid altogether to where imports were coming from, and by the mid-20th century most students and professionals had forgotten or did not care where their anatomical study skeletons were from. There were no cards or labels indicating country of origin; skeletons were classically labeled only by the UK or US preparation house that acquired them from Indian middlemen and sold them, as shown above. Certainly, few professionals were aware of or discussed the Indian bone trade until the work of Carney, who first categorized it as part of the "red market" (the trade of organs or other bodily tissues) (Carney, ). I have argued that part of the reason human remains from India are so easily objectified and their origins forgotten also has to do with the materiality of the crafted skeletal bodies themselves (Agarwal, ). While those that died and were skeletonized came from varying religions, villages, cremation grounds and/or cemeteries, they were transformed into uniform anatomical objects. Teaching skeletons from India are highly standardized specimens with high-quality, distinguishable anatomical landmarks. Carney has described how bodies for skeletonization were meticulously processed, detailing how corpses were often first dismembered naturally in rivers by bacteria and fish, followed by crews of hands that would scrub and boil the bones in water and sodium hydroxide to dissolve remaining flesh, which with sun exposure and hydrochloric acid soaks would produce gleaming white skeletons. While early colonial exports to Britain and the colonies appear to have been disarticulated haphazardly, often with the mixing of bone elements between individuals (Stephan et al., ), later exports, particularly to North America, were strikingly similar to one another and unmatched by exporters from China or Eastern Europe. These mid-20th century exports are uniform, with both young males and females usually represented, with intact teeth and little or no sign of pathology or trauma. The selective and transformative process, first established by British anatomists as the gold standard and perfected over decades, was explicitly crafted to rid the remains of signs of the individual, and specifically the Brown individual. Kakaliouras has used the concept of "osteological subject making" to discuss how Indigenous remains are used to create scientific objects devoid of personhood, and Watkins has highlighted the framing of Native American bodies (TallBear, ) and Black bodies as the "raw material" for scientific knowledge. Geller uses the concept of "becoming-object" to highlight the history of necropolitics in anatomical collections and among their medical collectors (such as Samuel G. Morton), and also the dynamic nature of human remains as objects during life and after death. Watkins' engagement with Spillers' description of how racialized bodies are deployed as "flesh" for consumption versus the bodies of real persons is also chillingly echoed here in the literal stripping of flesh from South Asian bodies to create the raw osteological materials. The key difference is that most other anatomical skeletal specimens were assigned or retained at least partial histories because this was deemed valuable and essential to comparative studies of race and human variation (Geller, ; Kakaliouras, ; Lans, ; Stantis et al., ; Watkins, ).
All individuals that make up historical skeletal collections experienced forms of violence during their lives in marginalized classes, during the acquisition of their bodies, and in their postmortem lives (de la Cova, ; Lans, ). However, the violence of erasure, disposability, and objectification of South Asian people as specifically anonymous teaching materials is exceptional. The value of these Indian bodies came from erasing any history and from making them geographically displaced. Anatomical dissection and the production of skeletonized bodies in India was founded on colonial racism. However, the postcolonial domestic laborers that perfected the painstaking and toxic work of producing the white skeletons for meager remuneration, and our own uninformed use of these historical skeletons, also unknowingly upheld and continue to uphold colonial necropolitical logics (Geller, ). CONTEXT, CONSENT, AND ETHICAL SOLUTIONS In recent years, anthropologists have demonstrated how the contextualization of anatomical collections specifically, and their history, can continue to inform the political present. For example, Hildebrandt's historical contextualization of the procurement of anatomical bodies in Nazi Germany and of other historical ethical transgressions highlights how racist agendas are still tied to modern anatomical practice. Similarly, de la Cova's study of the Robert J. Terry Anatomical Skeletal Collection (one of many skeletal collections held at the National Museum of Natural History of the Smithsonian Institution in Washington, DC) highlights the structural violence that can be traced through the lives of the people of St. Louis, as well as the nonconsensual dissection and use of their skeletal remains. There have also been recent calls for ethical guidelines and policies to deal with museum (Dunnavant et al., ; Stantis et al., ) and anatomical medical skeletal collections (AAA, ). Anthropological discussions of bioethics have sharpened our understanding of the importance of consent in the acquisition and use of archeological and modern osteological remains, consent that was typically not obtained for most historical collections (Champney et al., ; Colwell & Nash, ; Geller, ; Winkelmann, ). Anthropological interventions also underscore how critical the historical context is to framing the trajectories of structural violence that surround these collections, and to making decisions on how they should now be treated. Recently, professional anatomists have attempted to acknowledge and deal specifically with the South Asian skeletal remains found in their historical collections (Coman et al., ; Cornwall et al., ; Jones, ; Stephan et al., ). However, these discussions urgently need to consider the histories of violence and racism from the past and those that are inadvertently perpetuated in the present. In anatomy, the central concern has been to shift to willed donation for both soft tissue dissection and bony tissues. For example, at the School of Biomedical Sciences at the University of Queensland, Stephan et al. reported steps to decommission their historic osteological collection from India into a "memorial assemblage." The authors outline a process of slowly replacing their teaching collection over a 5-year period with donated bodies prepared in house using dermestid beetles. While the dedication to replace skeletal collections with specimens that have informed consent is admirable, there seems to be little humanization of the individuals in the historical collection.
The researchers make clear that the motivation for the replacement is a "remedy" to offer "improved osteological morphologies" with "authentic skeletons," as their collection is not "representative," with incomplete, commingled, and over-processed early historical exports (Stephan et al., ). Further, the researchers note that the slowly decommissioned Indian skeletons will be kept in a "memorial assemblage" that is visible and prominent as a "reminder" to staff and students. This is a notable contrast to the protocol of commemoration and cremation for individuals in their ongoing contemporary (primarily White) body donor program. Similarly, the use of the word "disposal" as a viable solution for historical anatomical skeletons (Coman et al., ; Jones, ) does little to appreciate the individuals as more than objects. A recent special issue of the Anatomical Record was significant in devoting the entire volume to discussing the dark colonial history and ethics in the field of anatomy (Laitman et al., ). Yet not a single paper in the issue mentions the approximately 2.4 million Indian bodies that have been formative in skeletal anatomy education. South Asian people were the subject of violence, racism, and objectification for almost two centuries, systematically erased while still being present in almost every medical college, anatomy, and anthropology department in the Western world. Our continued erasure through ignorance, insensitive language, or actual physical disposal upholds this violence. It is also curious that in the consideration of ethical treatment of remains from India there is little thought given to the beliefs of descendant communities, and overt ignorance regarding Hindu practices and beliefs about the dead and reincarnation, as discussed above. The Hindu concepts of reincarnation are relevant to understanding the beliefs of Indian ancestors and descendants. In Hinduism the Ātman (a Sanskrit word) is the true or eternal Self, or the individual soul (Adamson & Ganeri, ; Doniger, ). It is considered a part of the larger essence of the absolute, or Brahman, which is the universal spiritual force that permeates all things. For humans the Ātman lives throughout the body and its biological tissues, but is eternally recreated through rebirth again and again, with the soul reborn in another form following the body's death (Dalal, ; Doniger, ). The key here is that the body is a receptacle for the Ātman. As such, for many Hindus (and Buddhists) the skeleton is not an ancestral being, at least not in the sense of the spiritual belief systems of other cultures. However, the Hindu belief in reincarnation cannot be used as an excuse for the role of colonialism and the market needs of the Western world in sustaining a nearly two-hundred-year predatory red market of violence and racism. But the belief systems of the specific places these individuals were taken from should be considered in the ethical treatment of their remains. It is tempting to consider repatriation as the obvious ethical solution for South Asian anatomical human remains, as is the case with ancestors taken from archeological burial contexts. However, it is not feasible to ask to repatriate South Asian ancestors to a country that in 2021 had a steadily increasing population of 1.4 billion and faced the challenge of cremating or burying 10.23 million deceased people in that year alone (United Nations, ).
More importantly, we need to also consider the belief systems of Indian descendants today, and not assume or try to impose Judeo-Christian or North American Indigenous beliefs about ethical treatment. It is important to note that India continues to allow the domestic acquisition of anatomical material from unclaimed bodies (Lalwani et al., ). Further, India continues to be a leading producer of anatomical skeletons for use within India and an exporter of human bodies for dissection in medical education (Habicht et al., ). The Anatomy Act of India, enacted in 1949, provides for the supply of unclaimed bodies of deceased persons to hospitals and medical schools for the purpose of anatomical examination, dissection, and removal of transplant organs. Bodies are deemed "unclaimed" after anywhere between 24 and 72 hours, depending on the state. At the time of independence in 1947 there were 23 medical colleges with an annual admission of 1000 students (Jacob, ). With the contemporary explosion of privately owned medical schools, India has the most medical colleges in the world, with a total of 63,250 students enrolled for medical education in the academic year 2018–2019 (Sabde et al., ), and 606 medical/specialty schools as of 2020. Skeletons and bodies are instrumental not just in learning basic topographical anatomy but are also required for the modern "specialist anatomy" that includes laparoscopic, endoscopic, radiological, and endovascular anatomy within the medical curriculum globally (Jacob, ). Lastly, the tradition continues for entering medical students to own their own teaching skeleton, a status symbol along with a stethoscope (Cohen, ), which continues to fuel the now domestic industry in India. In recent years, there has been a growing push from Indian anatomists to no longer use unclaimed bodies and to begin to develop donation programs in the country (Ajita & Singh, ; Lalwani et al., ), though these are currently undeveloped. With contemporary attitudes in India toward anatomical specimens, the identification of descendant communities that wish to repatriate remains is complicated and is unlikely to occur for most of the hundreds of thousands of remains in collections across the world. So then what is the ethical path forward for the large number of South Asian remains in anatomical teaching collections? My goal in this research is to restore the humanity and identity of these individuals. There is likely not one best solution for all the historical remains that exist, and while I do not speak for all South Asians or all descendants close or distant, I do think that at least some continued contextual use and/or demographic study of these remains, with care, is necessary to reestablish their humanity and history as once living people. The greatest violence these individuals have suffered is erasure. Not only were they geographically displaced in death, but their identity was rigorously removed. In many anatomy and biological anthropology labs they were stored by body part, roughly handled by generations of students as objects that served as anatomical maps, never given recognition of their geographic place of origin, and rarely even given basic demographic information (even when it could be obtained by more detailed osteological analysis). By swiftly removing them to storage, cremation, or burial as an ethical solution rather than rehumanizing them, are we not perpetuating the violence against them?
While we can never reconstruct full personal identities or lineal descendants for most of the individuals, we can tell the stories of these individuals alongside some aspects of their identity that remain, as has been done with other anatomical collections (de la Cova, , , ; Lans, ; Watkins, ). Rehumanizing these individuals could include things like rearticulating their skeletal elements, removing mounting hardware, and rehousing them in respectful containers (Davis, ), as well as seeking to add their demographic and life course information (such as estimated age at death and biological sex, stature, pathology, and cultural modifications). They are young women and men, children that did not grow up, babies that died in the womb, and elderly people that lived hard lives. They often have visible markers of social identity, such as dental or cranial modification. They should not be labeled unknown or unprovenanced; they come from specific regions and cities of India. It is critical that those who decide what happens to these human remains be South Asian descendants, both abroad and in India. In cases of unknown deceased individuals where there can be no informed consent, the obligation of researchers is to the descendants, whether they be lineal descendants with historical ties or representatives of the local social community (Blakey, ; Blakey & Rankin-Hill, ). For example, the African Burial Ground Project in New York City established a model of ethical engagement with descendant groups, the clientage model, that served the interests of the descendant community for dignified treatment, study, and reinterment (Blakey, ; Blakey & Rankin-Hill, ), and there have been recent applications of similar models with Native American descendant communities (Severson et al., ). In the case of skeletal remains from India, there will obviously be different opinions even among South Asian communities across institutions globally. We should also not expect that contemporary professionals in India will quickly adopt the new ethos suggested by their Western peers, such as the "bioethos" suggested for ethical bioarchaeological practice (Geller, , ), for anatomical communities and their body donation programs (Champney, ), or for ethical archeological practice (Wood & Powell, ). While anatomists are rushing to put a moratorium on the use of historical anatomical teaching collections, we also need to listen to the voices of South Asian descendants in local communities that may want continued use or partial moratoriums. Along with ethical research that seeks to learn more about the individuals, deeper documentary and historical study is needed to better understand and learn about the South Asian historical communities from which they were drawn. Specifically, attempts could be made for the first time to not just use remains to reproduce and learn osteological knowledge but to learn about the populations they are from (Watkins, ). Another option is the informed use of collections for teaching, whereby students who study the skeletons are educated on the full history and collection of bodies from India. For example, some contemporary medical schools have started to inform students of the personal history, demographics, cause of death, and/or the first names of willed donors used in anatomical training, and students in some schools even meet the families of the donors (Allen, ; Talarico, ).
Tools of pedagogical empathy, even simply learning and sharing the limited known demographics of individuals and the history of South Asian historical collections, should be put in place as ways of helping the deceased reclaim their personhood. The need to share information about the colonial history of the Indian bone trade and to humanize historical anatomical specimens is perhaps even more urgent given the growing secondary market of private dealers selling human remains globally (Huffer et al., ; Huffer & Chappell, ; Huxley & Finnegan, ). Private dealers have continued to flourish through online platforms and social media (Huffer et al., ), selling to the public under the pretense of using responsibly sourced medical material (Carington, ). Huffer et al. have noted that a significant percentage of the material sold by contemporary amateur and professional sellers is discarded anatomical specimens from India. However, the unethical and violent history of the specimens is unknown to enthusiast buyers, obscured by the worn historic labels or identifiers showing they were prepared by well-known historic anatomical educational purveyors in the US or UK (Huffer et al., ). The South Asian anatomical collections held in various institutions have differing contexts and histories. As such, in some cases local descendant communities may feel the individuals have suffered enough trauma and violence and may feel they should be kept in storage, cremated, or buried. In practice, consultation in institutions may include faculty, staff, students, or local community members that identify as South Asian or have affiliation with South Asian communities. At institutions that sit on Indigenous lands, or where unknown remains are commingled with other human remains that could be from Indigenous communities, we should expect that local Native American, First Nation, or Indigenous communities may wish to also be included in the care for South Asian remains along with their own ancestors. Finally, working in collaboration with descendant communities is not the only ethical responsibility we have. Inclusion of diverse voices in biological anthropology and anatomy is also key in decolonization and the shift to an antiracist practice (Bolnick et al., ). The voices of anthropologists of color have repeatedly been made invisible, discounted, or simply not included in the production of knowledge, even about bioethics itself (Athreya, ; Bader & Malhi, ; Blakey, , ; McLean, ; Torres, ; Watkins, ). Recent studies that have engaged with the unique histories of violence that people and communities endured before and after becoming anatomical subjects, and with how these histories intertwine with the subjectivity of the researchers themselves, are powerful ethical interventions that meet antiracist objectives (Agarwal, ; Blakey, ; de la Cova, , ; Lans, , ; Rodrigues, ; Watkins, , ). Colonial violence created a red market industry that resulted in millions of South Asian people being made into anatomical objects. It is time to acknowledge the scientific racism that created the anatomical collections from India and to find ways to restore their personhood. Sabrina C. Agarwal: Conceptualization (equal); data curation (equal); formal analysis (equal); funding acquisition (equal); investigation (equal); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); visualization (equal); writing – original draft (equal); writing – review and editing (equal).
Visualization using NIPTviewer support the clinical interpretation of noninvasive prenatal testing results | 0f8a7e48-49ac-4178-bea0-6707ac44a685 | 11748546 | Biopsy[mh] | Noninvasive prenatal testing (NIPT) is an efficient technique to screen for fetal chromosomal aneuploidies that are caused by the presence of an extra or missing copy of a chromosome. The method analyzes small fragments of cell-free DNA that are circulating in a pregnant woman's blood (cfDNA). The DNA fragments arise when cells enter apoptosis, in which DNA is released into the bloodstream. Most cfDNA in maternal blood originates from the mother, with the fetal component (cffDNA) contributing 10–15% of the total cfDNA at 10–20 weeks of gestation (the most common time for NIPT) . Analyzing cffDNA from peripheral maternal blood samples provides an opportunity for early detection of certain genetic abnormalities without the increased risk of miscarriage that may follow traditional invasive sample collection (chorionic villus biopsy or amniocentesis) . The potential of NIPT in improving prenatal care has led to its implementation as a screening technique in many countries . Still, unlike invasive tests, NIPT is not a diagnostic method for confirmation of trisomies in pregnancies. The presence of false positives and false negatives reported in studies using NIPT hinders its adoption as a definitive test for diagnosing trisomies . NIPT is primarily used to detect the presence of additional chromosomes, in particular trisomy 21 (Down syndrome), trisomy 18 (Edwards syndrome), trisomy 13 (Patau syndrome), and an extra or missing copy of the sex chromosomes. In brief, the test quantifies the amount of cffDNA from each chromosome in the sample and estimates the ratio of amounts from test and reference chromosomes. A positive sample has a ratio that differs from the distribution in diploid samples. Typically, the ratio is higher due to the presence of more cffDNA from the additional chromosome, which indicates a trisomy. Commercially available NIPT tests for use in diagnostic laboratories include DNA sequencing-based analyses, typically using shallow whole genome sequencing. One such commercial application is Illumina's VeriSeq NIPT Solution v1 , which can produce NIPT results in two days. The test analyzes 16 samples per sequencing run and includes a proprietary software solution that provides output as a comma-separated (csv) file. The actual interpretation of the test results involves clustering experiment data with data from earlier runs to separate normal sample chromosomal ratios from samples with trisomies. With this approach, trisomies appear as outliers in a scatter plot. This visualization can be implemented using plot functions in any spreadsheet or statistical software; however, such solutions typically give rise to multiple manual steps, which are both laborious and prone to introducing human errors, making the interpretation less reliable. Furthermore, spreadsheet solutions typically have little traceability support, which is important in clinical laboratories. A more reliable solution is required to store data for multiple run comparisons and to track user activity. With these goals in mind we developed NIPTviewer, a web application that imports and validates the output from the NIPT analysis and visualizes the results by providing scatter plots and tables that compare the current analysis results to results from previous runs, simplifying the interpretation of the test results.
Implementation NIPTviewer is a web application developed in Python (3.8+) that uses Materialize (1.0.0) to provide an appealing user interface. It utilizes Django (3.1.1) as its web framework, providing essential functionalities such as user management and authentication. The application relies on a database for efficient data storage, with default support for databases like Sqlite, PostgreSQL, or Microsoft SQL. Furthermore, NIPTviewer can easily be configured to support other databases like Oracle, MariaDB or MySQL. NIPTviewer utilizes pandas (1.5.3) for data parsing, which enables easy processing and manipulation of data, and nvd3 (1.8.6) to generate interactive charts for effective data visualization. To perform statistical calculations, NIPTviewer relies on SciPy (1.9) , which offers a comprehensive library of scientific and statistical functions. The source code of NIPTviewer adheres to the PEP8 standard for improved code readability, ensured through the use of Pycodestyle (2.6.0) . The Django test-execution framework is employed to rigorously test data parsing and function behavior, guaranteeing intended functionality. The data analysis processing workflow is designed to be straightforward and easy to use (Fig. ). It starts when an authenticated user uploads the VeriSeq NIPT Analysis Software output .csv file to the application. During upload the result data is tagged with user information, making it possible to track which user performed the import. Test results are displayed using a combination of charts and tables, providing the user with a graphical overview of the data (Additional file ). The charts are interactive and offer a visual representation of the data from the current experiment run, plotted over data from previous experiments. Tables display data from the current experiment run and highlight data points that deviate from what is expected in a normal sample. When the data has been inspected, the user has the option to export the visualizations as a .pdf report file that can be used in external systems for reporting and archiving purposes (see the supporting information for examples of a NIPT report (Additional file ) and a NIPT QC report (Additional file )). Deployment Each release of the software is automatically packaged as a Docker image and uploaded to Docker Hub, where it is publicly accessible. To facilitate deployment, docker-compose and Kubernetes configuration files exist in the GitHub repository, where the source code is available under the MIT license. The application has been tested and runs on Firefox, Google Chrome, Microsoft Edge and Safari. Documentation, including installation instructions, is available at Read the Docs . Setup NIPTviewer was developed using Django, which offers a high level of flexibility in the setup process and allows users to tailor the system according to their specific needs. To illustrate this we have deployed NIPTviewer using two different system setups: single server and multi-server. In the single server deployment, NIPTviewer and PostgreSQL were run on the same server but in separate Docker containers. This configuration allows efficient utilization of system resources while maintaining separation between the application and the database. In the multi-server setup, NIPTviewer was deployed in conjunction with Microsoft SQL on separate servers. This approach allows for a distributed architecture, enabling scalability and improved performance by distributing the workload across multiple servers.
These different deployment options provide users the flexibility to choose the setup that best suits their requirements, whether it is a compact and integrated setup or a distributed setup for enhanced performance and scalability. Usage The supported import file format is the output .csv file that is generated by the VeriSeq NIPT Analysis Software 16 Samples (1.4.0) . The file name should begin with a date [YYMMDD]. Data in the file is used both to assess run and sample quality and to inform on sample aneuploidy status. The following metrics are of particular interest: Chromosome coverage distribution per sample - used to detect unusual patterns which could indicate processing problems or sample issues. It may also reflect actual chromosomal abnormalities. Fetal fraction (FF) - refers to the percent of cell-free circulating DNA in a maternal blood sample that is derived from the placenta. Uploaded data from the current experiment is displayed together with historical data from earlier experiments to enable the user to look for trends between runs. Normalized Chromosomal Denominator (NCD) values - used to indicate chromosomal abnormalities in denominator chromosomes or processing errors. Uploaded data from the current experiment is displayed together with historical data from earlier experiments to enable the user to look for trends. Normalized chromosome values (NCV) - scaled to be equivalent to the commonly used Z-score . Values estimate how different a test result is compared to the average diploid ratio. Scatter plots with NCV chr13/18/21/X/Y data against FF (fetal fraction) are displayed together with historical data to identify samples that appear as outliers from the diploid sample cluster. A plot is also generated for NCV X vs. NCV Y. This plot is particularly useful for identifying sex chromosome abnormalities. Details on plots and metrics are available in the Additional file and in the online documentation . Data values from the analysis are available using the mouse-over feature in the graphs, or displayed in tables to give the user a quick overview of all values. In the tables, values above or below defined thresholds are highlighted in red, indicating that they should be investigated more thoroughly during interpretation. Clinical implementation A clinical competency test of 70 normal samples was performed as a first step to establish the VeriSeq NIPT Analysis in our lab and to ensure that all parameters defined by the vendor were within their reference intervals. As a second step, with the certificate from the competency test, we performed a clinical verification with 84 plasma samples from singleton pregnancies that were sequenced in six separate runs. The verification included samples previously analyzed either by the Verify Prenatal test at Illumina Clinical Services Laboratory, US ( n = 66) or by a reference laboratory at Turku University Hospital, Turku, Finland ( n = 18). Both normal samples and samples with aneuploidies were included (five samples with trisomy 13, thirteen samples with trisomy 18, sixteen samples with trisomy 21, three samples with sex chromosome alterations and 47 normal samples). A maximum of three samples presenting with the same trisomy were included in the same run. The verification samples were processed according to the guidelines of the manufacturer. In brief, libraries were constructed from cfDNA, quantified, diluted and pooled, followed by sequencing on a NextSeq550Dx (Illumina Inc, San Diego, CA).
Clinical implementation

A clinical competency test of 70 normal samples was performed as a first step to establish the VeriSeq NIPT analysis in our lab and to ensure that all parameters defined by the vendor were within their reference intervals. As a second step, with the certificate from the competency test, we performed a clinical verification with 84 plasma samples from singleton pregnancies that were sequenced in six separate runs. The verification included samples previously analyzed either by the Verify Prenatal test at Illumina Clinical Services Laboratory, US (n = 66) or by a reference laboratory at Turku University Hospital, Turku, Finland (n = 18). Both normal samples and samples with aneuploidies were included (five samples with trisomy 13, thirteen samples with trisomy 18, sixteen samples with trisomy 21, three samples with sex chromosome alterations and 47 normal samples). A maximum of three samples presenting with the same trisomy were included in the same run. The verification samples were processed according to the guidelines of the manufacturer. In brief, libraries were constructed from cfDNA, quantified, diluted and pooled, followed by sequencing on a NextSeq550Dx (Illumina Inc, San Diego, CA). Sequence data were processed by the cADAS pipeline in the VeriSeq NIPT Analysis Software (Illumina), as implemented on a pre-installed and dedicated VeriSeq NIPT Analysis Server (Illumina). The analysis includes quality control steps, demultiplexing, mapping, coverage analysis and estimation of NCV and FF. The result file was imported into NIPTviewer, followed by interpretation by a clinical laboratory geneticist. Data from the verification were used to determine the NCV threshold values and a regression line for NCV(X) vs. NCV(Y) (Additional file ). After the verification, NIPTviewer was calibrated to receive clinical samples.
All criteria in the clinical verification were met: the concentrations of the sequence libraries were 10–250 nM, the cluster densities were 140–250 K/mm², high-quality sequence data were generated (Q30 > 95%), fetal fractions were ≥2% (range 2–23%) and all samples included in the verification replicated the previous results regarding chromosomes 13, 18, 21, X and Y. As a result, the analysis was approved for clinical routine use. Based on the clinical verification data set, NCV threshold values could be confidently set for chromosomes 13, 18 and 21 to identify trisomies at NCV > 4 and fetal fraction ≥2%, with an inconclusive span at NCV 3–4. These threshold values are now implemented as default separator lines in the graphs of NIPTviewer (Additional file ). The verification set was also used to establish a regression line with corresponding 99% confidence intervals (3 standard deviations of the mean) in the plot displaying normalized sex chromosome values (NCV(X) vs. NCV(Y)) (Supplementary Fig. , Additional file ). With the NCV(X) vs. NCV(Y) regression line and the NCV threshold values incorporated, NIPTviewer was implemented as part of a NIPT analysis routine that was accredited by the national accreditation body of Sweden (Swedac) and launched in clinical production in November 2020.
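The interpretation rule established in the verification is simple enough to express directly in code. The function below is our own minimal sketch of that decision logic rather than NIPTviewer source code; the function name and return labels are invented, while the numeric cut-offs (trisomy at NCV > 4 with fetal fraction ≥2%, inconclusive at NCV 3–4) are taken from the verification described above.

```python
def classify_autosomal_ncv(ncv: float, fetal_fraction_pct: float) -> str:
    """Classify an NCV for chromosome 13, 18 or 21 using the verified thresholds.

    A sketch of the decision rule, not NIPTviewer source code:
    - FF below 2% gives no reliable call.
    - NCV > 4 (with FF >= 2%) indicates a trisomy.
    - NCV in the 3-4 span is inconclusive.
    - Otherwise the result is consistent with a diploid sample.
    """
    if fetal_fraction_pct < 2.0:
        return "no call (fetal fraction < 2%)"
    if ncv > 4.0:
        return "trisomy indicated"
    if 3.0 <= ncv <= 4.0:
        return "inconclusive"
    return "consistent with diploid"


# Example: a chromosome 21 NCV of 7.2 at 6% fetal fraction would be flagged.
print(classify_autosomal_ncv(7.2, 6.0))  # trisomy indicated
print(classify_autosomal_ncv(3.5, 6.0))  # inconclusive
```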
We developed NIPTviewer as a tool to facilitate clinical interpretation of NIPT results and to minimize manual data entry steps. The tool has been deployed in two different setups and is used by hospital staff to visualize NIPT analysis results and to guide the interpretation of the results. The visual inspection makes it easy to identify individual data points that deviate from the distributions of historical, mostly normal, samples. Furthermore, because samples with the same fetal chromosomal aneuploidies cluster together, medical geneticists expect any sample with a trisomy to cluster with previously analyzed samples carrying the same variation. As a result, the visualizations enable fast interpretation of test results and also provide a means to identify inconclusive results whenever data points do not cluster as expected. NIPTviewer also provides traceability and minimizes manual steps that could introduce human error. The first version of NIPTviewer was implemented in clinical production in November 2020 at Uppsala University Hospital, and by April 2024 a total of 4941 samples had been successfully uploaded to and visualized in NIPTviewer as part of the clinical NIPT analysis. The medical geneticists who have been working with NIPTviewer report that the application is easy to work with, intuitive and fast. It simplifies and speeds up the interpretation process and presents them with all the information they need in order to be confident that test results are accurate.
NIPTviewer provides a visualization of NIPT results from the VeriSeq NIPT Solution v1 that gives clinical staff a good overview of how individual data points cluster compared with historical data. The application includes functionality for user management and authentication, which is important for traceability, and can output PDF reports that may be used in the clinical reporting process. The available deployment options give laboratories the flexibility to choose the setup that best suits their requirements.
Below is the link to the electronic supplementary material.
Supplementary Material 1: Additional file 1
Supplementary Material 2: Additional file 2
Supplementary Material 3: Additional file 3
Safety and quality of parenteral nutrition: Areas for improvement and future perspectives

A key issue for the prescribing HCP is to ensure that every patient in need of PN receives a PN formulation appropriate for their requirements, as emphasized in statement 1 in the summary article, regardless of whether PN is prescribed and carried out at an expert center or at centers where PN is just one of many services. The first step in the nutrition care process is to identify every patient with malnutrition or at risk for malnutrition and to ensure that nutritional support is given in a timely manner. Hence, HCPs should be sufficiently educated about the importance of "closing nutritional gaps," and a standardized approach to identifying malnutrition and activating nutritional support should be a prerequisite. Screening for nutritional risk is recommended for all hospitalized patients according to the 2011 ASPEN clinical guidelines for nutrition screening, assessment, and intervention in adults. Patients identified as being at nutritional risk should then undergo a more detailed nutritional assessment to estimate nutrient requirements for the development of a nutrition therapy plan ( ). , Nutritional intervention is recommended to improve clinical outcomes for patients at risk for malnutrition or who are malnourished. ESPEN-endorsed 2015 recommendations state that those at risk for malnutrition should be identified by validated screening tools. Current established tools for nutritional screening and assessment recommended by ASPEN and/or ESPEN are summarized in . , , , , A consensus report by the Global Leadership Initiative on Malnutrition (GLIM) identified core diagnostic criteria (3 phenotypic and 2 etiologic) for malnutrition in adults in clinical settings ( ). At least one phenotypic criterion and one etiologic criterion must be present to diagnose malnutrition, and thresholds for grading the severity of malnutrition as stage 1 (moderate) or stage 2 (severe) are based on the phenotypic criteria. In line with ESPEN and ASPEN, GLIM recommends a 2-step approach to malnutrition diagnosis: first, screening to identify "at risk" status using a validated screening tool; and second, assessment to diagnose and grade the severity of malnutrition. Unfortunately, this level of expertise is not available in all PN centers. Thus, at the summit, the experts encouraged smaller institutions without substantial PN expertise to form collaborations, either with larger institutions or with national nutrition societies. Moreover, in certain countries it may be advisable for national nutrition societies to offer a remote service.
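To make the GLIM decision rule concrete, the sketch below encodes it as a small Python function. This is our own illustrative translation, not an official GLIM tool: the criterion names follow the categories of the consensus report, and because the numeric thresholds are not reproduced here, each criterion is passed in as a simple boolean and severity grading is omitted.

```python
# Illustrative sketch of the GLIM two-step rule described above; not an
# official GLIM instrument. Criterion names mirror the consensus categories.

PHENOTYPIC = {"weight_loss", "low_bmi", "reduced_muscle_mass"}
ETIOLOGIC = {"reduced_intake_or_assimilation", "inflammation"}

def glim_diagnosis(positive_criteria: set) -> bool:
    """Malnutrition is diagnosed when at least one phenotypic AND one
    etiologic criterion are present (step 2 of the GLIM approach; step 1,
    screening with a validated tool, is assumed to have been done)."""
    has_phenotypic = bool(positive_criteria & PHENOTYPIC)
    has_etiologic = bool(positive_criteria & ETIOLOGIC)
    return has_phenotypic and has_etiologic

# Example: weight loss plus reduced intake -> malnutrition diagnosed.
print(glim_diagnosis({"weight_loss", "reduced_intake_or_assimilation"}))  # True
print(glim_diagnosis({"weight_loss"}))  # False: no etiologic criterion
```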
When the nutritional needs of a patient requiring PN have been determined, this usually triggers PN ordering, prescription, compounding/preparation, and administration processes. However, experts have observed that there are gaps between optimal and actual processes. This is critical, as PN is highly complex and carries the risk of serious complications, including intestinal failure–associated liver disease, thrombosis, central line–associated bloodstream infection, and loss of central venous access. Therefore, it is important that prescribed routines and safety precautions are followed as closely as possible and any obstacles or problems are addressed. Frequently, there is a lack of knowledge and proper training among nutritionists, physicians, pharmacists, and other HCPs involved in PN. Underlying causes are multifactorial, including a high workload and consequent lack of time spent per patient, lack of reimbursement, and insufficient education and awareness about the importance of nutrition support, leading to PN being considered a “cost item” rather than a general service. All these issues may contribute to an increased risk of errors. PN may also not be perceived as a medication in some settings, which may lead to underreporting of errors. To reduce the risk of errors and enhance the quality of care provided to patients, it is key to improve education in quality, product availability, sterility, and infection control across all professions involved in the PN process. In the field of long-term PN, adherence to the optimal PN process poses a particular challenge, both for the patient and the providers who manage their care. The transition of PN prescriptions from one institution to another may be a risk factor (statement HPN 2 in the summary manuscript). Problems related to the acquisition, distribution, and storage of compounded PN admixtures may also occur (eg, during storms, fires, and other emergencies), and these problems can make it impossible to acquire supplies, or lead to power failures interrupting the proper storage of PN. The experts proposed to create a small emergency stockpile of market-authorized MCBs for home PN (HPN) patients to be prepared for such special circumstances. Product shortages are another factor that may delay or change therapy, threatening the health and welfare of patients owing to medication errors and worsened patient outcomes. This issue is discussed in more detail in the publications on PN in clinical practice and HPN within this supplement.
PN should be prescribed, prepared, and administered by HCPs with demonstrated competency to do so, and institutions are encouraged to implement policies and procedures assuring that these competencies are regularly reassessed. Ideally, interdisciplinary nutrition support teams from various specialties consisting of dietitians, pharmacists, nurses, and physicians accompany the patient throughout the PN process (statement 2 in the summary manuscript). These teams have a vital role to better align patients and HCPs, facilitate a safe transfer from the hospital to the home setting, and fulfill patients’ priorities such as maintaining their quality of life (QoL) and independence, as discussed in the following sections. The experts emphasized the importance of clarifying responsibilities and improving communications between the prescribing and care team members to enhance the safety and quality of care. Participation in interdisciplinary rounds to discuss patient cases can be an effective strategy to develop and improve knowledge. The introduction of regular jour fixes (scheduled meetings) may promote mutual understanding.
Evidence-based guidance for safe clinical practices involving PN prescribing, order review, and preparation was provided by ASPEN in 2014, and a standardized ASPEN model for PN administration competency was proposed in 2018. In theory, the implementation of these recommendations should ensure that everyone in need receives PN according to today’s state-of-the-art principles (as described in statement 14 in the summary manuscript). However, compliance with society guidelines in daily routine may be poor, as pointed out by meeting attendees working at the interface between centers of expertise and frontline providers. Perceived barriers include a lack of awareness and experience of the HCPs involved in PN, and/or reluctance to modify established processes. Furthermore, patient-related factors may play a role (eg, the clinical condition of the patient), as well as institutional factors (eg, resource constraints, slow administrative processes, and high workloads). Moreover, common weaknesses of existing guidelines, such as the high number and complexity of recommendations, paucity of evidence, and outdated evidence, may also hinder guideline compliance. A critical lack of expertise in best-practice PN care consistent with current guidelines has been noted in clinical practice, owing to a variety of reasons such as disease rarity, chronicity, high patient acuity, and cost of care. , This underscores the need for effective distance education strategies to bring medical expertise to remote and/or underserved regions. A dissemination plan and the simultaneous use of complementary education approaches/tools and repetition can help to increase awareness and implementation of guidelines among target populations. In addition, the experts suggested the establishment of central service points that could offer PN expertise to other centers. Another promising approach is the use of tele-education to “democratize” medical knowledge. The Extension for Community Healthcare Outcomes (ECHO) model is one project that bridges the gap between frontline providers and specialty centers of excellence. Project ECHO was founded at the University of New Mexico in 2003 to address disparities in hepatitis C care across the state’s rural and remote communities. Universities and medical centers around the world have adopted the ECHO model for other local challenges. Briefly, the ECHO model uses videoconferencing technology to move specialized medical knowledge from academic centers to primary care providers in the community, allowing them to deliver best-practice care for complex health conditions previously unavailable to people in underserved areas. , A systematic review has found that the ECHO model and similar tele-education models of healthcare delivery improve provider- and/or patient-related outcomes (eg, for patients with hepatitis C, chronic pain, dementia, and type 2 diabetes). Based on the ECHO model, the Learn Intestinal Failure Tele-ECHO (LIFT-ECHO) project was launched in 2019 to support the treatment and management of patients with intestinal failure relying on long-term PN in the US. , , As chronic intestinal failure is a rare disease, coverage by specialized centers across the country is scant and often not easily accessible for patients. Daily care is via local (community) nonexpert clinicians including physicians, pharmacists, nutritionists, and nurses. 
Scheduled meetings, which are like virtual roundtables combined with mentoring and case presentations, link an interdisciplinary specialist team with local teams and/or clinics to improve knowledge among the caring clinicians and ultimately to enhance patient outcomes. , , More than 40 LIFT-ECHO clinics dedicated to intestinal failure and PN-related topics have been set up since this project was launched, and it is anticipated that the LIFT-ECHO project will contribute to improved healthcare for patients requiring long-term PN across the US.
HCPs involved in the PN process frequently complain about high workloads (ie, because of complexities surrounding PN), leaving insufficient time for the best patient care or to adhere to the intended PN processes, as mentioned previously. Thus, it is important to recognize where modern technology can assist in relieving the workload burden. In PN compounding, advanced technologies such as barcode-assisted medication preparation systems and electronic health record (EHR)/compounder interfaces have been used successfully to solve many issues, including the reduction of transcription errors and ensuring that safety precautions for compounding PN (eg, upper limits for electrolytes) are met. Implementation of a computerized PN prescription management system in a surgical/oncological department improved the clarity of PN orders significantly, as well as patients’ nutrition status. Moreover, pharmacists’ workload was reduced, and the efficiency of prescription review also improved. Nevertheless, leading experts in the field have pointed out that currently available EHR systems lack the functionality to deliver PN safely and optimally across the entire continuum of care. , The primary problem is that though EHRs are usually functional within one institution, as soon as several institutions are involved an interface problem arises. This aspect is particularly relevant to the care of long-term PN patients who are moving from a hospital either to their home or to alternative care settings (and vice versa). Owing to the inconsistency of the systems involved, a manual transcription of the PN orders is often needed, requiring a high level of coordination effort by the HCPs—time that would be better spent on patient care. (For further details about recommendations on the transition of care within HPN, please refer to the publication on long-term PN within this supplement.) An alternative option to decrease workloads is the more frequent use of MCBs as described in statement 3a in the summary article. This is a common approach in parts of the world where a broad variety of MCBs are available. The availability of MCBs in the US, especially that of 3CBs, is limited, with only one 3CB formulation (containing soybean oil as the sole source of lipids) currently approved for use. For more information on this topic, please refer to the publications on PN in clinical practice and PN in the home care setting within this supplement. In particular, some limitations of MCBs are discussed in the article on PN in the home care setting. Importantly, fixed-formula MCBs do not cover every patient’s nutritional needs: MCB customization or individually compounded PN may be necessary and, moreover, MCB use does not minimize the need for the careful evaluation of each patient’s nutritional and electrolyte requirements. Telemedicine has been used to facilitate exchanges between patients and HCPs, connecting patients remotely to HCPs for virtual patient care and monitoring. There has been tremendous growth in telemedicine over the past decade, , and it gained widespread acceptance during the recent COVID-19 pandemic. Telemedicine has been shown to be effective for diagnosis, preparation of treatment plans, and improving physician-patient interaction in numerous medical conditions (eg, in cardiovascular disease, diabetes, respiratory disease, inflammatory bowel disease, and stroke ). 
Offering HPN patients the possibility of remote video consultations with specialists in a UK national intestinal failure referral center, carried out via internet video calls, obviated the need for clinic attendance in HPN patients. This approach avoids travel for individuals with chronic illness while maintaining standards for follow-up. Notably, the first telemedicine experiences with HPN patients were gained in the US during the COVID-19 pandemic, and lessons learned were carefully evaluated. Given these promising examples, such an approach could reduce the challenges of patient-HCP exchanges, and thus ultimately improve PN patient care.
Traditionally, medical care has been viewed as rather paternalistic, with clinicians making treatment decisions independently of patient preferences. However, it is increasingly recognized that patients often have different priorities and concerns than HCPs. This gives increased value to patient-centered factors such as health-related QoL (HRQoL) and autonomy, rather than clinical endpoints such as disease progression and longevity. , Furthermore, shared decision-making, patient empowerment, and the involvement of patients in the guideline development process are important, particularly for long-term PN patients. However, this assumes that patients have enough knowledge to make independent decisions based on scientific facts and clinical advice. Although these aspects have had a minor role in PN so far, the field could benefit from experience gained in other chronic diseases requiring long-term treatment. , The summit provided the opportunity to develop proposals for advancing patient-centered care, especially in the field of long-term PN and HPN.

Tools to identify the patient's HRQoL in long-term PN care

As in other chronic conditions, implementation of patient-centered care in the long-term PN setting involves the identification of health and lifestyle factors that contribute to patients' HRQoL. Thus, the experts emphasized the importance of generating and regularly reassessing patient-centric data and integrating these into the care plan (statement 13 in the summary manuscript). HRQoL in patients receiving long-term PN is affected by numerous physical and psychological factors. From the patients' perspective, a major benefit of receiving PN in their own home or care facility is that they can regain a normal life—or rather a "new normal" in the face of their current life circumstances and health status. Improvements in nutritional and functional status associated with PN allow the patient to be more independent and engage in activities of daily living, such as working, attending school, completing household chores, socializing with friends and family, engaging in leisure sports, and traveling. Nonetheless, dependence on long-term PN can have adverse effects on HRQoL. , , Factors impeding HRQoL include sleep disturbance, frequent urination, technical difficulties, fear of therapy-related complications, inability to eat, increased occurrence of depression, and medical risks because of the underlying disease. , Interference with activities of daily living is a critical issue in patients undergoing long-term PN, particularly because of daytime infusions. Emerging knowledge points to the importance of adapting schedules to circadian rhythms, indicating that an infusion for 12 to 16 hours during the daytime may have advantages in terms of improved metabolic functioning compared with an overnight or 24-hour continuous infusion. Nevertheless, evidence in this field is scarce, and there is an obvious research gap in finding a balance between patient independence and optimal nutrition in line with physiological rhythms. Suitable scales are needed to identify the effect of long-term PN on HRQoL, and several tools are available ( ). , , Two popular generic, non–disease-specific instruments for measuring HRQoL are the EQ-5D and the Short Form-36 (SF-36). , Both have been reported in the scientific literature for over 30 years and are available—in most cases without license costs—in numerous languages. , , The HPN-QoL tool was designed specifically for patients receiving HPN by the Home Artificial Nutrition and Chronic Intestinal Failure special interest group of ESPEN, while the Parenteral Nutrition Impact Questionnaire (PNIQ) was developed to assess the impact of PN on everyday life (so far it is available only in English). , , For patients with short bowel syndrome, a specific Short Bowel Syndrome Quality of Life scale (SBS-QoL) has been designed and validated. HCPs will gain valuable insights that help ensure good patient-centered care by making routine determination of HRQoL part of the care of their long-term PN patients.

Functional tests to assess patients' health and well-being during and after short-term PN

Little attention has been paid to assessing HRQoL and/or well-being in patients during and after short-term PN. Critical illnesses, such as acute respiratory distress syndrome, can markedly impair HRQoL for up to 5 years after a stay in an intensive care unit (ICU). To our knowledge, however, there are no disease-specific validated tools available to assess the effect of PN on patients' HRQoL during and after short-term PN, and researchers and clinicians have to rely on generic instruments such as the aforementioned EQ-5D or SF-36. Moreover, commonly used tests to assess functionality are often not very meaningful within this setting. For instance, skeletal muscle wasting is common among patients with acute respiratory distress syndrome, , and 60% to 80% of critically ill patients are functionally impaired after their ICU stay. This limits the use of established functional tests, such as handgrip strength and the 6-minute walk test, in patients during and after an ICU stay, as these patients lack voluntary muscle tension. The effect of supplemental PN in ICU patients with acute respiratory failure was assessed in a randomized controlled trial (RCT) determining QoL and functional status at ICU and hospital discharge. Patients were randomized to receive either enteral nutrition plus supplemental PN or enteral nutrition alone. However, functional and QoL measures proved challenging to collect because of severe illness and significant disability following the ICU stay, often preventing patients from completing functional tests. For example, approximately half of the surviving patients could not complete the hospital discharge 6-minute walk test due to an inability to walk. Moreover, for handgrip strength, approximately a quarter of patients were unable to be tested at ICU discharge, and 17% were still unable at hospital discharge. Similarly, in the EPaNIC trial, only approximately 26% of critically ill patients were able to provide data for the 6-minute walk test at ICU discharge. The ability of patients to complete functional endpoints thus requires careful consideration when designing future trials, and potential alternative instruments should be considered. For ICU patients receiving PN, ICU-specific functional tests, such as the scored Physical Function ICU Test (PFIT-s) or the Medical Research Council (MRC) sum score, could be suitable alternatives.

Education programs and simple information sheets as memory aids to improve patient knowledge

Patients and their families should be involved as active participants in their care. Such a patient-centered care approach requires that patients are educated to gain adequate knowledge for shared decision-making and management of their condition. Key factors in engaging patients on long-term PN therapy in their own care include patient education on general aspects of their condition and its treatment, methods of self-administration, the importance of aseptic techniques, and self-monitoring and recognition of potential complications. Education strategies need to be tailored to patient needs, because patients use different channels of communication and education than HCPs. To enable patients to better understand treatment goals and options, and the associated benefits and risks, information such as guideline recommendations should be translated into a lay version using language that is understandable. Recommendations should also be made available in practical short documents such as clear, concise "one-pagers" suitable for bedside use. Such simple tools may help to involve patients in their treatment and increase the success of information dissemination. Furthermore, the attendees at the PN safety summit advocated for involving patients in the development of clinical practice guidelines. This is in accordance with the recommendations made by the US Institute of Medicine for clinical practice guidelines and is based on the following principles: (1) patients have the moral right to participate in decisions affecting them; (2) patient involvement can contribute to the implementation of guidelines in practice; and (3) patient involvement is thought to increase the relevance and quality of guidelines, as patients' experiential knowledge can complement scientific evidence. Such an approach may help overcome a recognized and repeatedly emphasized problem with existing PN recommendations: that they focus mainly on technical and/or medical aspects, largely ignoring patient preferences.
Medical advances have led to a considerable reduction in ICU mortality in recent years, and so the number of surviving critical care patients requiring long-term PN (eg, after severe abdominal trauma, sepsis, or ischemia) has increased significantly. In addition, more patients present with conditions requiring volume limitation (eg, because of cardiac or renal insufficiency). Furthermore, the experts noted that there were a growing number of "nontraditional patients" receiving PN in their practices (eg, intravenous drug users, homeless people, and developmentally challenged patients). In these groups, the risk of infections or other complications may be increased owing to inadequate sterility precautions or a lack of understanding of the instructions provided. Understanding the patient's personal situation and the difficulties involved in adapting to the reality of long-term PN and the associated lifestyle changes is the basis for improvement in this area. For all patients requiring long-term PN, it is critical to adhere to prescribed schedules and safety precautions. Poor treatment adherence, however, is a concern. There are many reasons for nonadherence to prescribed PN therapies in addition to unintentional forgetfulness. Good patient-HCP communication, among other factors, can increase treatment adherence. Therefore, it is vital that efforts are made to improve patient-HCP communication to clarify patients' healthcare values and goals and thus better align treatments with patients' priorities. This patient-centered care approach involves engaging patients and their families or caregivers as active participants in their care, allowing the patient to take back some measure of control. A patient-centered approach to care thus considers what is meaningful and valuable to each patient. This may allow greater autonomy in daily life and improve patients' ability to cope with the stress and burdens associated with HPN dependency and chronic disease. Important lessons can also be learned from other diseases requiring treatment strategies that interfere considerably with patients' independence and QoL (eg, chronic kidney disease and multiple sclerosis). Proposed actions for HCPs to help foster patient health engagement and treatment adherence are shown in . Consideration of these issues in treatment plans may help HCPs to identify areas for improvement in the management of their long-term PN patients.
Much progress has been made to improve the overall quality and safety of PN in recent decades, but there is still room for improvement, as mentioned throughout this publication and summarized in . It is vital to ensure that every patient in need of PN is identified and receives a tailored nutritional prescription, so the education of HCPs concerning nutrition screening and assessment to close nutritional gaps is key. Since errors along the entire PN process can lead to serious complications, closing obvious gaps between intended and actual clinical practice should be given high priority. Here, the use of modern technology could help improve standardization and reduce the risk of error, and also reduce HCP workloads—leaving more time for patient care. This should be the responsibility of interdisciplinary nutrition support teams consisting of HCPs from different specialties whose competencies are regularly reviewed and strengthened through training and interdisciplinary exchange. Distance education strategies such as tele-education have proven to be effective for the dissemination of best practices and to diminish disparities in healthcare provision, as well as for virtual patient care and education. More emphasis should be put on the patient’s perspective and patient-centered outcomes since patient adherence is a decisive factor for the quality and safety of PN. Moreover, patients and their families should be involved as active participants in their care, QoL should be assessed routinely, and technical information translated into plain language, to allow shared decision-making. It is also critical to improve patient-HCP communication so treatments can be better aligned with patients’ priorities. Ultimately, it is important to find a balance between safety and QoL to meet both guideline specifications and the preferences and personal situations of patients.
Diagnosis of urinary tract infection based on symptoms: how are likelihood ratios affected by age? A diagnostic accuracy study

Urinary tract infection (UTI) is a common condition in general practice that mostly affects women. The diagnosis is often established based on symptoms, owing to the lack of fast and precise point-of-care tests in general practice. The accuracy of UTI symptoms in determining bacteriuria has been investigated thoroughly. However, the available research does not take into consideration how age affects the diagnostic properties of signs and symptoms: the available studies either include only young women or do not report different age groups separately. Age is known to affect the diagnostic properties of urine tests. This could be due to variation in test performance of the index test, the reference test, or both across age groups. The mechanisms are not fully understood and probably vary depending on the test. The same could be expected to apply to the accuracy of urinary symptoms. The aim of this study was to investigate the impact of age on the diagnostic properties of typical symptoms of UTI in women presenting in general practice with symptoms suggestive of UTI, with significant bacteriuria as the reference standard.
Study design and setting

This was a prospective diagnostic study in general practice embedded in a cluster randomised controlled trial. The practices in the original study (unpublished) were randomised to either receive a guideline on how to use point-of-care diagnostic tests or to continue usual practice. The intervention did not interfere with registration of symptoms, collection of urine or sending of the reference standard.

Recruitment of general practices

General practices in the Capital Region of Denmark were recruited through three channels: (1) online advertisement in email newsletters to general practice, (2) invitation by post to 200 practices and (3) invitation of 44 general practices already participating in a medical audit project about UTI ( ). Practices were offered a small remuneration and feedback on the quality of diagnosis and treatment of UTI in exchange for participation.

Recruitment of patients

Data collection took place from March to May 2016. Practices registered symptoms, diagnostics and treatment for the first 20–40 consecutive patients who presented with symptoms suggestive of UTI and for whom urine was collected for investigation. Patients who had previously been registered in the present project were not registered again. Only adult (15 years or older) women who were not admitted acutely to hospital after evaluation in general practice were included in the analysis for this study.

Data collection

The practices registered clinical data using a case report form. It contained information on age, sex and whether the patients had dysuria, frequency, urge, abdominal pain or 'any other symptom suggestive of UTI' (in this order), as well as the result of the urine culture ( ). It was designed following the Audit Project Odense methodology. All patients provided a urine sample, which was sent to the microbiological department at Hvidovre or Herlev Hospital. The practice registered the result of the urine culture on the case report form. The options were 'positive: significant growth', 'negative: no significant growth', 'inconclusive' (ie, mixed culture) or 'not performed'.
Supplementary data: 10.1136/bmjopen-2020-039871.supp1

Culture at the microbiological laboratory (reference standard)

Urine was sent in a standardised boric acid container to the microbiological departments. Urine samples were analysed on Inoqul A Bi-plate (CHROMagar and blood agar) with 10 μL on each half of the agar. Significant growth was defined as growth of ≥10³ cfu/mL for Escherichia coli and Staphylococcus saprophyticus, ≥10⁴ cfu/mL for other typical uropathogens and ≥10⁵ cfu/mL for possible uropathogens, in accordance with European consensus. Plates with significant growth of more than two uropathogens were labelled as mixed cultures (inconclusive). Inconclusive cultures were defined as negatives in our analysis, since they are usually handled clinically as negatives. Significant bacteriuria has been shown to differentiate patients who recover without treatment from those in need of antibiotic treatment. However, the clinically relevant cut-off for significant bacteriuria is debated and differs between countries. We chose the cut-offs used in Danish microbiological laboratories.

Blinding

Practices were not aware of the result of the reference culture when symptoms were registered. Likewise, the microbiological departments were not informed about symptoms when analysing the reference culture.

Patient safety

Patients gave oral informed consent to all diagnostics and treatment in accordance with the Danish Health Legislation Act. Patients' data were anonymised before being sent from the practice to the investigators.

Statistical analysis

Since the sample was based on a cluster randomised controlled trial, it was fixed for this study. Sensitivity (SEN), specificity (SPE), positive likelihood ratio (pLR) and negative likelihood ratio (nLR) were calculated for each age group and reported with exact CIs. The pLR estimates the increase in the odds of having UTI when a particular symptom is present. The nLR estimates the decrease in the odds of having UTI when a symptom is absent. Asking about symptoms in any sequential order can be seen as adding diagnostic tests to each other. The diagnostic value of each symptom can then be added to that of the previous symptom in an additive process in which the post-test probability from the previous symptom serves as the pretest probability for the next symptom. We investigated the utility of combining several symptoms in the order determined by the case report form. The result was illustrated with a dumbbell plot.

As an example, suppose symptom 1 has a SEN of 80% and a SPE of 50%, and symptom 2 has a SEN of 60% and a SPE of 70%. The prevalence of bacteriuria is 50% (which implies a pretest odds of 1).

\[ \mathrm{pLR}_1 = \frac{\mathrm{SEN}_1}{1 - \mathrm{SPE}_1} = \frac{0.80}{1 - 0.50} = 1.60 \]

\[ \mathrm{pLR}_2 = \frac{\mathrm{SEN}_2}{1 - \mathrm{SPE}_2} = \frac{0.60}{1 - 0.70} = 2.0 \]

The post-test probability after the presence of symptom 1 is:

\[ \text{post-test probability}_1 = \frac{\text{post-test odds}_1}{\text{post-test odds}_1 + 1} = \frac{\text{pretest odds}_1 \cdot \mathrm{pLR}_1}{\text{pretest odds}_1 \cdot \mathrm{pLR}_1 + 1} = \frac{1 \cdot 1.60}{1 \cdot 1.60 + 1} = 62\% \]

(post-test odds = 1.60). Adding the diagnostic value of symptom 2 to that of symptom 1, with the pretest odds for symptom 2 equal to the post-test odds after symptom 1, results in:

\[ \text{post-test probability}_2 = \frac{\text{post-test odds}_2}{\text{post-test odds}_2 + 1} = \frac{\text{pretest odds}_2 \cdot \mathrm{pLR}_2}{\text{pretest odds}_2 \cdot \mathrm{pLR}_2 + 1} = \frac{1.60 \cdot 2}{1.60 \cdot 2 + 1} = 76\% \]

Statistical analyses were performed using SAS V.9.4 and the dumbbell plot was created in Microsoft Excel 2010.
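The sequential updating described above can be written out in a few lines of code. The sketch below is our own illustration of the calculation (not the SAS code used in the study); it reproduces the worked example with the two hypothetical symptoms.

```python
def post_test_probability(pretest_prob: float, likelihood_ratios: list) -> float:
    """Sequentially update the probability of bacteriuria.

    Each symptom's likelihood ratio multiplies the current odds, so the
    post-test odds after one symptom become the pretest odds of the next.
    """
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (odds + 1)

# Worked example from the text: prevalence 50%, pLR1 = 1.60, pLR2 = 2.0.
print(round(post_test_probability(0.50, [1.60]) * 100))        # 62 (%)
print(round(post_test_probability(0.50, [1.60, 2.0]) * 100))   # 76 (%)
```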
Ninety practices in the Capital Region of Denmark consented to participate. Fourteen of these either did not include any patients or withdrew before inclusion. The 76 remaining practices included 1545 patients, of whom 3 were excluded from the original study for not fulfilling inclusion criteria (2 did not have symptoms and 1 did not provide a urine sample). An additional 321 were excluded from the analysis because they did not fulfil the inclusion criteria for this study (see ). Further, 43 patients had missing data, leaving 1178 adult women with symptoms suggestive of UTI for analysis. shows the distribution of bacteriuria and symptoms in the six age groups. The 1178 women were evenly distributed in age groups of 15 years each until the age of 89. Only 46 women were 90 years or older. Significant bacteriuria increased from 39% in the younger women to 67% in the older women of 75–89 years. Dysuria was the most common symptom (56% overall), followed by frequency (52% overall). Urge and abdominal pain were less frequent in all age groups (21% and 20% overall, respectively). The distribution of symptoms was quite similar between age groups. shows the diagnostic values for dysuria, frequency, urge and abdominal pain in the six age groups. Dysuria showed the best diagnostic performance with an overall pLR of 1.39 (1.24–1.54) and nLR of 0.65 (0.56–0.75). The likelihood ratios for dysuria varied between age groups, with the best performance in women aged 15–29 (pLR: 1.62 (1.30–1.94), nLR: 0.36 (0.19–0.54)) and women aged 30–44 (pLR: 1.74 (1.30–2.17), nLR: 0.48 (0.27–0.68)). Frequency had an overall pLR of 1.36 (1.20–1.52) and nLR of 0.72 (0.62–0.81). CIs included or approximated one in all age groups except women aged 30–44 (pLR: 1.85 (1.32–2.37), nLR: 0.53 (0.33–0.72)). Urge had an overall pLR of 1.44 (1.09–1.78) and nLR of 0.91 (0.85–0.97). Variation between age groups was low and CIs involved one in all age groups. Abdominal pain had a negative correlation with bacteriuria, with a pLR of 0.63 (0.47–0.79) and nLR of 1.12 (1.05–1.19). CIs in most age groups involved one for abdominal pain. shows the clinical implications and additive value of symptoms across age groups. In women aged 15–29, absence of dysuria resulted in a probability of bacteriuria of 19%. Presence of dysuria, frequency and urge and absence of abdominal pain resulted in a probability of bacteriuria of 63% (age 15–29). In women aged 30–44, the pattern resembled that of women aged 15–29. Absence of dysuria resulted in a probability of bacteriuria of 17%. Presence of dysuria, frequency and urge and absence of abdominal pain resulted in a probability of bacteriuria of 71% (age 30–44). Absence of frequency and urge in addition to presence of abdominal pain had limited value in both of the youngest age groups. In women aged 45–59 and women aged 60–74, dysuria had limited value, but all other symptoms showed some ability to change the probability of bacteriuria. In women aged 45–59, absence of all symptoms (presence of abdominal pain) resulted in a probability of bacteriuria of 27%. Presence of all symptoms (absence of abdominal pain) resulted in a probability of bacteriuria of 74%. In women aged 60–74, absence of all symptoms (presence of abdominal pain) resulted in a probability of bacteriuria of 28%. Presence of all symptoms (absence of abdominal pain) resulted in a probability of bacteriuria of 74%. In women aged 75–89, absence of all symptoms (presence of abdominal pain) resulted in a probability of bacteriuria of 39%.
Presence of dysuria was able to increase the probability of bacteriuria to 83%. Presence of additional symptoms (absence of abdominal pain) had limited value in this age group. The pattern in women aged 90 years or older resembled that of women aged 75–89.
In this study, the diagnostic properties of dysuria, frequency, urge and abdominal pain in adult women with suspected UTI in general practice varied between age groups. There was a wide variability in the prevalence of bacteriuria. The combined effect of the variability in the prevalence of bacteriuria and the varying diagnostic values resulted in a large variation in the probability of bacteriuria when symptoms were combined. The population was representative of the population seeking care in general practice because of urinary tract symptoms. This was possible because the time for data collection was minimal, so general practitioners were able to include patients consecutively. However, the simplicity of the data collection method had the drawback that only few symptoms were collected. Thus, we may have overlooked relevant symptoms and were also not able to investigate other relevant demographics than age. The study was based on data from a cluster randomised trial, but the design was still appropriate for a diagnostic study with minimal bias. The reference standard was centralised, leading to a high quality in the interpretation and minimal review bias. However, clinical review bias was present since the same person collected all clinical information on a patient where 'UTI was suspected'. Thus, individual symptoms may be interdependent, leading to errors in diagnostic values, most likely overestimating SEN and underestimating SPE. Also, LRs could be inflated in a cohort of patients where UTI is already suspected, since the clinician seeks to confirm their preliminary diagnosis. The cohort had a sufficient size to provide narrow CIs on the overall estimates of the diagnostic values. However, within the age groups, the uncertainty about the diagnostic values was high, so only large heterogeneity can be identified; smaller heterogeneity cannot be determined with certainty. A larger study is needed in order to confirm the observed differences. The data were collected in 2016, but it is unlikely that they would be different if they were collected today. Our definition of UTI (symptoms together with significant bacteriuria) is commonly used, but this definition poses a problem in the older age groups where asymptomatic bacteriuria is more prevalent. We found an increasing prevalence of bacteriuria with increasing age. Asymptomatic bacteriuria in the elderly is probably only one explanation. Other explanations could be differences in the threshold for seeking care or in the spectrum of differential diagnoses in different age groups. The available research on accuracy of UTI symptoms has been conducted with a variety of definitions of significant bacteriuria. In the year 2000, the definition of significant bacteriuria in most of Europe was changed from 10⁵ cfu/mL for all uropathogens to 10³ cfu/mL for common uropathogens, which is the definition used in our study. Since more than 80% of all UTI in general practice is caused by common uropathogens, our results are difficult to compare with previous studies. A study from 2006 on women aged 18–70 using the same reference standard as in our study found a pLR of 1.64 and an nLR of 0.52 for moderate-to-severe dysuria (calculated from table 4 in Little et al). This corresponds well with our findings in women below 45 years of age but not for women older than this. Unfortunately, the study does not report the age distribution of the included patients.
Despite the abundant literature on the diagnostic values of UTI symptoms, this is the first study to investigate the impact of age. Previous studies have investigated the difference in diagnostic values of the urine dipstick in different populations but without looking into age specifically. Knottnerus et al investigated how likely UTI has to be for Dutch general practitioners to prescribe or withhold antibiotics. In this Dutch context, a probability below 30% was sufficient to withhold antibiotics and a probability above 70% was sufficient to prescribe. If this finding is applied to our results, prescription would possibly be appropriate in older women with only dysuria, while no combination of symptoms would be sufficient for prescription of antibiotics in younger women. Similarly, antibiotics could possibly be withheld for young women without dysuria, while no combination of symptoms could effectively rule out UTI in older women. However, these estimates should be interpreted with caution due to the wide CIs. The diagnostic value of symptoms of UTI as well as the prevalence of bacteriuria in women presenting to general practice with suspected UTI vary between age groups, with considerable clinical implications. First, the prior probability of UTI rises with age. Second, the LR of dysuria is high in young and older women but seems to decline in middle age. Other classical symptoms of UTI had too broad CIs in this cohort to confirm variation with age. In women younger than 45 years without dysuria, bacteriuria is unlikely, and the general practitioner could consider applying a wait-and-see strategy. In women above 75 with dysuria, bacteriuria is likely and treatment without further diagnostics could be considered. In other age groups, additional symptoms or diagnostics have to be applied in order to diagnose UTI. Diagnostic studies should take demographics such as age into consideration.
Preliminary Investigation Towards a Safety Tool for Swine Brucellosis Diagnosis by a Proteomic Approach Within the One-Health Framework

Brucella spp. are Gram-negative coccobacillus bacteria that cause diseases in various animal species, including humans. In domestic animals, the disease occurs as a chronic infection which results in placentitis and abortion in pregnant females, and orchitis and epididymitis in males, causing significant economic losses in livestock farms. Brucella spp. can persist and replicate within the phagocytic cells of the reticuloendothelial system and in non-phagocytic cells such as trophoblasts. When the vacuoles containing Brucella are fused with lysosomes for bacterial degradation, the lysosomal proteins are excluded, and the Brucella-containing vacuoles associate with the endoplasmic reticulum, which represents the intracellular replication site for Brucella. Among the twelve known Brucella species, the most frequent agents of brucellosis in livestock and humans are Brucella melitensis, Brucella abortus, and Brucella suis. Several biovars of these Brucella species exist, and it is possible to distinguish five biovars of B. suis. Although B. melitensis and B. abortus can be transmitted to pigs through contact with ruminants, swine brucellosis is mainly caused by B. suis, biovars 1, 2, and 3. B. suis bv. 1 and 3 are rarely reported in Europe, while B. suis bv. 2 is widely distributed in Eastern Europe. It was also introduced in Italy, where it was detected in domestic pigs and wild boars. However, in Italy, Bertelloni and colleagues reported that swine brucellosis seems to have a very limited spread in intensive farms. B. suis bv. 2 has swine and hares as its principal hosts, but it has also been detected in cows, causing seroconversion in traditional tests for bovine brucellosis without clinical signs. Human infections by B. suis bv. 2 have rarely been reported. Traditional methods for the diagnosis of brucellosis include bacterial isolation and characterization from biological samples, and serological tests. In addition, several molecular methods, including PCR, PCR-restriction fragment length polymorphism (RFLP), and Southern blot, allowed, to a certain extent, the differentiation of Brucella species and some of their biovars. Serological methods are often employed in control and eradication programs to initially identify possible positive animals. These methods are based on the detection of antibodies, generated by infected animals, against the lipopolysaccharides (sLPS) of smooth Brucella strains. The monoclonal antibodies against the A and M antigens recognize the smooth LPS of B. suis strains; however, the former does not recognize B. melitensis strains and the latter does not bind to B. abortus strains. The Rose Bengal test (RBT), complement fixation test (CFT), indirect/competitive enzyme-linked immunosorbent assay (I/C ELISA), and fluorescence polarization assay (FPA) are the validated serological tests commonly used for swine brucellosis diagnosis. The serologic tests used to diagnose brucellosis were mostly developed for the detection of the A-dominant B. abortus O side-chain in infected cattle; consequently, these diagnostic tests have lower sensitivity and specificity when applied in swine than in cattle. In general, serological tests present some limitations, mainly concerning specificity and sensitivity, especially when screening individual animals.
For these reasons, test interpretation is generally conducted at a group or herd level, requiring bacterial isolation or molecular assays to confirm serologic data. Other Gram-negative bacteria, namely Escherichia coli O157:H7, Vibrio cholerae O1, Salmonella group N (O:30), and Yersinia enterocolitica O:9, can induce the production of antibodies that cross-react with the Brucella sLPS antigens. Particularly, Y. enterocolitica is widespread in swine populations and has an O-antigen LPS chain nearly identical to that of Brucella, resulting in a significant number of false positive serological reactions. The RBT is used as a screening test, but it lacks specificity for discriminating reactions caused by smooth Brucella from cross-reactions with other bacteria. The CFT is generally used as a confirmatory test, but it has a reduced sensitivity for B. suis infection diagnosis, and it is affected by cross-reactions with other bacteria. The FPA performs very well, but, like other serological tests, it shows low sensitivity in chronically infected animals. To overcome these shortcomings, the development of alternative immunoblotting methods is being investigated to increase the specificity and sensitivity of serological tests for brucellosis diagnosis: on a rough strain of Brucella melitensis (88/131); on outer membrane proteins (OMPs) of the Rev 1 strain of B. melitensis; and on an extract of B. abortus and B. melitensis. In all these techniques, the authors had to cultivate the bacteria, exposing operators to the risk of infection, since Brucella can easily infect the operator by airborne transmission. Among the new techniques tested, one includes the use of Brucellergene OCB (Rhône-Mérieux, Lyon, France), a commercial antigen produced from B. melitensis B115, previously employed in swine for in vitro serological tests, such as ELISA, and for in vivo skin tests, showing significant specificity and the ability to discriminate false positive serological reactions. Brucellergene OCB is a mixture of more than 20 cytoplasmic proteins, including T-cell antigens, Brucella bacterioferritin, and P39 proteins, prepared from a rough (deficient in smooth LPS) mutant of Brucella melitensis B115. Bertelloni and colleagues used Brucellergene as a tool to detect brucellosis-affected animals by Dot Blot, confirming its validity and ease of use in swine brucellosis serological diagnosis. Although the use of Brucellergene itself is risk-free for operators, its production still requires the presence of Brucella in the laboratory, exposing laboratory personnel to the risk of infection. This work aims to identify Brucella antigenic proteins in Brucellergene as a starting point for the development of safer immunological techniques for Brucella screening. reports an SDS-PAGE of a Brucellergene OCB sample. Sixteen protein bands with molecular weights of 115 (B1), 84 (B2), 55 (B3), 50 (B4), 48 (B5), 44 (B6), 39 (B7), 34 (B8), 33 (B9), 31 (B10), 29 (B11), 27 (B12), 20 (B13), 13 (B14), 12 (B15), and 11 (B16) kDa were observed. In addition to the SDS-PAGE of the Brucellergene OCB sample, 2D electrophoresis was also performed. In the 2D electrophoresis, at least 20 spots were detected with molecular weights corresponding to those obtained in the bands of the SDS-PAGE, while the isoelectric points ranged between pH 4.8 and 7.8. The results of the Western Blot applied to SDS-PAGE are reported in .
In a,b, three bands, corresponding to B3, B13, and B16, with molecular weights of 55, 20, and 11 kDa, respectively, were able to bind positive anti-Brucella swine serum. The Western Blot of the 2D gel of Brucellergene on nitrocellulose did not show spots corresponding to the electrophoretic gel ( and ). The strip resulting from isoelectrofocusing was directly blotted, showing a band named I1 at an isoelectric point around pH 5.5–6 ( c). Proteins identified by mass spectrometry are shown in . The use of Brucellergene OCB by Dot Blot as a tool to detect brucellosis-affected animals has already been investigated by Bertelloni and colleagues, who tested 374 swine sera for brucellosis using the Rose Bengal Test (RBT), complement fixation test (CFT), and Dot Blot, with Brucellergene as the antigen. To verify the concordance of the Dot Blot using CFT as the gold standard, they observed a concordance value of at least 91%. Y. enterocolitica is mainly responsible for cross-reactions in swine. The Dot Blot, using Brucellergene as the antigen and an anti-Yersinia enterocolitica serum as the antibody, did not show cross-reaction, suggesting a promising specificity. Since Brucellergene bound the anti-Brucella swine serum but did not bind the anti-Yersinia serum, we investigated by Western Blot which of the proteins contained in Brucellergene could bind Brucella-positive swine serum. It was assumed that, like whole Brucellergene, these proteins would not cross-react with the anti-Yersinia serum. The Brucella proteome has already been studied by several authors. Hamidi and colleagues developed a ribosomal proteome-based mapping for the establishment of biomarker profile libraries to identify B. abortus and B. melitensis, as well as elucidating refined differences between virulent and vaccine strains. To the best of our knowledge, Brucellergene had never been investigated using a proteomic approach. In agreement with several authors who have previously investigated the Brucella proteome, the present results highlighted B. melitensis proteins with molecular weights in the range of 10–116 kDa. Regarding the 2D SDS-PAGE, the Western Blot did not reveal Brucella-serum-binding spots matching the electrophoretic gel. This is because chemiluminescent detection is more sensitive and produces a signal at lower protein concentrations than staining of the gel with Coomassie G250, according to the product sheet provided by the company. It can be speculated that a milder treatment of the Brucellergene might preserve the proteins in their native forms and thus might improve the resolution of the Western Blot. Further investigations are therefore needed to clarify this aspect. Among the detected bands, only those which reacted with the Brucella serum in the Western Blot were identified by mass spectrometry. These bands corresponded to four proteins identified as follows: a probable sugar-binding protein, a peptide ABC transporter substrate-binding protein, a GntR family transcriptional regulator, and a conserved hypothetical protein. A class of sugar-binding proteins with molecular weights and isoelectric points corresponding to the protein found by this investigation (probable sugar-binding periplasmic protein B. abortus str 2308A) has also been observed to be overexpressed in the proteome of Rev 1 (an attenuated strain of B. melitensis). Rev 1 is considered a highly effective vaccine in the control of brucellosis in small ruminants in many countries.
A sugar-binding protein with a similar isoelectric point and molecular weight has also been detected in both Rev 1 and a virulent B. melitensis strain, 16M. To the best of our knowledge, this protein is not similar in amino acid composition to any cloned Yersinia enterocolitica protein (0% homology). The second protein identified belongs to the ATP-binding cassette (ABC) transporters, a large group of membrane protein complexes that couple the transport of a substrate across the membrane to the hydrolysis of ATP. In prokaryotes, ABC transporters are localized to the plasma membrane, and ATP is hydrolyzed on the cytoplasmic side. Furthermore, ABC transporters are characterized by two nucleotide-binding domains (NBDs) and two transmembrane domains (TMDs). An ABC transporter transports different molecules across biological membranes and participates in a variety of biological processes, such as maintaining the osmotic pressure balance inside and outside the cell, antigen presentation, cell differentiation, and bacterial immunity. A protein at about 60 kDa binding all sera was also found by Wareth and colleagues, applying a Western Blot to an extract of B. abortus and B. melitensis using cattle, buffalo, sheep, and goat sera as primary antibodies. This protein could correspond in molecular weight to the protein identified in this investigation as the peptide ABC transporter substrate-binding protein. When comparing the amino acid sequence of the probable sugar-binding protein with other proteins, 99.52% homology with the peptide ABC transporter substrate-binding protein was observed. It could be speculated that Brucella-positive swine serum might bind to these two proteins in a similar portion of their amino acid sequences. Comparing the amino acid sequence of the peptide ABC transporter substrate-binding protein with other proteins, homology higher than 86% was only observed with proteins belonging to the genus Brucella, thus suggesting that this is a genus-specific protein. The peptide ABC transporter substrate-binding protein is similar in amino acid composition to cloned Yersinia enterocolitica proteins with less than 41% homology. The third protein identified is a GntR regulator, an important virulence factor in Brucella playing important roles in the maintenance of fatty acid concentrations, amino acid catabolism, organic acid production, the regulation of carbon catabolism, and the degradation of complex organics. Furthermore, some research indicates that GntR mutants show reduced virulence. The fourth protein identified is a conserved hypothetical protein. Wagner and colleagues had already identified in the B. melitensis proteome several hypothetical low-molecular-weight proteins whose function, to the best of our knowledge, remains undefined. Comparing the amino acid sequences of the GntR family transcriptional regulator and the conserved hypothetical protein with other proteins, homology higher than 82% and 70%, respectively, was only observed with proteins belonging to the genus Brucella, thus suggesting that they are genus-specific proteins. The GntR family transcriptional regulator is similar in amino acid composition to Yersinia enterocolitica proteins with less than 49% homology, while the conserved hypothetical protein does not resemble any cloned Yersinia enterocolitica protein.
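For illustration, percent identity between two aligned sequences can be computed as sketched below (Python; the sequences and helper are hypothetical — the comparisons in this work were performed with BLAST against GenBank).

```python
# Percent identity between two aligned amino acid sequences (gaps as '-').
# Illustrative only: the actual comparisons used BLAST against GenBank,
# and the sequences below are hypothetical.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    assert len(aligned_a) == len(aligned_b), "sequences must be aligned to equal length"
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(percent_identity("MKTAYIAKQR", "MKTAYLAKQR"))  # -> 90.0
```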
Concerning the subcellular localization prediction, both the probable sugar-binding protein and the ABC transporter are predicted to be periplasmic proteins. Brucellae can reversibly modify their cell envelope to adapt to changes in the host intracellular microenvironment and improve their survival by modifying the host immune response. Zai and colleagues investigated the resistance of B. abortus to various stresses (e.g., antibacterial stress, nutrient starvation stress, and physicochemical stress) and observed that some proteins, including the ABC transporter ones, were still produced by the bacterium despite the stressful conditions. As they are also expressed under conditions that are stressful for the bacterium, they could be a target for antibodies produced by the infected animal. It could therefore be speculated that ABC transporter proteins may be a target for a prospective Brucella identification test. Some authors have investigated the proteomes of some Brucella species and virulent/attenuated strains to search for species-specific proteins as a basis for new diagnostic screening methods. Eschenbrenner and colleagues compared the B. melitensis and B. abortus proteomes and observed the presence of ABC transporter proteins in both species. This suggests that ABC transporter proteins are not species-specific and could therefore be further investigated as possible antigens to produce a general immunological kit for the identification of Brucella infections. The "probable sugar-binding periplasmic protein B. abortus str 2308A", "peptide ABC transporter substrate-binding protein B. melitensis", "GntR family transcriptional regulator B. melitensis", and "conserved hypothetical protein B. melitensis M28" identified in this work could be produced in vitro, providing the basis for the development of a diagnostic kit that avoids the Brucella culture otherwise required for large-scale antigen production. In vitro synthesis of the proteins could be performed in a molecular biology laboratory by designing ad hoc primers and cloning the genes into an expression vector in a host such as Escherichia coli. After purification by SDS-PAGE and specific columns, the proteins could be tested by Dot Blot with anti-Brucella-positive swine serum. 4.1. Material Since Bertelloni and colleagues reported that only positive serum had a cross-reaction with Brucellergene, only positive serum, from a free-range farm of "cinta senese" pigs in South Tuscany (Siena province, Italy), was used in this investigation. The serum was stored at −20 °C until processed. The antigen was Brucellergene OCB (Rhône-Mérieux, France), produced from the B. melitensis rough strain B115, provided by "Istituto Zooprofilattico della Lombardia e dell'Emilia Romagna Bruno Ubertini, Brescia, Italy" and by "Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise G. Caporale, Teramo, Italy" for the Western Blot (WB). 4.2. Sodium Dodecyl Sulphate—PolyAcrylamide Gel Electrophoresis (SDS-PAGE) The Brucellergene total protein content was measured with a Qubit 2.0 Fluorometer (Invitrogen, Waltham, MA, USA). Ten µg of total Brucellergene protein was loaded into 7.5% T, 2.6% C separating polyacrylamide gels (1.5 mm thick). A 10–250 kDa pre-stained Sharpmass™ V plus protein MW marker (Euroclone, Pero, Italy) was also loaded. SDS-PAGE was performed at 20 mA/gel and 15 °C using an SE 260 mini vertical electrophoresis unit (GE Healthcare, Chicago, IL, USA). 4.3.
2DE (Two-Dimensional Gel Electrophoresis) SDS-PAGE Isoelectric focusing electrophoresis was performed at 20 °C on an IPGphor III apparatus (GE Healthcare) following a previously reported protocol. Volumes of protein extract corresponding to 75–150 μg of total protein were mixed with a rehydration solution (7 M urea, 2 M thiourea, 2% CHAPS, 0.5% dithiothreitol, 1% IPG buffer, and a trace of bromophenol blue) and loaded on 7 cm (pH 3–10) strips; rehydration time 9 h, 50 µA/strip. For some strips, the Western Blot method was applied. Prior to SDS-PAGE, the IPG strips were equilibrated for 8 min in 50 mM Tris-HCl pH 8.8, 30% glycerol, 6 M urea, 4% SDS, 2% dithiothreitol, and afterwards for 12 min in 50 mM Tris-HCl pH 6.8, 30% glycerol, 6 M urea, 4% SDS, 2.5% iodoacetamide and bromophenol blue. The following SDS-PAGE was performed using self-cast 7.5% T, 2.6% C separating polyacrylamide gels according to Laemmli, without stacking gel. 4.4. Western Blot (WB) For each SDS-PAGE run with two gels, proteins in one gel were fixed in a 40% methanol and 10% acetic acid solution for 30 min. The gel was stained in colloidal Coomassie Brilliant G solution, destained with water, scanned with an Epson Perfection V750 Pro (Suwa, Nagano, Japan), and processed with ImageJ software, version 1.54. For the second gel, proteins were transferred to a nitrocellulose membrane (0.45 µm pore size, Thermo Scientific, Waltham, MA, USA) with an ECL TE 70 PWR semi-dry transfer unit (GE Healthcare), 0.8 mA/cm², for 4 h and 30 min. Western blotting was performed according to Iovinella et al., with modifications. The membrane was exposed to serum samples at a 1:200 dilution, inactivated at 58 ± 2 °C for 60 min, with 30 min of incubation in a dark place. Afterwards, the membrane was incubated for 1 h at RT with a polyclonal rabbit anti-pig IgG (H+L) antibody, HRP conjugated (Bethyl Laboratories, Montgomery, TX, USA), diluted 1:10,000. The reaction was detected with the Clarity™ Western ECL substrate kit (Bio-Rad Laboratories, Hercules, CA, USA). The chemiluminescent signal was detected in a dark room with a 20 s exposure using a Nikon D5100 camera (Tokyo, Japan) fitted with a 50 mm f/1.4 lens and a 12 mm extension tube. 4.5. Mass Spectrometry The bands corresponding to those that reacted with the antibody in the Western Blot were located in the gel, excised, and sent to a Mass Spectrometry Center (CISM, Florence University, Florence, Italy), where mass spectrometry was applied and proteins were identified. The excised bands were destained and the proteins digested as reported by Dani et al. Each peptide mixture was submitted to capillary-LC-μESI-MS/MS analysis on an Ultimate 3000 HPLC (Dionex, San Donato Milanese, Milan, Italy) coupled to an LTQ Orbitrap mass spectrometer (Thermo Fisher, Bremen, Germany). Peptides were concentrated on a PepMap100 C18 precolumn cartridge (300 μm id × 5 mm, 5 μm, 100 Å, LC Packings Dionex, Sunnyvale, CA, USA) and then eluted on a homemade capillary column packed with Aeris Peptide XB-C18 phase (180 μm id × 15 cm, 3.6 μm, 100 Å, Phenomenex, Torrance, CA, USA) at 1 μL/min. The loading mobile phases were as follows: 0.1% TFA in H₂O (phase A) and 0.1% TFA in CH₃CN (phase B). The elution mobile phase composition was H₂O with 0.1% formic acid/CH₃CN 97/3 (phase A) and CH₃CN with 0.1% formic acid/H₂O 97/3 (phase B). The elution program was as follows: 0 min, 4% B; 10 min, 40% B; 30 min, 65% B; 35 min, 65% B; 36 min, 90% B; 40 min, 90% B; 41 min, 4% B; 60 min, 4% B.
Mass spectra were acquired in positive ion mode, setting the spray voltage at 1.8 kV, the capillary voltage and temperature at 45 V and 200 °C, respectively, and the tube lens at 130 V. Data were acquired in data-dependent mode with dynamic exclusion enabled (repeat count 2, repeat duration 15 s, exclusion duration 30 s); survey MS scans were recorded in the Orbitrap analyzer in the mass range 300–2000 m/z at a 15,000 nominal resolution at m/z = 400; then up to three of the most intense ions in each full MS scan were fragmented (isolation width 3 m/z, normalized collision energy 30) and analyzed in the IT analyzer. Monocharged ions did not trigger MS/MS experiments. The acquired data were searched with the Mascot 2.4 search engine (Matrix Science Ltd., London, UK) against Brucella protein sequences downloaded from NCBI.
Four proteins able to bind Brucella-positive swine serum were identified (a probable sugar-binding protein, a peptide ABC transporter substrate-binding protein, a GntR family transcriptional regulator, and a conserved hypothetical protein) by proteomic and Western Blot approaches. All of them could be exploited to enhance the specificity of serological investigations. Among these proteins, the peptide ABC transporter substrate-binding protein seems the most promising one to be used as a specific antigen because Brucella can produce it even under stress conditions. Although Brucellergene is safe to handle, standardized, and already potentially useful for the serological investigation of Brucella by Dot Blot, it nevertheless requires the cultivation of Brucella in the laboratory.
As future steps for serological assays in swine brucellosis, the most suitable antigenic proteins could be synthesized in vitro, avoiding the cultivation of Brucellae and thus reducing the risk of infection for operators by airborne transmission. Further investigation will then be needed to test these proteins and verify whether they can provide a safe tool for serological diagnosis in swine brucellosis screening, breeding screening, or monitoring plans.
Optimization of Growth Conditions for Chlorpyrifos-Degrading Bacteria in Farm Soils in Nakuru County, Kenya

Pesticides are an important part of the agricultural industry and are widely utilized in pest control strategies. Globally, pesticides are utilized in excess of 5.6 billion pounds annually. An ideal pesticide is dangerous exclusively to the creatures it is designed to kill, is biodegradable, and does not pollute the environment. Notably, pesticides applied indiscriminately can harm nontarget organisms and unintentionally reach other ecosystems. As reviewed by Poudel et al., only 0.1% of applied pesticides reach the target pests and 99.9% escape into the environment, where they pose a threat to public health and beneficial biota and contaminate the ecosystem. Chlorpyrifos (CP), a type of organophosphate (OP) acaricide, accounts for 38% of global pesticide use but lacks target specificity, posing risks to nontarget species. Widely used in Kenya for tick control in dairy animals, its extensive application threatens human health and environmental integrity by contaminating air, soil, and water. Unrestricted use by farmers can lead to acute diseases, loss of beneficial biota, and ecosystem imbalances, including altered soil microbial communities. In Kenya, many agricultural and pesticide firms lack adequate treatment facilities for organophosphates (OPs), leading to environmental contamination. Various remediation methods have been explored but have limitations in cost, effectiveness, or environmental impact. Bioremediation using autochthonous microorganisms is an emerging approach for detoxifying pollutants like chlorpyrifos. While pure cultures have been studied, natural conditions often involve complex microbial consortia, making enriched cultures from contaminated sites more effective. Various factors, including microbial components and physicochemical conditions, influence the rate of chlorpyrifos degradation. However, there is limited knowledge on optimizing these conditions for effective biodegradation. The study is aimed at identifying bacteria in Nakuru County's agricultural soils capable of degrading chlorpyrifos (CP) and at optimizing their growth conditions for effective bioremediation. Specifically, the research focuses on isolating bacterial strains with CP degradation potential and determining their optimal growth conditions in terms of pH, temperature, and CP concentration. This work seeks to advance bioremediation techniques for CP-contaminated environments.
2.1. Reagents Chlorpyrifos of analytical grade (99.4%) was procured from Sigma-Aldrich (USA). The research utilized a mineral salt medium (MSM) consisting of the following components in grams per liter (g/l): Na₂HPO₄ at 5.8, KH₂PO₄ at 3.0, NaCl at 0.5, NH₄Cl at 1.0, and MgSO₄ at 0.25. Concentrated stock solutions of CP (10 g/l) were produced and passed through 0.22 µm syringe filters. The medium was sterilized by autoclaving at 121°C for 15 minutes. 2.2. Soil Sampling Procedure Soil samples were collected from Molo, Njoro, and Subukia in Nakuru County, Kenya (latitude: 0° 29′ 59.99″ N; longitude: 36° 00′ 0.00″ E). Nakuru experiences a warm and temperate climate, with annual rainfall of approximately 762 mm and an average temperature of 17.5°C. Sampling was carried out in 15 selected dairy farms with a history of repeated chlorpyrifos acaricide application through cattle dips and spray races, according to standard operating procedures (SOP Number: FSS0002.00) and European guidelines. All samples were code-named, stored in cool boxes with ice packs, and then transported to Mount Kenya University Research Laboratory. The soil samples were air-dried and sieved. 2.3. Determination of Soil Physicochemical Properties In situ measurements of pH, electrical conductivity, and total dissolved solids were performed at specific sampling points, in compliance with APHA (2005) guidelines. For electrical conductivity, a Jenways 4076 EC meter was used, calibrated across a range of 0 to 200 millisiemens per meter at 25°C. The probe was submerged directly in the water for accurate readings. pH was determined on-site using a Jenways 3071 portable pH meter, and total dissolved solids were quantified with a Jenways 4076 TDS meter. Furthermore, laboratory investigations were performed to evaluate additional soil properties, in conjunction with the aforementioned in situ observations. The parameters encompassed in the analysis were total nitrogen, organic carbon, phosphorus, potassium, calcium, magnesium, manganese, copper, iron, zinc, and sodium. The laboratory analysis was conducted at the National Agricultural Research Laboratories of the Kenya Agricultural and Livestock Research Organization (KALRO), Kabete, Kenya. 2.3.1. Isolation and Purification of the Most Common Microbes The most common microbes were isolated and purified from the soil samples described in Section 2.2 using the enrichment culture technique in MSM supplemented with chlorpyrifos (see Section 3.2). 2.4.
Experimental Design The relevance and interactions of three independent variables, temperature (25°C, 30°C, and 37°C), pH (5, 7, and 9), and CP concentration (25 mg/l, 50 mg/l, and 100 mg/l), were investigated using a general multilevel factorial design. A multilevel factorial design allows for some flexibility in the number of levels used for each independent variable. For the purpose of assessing the experimental error, the centre point was duplicated three times. Using a 2³-factorial design, all independent variables were combined to create a design matrix. 2.5. Biodegradation of Chlorpyrifos by the Selected Isolates For the biodegradation test, pure cultured isolates in mineral salt liquid medium (MSM) enriched with chlorpyrifos (10 mg/l) were utilized. Pure cultures of isolated strains with an inoculum density of 2.4 × 10⁶ CFU ml⁻¹ were cultured on a rotary shaker for 3 days at 150 rpm and 30°C. The experiment was repeated three times, with controls consisting of uninoculated media kept under the same conditions. The growth of bacteria was evaluated turbidimetrically at different time intervals by measuring optical density in a Spectronic 20 spectrophotometer at 600 nm. One milliliter of sample was taken and added to freshly prepared 0.02% tetrazolium chloride in a boiling tube, and the contents were boiled for 5 minutes in a Stuart SWB series water bath. The boiling tubes and contents were incubated for 21 days, and optical density was taken at intervals (OD₄₈₀) and color change observed. The isolated microbial strains were cultivated on MSM supplemented with chlorpyrifos (10 mg l⁻¹) and examined for their ability to grow and degrade CP at different pH values (5, 7, and 9), temperatures (25°C, 30°C, and 37°C), and concentrations (50 mg/l, 100 mg/l, and 150 mg/l) to optimize the growth conditions for biodegradation. At all intervals, control flasks containing an equivalent volume of MSM and chlorpyrifos but no microbial population, and MSM with inoculum but without CP, were cultured. For the purpose of estimating chlorpyrifos degradation, samples were collected and extracted at intervals of 0, 1, 2, 3, 4, and 5 days during the experiment. Seven bacteria, code-named MW1, MW2, MW3, MW4, MW5, MW6, and MW7, were selected for characterization based on their eminent propensity to efficiently degrade CP. 2.6. Identification of CP-Degrading Bacterial Strains Seven bacterial isolates (MW1-MW7) were selected for their ability to biodegrade CP, identified morphologically based on Bergey's Manual and confirmed by 16S rRNA gene analysis. DNA was extracted and amplified using PCR with universal primers, and the resulting amplicons were purified and sequenced (36-40). The sequences were analysed using BLAST to compare them to sequences in the GenBank databases. 2.7. Determining the Biodegradability of Chlorpyrifos by Tetrazolium Reduction Assay The Bhagobaty and Malik tetrazolium chloride reduction test was employed, with certain alterations, to evaluate the biodegradation of chlorpyrifos (CP) by the bacterial isolates. Tetrazolium chloride functions as an artificial electron acceptor that is enzymatically reduced by bacterial dehydrogenase upon initiation of CP degradation by the isolate, leading to the production of the vividly pigmented formazan. The ability of the isolate to degrade CP in a qualitative manner is evidenced by the production of a vividly crimson product.
The absence of a discernible color change in the control indicates that CP did not undergo degradation in the uninoculated MSM. Briefly, under sterile conditions, autoclaved mineral salt medium (MSM) was combined with 150 mg/l of CP, which functioned as the exclusive carbon source, in boiling tubes. A freshly cultivated culture was introduced into the growth medium and incubated on a rotary shaker at 30°C for 48 hours. After incubation, 1 ml of the sample was removed and combined with 5 ml of freshly made 0.02% tetrazolium chloride in the test tubes containing the organism. After a 5-minute boiling, the test tubes were incubated at 30°C for 4 hours. The conversion of colorless tetrazolium chloride into purple formazan indicated positive biodegradation. The biodegradation was quantified using a UV-Vis spectrophotometer at an optical density (OD) of 480 nm at 4-day intervals for a duration of 21 days. 2.8. Statistical Analysis To determine the significance of differences in treatment means, data on bacterial growth and chlorpyrifos biodegradation in MSM were analysed using factorial analysis of variance (ANOVA) with Statistical Analysis Software (SAS) 2010. Mean pair-wise comparison was carried out using Tukey's HSD (honestly significant difference) test at the 5% level. A P ≤ 0.05 was considered significant.
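For illustration, the full-factorial combinations described in Section 2.4 can be enumerated in a few lines of code; the sketch below (Python; not part of the original study, and the explicit triplication of the centre point is an assumption based on the description above) builds the design matrix.

```python
# Enumerating the full-factorial design matrix of Section 2.4 (illustrative sketch;
# the centre-point replication scheme is assumed from the text).
from itertools import product

temperatures = [25, 30, 37]        # degrees C
ph_values = [5, 7, 9]
concentrations = [25, 50, 100]     # mg/l chlorpyrifos

runs = list(product(temperatures, ph_values, concentrations))  # 27 factorial runs
runs += [(30, 7, 50)] * 2  # centre point replicated so it appears three times in total

for i, (temp, ph, conc) in enumerate(runs, start=1):
    print(f"run {i:02d}: {temp} degrees C, pH {ph}, {conc} mg/l CP")
```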
Chlorpyrifos of analytical grade (99.4%) was procured from Sigma-Aldrich (USA). The research utilized a mineral salt medium (MSM) consisting of the following components in grams per liter (g/l): Na 2 HPO 4 at 5.8, KH 2 PO 4 at 3.0, NaCl at 0.5, NH 4 Cl at 1.0, and MgSO 4 at 0.25. Concentrated stock solutions of CP (10 g/l) were produced and passed through 0.22 mm syringe filters. The process of achieving medium sterilization involved subjecting it to autoclaving at a temperature of 121°C for a duration of 15 minutes.
Soil samples were collected from Molo, Njoro, and Subukia in Nakuru County, Kenya (latitude: 0° 29′ 59.99 ″ N; longitude: 36° 00′ 0.00 ″ E). Nakuru experiences a warm and temperate climate, with annual rainfall of approximately 762 mm and a 17.5°C temperature. Sampling was carried out in 15 selected dairy farms with a history of repeated chlorpyrifos acaricide application through cattle dips and spray races according to standard operation procedures (SOP Number: FSS0002.00) and European guidelines . All samples were code-named, stored in cool boxes with ice packs, and then transported to Mount Kenya University Research Laboratory. The soil samples were air-dried and sieved.
In situ measurements for pH, electrical conductivity, and total dissolved solids were performed at specific sampling points, in compliance with APHA (2005) guidelines. For electrical conductivity, a Jenways 4076 EC meter was used, calibrated across a range of 0 to 200 millisiemens per meter at 25°C. The probe was submerged directly in the water for accurate readings. pH was determined on-site using a Jenways 3071 portable pH meter, and total dissolved solids were quantified with a Jenways 4076 TDS meter. Furthermore, laboratory investigations were performed to evaluate additional soil properties, in conjunction with the aforementioned in situ observations. The parameters encompassed in the analysis were total nitrogen, organic carbon, phosphorus, potassium, calcium, magnesium, manganese, copper, iron, zinc, and sodium. The laboratory analysis was conducted at the National Agricultural Research Laboratories of the Kenya Agricultural and Livestock Research Organization (KALRO), Kabete, Kenya. 2.3.1. Isolation and Purification of the Most Common Microbes The study collected soil samples from three distinct locations in Nakuru County, Kenya, namely Molo, Njoro, and Subukia. The geographical coordinates of the study sites were latitude of 0° 29′ 59.99 ″ N and longitude of 36° 00′ 0.00 ″ E. The climate in Nakuru is characterized by warm and temperate conditions, with an average annual precipitation of around 762 mm and a temperature of 17.5°C. The study involved the implementation of standardized operating procedures (SOP number: FSS0002.00) and adherence to European guidelines during the sampling process, which was conducted in 15 dairy farms that had a documented history of repeated use of chlorpyrifos acaricides through dips and spray races. The specimens were assigned code names, placed in refrigerated containers with ice packs, and subsequently conveyed to the research laboratory at Mount Kenya University. The soil samples underwent an air-drying process and were subsequently subjected to sieving.
The study collected soil samples from three distinct locations in Nakuru County, Kenya, namely Molo, Njoro, and Subukia. The geographical coordinates of the study sites were latitude of 0° 29′ 59.99 ″ N and longitude of 36° 00′ 0.00 ″ E. The climate in Nakuru is characterized by warm and temperate conditions, with an average annual precipitation of around 762 mm and a temperature of 17.5°C. The study involved the implementation of standardized operating procedures (SOP number: FSS0002.00) and adherence to European guidelines during the sampling process, which was conducted in 15 dairy farms that had a documented history of repeated use of chlorpyrifos acaricides through dips and spray races. The specimens were assigned code names, placed in refrigerated containers with ice packs, and subsequently conveyed to the research laboratory at Mount Kenya University. The soil samples underwent an air-drying process and were subsequently subjected to sieving.
The relevance and interactions of three independent variables, temperature (25°C, 30°C, and 37°C), pH (values 5, 7, and 9), and CP concentration (25mg/l, 50 mg/l, and 100 mg/l), were investigated using a general multilevel factorial design. A multilevel factorial design allows for some flexibility in the number of levels used for each independent variable. For the purpose of assessing the experimental error, the centre point was duplicated three times. Using a 2 3 -factorial design, all independent variables were combined to create a design matrix.
For the biodegradation test, pure cultured isolates in mineral salt liquid medium (MSM) enriched with chlorpyrifos (10 mg/l) were utilized. Pure cultures of isolated strains with an inoculum density of 2.4 × 10 6 CFU ml −1 were cultured on a rotary shaker for 3 days at 150 rpm and 30°C. The experiment was repeated three times, with controls consisting of media without inoculation kept under the same conditions. The growth of bacteria was evaluated turbidometrically at different time intervals by measuring optical density in Spectronic 20 spectrophotometer at 600 nm . One milliliter of sample was taken and added to freshly prepared 0.02% tetrazolium chloride in a boiling tube and the contents of the boiling tube boiled for 5 minutes in a Stuart water bath SWB series. The boiling tubes and contents were incubated for 21 days, and optical density was taken at intervals (OD 480 ) and color change observed. The isolated microbial strains were cultivated on MSM supplemented with chlorpyrifos (10 mg l −1 ) and examined for their ability to grow and degrade CP at different pH values (5, 7, and 9), temperature degrees (25°C, 30°C, and 37°C), and concentrations (50 mg/l, 100 mg/l, and 150 mg/l) to optimize the growth conditions for biodegradation. At all intervals, control flasks containing an equivalent volume of MSM and chlorpyrifos, but no microbial population and MSM with inoculum without CP were cultured. For the purpose of estimating chlorpyrifos degradation, samples were collected and extracted at intervals of 0, 1, 2, 3, 4, and 5 days during the experiment. Seven bacteria code-named MW1, MW2, MW3, MW4, MW5, MW6, and MW7 were selected for characterization based upon their eminent propensity to efficiently degrade CP.
Seven bacterial isolates (MW1-MW7) were selected for their ability to biodegrade CP, identified morphologically based on Bergey's Manual and confirmed by 16S rRNA gene analysis. DNA was extracted and amplified using PCR with universal primers, and the resulting amplicons were purified and sequenced (36-40). The sequences were analysed using BLAST to compare them to sequences in the GenBank databases.
The Bhagobaty and Malik method of tetrazolium chloride reduction test was employed with certain alterations to evaluate the biodegradation of chlorpyrifos (CP) by the bacterial isolates . Tetrazolium chloride functions as an artificial electron acceptor that is enzymatically reduced by bacterial dehydrogenase upon initiation of CP degradation by the isolate, leading to the production of the vividly pigmented formazan. The ability of the isolate to degrade CP in a qualitative manner is evidenced by the production of a vividly crimson product. The absence of a discernible alteration in coloration within the control group indicates that the compound CP did not undergo degradation in the MSM that was not inoculated. In a concise and sterile manner, the autoclaved mineral salt media (MSM) was combined with 150 mg/l of CP, which functioned as the exclusive carbon source, within boiling tubes. A recently cultivated culture was introduced into a growth medium and subjected to incubation in a rotary shaker at a temperature of 30°C for a duration of 48 hours. After incubation, 1 ml of the sample was removed and combined with 5 ml of freshly made 0.02% tetrazolium chloride in the test tubes containing the organism. After a 5-minute boiling, the test tubes were incubated at 30°C for 4 hours at ambient conditions. The conversion of colorless tetrazolium chloride into purple Formazan showed positive biodegradation. The biodegradation was quantified using UV-Vis spectrophotometer at optical density (OD) 480 nm at 4-day intervals for a duration of 21 days.
To determine the significance of differences between treatment means, data on bacterial growth and chlorpyrifos biodegradation in MSM were analysed by factorial analysis of variance (ANOVA) in Statistical Analysis Software (SAS, 2010). Pair-wise comparison of means was carried out using Tukey's HSD (honestly significant difference) test at the 5% level; P ≤ 0.05 was considered significant.
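For readers without SAS, the same factorial design can be reproduced with open-source tooling. The sketch below builds a synthetic example frame (isolate × pH × sampling day, three replicates) and runs the factorial ANOVA with Tukey's HSD in statsmodels; all column names and values are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic example: OD600 readings per isolate x pH x sampling day, 3 replicates
rng = np.random.default_rng(0)
rows = [(iso, ph, day, rng.normal(0.3, 0.05))
        for iso in [f"MW{i}" for i in range(1, 8)]
        for ph in (5, 7, 9)
        for day in (1, 9, 21)
        for _rep in range(3)]
df = pd.DataFrame(rows, columns=["isolate", "ph", "day", "od"])

# Factorial ANOVA: isolate x pH x incubation interval
model = smf.ols("od ~ C(isolate) * C(ph) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Pair-wise comparison of isolate means, Tukey HSD at the 5% level
print(pairwise_tukeyhsd(df["od"], df["isolate"], alpha=0.05))
```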
3.1. Soil Physicochemical Properties
The soil analysis reveals key physicochemical properties for a farm in Nakuru. Total nitrogen was found to be adequate across samples, ranging from 0.37% to 0.46%. Total organic carbon also fell within adequate levels, with percentages between 3.94% and 4.94%. In terms of essential nutrients measured in meq%, potassium levels were high, varying from 10.0 to 12.8. Calcium and magnesium followed suit, with high levels at 16.0 to 19.0 and 5.78 to 6.78, respectively. Sodium was adequate, ranging from 0.64 to 1.06. The soil pH varied from medium alkaline at 8.18 to near neutral at 6.92 and 6.95. The high nutrient levels suggest a likely clay-loam texture. Electrical conductivity was high at 1.25 mS/cm, indicating good ion exchange capacity but potential salinity concerns ( ).
3.2. Isolation and Identification of Chlorpyrifos-Degrading Bacteria
Scientific reports have thus far identified only a few species of bacteria capable of degrading chlorpyrifos (CP) and its main metabolite (TCP). Using the enrichment culture technique and MSM media, the current study isolated seven bacterial strains (MW1-MW7) from CP-contaminated soils. The growth of the bacteria in the MSM broth was confirmed through spectrophotometry, and degradation of the CP was confirmed through a positive tetrazolium chloride reduction assay. The tests showed no color change in the control, in which there was no degradation. The small subunit 16S rRNA gene analyses and phylogenetic investigation using BLAST software showed that the isolates MW1 to MW7 corresponded to Alcaligenes faecalis, Bacillus weihenstephanensis, Bacillus toyonensis, Alcaligenes sp. strain SCAU23, Pseudomonas sp. strain PB845W, Brevundimonas diminuta, and uncultured bacterium clone 99, respectively, with sequence similarities of between 90 and 100%. The strains' 16S rRNA nucleotide sequences have been deposited in NCBI GenBank under accession numbers MZ359822.1-MZ359828.1 (Supplemental materials (available )).
3.3. Qualitative Analysis of CP Utilization through Tetrazolium Reduction Assay
The tetrazolium reduction test assesses the oxidation of CP by the isolates in the MSM. Once degradation commences, TTC serves as an electron acceptor that is reduced by the bacteria's dehydrogenase to formazan, a highly colored product. Therefore, the presence of purple color is a qualitative measure of degradation. The isolates obtained in the current study were assessed, and color change was recorded for all isolates, while there was no color change in the control ( ).
3.4. Effect of pH on Bacteria Growth and Biodegradation of Chlorpyrifos
The pH is one of the critical factors that affect bacterial growth and the degradation of xenobiotics. The growth of the isolates was monitored for 21 days at different pH conditions. The effect of pH on the growth of each bacterial isolate was examined by analysing optical density (OD600) periodically for a duration of 21 days. Supplementary table gives a summary of data on the effect of pH on bacterial growth in MSM. The growth patterns and CP-degrading patterns of the different bacterial strains at different pH levels are presented in Figures – . Isolate growth was maximal near neutral pH, with optimal values for most isolates recorded at pH 7, except MW7 and MW4, which showed highest growth at pH 5. In contrast, optimum CP degradation was recorded at pH 5.
Highly basic conditions were characterized by less growth and degradation potential. The analysis of variance (ANOVA) showed that the differences in growth patterns across the three pH levels were significant ( P < 0.05), with most isolates favouring neutral to slightly acidic conditions. Statistical analysis revealed that optical densities/growth at each pH value differed significantly among the isolates ( P ≤ 0.05). Besides, a Pearson correlation analysis revealed a significant correlation between growth and degradation for all the isolates ( P < 0.01).
3.5. Effect of Temperature on Bacteria Growth and Biodegradation of Chlorpyrifos
The growth of study isolates and CP degradation were investigated at different incubation temperatures. Bacterial growth and CP degradation were periodically measured across the 21-day period, and the optical densities are summarized in Supplementary table . Optical densities for the isolates across temperatures and incubation periods were statistically different at P ≤ 0.05, and growth differed significantly between inoculated and uninoculated conditions. The growth patterns of the different bacterial strains at different temperatures are presented in Figures – . The growth of bacterial isolates differed significantly between temperatures (one-way ANOVA, P < 0.05). The optical densities for both growth and degradation were significantly higher at 25°C than at 30°C and 37°C. At 25°C, MW5 had the highest growth (OD = 0.533), MW2 at 30°C (OD = 0.426), and MW1 at 37°C. Although the strains showed positive growth and degradation across the three temperatures, the optimum temperature for the majority of the strains was 25°C, with the exception of MW2, which had optimum growth at 30°C. Nevertheless, the strains tolerated a wide range of temperatures, which is important for in situ bioremediation. This inference is based on a comprehensive assessment of growth rates and chlorpyrifos degradation, combining graphical and statistical analyses (Supplementary table ) to identify trends not immediately visible in the graphical data. Based on the data, MW1 (OD = 0.4734), MW3 (OD = 0.4304), MW4 (OD = 0.46555), MW5 (OD = 0.2974), MW6 (OD = 0.37665), and MW7 (OD = 0.2788) had high ODs at 25°C on day 21 ( P ≤ 0.05), whereas MW2 (OD = 0.47795) showed a high OD at 30°C on day 21 ( P ≤ 0.05). Statistical comparison of specific ODs among isolates and incubation periods showed significance at P ≤ 0.05, and the interaction of the main factors had a significant effect on the ODs of all isolates ( P = 0.0001). Besides, a Pearson correlation analysis revealed a significant correlation between growth and degradation for all the isolates ( P < 0.01).
3.6. Effect of CP Concentration on Bacterial Growth and Biodegradation of Chlorpyrifos
The relationship between specific concentrations and each isolate was analysed, and the interaction between the main factors was determined by assessing optical densities at different intervals across the 21-day incubation period (Supplementary table ). The growth patterns of the isolates at different CP concentrations are presented in Figures – . MW1 (OD = 0.38675), MW3 (OD = 0.5179), MW4 (OD = 0.4254), MW5 (OD = 0.4926), MW6 (OD = 0.403), and MW7 (OD = 0.3062) had high ODs at 25 mg/l on day 21 ( P ≤ 0.05), whereas MW2 (OD = 0.46665) showed a high OD at 50 mg/l ( P ≤ 0.05). Statistical comparison of specific ODs among isolates and incubation periods showed significance at P ≤ 0.05, and the interaction of the main factors had a significant effect on the ODs of all isolates ( P = 0.0001). Nonetheless, there was generally significant growth and degradation across all the concentrations, underscoring that the isolates can adapt to different CP concentrations, although growth rate and degradation ability decrease slightly at higher concentrations. From the information in , the optimum parameters for an effective CP-degrading consortium are a pH of 5 and a temperature of 25°C. The consortium should be diverse to ensure it effectively degrades the pesticide in a broad range of environments. Based on the data provided, a possible consortium could include Bacillus toyonensis 20SBZ2B (MW3) and Alcaligenes sp. SCAU23 (MW4) at 25°C and pH 5, as both strains showed high OD values under these conditions ( ). For a consortium at 30°C, Bacillus weihenstephanensis FB25M (MW2) could be included in addition to the strains mentioned above. It is important to note that growth of the isolates was generally positive across all three concentrations (25, 50, and 100 mg/l), which suggests that a consortium of the bacteria can be assembled for all three concentrations tested. Besides, most bacterial strains display a high degree of congruence between optimum conditions for growth and CP degradation. However, deviations occur primarily in pH and concentration: four strains (MW1, MW3, MW5, and MW6) shift to a lower pH for degradation; only MW2 changes its optimum temperature for degradation; and three strains (MW5, MW6, and MW7) prefer higher concentrations for degradation, likely indicating robustness in coping with substrate abundance. These shifts imply metabolic adaptations and perhaps varying enzymatic activities across different environmental conditions.
3.7. Correlation between Bacterial Growth and Chlorpyrifos Degradation
A Pearson correlation analysis was carried out to determine the correlation between bacterial growth and degradation of chlorpyrifos at the optimum pH, temperature, and concentration. For all three parameters, a weak, positive correlation was found between growth and degradation ( P < 0.05) ( ).
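The correlation reported here corresponds to a standard Pearson test. A minimal sketch with hypothetical paired growth (OD600) and degradation (OD480) measurements, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements for one isolate at its optimum conditions
growth_od600 = np.array([0.11, 0.19, 0.27, 0.33, 0.41, 0.47])    # growth
formazan_od480 = np.array([0.08, 0.15, 0.19, 0.28, 0.30, 0.37])  # degradation proxy

r, p = pearsonr(growth_od600, formazan_od480)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```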
4. Discussion
The study isolated bacteria with CP-degrading potential and optimized their growth and degradation conditions. The study was necessary as the excessive use of OP pesticides has caused harmful ecological impacts. The biodegradation of chlorpyrifos by microorganisms is a promising solution to reduce its negative impact . This study was among the first to identify the optimum temperature, pH, and substrate concentration required for the in situ biodegradation of chlorpyrifos, which could potentially facilitate the development of an effective bioremediation consortium for chlorpyrifos-contaminated environments. Seven bacterial strains, namely, Alcaligenes faecalis, Bacillus weihenstephanensis, Bacillus toyonensis, Alcaligenes sp. strain SCAU23, Pseudomonas sp. strain PB845W, Brevundimonas diminuta, and uncultured bacterium clone 99, were successfully isolated and characterized from contaminated soils in Nakuru County. These soils have been subjected to repeated and sustained exposure to chlorpyrifos (CP) and other pesticides, resulting in their contamination. The study's findings align with previous research, affirming that bacteria can adapt and thrive in contaminated environments [ , , ]. Over time, these bacterial strains have developed resistance mechanisms in response to repeated exposure to xenobiotic compounds, enabling them to efficiently decompose and remediate contaminated environments [ – ]. Importantly, the growth response of the isolated bacteria in mineral salt medium (MSM) supplemented with chlorpyrifos revealed that they exclusively utilized CP as their carbon source, with qualitative confirmation of their biodegradation potential using tetrazolium chloride. These results emphasize the potential of these bacterial strains for effective bioremediation of chlorpyrifos-contaminated environments. Since CP has been used for a long time, several researchers have isolated microbes that degrade the pesticide [ , , – ]. However, only a few studies have tried to optimize the growth conditions for the degrading bacteria. This study stands out as one of the first to rigorously assess the optimum conditions for indigenous bacteria capable of degrading CP. By addressing this crucial aspect, the study makes a significant contribution to the field of bioremediation. Moreover, the emphasis on utilizing indigenous species underscores the importance of ecofriendly approaches to tackle pesticide contamination, minimizing potential disruptions to microflora . These findings carry profound implications, potentially unlocking targeted and efficient bioremediation strategies for chlorpyrifos-contaminated environments, a critical step towards mitigating the detrimental impacts of this hazardous pesticide. The optimal pH for the growth of the majority of the bacterial species was found to be pH 7, with the exception of Alcaligenes sp. strain SCAU23 and uncultured bacterium clone 99, which showed optimum growth at pH 5. These pH levels resulted in the highest optical densities, indicating the most favorable conditions for bacterial growth. Previous studies have determined the optimal growth pH for these bacteria to be pH 7.5 for Pseudomonas sp. , pH 7.0 for Bacillus sp. and Brevundimonas diminuta CB21 , pH 5.8-7.0 for Alcaligenes faecalis , pH 6.5 for Bacillus toyonensis , and pH 5.4-7.0 for Bacillus weihenstephanensis . On the other hand, the optimum pH for degradation was shown to be pH 5, with the sole exception of Alcaligenes faecalis, which showed optimum degradation at pH 7.
It is important to note that addition of CP into the growth medium may have forced the bacteria to adapt to new conditions, hence accounting for the variations in optimum pH for growth and for degradation. The prevalence of pH 5 for degradation of CP suggests that most of the isolates prefer acidic conditions as favorable for degradation . However, Farhan et al. isolated Bacillus sp. Ct3, which degraded 88% of CP within 8 days under alkaline conditions. Therefore, optimizing the pH level is important for successful bacterial growth in chlorpyrifos environments. It is important to recognize that the current study's evaluation of three pH levels (5, 7, and 9) aimed to provide a comprehensive understanding of bacterial growth and CP degradation in varied environmental conditions. The pH levels tested here extended beyond the range previous studies reported as optimal (between pH 5 and 8), with the aim of determining the resilience and adaptability of the strains in harsh environments. These findings corroborate previous studies that have demonstrated the critical role of pH in bioremediation . The results align with previous studies demonstrating that neutral to slightly acidic pH is crucial for maximum chlorpyrifos biodegradation [ , , – ]. These results align with Vidali's findings (2001), indicating that microbial strains in polluted environments often exhibit favorable growth between pH 5 and 8. Additionally, the optimal growth pH values found here for Pseudomonas sp. and Bacillus weihenstephanensis (pH 7.0 and pH 5.0 to 7.0, respectively) further validate the findings reported by Singh et al. . The finding that Brevundimonas diminuta CB21 demonstrated the highest growth at all pH levels among the bacterial species tested is a significant result. It suggests that this bacterial species has the potential to be an effective candidate for bioremediation of chlorpyrifos-contaminated environments . The use of such bacteria could potentially mitigate the adverse effects of chlorpyrifos pollution on the environment and human health. These findings align with Farhan et al. , who emphasized the importance of bacterial strains capable of functioning under variable pH conditions, as they are more likely to succeed in biodegradation efforts, especially in the face of rapidly changing environmental conditions. The adaptability displayed by Brevundimonas diminuta CB21 can significantly contribute to the success of contaminant degradation, making it a crucial factor to consider when selecting strains for bioremediation endeavors [ , , ]. By understanding the significance of such adaptable bacteria, we can better design targeted and resilient bioremediation strategies to combat the persistent threat of pesticide pollution and safeguard both ecosystems and human well-being. Most strains showed optimum growth on chlorpyrifos at an incubation temperature of 25°C, except for MW2 ( Bacillus weihenstephanensis FB25M ), which recorded optimum growth at an incubation temperature of 30°C. In terms of degradation, the majority of the isolates exhibited optimum degradation at 25°C, except for Bacillus weihenstephanensis, which had optimum degradation at 30°C. These findings differ slightly from previous studies that reported natural optimum temperature growth conditions at 37.5°C for Alcaligenes faecalis , 35°C for Bacillus toyonensis , 37°C for Brevundimonas diminuta CB21 and Pseudomonas spp. , and 30°C for Bacillus weihenstephanensis .
The variation could be due to the different environments in which the bacteria were grown, such as the MSM in the current study. Temperature has a huge impact on biological processes and can affect the growth of bacteria as well as influence bioremediation of pesticides . Mali et al. demonstrated that the rate of degradation dropped from 99% at 32°C to less than 47% at 40°C and 58% at 22°C for Bacillus sp., which shows that optimum degradation takes place within a narrow range of temperature. The results further revealed that temperature influences the extent of biodegradation of chlorpyrifos. The optimal temperature for bacterial degradation of chlorpyrifos in the current study was 25°C, which falls within the range of normal soil temperature. These results corroborate earlier studies that independently reported rapid biodegradation of chlorpyrifos at temperatures between 20°C and 30°C [ , , ]. Notably, when the temperature was increased to 37°C, biodegradation of chlorpyrifos declined. Metabolic rates are naturally faster at elevated temperatures, irrespective of whether they are optimal, and the observed optimum could reflect the cells' peak metabolic activity under these incubation conditions. The rate of biodegradation, however, can drop at higher temperatures since important degradation enzymes are plasmid-borne and bacterial cells are known to lose plasmids at higher temperatures . The growth patterns of the bacterial isolates showed variations in response to different chlorpyrifos (CP) concentrations. Isolates MW1, MW3, MW4, MW5, MW6, and MW7 had higher optical densities (ODs) at a CP concentration of 25 mg/l on day 21, while isolate MW2 had a higher OD at 50 mg/l. Also, the degradation ability, as measured through the color intensity of formazan, was found to be optimal at a concentration of 25 mg/l, except for Pseudomonas sp. strain PB845W, Brevundimonas diminuta, and uncultured bacterium clone 99, whose optimum concentration was 100 mg/l. This observation is consistent with the previous findings by Iyer et al. and Foong et al. , where different bacterial species, including Bacillus sp., Pseudomonas sp., Achromobacter sp., and Ochrobactrum sp., demonstrated CP degradation capabilities at concentrations of 100 mg/l within a range of 1 to 28 days. Statistics revealed a statistically significant difference in ODs across isolates and incubation times, and the interaction of key factors significantly affected ODs for all isolates. Additionally, the isolates' ODs at different incubation intervals showed a substantial difference. These results suggest that the concentration of CP present affects the growth patterns of various bacterial isolates and that their capacity to break down CP varies. At the high dose of 100 mg/l in the current investigation, growth was at its lowest. According to Sharma et al. , bacterial growth is predicted to be at its lowest at a high concentration of 100 mg/l of CP, since high concentrations of CP can be toxic to bacterial cells and cause cell damage or death. The inhibition of crucial metabolic and enzyme activities may lead to a reduction in cell growth and reproduction. A loss of membrane integrity and a reduction in the capacity to absorb nutrients can arise from excessive concentrations of CP, which can also impair the cell membrane's ability to function correctly. As a result, the bacteria may be less able to thrive in the harsh environment and may be less able to break down CP at high concentrations .
However, several studies have reported the ability of various bacterial species including Bacillus sp., Pseudomonas sp., Achromobacter sp., and Ochrobactrum sp. to degrade CP at a concentration of 100 mg/l within a range of 1 to 28 days. The observed differences in growth patterns and degradation ability of the bacterial isolates at various concentrations of CP indicate that each isolate possesses distinct abilities in the degradation of CP. The observation that certain isolates exhibited increased optical densities (ODs) at a CP concentration of 25 mg/l, while others showed higher ODs at 50 mg/l, underscores the significance of carefully selecting bacterial isolates to establish a consortium capable of effectively degrading CP. The results indicate that the interaction between the main factors has a notable impact on ODs, and there is a significant disparity in ODs among the isolates at different incubation intervals. These findings emphasize the importance of carefully selecting growth conditions for bacterial isolates in order to maximize their ability to degrade CP. In general, the findings of this study emphasize the significance of comprehending the growth patterns exhibited by various bacterial isolates in order to enhance their capacity for the biodegradation of CP . According to a study conducted by Sharma et al. , bacterial growth can be impeded by the presence of excessively elevated pesticide concentrations. This critical observation underscores the importance of thoroughly evaluating the choice of bacterial isolates according to their growth reactions to specific CP concentrations, as this factor can have a substantial influence on their efficacy in bioremediation. It is important to note that the optimal temperature and pH for chlorpyrifos degradation may also be influenced by other factors, such as the presence of other compounds or nutrients in the environment . Further research is needed to fully understand the optimal conditions for chlorpyrifos degradation by different bacterial species. Significant differences in growth and biodegradation levels between the main factors and their interactions suggest that incubation period, parameters (pH, temperature, and concentration), and bacterial strain independently influence growth and chlorpyrifos biodegradation . Regulating the studied parameters and the incubation period may therefore be particularly important for the biodegradation of chlorpyrifos. These findings can be linked to the evolution of different enzyme systems by microorganisms for the degradation of substrates and the derivation of energy. Such enzymes have optimum pH and temperature values at which they show maximum activity; thus, changes in environmental factors may affect enzyme activities . This envisages interactions similar to those that may exist in nature and thus would help in better management of chlorpyrifos-contaminated ecosystems . Furthermore, the findings of this study can inform the formation of an effective consortium for degrading CP. Alcaligenes sp. has been previously reported to degrade chlorpyrifos through the hydrolysis of the P-O bond, using chlorpyrifos hydrolase to break down chlorpyrifos into 3,5,6-trichloro-2-pyridinol (TCP) and diethylthiophosphate (DETP), which are further broken down by other strains . Also, Pseudomonas sp. and Bacillus spp. have been reported to degrade chlorpyrifos through oxidative and hydrolytic pathways .
As a result, using a consortium of different strains can improve the efficiency of chlorpyrifos degradation by allowing different components of the compound to degrade, preventing the buildup of toxic intermediates, and speeding up the overall rate of degradation . Multiple bacterial strain consortia have been shown to be more efficient at destroying organophosphorus pesticides than single strains in prior research [ , , ]. The compatibility and synergistic abilities of the chosen strains, however, are critical to the consortium's success . In order for the bacterial strains to develop and create enzymes that can break down chlorpyrifos, there may need to be an initial adaptation phase before the consortium can be established . Therefore, additional research is required to assess the interactions and connections between the consortium's chosen strains and to improve the conditions for chlorpyrifos breakdown. Interactions and connections within bacterial consortia can significantly affect how pollutants like chlorpyrifos are degraded . The bacterial strains chosen for the consortium should, in general, have complementary metabolic pathways, with each strain contributing to the breakdown of various parts of the target pollutant . The findings of this study support recommendations for enhancing sustainable dairy production and preventing chlorpyrifos contamination. The CP-degrading bacterial strains isolated herein can be applied in bioremediation and ecosystem detoxification. Dairy farmers can minimize chlorpyrifos in soil and water by using the isolated bacterial strains. This improves milk quality, cattle health, and farm sustainability. The study's findings lay the groundwork for further research and practical chlorpyrifos contamination solutions in dairy production, enabling ecologically friendly practices and safer agricultural outputs. The study has valuable insights but also limitations that warrant acknowledgment. Its focus is restricted to chlorpyrifos, excluding other pesticides, and is confined to laboratory settings, lacking field tests for real-world validation. Geographically, the research is limited to Nakuru County, Kenya, which may affect its generalizability. Additionally, the study falls short in examining the interactions among identified bacterial strains in a consortium, a key aspect for effective bioremediation. Future work should explore these consortium dynamics in complex settings and extend the research to diverse geographical areas for broader applicability in detoxifying CP-contaminated ecosystems.
5. Conclusions
This study has identified bacterial isolates with the potential to degrade chlorpyrifos (CP) and established their optimum growth and degradation conditions. The results showed that Alcaligenes faecalis UWI9, Bacillus weihenstephanensis FB25M, Bacillus toyonensis 20SBZ2B, Alcaligenes sp. SCAU23, Pseudomonas sp. P_B845W, Brevundimonas diminuta CB21, and uncultured bacterium clone 99 can grow on and degrade CP. Most isolates achieved optimum growth at pH 7 and the rest at pH 5. The optimum growth and degradation temperature was determined to be 25°C, although there was generally good growth across all three temperatures. The majority of the isolates showed higher growth and degradation at a concentration of 25 mg/l; one achieved maximum growth at 50 mg/l, while three showed maximum degradation at 100 mg/l. The data may provide a good basis for further research on consortium reconstitution. The bacterial strains isolated in this study can be further developed into a consortium, based on the optimum conditions identified herein, to degrade CP in the laboratory and under field conditions. Future studies should aim to develop bacterial consortia for optimal biodegradation of CP under different environmental conditions. Building on the findings obtained herein, a further study has commenced to utilize the synergistic interactions of the isolated bacteria in a consortium that will be used to develop a bioreactor for in situ bioremediation of CP-contaminated environments.
Fragmented micro-growth habitats present opportunities for alternative competitive outcomes | 7b127485-0ab7-422b-bfcd-bbcdec5c95f8 | 11365936 | Microbiology[mh] | The implications of habitat fragmentation on biodiversity constitute a widely investigated yet highly debated topic in classical macroecology . While some authors have associated fragmentation with decreasing biodiversity in general , others have reported positive effects on biodiversity at the scale of landscapes , . In contrast, the aspect of habitat fragmentation has received only modest attention in microbial ecology , . The formation and maintenance of taxonomically rich microbial communities in nature (e.g. bacteria , archaea , microbial eukaryotes , ) are thought to depend on the physicochemical conditions prevailing in their habitat , , on inherent growth characteristics of the inhabiting species , and dynamic interspecific interactions that emerge from shared nutrient and spatial niches , . However, most of our understanding of interspecific interactions comes from experimental studies using macro-habitats, controlled uniform growth conditions and large population census (>10 6 cells per mL) , – , which are not necessarily representative for typically highly heterogenous natural microbial habitats (e.g. soil, plant leaves, skin). Thus, studies that take habitat heterogeneities into account are needed to extrapolate the roles of interspecific interaction effects on the development and maintenance of natural microbial communities. Habitat heterogeneities at the scales relevant to microbial life occur in the form of spatial discontinuities and fragmentation – , which may have important implications for microbial community assembly and diversity , , – . For example, Conwill et al. , showed how different Cutibacterium acnes strains coexist at the macroscopic level of the human skin microbiome, but each individually colonizes a single skin pore; the pores creating niches without direct space and nutrient competition. Similarly, the particle structure of the soil habitat also creates multitudes of micro-pores and channels , , which, dependent on water content, generate fluctuating networks of physically restricted and connected micro-growth environments , . Microhabitats also arise in animal guts, because of the physical shape of the epithelial cell lining (e.g., crypts ), the peristaltic motion of differently-sized food particles or encapsulation of bacteria within gut mucus . Surfaces of plant leaves have pronounced microstructures and properties leading to the formation of disconnected micro-droplets and water-filled channels, depending on humidity conditions , , . Finally, even aquatic environments, considered to be connected, are characterized by plentiful particulate organic matter to which microbes attach, forming segregated habitats , . Habitat space constrains the opportunities for cells to get into physical proximity to every other member of the community. Therefore, even though the diversity in a macro-environment (e.g., soil) may be high (up to thousands of taxa ), the spatial discontinuities lead to microhabitat fragmentation, each containing perhaps a few or a few dozens of cells from only a limited number of taxa . Communities in that regard are rather ensembles of myriads of smaller sub-communities inhabiting microhabitats. 
As a consequence, the high-complexity interspecific interactions assumed to shape community development at the global scale would, within individual microhabitats, forcibly reduce to a few co-existing species with a smaller repertoire of potential interspecific interaction outcomes. In addition, because of the lower population census in microhabitats, one would expect phenotypic heterogeneity to play a more important role than in well-mixed conditions and at high population densities, where it would level out differences among individual founder cells. Indeed, measurements of individual phenotypic variations at the single (bacterial) cell level show heterogeneous behaviour within low-census bacterial populations – , and important heterogeneity in single-cell growth kinetics – . Our hypothesis here was thus that microhabitats would enable existing phenotypic variation among individual founder cells (i.e., the cells present in a habitat that give rise to new offspring) to propagate into different reproductive success. In case of founder cells being part of a low-census multispecies community, we would then expect that growth kinetic variation could also lead to alternative outcomes of individual species growth and community composition. Expected interspecific interaction types from macro-scale experiments would then insufficiently explain their influence under microhabitat fragmentation. If true, growth variations in microhabitats could thus form an important driving force to sustain the high observed microbial diversity, despite competition for substrates being expected to drive poorly competitive species to extinction , . The main objective of this work was to study the effects of habitat fragmentation on bacterial community growth, considering existing phenotypic heterogeneity. We tested our conjectures in four scenarios of different (expected) interspecific interaction types: (i) direct substrate competition, (ii) substrate independence, (iii) antagonism by inhibitory compounds and (iv) direct cell killing (Fig. ). Strain pairs were cultured either alone or in combination, and either in standard mixed liquid suspended culture (uniform growth conditions) with a large starting population census to observe global interaction types, or in fragmented microhabitats with 1–3 founder cells each (Fig. ). Parallel fragmented microhabitats were created by emulsifying cell suspensions in growth medium into water-in-oil picoliter droplets (35 or 268 pL) using a microfluidic device (Supplementary Fig. , design from Duarte et al. ). Water-in-oil droplets have been shown previously to restrict cell movement and diffusion of compounds between droplets, and to effectively shield individual growth environments . The cells in millions of generated droplets were subsequently incubated as an emulsion, and taxa growth was compared between mixed liquid suspension and pL-droplets. Pseudomonas putida and Pseudomonas veronii were used to test direct substrate competition and substrate independence. Pseudomonas sp. Leaf15, a known phyllosphere isolate producing inhibitory compounds, was tested with Sphingomonas wittichii RW1 for antagonistic interaction effects of diffusible compounds. Finally, Pseudomonas protegens strains CHA0 and Pf-5 were used to test the effect of tailocin-mediated killing. All strains were fluorescently tagged to be able to measure their productivity in individual droplets from microscopy imaging (Fig. ).
Mathematical models describing competitive Monod growth were used to examine the effects of stochastic founder cell and kinetic parameter variations on competitive strain dominance. Finally, we simulated and experimentally tested how the variation of interaction outcomes is influenced by the starting cell numbers within the microhabitats. Our results indicate that microhabitat fragmentation offers ecological opportunities to reverse interaction outcomes when the number of founder cells of each species is relatively low (smaller than ca. 10), leading to increased survival of poorly competitive strains in fragmented compared to well-mixed uniform bulk environments.
Growth in fragmented habitats enables local reversion of substrate competition
To understand how habitat micro-fragmentation affects the developing interactions between paired bacterial strains, we compared growth of mono- and cocultures under different interspecific interaction scenarios, and either in regular mixed liquid suspension with uniform growth conditions (with a large founder population size of 5 × 10^5 cells per 140 µL) or in pL droplets (with 1–3 founder cells per droplet and strain; Fig. , Supplementary Fig. ). In the first scenario, we focused on either substrate competition (i.e., a single shared primary carbon growth substrate in the form of succinate) or substrate independence (i.e., each strain receives its own specific substrate). To test this, we used two Pseudomonas strains ( P. putida and P. veronii ) with overlapping substrate preferences but different growth kinetic properties . Growth rates of P. putida in uniform liquid-suspended culture with 10 mM succinate based on fluorescence measurements ( n = 6 replicates) were slightly but significantly ( p = 0.009) higher in mono than in co-culture with P. veronii (Fig. ; biomass yields from culture turbidity presented in Supplementary Fig. ). In contrast, P. veronii grew slightly faster in co-culture with P. putida than alone ( p = 3.06 × 10^−4 ), possibly because of metabolite cross-feeding as suggested previously . Despite this increase in co-culture, the average maximum specific growth rate (µmax) of P. veronii on succinate was 25% lower than that of P. putida , and the onset of growth (population lag time) was prolonged (Fig. , Supplementary Fig. ). Consequently, uniform liquid-suspended co-cultures became dominated by P. putida (Fig. ). Consistent with substrate competition, the strain-specific cell yields were lower in co- than in mono-cultures (Fig. ), with P. putida losing ca. 14.4% of its cell yield and P. veronii losing 84.7% compared to mono-cultures (Fig. ). Growth of the same cell suspension densities and substrate concentrations under fragmented conditions in pL-droplets both in mono- and co-cultures yielded similar cell numbers for P. putida in comparison to the suspended cultures (Fig. , cell numbers determined after 24 h growth by flow cytometry by coalescing droplet emulsions into a single suspension). In contrast, the P. veronii cell numbers in droplet co-cultures with P. putida were on average 4 times higher than expected from suspended cultures (Fig. ). This suggested that the global competitive deficit of P. veronii in uniform liquid suspended culture was partly abolished during growth in fragmented conditions. To better understand the mechanisms for the attenuated competitive inhibition of P. veronii by P. putida in co-culture droplets, we looked more closely at the cell yield variations at the level of individual droplets (Fig. ). The median productivities of P. putida after 24 h growth in pL-droplets were indifferent between mono-cultures and droplets with only P. putida in co-cultures (Fig. , solo droplets, p = 0.2164, n = 3, two-sided t-test), indicating that there was no difference arising from the co-culturing procedure in itself. We find such solo droplets because of the random nature of cell encapsulation in droplets, which follows a Poisson distribution (Supplementary Fig. ). Since the cell counting procedure breaks the droplet emulsion, the total counts by flow cytometry are a mixture of cells liberated from true mix droplets and solo droplets.
Imaging after 24 h indicated that ca. 26% and 23% of co-culture droplets were occupied solely by either P. putida or P. veronii , respectively ( solo droplets), and 26% contained both ( mix droplets; the other 25% being empty). The increased proportion of P. veronii in the fused co-culture droplet emulsions counted by flow cytometry is thus inflated by the fraction of P. veronii solo droplets. True co-culture droplets containing both P. putida and P. veronii showed an average 11.3% reduction in median productivity of P. putida (Fig. , mix , p = 0.0056 in t-test to solo droplet productivity, n = 3), which is similar to that measured by flow cytometry counting on fused droplets (Fig. ). Productivity of P. veronii was indifferent between solo droplets (in co-culture) and P. veronii mono-culture droplets (Fig. , p = 0.5073, two-sided t-test), but, as expected, was on average 79.4% lower in droplets with P. putida present (Fig. , p = 5.68 × 10^−4 , two-sided t-test). Mix droplet outcomes effectively ranged from those with almost exclusively P. putida to almost exclusively P. veronii and some with more equal proportions (Fig. ). Interestingly, ca. 24% of the co-culture droplets occupied with both strains were dominated by P. veronii (Fig. ), which may be due to non-growing cells of P. putida (Supplementary Fig. ) or to a competitive gain by P. veronii in presence of growing P. putida cells. Control experiments with paired isogenic P. putida , each expressing a different fluorescent protein, showed an equilibrated distribution of mix droplet outcomes under succinate competition (Supplementary Fig. ), confirming that the observed outcomes of mix droplets with P. veronii and P. putida are due to kinetic and phenotypic differences among the strains. This result indicated, therefore, that in fragmented growth conditions with low founder cell densities, P. veronii can overcome its general competitive disadvantage for growth on succinate.
Fragmented growth habitats yield varying outcomes even in case of substrate independence
To contrast the reversion of competition outcomes, we next imposed a substrate independence scenario, in which P. putida and P. veronii were each given an exclusive carbon substrate (Fig. ). Our expectation here was that since substrate competition would be alleviated, both strains would grow unhindered, and liquid-suspended and pL-droplet growth would be largely similar. To test this, we used a previous observation that P. putida consumes putrescine but not D-mannitol, whereas P. veronii prefers D-mannitol and only very slowly metabolizes putrescine . Indeed, in this case, the measured growth rates in uniform liquid suspension were indifferent between mono- and co-culture conditions for both P. putida and P. veronii (Fig. , p = 0.6300, p = 0.3990, n = 6; growth curves in Supplementary Fig. ), although the time until first doubling was around 20% shorter for P. putida in co-culture (Fig. , p = 0.0032). Also, the total productivity (in cells/mL determined by flow cytometry) was unchanged between mono- and co-cultures, for both P. putida and P. veronii (Fig. , p = 0.84, p = 0.46). In contrast, the total productivity in fragmented conditions was two-fold higher for P. putida than P. veronii , but again indifferent between mono- and co-culture conditions (Fig. ). Seen at population levels, these results thus suggested substrate independence for either species in uniform liquid suspended and pL-droplet growth.
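The solo/mix/empty droplet fractions reported in the competition experiment above follow directly from Poisson statistics of random cell encapsulation. The following minimal sketch reproduces fractions of that magnitude under an assumed mean occupancy of λ = 0.7 founder cells per strain per droplet; this λ is an illustrative value chosen to match the reported fractions, not a parameter taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_droplets = 1_000_000
lam = 0.7  # assumed mean founder cells per strain per droplet (illustrative)

pp = rng.poisson(lam, n_droplets)  # P. putida founder cells per droplet
pv = rng.poisson(lam, n_droplets)  # P. veronii founder cells per droplet

empty = np.mean((pp == 0) & (pv == 0))
solo_pp = np.mean((pp > 0) & (pv == 0))
solo_pv = np.mean((pp == 0) & (pv > 0))
mix = np.mean((pp > 0) & (pv > 0))
print(f"empty {empty:.2f}, solo PP {solo_pp:.2f}, solo PV {solo_pv:.2f}, mix {mix:.2f}")
```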
At the level of individual co-culture mix droplets (i.e., having detectable fluorescence signals of both P. putida and P. veronii ), the substrate independence scenario presented itself again very differently. An average of 5.5% of droplets were dominated by P. veronii (Fig. , fraction a ), whereas 39.2% consisted of droplets where productivities were equal (Fig. , fraction b , n = 4 biological replicates). In contrast, 48.7% of co-culture droplets were largely dominated by P. putida (Fig. , fraction d ). Partly, the outlier fractions a and d may again be due to incidental non-growing cells of either partner (Supplementary Fig. ). The median productivity of P. putida was higher in the fraction b droplets (Fig. , ANOVA, post-hoc p = 0.0213, compared to P. putida solo droplets), whereas that of P. veronii in fraction b was lower compared to P. veronii mono-culture droplets (Fig. , p = 5.67 × 10^−4 ; n = 4 replicates). Compared to a null model of co-culture droplet distributions, the productivity of P. putida was indeed significantly higher in mix droplets with P. veronii , but significantly lower when being in droplets alone, than expected from the sampled probability of its individual productivities in mono-cultures (Fig. , ANOVA with post-hoc test p-values of 0.0492 and 0.0003, grid fractions e and d , respectively). This indicated that interactions in droplets with equal-sized partner populations were mutualistic for P. putida and slightly antagonistic for P. veronii . These results thus illustrate how a globally perceived non-competitive scenario breaks down in a variety of different outcomes in a fragmented habitat.
Fragmented growth effects under an inhibition scenario
To explore whether fragmented growth impacts variable outcomes in situations beyond substrate competition, we used two further strain combinations, which illustrate an inhibition and a killing interaction. In the first of these, we produced an inhibition scenario, consisting of a phyllosphere isolate Pseudomonas sp. Leaf15, known to excrete a growth-inhibitory compound , mixed with a sensitive strain (for which we used a fluorescently tagged variant of S. wittichii RW1 ). In this scenario both strains have their own carbon substrate, to avoid generating additional substrate competition (Fig. ). We used succinate for L15, which is not measurably used by RW1, and salicylate for RW1, which is not used by nor toxic for L15 (Supplementary Fig. ). As expected, growth rates of RW1 in co-culture uniform liquid suspension with L15 were reduced by 25% compared to its mono-culture, whereas those of L15 were unaffected, confirming growth rate inhibition (Fig. ). Despite the growth rate decrease, the final attained population size of both RW1 and L15 in uniform liquid suspension co-culture did not differ from the mono-cultures (Fig. , measured by flow cytometry; p = 0.95, p = 0.79). Also, the productivity of RW1 in mix droplets with L15 was similar to that in solo droplets (Fig. , p = 0.13, p = 0.53, n = 4; Supplementary Fig. ), although both showed a constant ca. 10% fraction of non- or poorly growing cells (Fig. ). Compared to mono-culture growth, productivity of RW1 was the same and that of L15 slightly higher in co-culture mix droplets ( p = 0.0020; sign rank test on median of the growing droplet fraction, Fig. , Supplementary Table ). However, there was a 0.8–9.3% (average 3.2%) fraction of mix droplets with RW1 and L15 productivity higher than expected from their mono-culture droplet growth (Fig. , f2 fraction, p = 0.0039, sign rank test, all time points and replicates).
This fraction thus represents local positive interactions, suggesting reversal of inhibition under fragmented growth conditions.
Tailocins provide a competitive advantage only within fragmented habitats
In the final example, we studied the interactions between two P. protegens strains, one of which (Pf-5) is sensitive to a phage tail-like weapon, or tailocin, produced by the other (CHA0), leading to its lysis (Fig. ). CHA0 is self-resistant to its own tailocins . Activation of tailocin production and release, however, is a rare event in CHA0 cultures and requires a stress trigger . Consequently, we expected that variable tailocin production may occasionally change the competitive outcome during growth on the same substrate, which would be detectable under fragmented growth, but not in uniform liquid suspended cultures. Co-cultured strains on a single common substrate (succinate) in uniform liquid suspension indeed yielded almost equivalent substrate competition outcomes, with equal time to reach stationary phase for both CHA0 and Pf-5 in mono-culture, and ca. 50/50 yields in stationary phase (Fig. ). Productivities of either strain in co-culture pL-droplets were also equal and approximately half of that in mono-culture droplets (Fig. , solo). The observed distribution of the productivities of Pf-5 and CHA0 in mix droplets followed an almost perfect constant sum, composed of the variation of individual productivities of Pf-5 and CHA0 (Fig. ). Interestingly, however, in a small fraction of individual droplets with both CHA0 and Pf-5, an increased background fluorescence in the mScarlet-I fluorescence channel for Pf-5 could be observed, which in timelapse droplet imaging appeared as sudden onsets of partial and even complete disappearance of Pf-5 cells (Fig. , Supplementary Movie and ). This sudden disappearance of Pf-5 cells would be in agreement with the release of tailocins from CHA0 leading to the puncturing and liberation of the cell content of the sensitive Pf-5 cells (consequently leading to an elevated background fluorescence by diffusion of mScarlet-I protein). From the variation of Pf-5 median background fluorescence in solo droplets (Fig. ), we estimated that on average ca. 0.5% of all droplets with both partners show evidence for lysis of Pf-5 (i.e., above 2.5 × the Pf-5 solo background standard deviation, Fig. ; Fig. , p = 0.0114, n = 3 biological replicates, two time points combined). In summary, these results indicate that both P. protegens strains are equally competitive for succinate, but that the production of tailocins by CHA0 can help to remove the competitor. Tailocins can thus have a crucial localized effect in co-inhabited microhabitats, but this effect is masked in uniform liquid-suspended culture, because of their low activation rate.
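The lysis-calling rule described above (background fluorescence exceeding the Pf-5 solo-droplet mean by more than 2.5 standard deviations) reduces to a simple threshold test. A minimal sketch on synthetic per-droplet background intensities (hypothetical values, not the imaging data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical per-droplet median mScarlet-I background intensities (a.u.)
bg_solo = rng.normal(100, 8, 2000)  # Pf-5 solo droplets: baseline background
bg_mix = rng.normal(100, 8, 2000)   # droplets containing both CHA0 and Pf-5
bg_mix[:10] += 60                   # a few droplets mimicking lysed Pf-5 cells

# Call lysis where the mix-droplet background exceeds baseline mean + 2.5 SD
threshold = bg_solo.mean() + 2.5 * bg_solo.std()
lysed_fraction = np.mean(bg_mix > threshold)
print(f"threshold = {threshold:.1f} a.u.; lysed fraction = {lysed_fraction:.3%}")
```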
Phenotypic variation in growth kinetics of founder cells determines colonization outcomes in microhabitats

Since all the co-culture outcomes under fragmented conditions showed substantial variability compared to well-mixed uniform bulk conditions, we asked whether this results from inherent founder cell phenotypic variability. To test this, we again focused on P. putida and P. veronii and a single competitive substrate, and measured growth in individual droplets over time. For this, we used microfluidic chips with a low ceiling (10 µm height) , so that droplets are squeezed, kept in place, and more cells are in perfect focus. Although the incubation in PDMS-glass results in slightly different oxygen provision to growing cells than culturing them in a pL-droplet emulsion, it enabled us to measure the variability of cell growth in individual droplets (Fig. ). Indeed, timelapse imaging confirmed different outcomes from the same starting configurations (e.g., one P. putida cell and one P. veronii , Fig. , at t = 0 h), and growth measurements of n = 191 individual droplets showed kinetic variability both in mono- and co-culture droplets (Fig. ). Average growth rates of P. putida in solo droplets were 1.2 times higher than in mixture, whereas those of P. veronii were indistinguishable between the two conditions (Fig. ). On average, P. veronii started dividing 4 h later than P. putida (Fig. ). For both strains, incidentally longer lag times also tended to decrease the final attained population size in co-culture droplets (Supplementary Fig. ). Paired growth trajectories were highly variable between droplets, even under the same starting cell census (Fig. and Supplementary Fig. ), whereas unequal starting cell ratios tended to favour either one of the strains (Fig. ). However, the growth rates and lag times of P. veronii were the only significant predictors of biomass ratio outcomes (generalized linear mixed effects model, r^2 = 0.8236, n = 108 co-culture droplet pairs, Fig. , Supplementary Table ), whereas founder cell numbers were less relevant. The variance in single-droplet growth rates and lag times tended to decrease with increasing starting cell numbers (Fig. ; Brown-Forsythe test: significant inequality of variances for P. putida but not for P. veronii ; see parameter distributions in Supplementary Fig. ), suggesting that the influence of individual cell heterogeneities becomes less determinant and yields more averaged behaviour. To better demonstrate the effect of single-cell growth variation on competitive outcomes in a two-species community within the fragmented habitat, we adapted an existing mathematical framework for simulating carbon-limited competitive Monod growth of P. putida and P. veronii founder cells within 35 pL-droplets (Fig. ). In this simulation, each founder cell starts with independent growth kinetic parameters, subsampled from inferred distributions around means measured in liquid mono-cultures (Fig. ). In addition, each droplet is colonized by a Poisson-drawn random number of founder cells. Simulations including both individual growth kinetic variability and Poisson-distributed initial cell ratios (λ = 3 for both P. putida and P. veronii ) produced growth distributions in co-culture droplets consistent with the observed distributions of P. putida and P. veronii co-culture droplet productivities (Fig. ). In contrast, either a constant starting cell number or homogeneous individual growth kinetics reduced the simulated fraction of droplets in which P. veronii reverses competition (Fig. ). Further simulations suggested that it is mostly an increased heterogeneity of lag times and growth rates among P. veronii founder cells that determines reversed competition outcomes with P. putida (Fig. ). This agrees with these parameters being the most important predictors in the generalized linear mixed model outcome (Fig. ). Increasing kinetic heterogeneity among P. putida founder cells is also predicted to positively impact the reversal of competition by P. veronii , suggesting that P. veronii benefits from the heterogeneity among its competitor cells (Fig. ).
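For intuition, a minimal Python sketch of this kind of droplet simulation (a simplified stand-in for the authors' framework, described in full in the Methods; all parameter values, including the carrying-capacity cap used here as a crude proxy for substrate exhaustion, are illustrative assumptions rather than fitted values):

import numpy as np

rng = np.random.default_rng(1)

def gamma_draw(mean, cv, n):
    # Gamma distribution parameterized by mean and coefficient of variation
    shape = 1.0 / cv**2
    return rng.gamma(shape, mean * cv**2, n)

def simulate_droplet(mu=(0.8, 0.6), lag=(1.0, 5.0), cv=(0.2, 0.4),
                     lam=3, K=1e4, p_dormant=0.15, t_end=48.0, dt=0.1):
    """One droplet: Poisson(lam) founder cells per species, each with its
    own growth rate and lag time; exponential growth capped by a shared
    carrying capacity K."""
    pops, rates, lags = [], [], []
    for s in (0, 1):
        n = rng.poisson(lam)
        r = gamma_draw(mu[s], cv[s], n)
        r[rng.random(n) < p_dormant] = 0.0     # incidental non-growing cells
        rates.append(r)
        lags.append(gamma_draw(lag[s], cv[s], n))
        pops.append(np.ones(n))                # each founder cell = 1 biomass unit
    t = 0.0
    while t < t_end and sum(p.sum() for p in pops) < K:
        for s in (0, 1):
            pops[s] = pops[s] * np.where(t >= lags[s],
                                         np.exp(rates[s] * dt), 1.0)
        t += dt
    return pops[0].sum(), pops[1].sum()

res = np.array([simulate_droplet() for _ in range(2000)])
print("fraction of droplets won by the slower species:",
      np.mean(res[:, 1] > res[:, 0]))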
Increased founder cell population sizes decrease competition reversal occurrences

We suspected that the pronounced effect of individual cell phenotypic variation on the outcomes of paired strain growth at small founder populations would diminish with larger numbers. Simulations of P. putida and P. veronii growth under succinate competition indicated that increasing the droplet volume and the starting cell numbers from 3 and 10 to 100 per droplet (but at the same starting cell density, and thus the same number of generations of growth) resulted in more homogeneous outcomes (Fig. ). This occurred irrespective of Poisson-sampled or constant starting cell numbers per droplet (Fig. ), and is mainly due to reduced variation at higher starting numbers (Fig. ). As a result, the proportion of P. veronii -dominated droplets decreased to nearly undetectable levels at a starting number of 100 cells of each strain (Fig. , inset). Seen across all droplets, this causes the relative abundance of P. veronii to diminish by 9% from 3 to 100 starting cells per strain per droplet (Fig. ; for calculations not including solo species droplets, see Supplementary Fig. ). Essentially the same effect is obtained when maintaining the same droplet volume (35 pL) but increasing starting cell densities (Fig. ). The difference here is that the final competitive relative abundance of P. veronii remains close to the initial species-mixing ratio of 50%. The reason for this is the reduced number of generations that cells can undergo in droplets of the same volume but at higher starting numbers (Fig. ). Experimental reproduction of increasing starting cell numbers corroborated these simulations (Supplementary Fig. ), resulting in reduced variation in droplet growth outcomes of competing P. putida and P. veronii (Fig. ), and a reduced proportion of P. veronii in the co-culture (Fig. , Supplementary Fig. ). Increasing cell densities in the same droplet volume then again increased the final ratio of P. veronii to P. putida , as expected from the simulations and the number of generations of growth in the populations (Fig. compared to the simulation of Fig. , Supplementary Fig. ). Collectively, these results and simulations underscore that heterogeneity in growth properties among founder cells can enhance the probability of variable outcomes between competing species inhabiting fragmented habitats, particularly at low founder population census. This may also imply that some bacterial species have been selected for more variable growth kinetics, which can favour their survival under substrate competition in microhabitats.
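A back-of-envelope Python sketch of the two averaging effects at play (our illustration; K and sigma are assumed values, not measurements): the population-mean growth rate of N0 founder cells has a standard error of roughly sigma/sqrt(N0), and a fixed-volume droplet allows only log2(K/N0) generations at higher starting numbers.

import numpy as np

K = 1e4          # assumed final cell number supported by one droplet
sigma = 0.25     # assumed SD of single-cell growth rates (per h)
for n0 in (3, 10, 100):
    print(f"N0 = {n0:>3}: {np.log2(K / n0):4.1f} generations, "
          f"SE of mean rate = {sigma / np.sqrt(n0):.3f} per h")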
Discussion

Here we demonstrate how phenotypic variability in single-cell growth kinetic parameters and Poisson variation in the assembly of strain pairs in micro-scale communities lead to ecological outcomes that differ substantially from those inferred from global-scale interaction types. We show, using three different strain pairs and four different imposed growth regimes and interactions, how growth in fragmented environments at low starting cell densities (1–3 cells per droplet per species) enables local overturning of interaction directionality, whereas this is not detectable in uniform cultures starting at large population census (ca. 10^6 cells). Simulations and experimental data further show that the effect of variation and overturning is most pronounced at starting numbers below 10 cells per droplet and species, above which it gradually diminishes (but does not disappear). This aspect of growth outcome variation and interaction reversal through habitat fragmentation has received little attention in previous studies describing paired species interactions. Even though the fraction of interaction reversals may seem relatively small, this effect can help to explain why less competitive species can locally persist in mixed microbial communities within fragmented environments . In all four tested interaction scenarios and strain pairs, regular uniform liquid suspended culturing with larger volumes (here: 140 µL) and high starting cell numbers (10^6 cells) confirmed the intended global interaction types (i.e., substrate competition, independence, and growth rate inhibition). In contrast, fragmented growth of the same paired cultures led to variation in growth outcomes and to interaction type reversal. For example, P. veronii grew to 4 times higher densities in fragmented co-culture droplets than under the same competition with P. putida in uniform liquid culture, and ca. 24% of mix droplets became dominated by P. veronii (Fig. ). Global substrate independence, in contrast, led to the opposite: the appearance of individual droplets with higher-than-expected growth of either of the partners (Fig. ). The global growth rate reduction of S. wittichii RW1 by Pseudomonas sp. Leaf15 was also reversed in 1–9% of isolated mix droplets, which showed higher than expected growth of the sensitive partner (Fig. ). Since we prepared uniform liquid and droplet experiments in parallel from the same co-culture suspensions with the same substrate concentrations, we neither expected a priori nor measured any difference in the extent of growth in bulk liquid (140 µL) and across all droplets (collapsed from the emulsion to a single suspension; Fig. ). We conclude, therefore, that the observed variation in droplet community outcomes is not the result of some underlying confounding factor in growth conditions or cell-cell distances between a large and a miniature culturing system. Our results hold not only for interactions mediated by metabolic products but also for interactions involving bacterial killing by specialized weapons such as tailocins. Fragmented growth in isolated droplets showed that tailocin production by P. protegens CHA0 can eradicate P. protegens Pf-5 in a small proportion of microhabitats, whereas this killing has no effect in uniform bulk liquid cultures (Fig. ). This small proportion is consistent with the observed heterogeneous activation of tailocin production at the population level, initiated in <1% of cells , .
In addition, tailocins, like other specialized bacterial killing devices such as the type VI secretion system, act very locally , which has raised the question of their ecological importance . Our droplet observations suggest that tailocins are particularly helpful in overturning competition in spatially restricted microhabitats. Within the context of the natural habitat of P. protegens (the plant rhizosphere), killing by tailocins could help to maintain local reservoirs of the producer strain, possibly increasing its survival and ability to colonize new habitats. Two processes seem most important for explaining the effects of habitat fragmentation on microbial (paired) community growth outcomes: phenotypic variation among founder cells, and stochastic sampling or dispersal effects on the formation of, and the resulting species ratios in, the starting microhabitats. At low starting cell numbers (1–3 per species per droplet), we find that phenotypic variation prevails over stochastic sampling effects (Figs. , ), whereas at higher cell numbers (above 10 per species per droplet) the effects of both phenotypic variation and stochastic sampling on growth outcomes diminish (Fig. ). Phenotypic variation is a well-known phenomenon in bacterial mono-cultures, caused by intrinsic and extrinsic molecular noise sources , that can affect different traits important for reproductive success, such as growth rate , lag phase , dormancy and antibiotic persistence , , or metabolic specialization . In some cases, for example under the influence of bistable genetic switches, phenotypic variation can lead to the formation of subpopulations of cells with clearly different traits (e.g., sporulation , conjugation , virulence ). Under conditions where new habitats are colonized by large founder populations, such trait variations will average out. But it is easily conceived how, with founder populations of only a few cells, the reproductive success of a species in the pristine (i.e., newly colonizable) microhabitat in the presence of others is determined by the individual variation in cellular viability and traits. Our results demonstrate how phenotypic variation among founder cells, including incidental cell dormancy, is propagated into different growth outcomes (Fig. ). In addition, variation in the species' starting cell numbers through the processes of mixing and dispersal into the new habitat will influence the probability of maintaining or averaging phenotypic variation in their starting populations, and thus determine their proliferation success in the microcommunity. This has important ecological consequences, as it may favour species coexistence. Despite this general conception, the question remains how relevant and representative microhabitats of 35–200 pL with starting communities of 1–3 cells per species per droplet are for microbial habitat fragmentation. An important premise for our work was the consideration that natural environments for microbial communities are characterized by a high degree of spatial fragmentation and/or compartmentalization. Secondly, we assumed that such fragmentation and compartmentalization occur at a relevant micro-scale, such that the formed microhabitats are indeed colonized by dispersal of low numbers of founder cells and species. There is plentiful evidence to support the assumption that local habitats for prokaryotic cells measure in micrometer dimensions with low population census , .
For example, an estimated 90% of microbial cell clusters in soils contain <100 cells ; plant surface architecture is characterized by micrometer crevices and microdroplets enabling microcolony formation , ; and sinking food particles in the ocean range in size from 1–50 µm with 10^2–10^3 cells . In addition, cell-cell interactions are assumed to dominate at short (10–100 µm) distances , , which is the cell-cell distance range attained within the confinement of single 40 µm droplets. Although the exact number of founder bacterial cells in pristine environmental and host habitats is unknown, it likely ranges anywhere between a few and millions of cells, depending on habitat and dispersal modes. For example, the most abundant microcluster sizes measured in soils (ca. 100 cells) are likely to have been seeded from fewer founder cells, and one can imagine how rainfall and subsequent drought cause re- and disconnection of soil pores, mixing small communities and enabling new local outgrowth. Studies of plant leaf colonization have shown that most aggregates in growth-favourable areas arise from single founder cells , which may be driven by the physics of microdrop formation resulting in solitary cells being enclosed . Confocal scanning images of plant roots grown in soils inoculated with fluorescently tagged bacteria also show both solitary cells and small aggregates, suggesting de novo microcluster formation starting from single founder cells . In contrast, colonization of the gastro-intestinal tract of humans and animals is unlikely to occur from solitary bacterial cells, but rather from thousands to millions of cells simultaneously ingested with food particles. More generally speaking, the effect of founder cell population sizes on the variability of growth and interaction outcomes may thus be largely habitat-driven, but many microbial habitats appear to be colonized by low numbers of founder cells. Here, phenotypic and stochastic variation become ecologically relevant processes. To measure the presumed effects of fragmentation on reproductive success, we relied on microfluidic pL-droplet formation and cell encapsulation. Droplet cultivation approaches have attracted interest as a high-throughput method to co-culture bacteria , , , , or to enrich bacteria from natural samples in an untargeted manner , , potentially allowing co-culturing of unculturable members in multi-species conditions . Notably, droplet encapsulation creates isolated habitats that only allow local resource depletion and the development of metabolic or contact-dependent interactions, but no cross-talk between droplets . Droplet encapsulation of species co-cultures also leads to the formation of empty and solo droplets (i.e., containing only a single species member of the co-culture), in proportions dependent on the inoculum density. Droplet culturing therefore not only enables observation of community interactions at the individual droplet level; extrapolation from the ensemble of all droplets also helps to understand dispersal and fragmentation effects at the level of the meta-community. Subjecting communities to alternating cycles of droplet growth, collapse and mixing, and reseeding could be an interesting approach to study longer-term ecological effects of microhabitat fragmentation.
Our findings help to explain why so many bacterial taxa with overlapping metabolic capacities but different growth rates can co-occur in the same macro-habitat, echoing similar conclusions from macro-ecology on spatial niche heterogeneities – . One can argue that slight differences in substrate utilization and metabolic dependencies may provide opportunities for co-existence . In addition, the spatial isolation of habitats and recurring processes of temporal mixing and dispersal can contribute to co-existence , . However, spatial fragmentation itself is not enough to maintain diversity if taxa do not show phenotypic variation, because, in the absence of cell-cell variability, interspecific interactions would become completely deterministic. We thus conclude that the importance of the micro-scale is not simply to provide spatial isolation, but to integrate variation in the dispersal processes that mix founder cells of different species at low numbers , and in the local interspecific interactions that emerge from individual cell phenotypes. In this light, it is reasonable to assume that there is selection on genotypes with wider phenotypic growth variation , as it could increase their chances of proliferating upon dispersal into mixed-species fragmented environments; a notion supported by our models (Fig. ). Spatial fragmentation (or perhaps rather: dynamic variation in spatial fragmentation) thus plays a crucial role in the types of local interactions and the resulting diversity of a complex meta-community , . Extrapolating downward from globally measured interactions to small scales does not do justice to the existing variability in such interactions and oversimplifies their role in community development.
Methods

Strains, media and culture conditions

Two Pseudomonas strains were used for the substrate interaction experiments: P. putida F1 (PPU) is a benzene-, ethylbenzene- and toluene- (BTEX) degrading bacterium from a polluted creek . P. putida F1 was tagged with a single-copy chromosomally inserted mini-Tn5 cassette carrying a constitutively expressed fusion of eGFP to the P circ promoter of ICE clc . As an isogenic control, we used a P. putida F1 tagged with a single-copy constitutively expressed mCherry from the tac promoter. P. veronii 1YdBTEX2 (PVE), a BTEX-degrading strain, was isolated from contaminated soil in the Czech Republic . PVE was tagged with a constitutively expressed mCherry from the P tac promoter within a single-copy inserted mini-Tn7 transposon (carrying a P tac – mCherry cassette, as described in ref. ). For the growth inhibition experiment, we used Pseudomonas sp. Leaf15 (L15), an antibiotic producer isolated from the Arabidopsis thaliana phyllosphere , and a fluorescently tagged version of Sphingomonas wittichii RW1 . L15 was tagged with a constitutively expressed mScarlet-I single gene copy, using a pMRE-Tn7-145 mScarlet-I plasmid . We used two rhizosphere-inhabiting strains of Pseudomonas protegens , CHA0 and Pf-5, for the tailocin interaction experiment. CHA0 was tagged with a constitutively expressed single inserted gene copy of gfp2 , and Pf-5 was tagged with a constitutively expressed mScarlet-I from a single-copy inserted mini-Tn7 transposon (using a pUC18T-mini-Tn7T-Gm-Pc-mScarletI plasmid). Strains were streaked on nutrient agar plates directly from a −80 °C stock and incubated for 2–3 days at 30 °C before being stored at 4 °C for the later experiments (max. 12 days). Each biological replicate was started from a single isolated colony of each strain, which was resuspended in a McCartney glass tube with 5 mL (or 8 mL for L15 and RW1) of 21 C minimal medium (21 C MM, Supplementary Table , as described by Gerhard et al. ) supplemented with the appropriate carbon substrate(s). Cultures were incubated at 30 °C under rotary shaking at 180 rpm. Precultures were centrifuged in 50-mL Falcon tubes to harvest the bacterial cells. The cells were washed twice in 5 mL of 21 C MM before being resuspended by pipetting in 5 mL of 21 C MM. P. putida and P. veronii cultures were centrifuged for 4 min at 12,000 rpm (Eppendorf centrifuge 5810 R with an F-34-6-38 rotor, 6 × 15/50 mL conical tubes), whereas the other four strains were centrifuged for 4 min at 8000 rpm. The S. wittichii suspension was additionally vortexed for 2 min at the final resuspension step to disperse cell aggregates as much as possible. The turbidity of the final cell suspension was measured with a spectrophotometer (MN Nanocolor Vis, OD600) and then diluted in 21 C MM with the appropriate C-source(s) to obtain approximately the same starting cell numbers (OD600 of 0.02–0.05, depending on the strain; see Supplementary Table ). For co-culture experiments, the diluted cell suspensions of the respective partner strains (Supplementary Table ) were mixed at a 1:1 ratio (vol/vol). Mono-culture controls were diluted two times with 21 C MM including the appropriate C-source(s), thus maintaining the same starting density for each strain in mono- and co-culture.
Uniform liquid suspended growth in 96-well plates

Aliquots of 140 µL of the freshly prepared mono- and co-culture cell suspensions were distributed in the wells of a 96-well plate (CytoOne, tissue-culture treated, catalogue no. CC7682-7596), in six to seven technical replicates. Six to seven wells with the same sterile medium were incubated as controls for sterility. The plate was then incubated at 30 °C in a plate reader (BioTek Synergy H1) for up to 48 h. Plates were continuously shaken (double orbital, 282 cpm, slow orbital speed). Absorbance (OD600) and fluorescence (GFP: 480/510 nm Ex/Em; mCherry: 580/610 nm Ex/Em) were measured every 30 min in each cultivation well. After the incubation, the plates were placed on ice and sampled for flow-cytometry counting (see below).

Microfluidic encapsulation of cells in droplets and culture procedure

The same prepared mono- and co-culture cell suspensions as for the 96-well plate reader experiments were also used for microfluidic encapsulation (Fig. ). This suspension contained on average 1.8 × 10^7 cells per mL, resulting in 0–3 founder cells at start within a 35 pL droplet volume (examples of starting cell distributions presented in Supplementary Fig. ). An aliquot of 500 µL of diluted cell suspension (mono- or co-culture) was taken up in a 1 mL syringe (Omnifix 1 mL, U-100 Insulin), and 1 mL of HFE 7500 Novec fluorinated oil containing 2% (w/w) of fluorosurfactant (RAN Biotechnologies, Inc.) was loaded in another one. The oil-dissolved surfactant stabilizes the formed aqueous droplets and prevents them from coalescing. Syringes with the aqueous cell suspensions and with the oil were mounted on two separate syringe pumps (Harvard Apparatus, Pump 11 Elite / Pico Plus Elite OEM Syringe Pump Modules) to inject the liquids into a droplet maker microfluidic chip at flow rates of 8 µL min^-1 and 20 µL min^-1, respectively. The droplet maker chip (custom-produced by Wunderlichip GmbH, CH-8037 Zürich, Switzerland) has a 40 × 40 × 40 µm junction (see Supplementary Fig. ), generating monodisperse droplets with a diameter of ca. 40 µm. Formed droplets were collected for 10 min in a 1.5 mL Eppendorf tube, corresponding to a total volume of 80 µL of droplets. The Eppendorf tube was prefilled with 250 µL phosphate-buffered saline (PBS) to prevent the droplets from evaporating. Eppendorf tubes with the droplets were kept on ice until being incubated at 30 °C, to maintain the starting cell concentrations during the collection of the droplets from the different conditions tested in parallel (mono- and co-cultures). After a first timepoint imaging ( t = 0 h), droplets were incubated at 30 °C and sampled at different intervals (Supplementary Table ). After the final incubation time, the vials with the droplets were placed on ice before coalescing all droplets into a single suspension and counting the resulting cell numbers by flow cytometry (see below).
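As a consistency check in Python (our own sketch; the cell density and droplet diameter are the values given above, while the calculation itself is not from the paper's code), the expected founder-cell numbers per droplet follow Poisson statistics for random encapsulation:

import math

diameter_um = 40.0
droplet_vol_pl = (math.pi / 6) * diameter_um**3 * 1e-3   # um^3 -> pL, ca. 33.5 pL
density_per_ml = 1.8e7                                    # cells per mL (from text)
lam = density_per_ml * droplet_vol_pl * 1e-9              # mean cells per droplet

for k in range(4):
    p = math.exp(-lam) * lam**k / math.factorial(k)       # Poisson pmf
    print(f"P({k} founder cells per droplet) = {p:.2f}")
# lam is ca. 0.6, i.e., mostly 0-3 cells per droplet, as stated above.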
Testing effects of different founder cell population sizes

To specifically test the effect of increasing founder cell population sizes on competitive outcomes, we again used P. veronii and P. putida with 10 mM succinate under substrate competition, but increased the junction of the microfluidic device from 40 × 40 × 40 µm to 50 × 50 × 50 µm and adjusted the oil-surfactant flow rate to 18 µL min^-1, to generate droplets of ca. 80 µm diameter (ca. 268 pL volume). At preculture densities (OD600) of 0.01 for P. putida and 0.02 for P. veronii , this yielded 2–6 cells of each species in 80 µm droplets (measured distributions in Supplementary Fig. ). By doubling the preculture densities, we obtained on average 3–8 cells per species in 80 µm droplets (Supplementary Fig. ). Droplet emulsions were then incubated and sampled as before.

Droplet sampling

Droplet cultures were sampled at the start of the incubation, and after 17, 24 or 48 h (depending on the condition, Supplementary Table ). An aliquot of 1.5 µL was retrieved from the droplet emulsion and transferred by micro-pipette into a 5 µL HFE 7500 oil layer inside a chamber observation slide (Countess chamber slide, Invitrogen C10228). Another volume of 5 µL of oil was then added to the chamber to disperse the droplets in a monolayer. Droplets were imaged at 3–5 random individual positions with a Leica DMi4000 inverted epifluorescence microscope ( P. putida – P. veronii experiments) or a Nikon Ti2000 inverted epifluorescence microscope (for the two other paired-strain experiments), a Flash4 Hamamatsu camera and a 20x objective (Leica HI PLAN I 20x/0.30 PH1 with P. putida – P. veronii , or a Nikon CFI S Plan Fluor ELWD 20XC MRH08230 with the four other strains), in bright field (exposure time = 25 ms), red (exposure time = 400 ms) and green fluorescence (exposure time = 600 ms for P. putida , and 400 ms for the other strains). Images were collected as 16-bit .TIF files and further analyzed with a custom-made MATLAB script (vs. 2021b) to segment droplets and cells in droplets.

Timelapse imaging of cell growth in droplets

In select droplet experiments (Supplementary Table ), we followed the growth of cells in individual droplets over time by timelapse microscopy in an observation chip (Fig. ; chip design adopted from ref. , custom-produced by Wunderlichip GmbH). This polydimethylsiloxane (PDMS) print was directly mounted on a 1-well chambered coverglass (Nunc™ Lab-Tek™ II Chambered Coverglass, Thermo Fisher, cat. number 155360PK), so that the chip could be immersed during observation. Before loading, the glass-bonded chips were placed in a vacuum chamber for 20 min to extract any gas contained in the PDMS, thus preventing the appearance of air bubbles during the incubation (we acknowledge that this potentially reduces the level of oxygen available to the cells). The chip was then filled and immersed in deionized filtered (0.22 µm) water overnight. One hour before loading the droplets, the chip flow lines were emptied of the water and refilled with HFE 7500 oil, and the immersion chamber of the chip was filled with 1 mL of HFE 7500 oil, on top of which was placed 4.5 mL of deionized water to limit oil and droplet evaporation during the incubation and imaging. Cell suspensions were encapsulated into droplets following the same procedure as explained above, but now the production chip outlet was directly connected by Teflon tubing to the inlet of the (immersed) observation chip. An aliquot of 20 µL of HFE 7500 oil was pipetted inside the observation chip inlet (using P20 tips) to allow good separation of the incoming droplets (this was done under live observation of the observation chip with an inverted microscope, to verify droplet separation). Droplets accidentally leaking into the chamber were removed by pipetting. Finally, aliquots of 30 µL of HFE 7500 oil were pipetted onto the two outlets of the observation chip, to prevent water from entering the chip during incubation and imaging. The immersion chamber was then closed and sealed with parafilm.
The height of the chamber in the observation chip is 10 µm, which squeezes the droplets so that they fall almost completely within the focal depth range of the 20x objective. The chip was mounted on a Nikon Ti2000 inverted epifluorescence microscope with a programmable stage and was imaged every 10 min in the three channels (bright field, GFP and mCherry) as before, at the same individual positions set with the imaging control (Micromanager software 1.4.23). Images were exported as 16-bit .TIF files.

Image analysis

TIF images were processed in a custom-made MATLAB script (see Code availability), which segments all droplets per image and all fluorescent objects per droplet. The script then calculates the summed area of all fluorescent objects per droplet (in pixels), which is multiplied by their mean fluorescence intensity to obtain a total fluorescent signal (area × fluorescence, or AF; see Fig. ). We use the AF value per droplet as a proxy for the biomass production of the strain identified by its specific fluorescence (Supplementary Table ), under the assumption that the more cells there are in a droplet, the higher their fluorescent signal will be (Fig. ). We prefer using AF values as biomass proxies instead of inferring per-droplet cell counts from AF values or directly counting cell objects, because of the potential variation in per-cell and growth-phase-dependent fluorescence expression, fluorescence distortion from cells out of the focal plane, aggregation of cells into clumps, and cell movement during the exposure time (e.g., Fig. , Supplementary Movie and ). Depending on the scale of fluorescence intensities displayed by the cell-strain pairs, the raw fluorescent signals were log10-, square-root- or median-transformed for display of potential subpopulations. In case of median transformation, we use the mean of the median AF values from the corresponding mono-culture controls ( n = 3 replicates) at the last time point (T24 or T48). The distributions of AF signals are then analysed across all droplets, and across independent biological replicates. We consider droplets of co-culture experiments to be solo if only one of the fluorescence channels is detected, and otherwise a mix droplet (carrying cells from both encapsulated species). However, we made no a priori assumptions as to whether a founder cell is dormant or non-growing, which might be inferred from comparing T0 with T24 or T48 AF values (Supplementary Fig. ). For timelapse experiments, images were segmented and processed in the same way as above, with the difference that a customized rolling-ball algorithm was applied to compensate, during image segmentation, for fluorescence variations among cells and for bleaching of the signal over time. Additionally, droplets were tracked between time frames by comparing the centroid distances of every droplet between frame t and the next frame t + 1. Droplets with minimum centroid distances were assumed to be the same in frame t + 1 as in frame t . Tracking of individual droplets was then manually controlled and corrected with the help of generated movies displaying the tracking ID attributed to each droplet over time. In this way, biomass development can be plotted per droplet over time, and the variation among droplets can be quantified.
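An illustrative Python re-implementation of the AF metric defined above (the actual analysis used the custom MATLAB script; the simple intensity threshold below is our assumption):

import numpy as np

def droplet_af(fluo_img, droplet_mask, thresh):
    """AF = summed segmented object area (pixels) x mean fluorescence of
    those objects, for one droplet. fluo_img: 2D image; droplet_mask:
    boolean mask of the droplet; thresh: cell/background intensity cutoff."""
    cells = (fluo_img > thresh) & droplet_mask
    area = int(cells.sum())                      # total fluorescent-object area
    if area == 0:
        return 0.0                               # no detectable cells
    return area * float(fluo_img[cells].mean())  # area x fluorescence (AF)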
Fusing droplet emulsions for flow cytometry cell counting

Droplets from a single Eppendorf emulsion experiment were fused to produce a single aqueous phase, in which the total cell amount could be counted by flow cytometry. First, the extra HFE oil that settled below the droplet emulsion was removed by pipetting. To the remaining PBS and droplet emulsion layer, an approximately equivalent volume of HFE oil containing 1H,1H,2H,2H-perfluoro-1-octanol (5 g solution, Sigma-Aldrich; further diluted 4 times in HFE 7500 oil) was added. This breaks the emulsion and fuses the droplets into a single aqueous phase. The resulting droplet-cell-PBS aqueous phase was transferred into a new Eppendorf vial, and its volume was measured directly from the micro-pipette.

Flow cytometry counting of cell population sizes

Cell numbers in liquid suspensions from fused droplet emulsions, mixed liquid suspended cultures in 96-well plates, or precultures were quantified by flow cytometry. Liquid cell suspensions were tenfold serially diluted in PBS (down to 10^-3) and fixed by adding NaN3 solution to a final concentration of 4 g L^-1, followed by incubation for at most 1 day at 4 °C until flow-cytometry processing. Volumes of 20 µL of fixed samples were aspirated in a Novocyte flow cytometer (Bucher Biotec, ACEA Biosciences Inc.) at 14 µL min^-1. Events were collected above general thresholds of FSC = 500 and SSC = 150 to distinguish cells from particle noise, and gates were defined to selectively identify the strains from their fluorescent markers (Supplementary Table ; see gating example in Supplementary Fig. ). The Novocyte gives direct volumetric counts, which were corrected for the dilution. To convert cell counts from droplet suspensions to equivalent cell concentrations per mL, we considered the proportion of empty droplets from imaging and the extra volume of 250 µL of PBS added before droplet collection, as follows:
$$\frac{Cells}{mL}=\frac{Events}{20\;\mu L}\times 2000\times 10^{Dilution\;factor}\times \frac{Droplet\;vol+PBS\;vol}{Droplet\;vol}\times \frac{1}{Fraction\;of\;non\text{-}empty\;droplets}\qquad (1)$$

The multiplication by 2000 includes the 2-fold dilution when fixing the cells in the sample with NaN3 solution, and the conversion to a per-mL concentration.
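Eq. (1) transcribes directly into a small helper function (a sketch; the variable names are ours, and the example fraction of non-empty droplets is illustrative):

def cells_per_ml(events_per_20ul, dilution_factor, droplet_vol_ul,
                 frac_nonempty, pbs_vol_ul=250.0):
    """The factor 2000 is the combined NaN3 fixation-dilution and
    20-uL-to-mL conversion factor given in the text."""
    conc = events_per_20ul * 2000 * 10**dilution_factor
    conc *= (droplet_vol_ul + pbs_vol_ul) / droplet_vol_ul  # PBS added at collection
    return conc / frac_nonempty          # restrict to colonized droplets

# e.g., cells_per_ml(1500, 2, 80.0, frac_nonempty=0.75)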
Calculation of maximum specific growth rates, lag times and time to first population doubling

Average growth rates and lag times of strains in suspended liquid culture were inferred from the ln-transformed strain-specific fluorescence increase in mono-cultures grown in 21 C MM with their specific carbon substrate (as described above), each in 6–7 replicates. To average, we calculated the slope over a sliding window of five consecutive timepoints during the first 10 h, retained only slopes with a regression coefficient > 0.97, and reported the mean of those slopes as the maximum specific growth rate. Lag times were fitted from the complete (fluorescence) growth curve using a logistic function, and converted to the time to first population doubling as the sum of the lag time (in h) plus the inverse of the logarithmic fitting constant multiplied by ln(2). In the absence of a lag time, the time to first population doubling is the inverse of the maximum specific Monod growth rate. To calculate growth rates from fluorescence in single droplets, we deployed a manual interactive plot of ln-transformed values of the summed fluorescence signal (the product of segmented area and the average strain-specific fluorescence in that area) over time, identifying the start and end of the ln-linear range; the lag time was taken as the time between the start of the imaging series and the start of the ln-linear range. The maximum specific growth rate in the droplet was then taken as the slope over the entire identified ln-linear range. Since we did not segment individual cells, the summed fluorescence signal per droplet is a proxy for their biomass, and we report a Monod-type maximum specific growth rate.

Mathematical model for population growth in droplets

We adapted a previously developed mathematical framework to simulate the growth of P. putida and P. veronii populations in 35 pL droplets with nutrients (10 mM succinate). The initial resource concentration ($R_0$) is homogeneously distributed among all droplets and cannot diffuse between droplets. The chemical reactions inside each droplet are similar to the bulk population model in ref. ; however, each founder cell follows its own differential growth equation, including possible kinetic variation. Growth of each founder cell i in droplet j thus follows the general reactions:

$$S_1+R\;\xrightarrow{\kappa_{1_1}}\;P_1\;\xrightarrow{\kappa_{1_2}}\;2S_1,\qquad P_1\;\xrightarrow{\kappa_{1_3}}\;S_1+W_1\qquad (2)$$

$$S_2+R\;\xrightarrow{\kappa_{2_1}}\;P_2\;\xrightarrow{\kappa_{2_2}}\;2S_2,\qquad P_2\;\xrightarrow{\kappa_{2_3}}\;S_2+W_2$$

where S is the bacterial species, R represents the resource, P is the cell-resource intermediate state and W any non-used metabolic side products. For simplicity, we did not consider cross-feeding effects. Each founder cell has its own lag time $L_{S_i}^{(k)}$, where $i\in\{1,2\}$ is the species index and $k\in\mathbb{N}$ the founder cell index. Therefore, the differential equations for species $S_1$ or $S_2$ present in droplet $j$ are:

$$\frac{dS_1(t)}{dt}=\sum_{i=1}^{N_{S_1}^{(j)}}\mathbb{1}_{\{t\ge L_{S_1}^{(i)}\}}\left(-\kappa_{1_1}^{(i)}S_1^{(i)}(t)R(t)+\left(2\kappa_{1_2}^{(i)}+\kappa_{1_3}^{(i)}\right)P_1^{(i)}(t)\right)\qquad (3)$$

$$\frac{dS_2(t)}{dt}=\sum_{i=1}^{N_{S_2}^{(j)}}\mathbb{1}_{\{t\ge L_{S_2}^{(i)}\}}\left(-\kappa_{2_1}^{(i)}S_2^{(i)}(t)R(t)+\left(2\kappa_{2_2}^{(i)}+\kappa_{2_3}^{(i)}\right)P_2^{(i)}(t)\right)$$

$$\frac{dP_1(t)}{dt}=\sum_{i=1}^{N_{S_1}^{(j)}}\mathbb{1}_{\{t\ge L_{S_1}^{(i)}\}}\left(\kappa_{1_1}^{(i)}S_1^{(i)}(t)R(t)-\left(\kappa_{1_2}^{(i)}+\kappa_{1_3}^{(i)}\right)P_1^{(i)}(t)\right)$$

$$\frac{dP_2(t)}{dt}=\sum_{i=1}^{N_{S_2}^{(j)}}\mathbb{1}_{\{t\ge L_{S_2}^{(i)}\}}\left(\kappa_{2_1}^{(i)}S_2^{(i)}(t)R(t)-\left(\kappa_{2_2}^{(i)}+\kappa_{2_3}^{(i)}\right)P_2^{(i)}(t)\right)$$

$$\frac{dW_1(t)}{dt}=\sum_{i=1}^{N_{S_1}^{(j)}}\mathbb{1}_{\{t\ge L_{S_1}^{(i)}\}}\kappa_{1_3}P_1^{(i)}(t)$$

$$\frac{dW_2(t)}{dt}=\sum_{i=1}^{N_{S_2}^{(j)}}\mathbb{1}_{\{t\ge L_{S_2}^{(i)}\}}\kappa_{2_3}P_2^{(i)}(t)$$

$$\frac{dR(t)}{dt}=-\sum_{i=1}^{N_{S_1}^{(j)}}\mathbb{1}_{\{t\ge L_{S_1}^{(i)}\}}\kappa_{1_1}S_1^{(i)}(t)R(t)-\sum_{i=1}^{N_{S_2}^{(j)}}\mathbb{1}_{\{t\ge L_{S_2}^{(i)}\}}\kappa_{2_1}S_2^{(i)}(t)R(t)$$

where $N_{S_i}^{(j)}\in\mathbb{N}$ is the number of initial cells of species $i\in\{1,2\}$ in droplet $j$. The starting number of cells of both species per droplet was drawn from a Poisson distribution with an average of 3 cells per species. Individual growth rate and lag time parameters were sampled from a generated Gamma distribution of P. putida and P. veronii growth parameters, inferred from mono-culture OD600 curves with a Monte Carlo Metropolis-Hastings algorithm centred on the mean (method as described in ref. ), and with variance deduced from the individual timelapse growth measurements in droplets with single founder cells of P. putida and/or P. veronii (Supplementary Fig. ). We also included a 15% chance for a cell to have a zero growth rate, to account for growth-impaired cells that we observed from droplet imaging (Supplementary Fig. ). Varying the heterogeneity in growth properties among founder cells (Fig. ) thus consisted of increasing or decreasing the initial variance of the parameter gamma distributions.

Statistical analysis and reproducibility

All experiments were carried out in biological triplicates (quadruplicate incubations for the substrate independence scenario). For each biological replicate, liquid-suspended cultures comprised 6–7 cultivation wells as technical replicates. Biological replicates of fragmented droplet cultures comprised one separate emulsion incubation each, except in one of the replicates of the substrate competition experiment, for which a triplicate emulsion was generated to assess and show the technical reproducibility of droplet cultivation experiments (Supplementary Fig. ). Each emulsion sample was then imaged at 5–20 positions (technical replicates), to obtain 100–1000 droplets per mono- or co-culture and treatment. Flow cytometry counts (Figs. , and ) show the means of all technical replicates within each biological replicate. Each suspension from a cultivation well or fused droplet emulsion was counted three times by the flow cytometer, from which the mean was taken. t-tests were conducted to compare mean cell counts in flow cytometry. Normality of the data was verified with a Shapiro-Wilk test, and variance homogeneity was verified with a Fisher test. Median, top-10 or low-10 percentile productivities of each species in mono- vs. mix droplets were compared using Wilcoxon rank-sum or sign-rank tests (when taken across multiple time points). Depending on the data, we tested against a null hypothesis of sample means or ranked values being equal, or being higher or lower (i.e., a left or right tail). Tests were implemented in R (within RStudio version 2022.07.01) or in MATLAB (MathWorks, Inc., version R2021b). To deduce strain interactions, we compared observed mixed droplet growth with the expected mixed growth from a null model based on probability distributions generated from the corresponding mono-culture droplet growth (i.e., assuming no interactions). The model uses the probability distributions for productivities of each of the strains in pairs at each sampled time point, simulated five times for the same number of pairs as the number of observed droplets. Expected and observed paired droplets were then counted in a productivity grid (e.g., as in Fig. ), and summed fractions across relevant grid regions (e.g., >1.5 times the median) were compared across replicates (typically, three biological replicates and five simulation replicates).
P -values are derived from ANOVA comparison including all fractions, followed by a post-hoc multiple test (Fig. ), or by Wilcoxon sign-rank test in case of comparing multiple time points (e.g., Fig. ), as implemented in MATLAB. To estimate the proportion of mixed droplets in which P. protegens Pf-5 might have been killed (lysed) by CHA0 tailocins, we deployed variations in the specific median fluorescence background originating from Pf-5. We first calculated the standard deviation in Pf-5 background fluorescence from Pf-5 solo droplets, corrected for the Pf-5 biomass (i.e., segmented area), which was multiplied by 2.5 as a boundary for the outlier range. This outlier range definition was then imposed on the Pf-5 specific fluorescence in mix droplets with CHA0 (and in CHA0 droplets where no Pf-5 area can be distinguished, assuming they may all be lysed). Outlier fractions were corrected for the total observed droplets and compared to the outlier fractions observed for Pf-5 solo droplets (i.e., as in Fig. e, f), using one- or two-tailed two-sample t- testing of replicate values. The effect of founder cell census on the variance of growth kinetic parameters in strain-paired droplets was examined using a generalized linear mixed effect model ( glme , as implemented in MATLAB 2021a), using measured maximum specific growth rates, lag times and starting cell ratios as variables. Droplets with quasi-null growth rates (which were also characterized by a lag time above 20 h) were removed for the analysis. The relationship of individual growth rate and lag time variance as a function of founder cells was further explored using a Brown-Forsythe test implemented in R , which tests the homogeneity of variances between groups without assuming the normality of the data. Reporting summary Further information on research design is available in the linked to this article.
Two Pseudomonas strains were used for the substrate interaction experiments: P. putida F1 (PPU) is a benzene-, ethylbenzene- and toluene- (BTEX) degrading bacterium from a polluted creek. P. putida F1 was tagged with a single-copy chromosomally inserted mini-Tn5 cassette carrying a constitutively expressed fusion of eGFP to the Pcirc promoter of ICEclc. As an isogenic control, we used a P. putida F1 tagged with a single-copy constitutively expressed mCherry from the tac promoter. P. veronii 1YdBTEX2 (PVE), a BTEX-degrading strain, was isolated from contaminated soil in the Czech Republic. PVE was tagged with a constitutively expressed mCherry from the Ptac promoter within a single-copy inserted mini-Tn7 transposon (carrying a Ptac–mCherry cassette, as described in ref. ). For the growth inhibition experiment, we used Pseudomonas sp. Leaf15 (L15), an antibiotic producer isolated from the Arabidopsis thaliana phyllosphere, and a fluorescently tagged version of Sphingomonas wittichii RW1. L15 was tagged with a constitutively expressed mScarlet-I single-gene copy, using a pMRE-Tn7-145 mScarlet-I plasmid. We used two rhizosphere-inhabiting strains of Pseudomonas protegens, CHA0 and Pf-5, for the tailocin interaction experiment. CHA0 was tagged with a constitutively expressed single inserted gene copy of gfp2, and Pf-5 was tagged with a constitutively expressed mScarlet-I from a single-copy inserted mini-Tn7 transposon (using a pUC18T-mini-Tn7T-Gm-Pc-mScarletI plasmid). Strains were streaked on a nutrient agar plate directly from a −80 °C stock and were incubated for 2–3 days at 30 °C before being stored at 4 °C for the later experiments (for a maximum of 12 days). Each biological replicate was started from a single isolated colony of each strain, which was resuspended in a McCartney glass tube with 5 mL (or 8 mL for L15 and RW1) of 21 C minimal medium (21 C MM, Supplementary Table , as described by Gerhard et al. ) supplemented with the appropriate carbon substrate(s). Cultures were incubated at 30 °C under rotary shaking at 180 rpm. Precultures were centrifuged in 50-mL Falcon tubes to harvest the bacterial cells. The cells were washed twice in 5 mL of 21 C MM before being resuspended by pipetting in 5 mL of 21 C MM. P. putida and P. veronii cultures were centrifuged for 4 min at 12,000 rpm (Eppendorf centrifuge 5810 R with an F-34-6-38 rotor, 6 × 15/50 mL conical tubes), whereas the other four strains were centrifuged for 4 min at 8000 rpm. The S. wittichii suspension was additionally vortexed for 2 min at the final resuspension step to disperse cell aggregates as much as possible. The turbidity of the final cell suspension was measured with a spectrophotometer (MN Nanocolor Vis, OD600) and then diluted in 21 C MM with the appropriate C-source(s) to obtain approximately the same starting cell numbers (OD600 of 0.02–0.05, depending on the strain; see Supplementary Table ). For co-culture experiments, the diluted cell suspensions of the respective partner strains (Supplementary Table ) were mixed at a 1:1 (vol/vol) ratio. Mono-culture controls were diluted two times with 21 C MM including the appropriate C-source(s), thus maintaining the same starting density for each strain in mono- and co-culture.
Aliquots of 140 µL of the freshly prepared mono- and co-culture cell suspensions were distributed into the wells of a 96-well plate (CytoOne, tissue-culture treated, Catalogue No. CC7682-7596), in six to seven technical replicates. Six to seven wells with the same sterile medium were incubated as sterility controls. The plate was then incubated at 30 °C in a plate reader (BioTek Synergy H1) for up to 48 h. Plates were continuously shaken (double orbital, 282 cpm, slow orbital speed). Absorbance (OD600) and fluorescence (GFP: 480/510 nm Ex/Em; mCherry: 580/610 nm Ex/Em) were measured every 30 min in each cultivation well. After the incubation, the plates were placed on ice and sampled for flow-cytometry counting (see below).
The same prepared mono- and co-culture cell suspensions as for 96-well plate reader experiments were also used for microfluidic encapsulation (Fig. ). This suspension contained on average 1.8 × 10 7 cells per mL, resulting in 0–3 founder cells at start within a 35 pL-droplet volume (examples of starting cell distributions presented in Supplementary Fig. ). An aliquot of 500 µL of diluted cell suspension (mono- or co-culture) was taken up in a 1 mL syringe (Omnifix 1 mL, U-100 Insulin), and 1 mL of HFE 7500 Novec fluorinated oil containing 2% (w/w) of fluorosurfactant (RAN Biotechnologies, Inc.) was loaded in another one. Oil-dissolved surfactant stabilizes formed aqueous droplets and prevents them from coalescing. Syringes with the aqueous cell suspensions and with the oil were mounted on two separate syringe pumps (Harvard Apparatus, Pump 11 Elite / Pico Plus Elite OEM Syringe Pump Modules) to inject the liquids into a droplet maker microfluidic chip at flow rates of 8 µL min –1 and 20 µL min –1 , respectively. The droplet maker chip (custom-produced by Wunderlichip GmbH, CH-8037 Zürich, Switzerland) has a 40 × 40 × 40 µm junction (see Supplementary Fig. ), generating monodispersed droplets with a diameter of ca. 40 µm. Formed droplets were collected for 10 min in a 1.5 mL Eppendorf tube, corresponding to a total volume of 80 µL of droplets. The Eppendorf tube was prefilled with 250 µL phosphate-buffered saline (PBS) to prevent the droplets from evaporating. Eppendorf tubes with the droplets were kept on ice until being incubated at 30 °C to maintain the starting cell concentrations during the collection of the droplets from the different conditions tested in parallel (mono- and co-cultures). After a first timepoint imaging ( t = 0 h), droplets were incubated at 30 °C and sampled at different intervals (Supplementary Table ). After the final incubation time, the vials with the droplets were placed on ice before coalescing all droplets into a single suspension and counting the resulting cell numbers by flow cytometry (see below).
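As a back-of-the-envelope check of the encapsulation throughput described above, the collected emulsion volume and approximate droplet number follow directly from the aqueous flow rate, the collection time and the ~35 pL droplet volume. A minimal Python sketch (our own illustration, not part of the original analysis pipeline):

```python
# Droplet production figures implied by the quoted settings: 8 µL/min aqueous
# flow, 10 min collection, ~35 pL per droplet (all values from the text).
aqueous_flow_ul_per_min = 8.0
collection_time_min = 10.0
droplet_volume_pl = 35.0

collected_volume_ul = aqueous_flow_ul_per_min * collection_time_min  # 80 µL
n_droplets = collected_volume_ul * 1e6 / droplet_volume_pl           # 1 µL = 1e6 pL
generation_hz = (aqueous_flow_ul_per_min / 60.0) * 1e6 / droplet_volume_pl

print(f"collected emulsion volume: {collected_volume_ul:.0f} µL")
print(f"droplets per emulsion:     {n_droplets:.2e}")                # ~2.3 million
print(f"generation frequency:      {generation_hz / 1e3:.1f} kHz")   # ~3.8 kHz
```

This reproduces the 80 µL emulsion volume stated above and implies on the order of 2 × 10^6 droplets per collection.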
To specifically test the effect of increasing founder cell population sizes on competitive outcomes, we again used P. veronii and P. putida with 10 mM succinate under substrate competition, but increased the junction dimensions in the microfluidic device from 40 × 40 × 40 µm to 50 × 50 × 50 µm and adjusted the oil-surfactant flow rate to 18 µL min–1, to generate droplets of ca. 80 µm diameter (ca. 268 pL volume). At preculture densities (OD600) of 0.01 for P. putida and 0.02 for P. veronii, this yielded 2–6 cells of each species in 80 µm droplets (measured distributions in Supplementary Fig. ). By doubling the preculture densities, we obtained on average 3–8 cells per species in 80 µm droplets (Supplementary Fig. ). Droplet emulsions were then incubated and sampled as before.
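The quoted founder-cell numbers can be rationalized from Poisson loading statistics: the expected number of cells per droplet is the cell concentration multiplied by the droplet volume. The following Python sketch (illustrative; the paper reports the measured distributions in the Supplementary Figures) computes droplet volumes from the diameters and the occupancy distribution for the 40 µm droplets at 1.8 × 10^7 cells per mL:

```python
import math
from scipy.stats import poisson

def droplet_volume_pl(diameter_um):
    """Spherical droplet volume in pL (1 pL = 1e3 µm^3)."""
    r = diameter_um / 2.0
    return (4.0 / 3.0) * math.pi * r**3 / 1e3

v40 = droplet_volume_pl(40)   # ~33.5 pL, close to the quoted ~35 pL
v80 = droplet_volume_pl(80)   # ~268 pL, matching the quoted value
print(f"40 µm: {v40:.1f} pL, 80 µm: {v80:.0f} pL")

lam = 1.8e7 * v40 * 1e-9      # expected founder cells per droplet (1 pL = 1e-9 mL)
for k in range(5):
    print(f"P({k} founder cells) = {poisson.pmf(k, lam):.3f}")
```

With λ ≈ 0.6, droplets carry mostly 0–3 founder cells, consistent with the distributions described above.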
Droplet cultures were sampled at the start of the incubation and after 17, 24 or 48 h (depending on the condition, Supplementary Table ). An aliquot of 1.5 µL was retrieved from the droplet emulsion and transferred by micro-pipette into a 5-µL HFE 7500 oil layer inside a chamber observation slide (Countess chamber slide, Invitrogen C10228). Then, another volume of 5 µL of oil was added to the chamber to disperse the droplets in a monolayer. Droplets were imaged at 3–5 random individual positions with a Leica DMi4000 inverted epifluorescence microscope (P. putida–P. veronii experiments) or a Nikon Ti2000 inverted epifluorescence microscope (for the two other paired-strain experiments), a Flash4 Hamamatsu camera, and a 20× objective (Leica HI PLAN I 20×/0.30 PH1 for P. putida–P. veronii, or Nikon CFI S Plan Fluor ELWD 20XC MRH08230 for the four other strains), in bright field (exposure time = 25 ms), red (exposure time = 400 ms) and green fluorescence (exposure time = 600 ms for P. putida, and 400 ms for the other strains). Images were collected as 16-bit .TIF files and further analyzed with a custom-made MATLAB script (v. 2021b) to segment droplets and cells in droplets.
In select droplet experiments (Supplementary Table ), we followed the growth of cells in individual droplets over time by timelapse microscopy in an observation chip (Fig. ; chip design adapted from ref. , custom-produced by Wunderlichip GmbH). This polydimethylsiloxane (PDMS) print was directly mounted on a 1-well chambered coverglass (Nunc™ Lab-Tek™ II Chambered Coverglass, Thermo Fisher, Cat. number 155360PK), to be able to immerse the chip during the observation. Before loading, the glass-bonded chips were placed in a vacuum chamber for 20 min to extract any gas contained in the PDMS, thus preventing the appearance of air bubbles during the incubation (we acknowledge that this potentially reduces the level of oxygen available to the cells). The chip was then filled and immersed in deionized filtered (0.22 µm) water overnight. One hour before loading the droplets, the chip flow lines were emptied of water and refilled with HFE 7500 oil, and the immersion chamber of the chip was filled with 1 mL of HFE 7500 oil, on top of which was placed 4.5 mL of deionized water, to limit oil and droplet evaporation during the incubation and imaging. Cell suspensions were encapsulated into droplets following the same procedure as explained above, but now the production chip outlet was directly connected by teflon tubing to the inlet of the (immersed) observation chip. An aliquot of 20 µL of HFE 7500 oil was pipetted inside the observation chip inlet (using P20 tips) to allow good separation of the incoming droplets (this was done under live observation of the observation chip with an inverted microscope, to verify droplet separation). Droplets accidentally leaking into the chamber were removed by pipetting. Finally, aliquots of 30 µL of HFE 7500 oil were pipetted onto the two outlets of the observation chip, to prevent water from entering the chip during incubation and imaging. The immersion chamber was then closed and sealed with parafilm. The height of the chamber in the observation chip is 10 µm, which causes droplets to squeeze and to fall almost completely within the focal depth range of the 20× objective. The chip was mounted on a Nikon Ti2000 inverted epifluorescence microscope with a programmable stage and was imaged every 10 min in the three channels (bright field, GFP and mCherry) as before, at the same individual positions set with the imaging control software (Micro-Manager 1.4.23). Images were exported as 16-bit .TIF files.
TIF images were processed with a custom-made MATLAB script (see Code availability), which segments all droplets per image and all fluorescent objects per droplet. The script then calculates the summed area of all fluorescent objects per droplet (in pixels), which is multiplied by their mean fluorescence intensity to obtain a total fluorescent signal (area × fluorescence, or AF; see Fig. ). We use the AF value per droplet as a proxy for the biomass production of the strain identified by its specific fluorescence (Supplementary Table ), under the assumption that the more cells there are in a droplet, the higher their fluorescent signal will be (Fig. ). We prefer using AF values as biomass proxies instead of inferring per-droplet cell counts from AF values or directly counting cell objects, because of the potential variation in per-cell and growth-phase-dependent expressed fluorescence, fluorescence distortion from cells out of the focal plane, aggregation of cells into clumps, and cell movement during the exposure time (e.g., Fig. , Supplementary Movies and ). Depending on the scale of fluorescence intensities displayed by the cell-strain pairs, the raw fluorescent signals were log10-, square-root- or median-transformed for display of potential subpopulations. In the case of median transformation, we used the mean of the median AF values from the corresponding mono-culture controls (n = 3 replicates) at the last time point (T24 or T48). The distributions of AF signals were then analysed across all droplets and across independent biological replicates. We consider droplets of co-culture experiments to be solo if only one of the fluorescence channels is detected, and otherwise a mix droplet (carrying cells from both encapsulated species). However, we made no a priori assumptions as to whether a founder cell was dormant or non-growing, which might be inferred from comparing T0 with T24 or T48 AF values (Supplementary Fig. ). For timelapse experiments, images were segmented and processed in the same way as above, with the difference that a customized rolling-ball algorithm was applied during image segmentation to compensate for fluorescence variations among cells and for bleaching of the signal over time. Additionally, droplets were tracked between time frames by comparing the distances between the centroid of every droplet in frame t and those in the next frame t+1. Droplets with minimum centroid distances were assumed to be the same in frame t+1 as in frame t. Tracking of individual droplets was then manually controlled and corrected with the help of generated movies displaying the tracking ID attributed to each droplet over time. In this way, biomass development can be plotted per droplet over time, and the variation among droplets can be quantified.
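As the segmentation itself was done in a custom MATLAB script, the two core operations described above — the AF biomass proxy and the nearest-centroid droplet tracking — can be summarized in a short, language-agnostic form. A minimal Python sketch (function names and array layouts are our own assumptions, not the original code):

```python
import numpy as np

def area_fluorescence(object_mask, fluor_image):
    """AF biomass proxy for one droplet: summed object area (pixels)
    multiplied by the mean fluorescence of those objects."""
    area = int(object_mask.sum())
    if area == 0:
        return 0.0
    return float(area * fluor_image[object_mask].mean())

def track_droplets(centroids_t, centroids_t1):
    """Nearest-centroid matching between consecutive frames: each droplet at
    frame t is assigned the index of the closest centroid at frame t+1."""
    c0 = np.asarray(centroids_t, float)[:, None, :]   # (n, 1, 2)
    c1 = np.asarray(centroids_t1, float)[None, :, :]  # (1, m, 2)
    dist = np.linalg.norm(c0 - c1, axis=2)            # pairwise distances (n, m)
    return dist.argmin(axis=1)
```

As in the original procedure, such greedy matching still requires manual verification, since droplets can drift, merge or leave the field of view between frames.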
Droplets from a single Eppendorf emulsion experiment were fused to produce a single aqueous phase, in which the total cell amount could be counted by flow cytometry. First, the extra HFE oil that settled below the droplet emulsion was removed by pipetting. To the remaining PBS and droplet emulsion layer, an approximate equivalent volume was added of HFE oil containing 1H,1H,2H2H-perfluoro-1-octanol (5 g solution Sigma-Aldrich, further diluted 4 times in HFE 7500 oil). This breaks the emulsion and fuses the droplets into a single aqueous phase. The resulting droplet-cell-PBS aqueous phase was transferred into a new Eppendorf vial and its volume was measured from the micro-pipette directly.
Flow cytometry counting of cell population sizes
Cell numbers in liquid suspensions from fused droplet emulsions, mixed liquid suspended cultures in 96-well plates, or precultures were quantified by flow cytometry. Liquid cell suspensions were tenfold serially diluted in PBS (down to 10−3) and fixed by adding NaN3 solution to a final concentration of 4 g L–1 and incubating for a maximum of 1 day at 4 °C until flow-cytometry processing. Volumes of 20 µL of fixed samples were aspirated into a Novocyte flow cytometer (Bucher Biotec, ACEA Biosciences Inc.) at 14 µL min–1. Events were collected above general thresholds of FSC = 500 and SSC = 150 to distinguish cells from particle noise, and gates were defined to selectively identify the strains from their fluorescent markers (Supplementary Table , see gating example in Supplementary Fig. ). The Novocyte gives direct volumetric counts, which were corrected for the dilution. To convert cell counts from droplet suspensions to equivalent cell concentrations per mL, we considered the proportion of empty droplets from imaging and the extra volume of 250 µL of PBS added before droplet collection, as follows:

$$\frac{\mathrm{Cells}}{\mathrm{mL}} = \frac{\mathrm{Events}}{20\ \mu\mathrm{L}} \times 2000 \times 10^{\mathrm{Dilution\ factor}} \times \frac{\mathrm{Droplet\ vol} + \mathrm{PBS\ vol}}{\mathrm{Droplet\ vol}} \times \frac{1}{\mathrm{Fraction\ of\ non\text{-}empty\ droplets}} \quad (1)$$

The multiplication by 2000 includes the 2-fold dilution when fixing the cells in the sample with NaN3 solution, and the conversion to a per-mL concentration.
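Implemented directly, Eq. (1) is a one-line conversion. A minimal Python sketch (argument names are ours; the example numbers are hypothetical):

```python
def cells_per_ml(events, dilution_exponent, droplet_vol_ul,
                 pbs_vol_ul=250.0, fraction_nonempty=1.0):
    """Eq. (1): events counted in a 20 µL aspirate of the fused droplet
    suspension -> cells per mL of droplet volume. The combined factor
    2000/20 = 100 covers the 2x NaN3 fixation dilution and the
    20 µL -> 1 mL conversion (x50)."""
    return ((events / 20.0) * 2000.0 * 10.0**dilution_exponent
            * (droplet_vol_ul + pbs_vol_ul) / droplet_vol_ul
            / fraction_nonempty)

# Example: 5000 gated events at 10^-2 dilution, 80 µL of droplets collected
# into 250 µL PBS, with 70% of droplets occupied
print(f"{cells_per_ml(5000, 2, 80.0, fraction_nonempty=0.7):.2e} cells/mL")
```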
Calculation of maximum specific growth rates, lag times and time to first population doubling
Average growth rates and lag times of strains in suspended liquid culture were inferred from the ln-transformed strain-specific fluorescence increase in mono-cultures grown in 21 C MM with their specific carbon substrate (as described above), each in 6–7 replicates. To obtain an average, we calculated the slope over a sliding window of five consecutive timepoints during the first 10 h, retained only slopes with a regression coefficient > 0.97, and reported the mean of those slopes as the maximum specific growth rate. Lag times were fitted from the complete (fluorescence) growth curve using a logistic function, and converted to the time to first population doubling as the sum of the lag time (in h) plus the inverse of the logarithmic fitting constant multiplied by ln(2). In the absence of a lag time, the time to first population doubling is ln(2) times the inverse of the maximum specific Monod growth rate. To calculate growth rates from fluorescence in single droplets, we deployed a manual interactive plot of the ln-transformed values of the summed fluorescence signal (the product of the segmented area and the average strain-specific fluorescence in that area) over time, identifying the start and end of the ln-linear range; the lag time was taken as the time between the start of the imaging series and the start of the ln-linear range. The maximum specific growth rate in the droplet was then taken as the slope over the entire identified ln-linear range. Since we did not segment individual cells, the summed fluorescence signal per droplet is a proxy for biomass, and we report a Monod-type maximum specific growth rate.
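The sliding-window procedure for bulk cultures can be expressed compactly. Below is a minimal Python sketch of the estimator (the original analysis was done in MATLAB/R; the thresholds are those stated above):

```python
import numpy as np
from scipy.stats import linregress

def max_specific_growth_rate(t_h, fluorescence, window=5, min_r=0.97, t_max=10.0):
    """Mean of all sliding-window slopes of ln(fluorescence) vs. time within
    the first t_max hours whose correlation coefficient exceeds min_r."""
    t = np.asarray(t_h, float)
    y = np.log(np.asarray(fluorescence, float))
    slopes = []
    for i in range(len(t) - window + 1):
        ti, yi = t[i:i + window], y[i:i + window]
        if ti[-1] > t_max:
            break
        fit = linregress(ti, yi)
        if fit.rvalue > min_r:
            slopes.append(fit.slope)
    return float(np.mean(slopes)) if slopes else float("nan")

def time_to_first_doubling(lag_h, mu_max):
    """Lag time plus one doubling at the maximum specific growth rate."""
    return lag_h + np.log(2.0) / mu_max
```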
Mathematical model for population growth in droplets
We adapted a previously developed mathematical framework to simulate the growth of P. putida and P. veronii populations in 35 pL droplets with nutrients (10 mM succinate). The initial resource concentration (R0) is homogeneously distributed among all droplets and cannot diffuse between droplets. The chemical reactions inside each droplet are similar to the bulk population model in ref. ; however, each founder cell follows its own differential growth equation, which includes possible kinetic variation. Growth of each founder cell i in droplet j thus follows the general reactions

$$S_1 + R \xrightarrow{\kappa_{1_1}} P_1 \xrightarrow{\kappa_{1_2}} 2S_1, \quad P_1 \xrightarrow{\kappa_{1_3}} S_1 + W_1 \quad (2)$$

$$S_2 + R \xrightarrow{\kappa_{2_1}} P_2 \xrightarrow{\kappa_{2_2}} 2S_2, \quad P_2 \xrightarrow{\kappa_{2_3}} S_2 + W_2$$

where S is the bacterial species, R represents the resource, P is the cell-resource intermediate state and W any non-used metabolic side products. For simplicity, we did not consider cross-feeding effects. Each founder cell has its own lag time $L_{S_i}^{k}$, where $i \in \{1, 2\}$ is the species index and $k \in \mathbb{N}$ the founder cell index. Therefore, the differential equations for species S1 or S2 present in droplet j are

$$\frac{dS_1(t)}{dt} = \sum_{i=1}^{N_{S_1}^{(j)}} \mathbb{1}_{\{t \ge L_{S_1}^{(i)}\}} \left( -\kappa_{1_1}^{(i)} S_1^{(i)}(t) R(t) + \left( 2\kappa_{1_2}^{(i)} + \kappa_{1_3}^{(i)} \right) P_1^{(i)}(t) \right) \quad (3)$$

$$\frac{dS_2(t)}{dt} = \sum_{i=1}^{N_{S_2}^{(j)}} \mathbb{1}_{\{t \ge L_{S_2}^{(i)}\}} \left( -\kappa_{2_1}^{(i)} S_2^{(i)}(t) R(t) + \left( 2\kappa_{2_2}^{(i)} + \kappa_{2_3}^{(i)} \right) P_2^{(i)}(t) \right)$$

$$\frac{dP_1(t)}{dt} = \sum_{i=1}^{N_{S_1}^{(j)}} \mathbb{1}_{\{t \ge L_{S_1}^{(i)}\}} \left( \kappa_{1_1}^{(i)} S_1^{(i)}(t) R(t) - \left( \kappa_{1_2}^{(i)} + \kappa_{1_3}^{(i)} \right) P_1^{(i)}(t) \right)$$

$$\frac{dP_2(t)}{dt} = \sum_{i=1}^{N_{S_2}^{(j)}} \mathbb{1}_{\{t \ge L_{S_2}^{(i)}\}} \left( \kappa_{2_1}^{(i)} S_2^{(i)}(t) R(t) - \left( \kappa_{2_2}^{(i)} + \kappa_{2_3}^{(i)} \right) P_2^{(i)}(t) \right)$$

$$\frac{dW_1(t)}{dt} = \sum_{i=1}^{N_{S_1}^{(j)}} \mathbb{1}_{\{t \ge L_{S_1}^{(i)}\}} \kappa_{1_3} P_1^{(i)}(t)$$

$$\frac{dW_2(t)}{dt} = \sum_{i=1}^{N_{S_2}^{(j)}} \mathbb{1}_{\{t \ge L_{S_2}^{(i)}\}} \kappa_{2_3} P_2^{(i)}(t)$$

$$\frac{dR(t)}{dt} = -\sum_{i=1}^{N_{S_1}^{(j)}} \mathbb{1}_{\{t \ge L_{S_1}^{(i)}\}} \kappa_{1_1} S_1^{(i)}(t) R(t) - \sum_{i=1}^{N_{S_2}^{(j)}} \mathbb{1}_{\{t \ge L_{S_2}^{(i)}\}} \kappa_{2_1} S_2^{(i)}(t) R(t)$$

where $N_{S_i}^{(j)} \in \mathbb{N}$ is the number of initial cells of species $i \in \{1, 2\}$ in droplet j. The starting number of cells of both species per droplet was drawn from a Poisson distribution with an average of 3 cells per species. Individual growth rates and lag time parameters were sampled from a generated Gamma distribution of P. putida and P. veronii growth parameters, inferred from mono-culture OD600 curves with a Monte-Carlo Metropolis-Hastings algorithm centred on the mean (method as described in ref. ), and with variance deduced from the individual timelapse growth measurements in droplets with single founder cells of P. putida and/or P. veronii (Supplementary Fig. ). We also included a 15% chance for a cell to have a zero growth rate, to account for growth-impaired cells that we observed from droplet imaging (Supplementary Fig. ). Varying the heterogeneity in growth properties among founder cells (Fig. ) thus consisted of increasing or decreasing the initial variance of the parameter gamma distributions.
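To make the structure of the model concrete, the system in Eq. (3) for a single droplet can be integrated numerically. The sketch below (Python/SciPy; all parameter values are illustrative placeholders, not the Gamma-fitted values used in the study) tracks per-founder-cell S and P states plus the shared resource R; the side products W are omitted since they do not feed back on growth. Cells of both species can simply be concatenated into the same arrays, since each founder cell carries its own parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def simulate_droplet(k1, k2, k3, lags, r0=10.0, t_end=48.0):
    """One droplet of Eq. (3): k1, k2, k3 and lags are per-founder-cell arrays."""
    n = len(lags)

    def rhs(t, y):
        S, P, R = y[:n], y[n:2 * n], y[2 * n]
        on = (t >= lags).astype(float)            # indicator 1{t >= L_i}
        dS = on * (-k1 * S * R + (2.0 * k2 + k3) * P)
        dP = on * (k1 * S * R - (k2 + k3) * P)
        dR = -np.sum(on * k1 * S * R)
        return np.concatenate([dS, dP, [dR]])

    y0 = np.concatenate([np.ones(n), np.zeros(n), [r0]])
    return solve_ivp(rhs, (0.0, t_end), y0, max_step=0.1)

# Poisson(3) founder cells, Gamma-distributed kinetics, 15% non-growers,
# mirroring the sampling scheme described above (hyperparameters invented)
n = max(int(rng.poisson(3)), 1)
k1 = rng.gamma(shape=20.0, scale=0.01, size=n)    # resource uptake rates
k1 = k1 * (rng.random(n) > 0.15)                  # growth-impaired cells
sol = simulate_droplet(k1, k2=np.full(n, 1.0), k3=np.full(n, 0.1),
                       lags=rng.gamma(4.0, 0.5, size=n))
print("final total biomass:", sol.y[:n, -1].sum())
```

Increasing or decreasing the variance of the Gamma distributions in this scheme corresponds to the heterogeneity scan described above.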
Statistical analysis and reproducibility
All experiments were carried out in biological triplicates (quadruplicate incubations for the substrate independence scenario). For each biological replicate, liquid-suspended cultures comprised 6–7 cultivation wells as technical replicates. Biological replicates of fragmented droplet cultures comprised one separate emulsion incubation each, except in one of the replicates of the substrate competition experiment, for which a triplicate emulsion was generated to assess and show the technical reproducibility of droplet cultivation experiments (Supplementary Fig. ). Each emulsion sample was then imaged at 5–20 positions (technical replicates), to obtain 100–1000 droplets per mono- or co-culture and treatment. Flow cytometry counts (Figs. , and ) show the means of all technical replicates within each biological replicate. Each suspension from a cultivation well or fused droplet emulsion was counted three times by the flow cytometer, from which the mean was taken. T-tests were conducted to compare mean cell counts in flow cytometry. Normality of the data was verified with a Shapiro–Wilk test, and variance homogeneity was verified with a Fisher test. Median, top-10 or low-10 percentile productivities of each species in mono vs. mix droplets were compared using Wilcoxon rank-sum or sign-rank tests (the latter when taken across multiple time points). Depending on the data, we tested against a null hypothesis of sample means or ranked values being indifferent, or being higher or lower (i.e., a left or right tail). Tests were implemented in R (within RStudio version 2022.07.01) or in MATLAB (MathWorks, Inc., version R2021b). To deduce strain interactions, we compared observed mixed droplet growth with the expected mixed growth from a null model based on probability distributions generated from the corresponding mono-culture droplet growth (i.e., assuming no interactions). The model uses the probability distributions for productivities of each of the strains in pairs at each sampled time point, simulated five times for the same number of pairs as the number of observed droplets. Expected and observed paired droplets were then counted in a productivity grid (e.g., as in Fig. ), and summed fractions across relevant grid regions (e.g., >1.5 times the median) were compared across replicates (typically, three biological replicates and five simulation replicates). P-values were derived from an ANOVA comparison including all fractions, followed by a post-hoc multiple-comparison test (Fig. ), or by a Wilcoxon sign-rank test in the case of comparing multiple time points (e.g., Fig. ), as implemented in MATLAB. To estimate the proportion of mixed droplets in which P. protegens Pf-5 might have been killed (lysed) by CHA0 tailocins, we used variations in the specific median fluorescence background originating from Pf-5. We first calculated the standard deviation in Pf-5 background fluorescence from Pf-5 solo droplets, corrected for the Pf-5 biomass (i.e., segmented area), which was multiplied by 2.5 as a boundary for the outlier range. This outlier-range definition was then imposed on the Pf-5-specific fluorescence in mix droplets with CHA0 (and in CHA0 droplets where no Pf-5 area can be distinguished, assuming these may all be lysed). Outlier fractions were corrected for the total observed droplets and compared to the outlier fractions observed for Pf-5 solo droplets (i.e., as in Fig. e, f), using one- or two-tailed two-sample t-testing of replicate values.
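The null-model comparison can be paraphrased as repeatedly drawing independent productivity pairs from the two mono-culture distributions. A minimal Python sketch of this resampling (our own illustration; the original implementation was in MATLAB, and the variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_mix_pairs(mono_af_sp1, mono_af_sp2, n_observed, n_sim=5):
    """Draw n_sim sets of n_observed independent (sp1, sp2) productivity pairs
    from the mono-culture AF distributions (the no-interaction null model)."""
    return [np.column_stack([rng.choice(mono_af_sp1, size=n_observed),
                             rng.choice(mono_af_sp2, size=n_observed)])
            for _ in range(n_sim)]

def fraction_in_region(pairs, thr1, thr2):
    """Fraction of pairs falling in a grid region, e.g. both strains above
    1.5x their mono-culture median."""
    return float(np.mean((pairs[:, 0] > thr1) & (pairs[:, 1] > thr2)))
```

Observed fractions in the same grid regions, computed from the real mixed droplets, are then compared with these simulated fractions across replicates.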
The effect of founder-cell census on the variance of growth kinetic parameters in strain-paired droplets was examined using a generalized linear mixed-effects model (glme, as implemented in MATLAB 2021a), with the measured maximum specific growth rates, lag times and starting cell ratios as variables. Droplets with quasi-null growth rates (which were also characterized by a lag time above 20 h) were removed from the analysis. The relationship of individual growth-rate and lag-time variance as a function of the number of founder cells was further explored using a Brown-Forsythe test implemented in R, which tests the homogeneity of variances between groups without assuming normality of the data.
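The Brown-Forsythe test is equivalent to Levene's test centred on the group medians, which is available in SciPy as well as in R. An illustrative Python sketch with synthetic data (the study's own analysis was run in R):

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
# Hypothetical per-droplet maximum specific growth rates, grouped by the
# number of founder cells, with variance shrinking as the census grows
groups = [rng.normal(0.6, 0.15 / np.sqrt(n), size=40) for n in (1, 2, 3)]

# center="median" makes this the Brown-Forsythe variant, which does not
# assume normality of the underlying data
stat, p = levene(*groups, center="median")
print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.3g}")
```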
Reporting summary
Further information on research design is available in the Reporting Summary linked to this article.
Supplementary information
Peer Review File; supplementary movie legends; Supplementary Movies 1 and 2; Reporting Summary.
Peripheral nerves modulate the peri-implant osteogenesis under type 2 diabetes through exosomes derived from Schwann cells via the miR-15b-5p/Txnip signaling axis

Diabetes mellitus is a complex, chronic health condition with multiple contributing factors and impacts a significant portion of the population. In 2021, the worldwide prevalence of diabetes was approximately 537 million people (9.8% of adults aged 20–79), and this number is anticipated to increase to 700 million by 2045. According to the World Health Organization, the number of adults affected by diabetes is anticipated to rise annually, with the majority of diabetic patients (approximately 90–95%) experiencing Type 2 diabetes mellitus (T2DM). T2DM is a condition with a diverse origin, characterized by hyperglycemia due to inadequate insulin secretion, insufficient insulin action, or a combination of both. Histological studies indicate that, due to hyperglycemia, both the bone turnover rate and bone density are significantly lower in T2DM patients than in the healthy population. In the current era of dental implants, the need for implant-supported dentures in edentulous and partially edentulous patients is increasing. However, recent clinical studies have found that the osseointegration rate of dental implants in T2DM patients is lower, and osseointegration is slower, than in people without T2DM, resulting in a high failure rate of dental implant treatment in T2DM patients. Therefore, T2DM is still considered a relative contraindication for dental implant treatment. Hyperglycemia can also affect peripheral nerve function. The oxidative stress caused by hyperglycemia often leads to peripheral nerve degeneration, demyelination, and neuronal degeneration, a condition called diabetic peripheral neuropathy (DPN), which is a common complication of T2DM. Previous studies on dental implant osseointegration have mainly focused on the bone tissue itself, and little attention has been paid to the role of peripheral nerves in implant osseointegration. An increasing number of articles have reported that the neurotrophic regulation of bone metabolism might play an important role in bone homeostasis. In normal bone tissue, sensory nerves and autonomic nerves are widely distributed. These nerve fibers are found in metabolically active areas such as the periosteum, cortical bone, and bone marrow cavity, often accompanying blood vessels, and participate in the regulation of bone metabolism and in regeneration after bone tissue injury. As early as 1977, scholars observed the presence of sympathetic nerves in rabbit bones. Researchers chemically severed the sympathetic nerves and found a significant reduction in osteoblast activity and a marked decrease in bone matrix deposition in the corresponding areas of bone tissue. This demonstrated that the sympathetic nervous system can regulate the activity of osteoblasts, playing a crucial role in bone formation. However, few studies have investigated whether the crosstalk between nerve and bone affects bone formation in T2DM patients. One clinical study has shown that DPN was the largest contributing factor to fracture risk in T2DM patients. Therefore, further studies are needed to investigate the role of peripheral nerves in the bone metabolism of T2DM patients. Schwann cells (SCs) are the predominant cells responsible for myelinating axons in the peripheral nervous system. They have been recognized for their crucial role in peripheral nerve regeneration.
However, the hyperglycemic microenvironment in T2DM can lead to the formation of advanced glycation end products in SCs, impairing protein function and triggering a cascade of inflammatory reactions. This results in additional oxidative stress, causing abnormal cellular function in SCs. Recent studies indicate that SCs are associated with bone tissue metabolism. SCs may participate in regulating bone metabolism through Wnt-related pathways, bone morphogenetic protein (BMP)-related pathways, the Hippo pathway, and others. Recent studies have shown that SCs can promote mandibular bone repair, either through SC transplantation or through SC-derived growth factors. In 2020, researchers reported combining SC exosomes with a porous Ti6Al4V scaffold and implanting it into femoral condyle defects of New Zealand white rabbits. SC exosomes effectively enhanced the biological activity of the titanium alloy scaffold, confirming that SC exosomes have the ability to promote the migration and osteogenic differentiation of bone marrow mesenchymal stem cells (BMSCs). These studies revealed the role of SCs and SC-derived exosomes in promoting bone regeneration. Exosomes are nanoscale vesicles secreted by cells. Characterized by a double-membrane structure, exosomes have the capacity to transport diverse molecules, including specific miRNAs, DNA, mRNA, and proteins, to target cells. Research has shown that high glucose levels can change the cargo of SC exosomes. Whether dysfunctional SC exosomes contribute to the poor bone formation observed under T2DM remains unclear and requires further study. To address these questions, we focused on SC-derived exosomes and investigated how peripheral nerves regulate peri-implant bone regeneration under T2DM, together with the underlying mechanisms. Potential therapeutic targets to promote peri-implant bone regeneration and implant osseointegration in T2DM patients were also explored.
Every step of the procedure followed the recommendations detailed in the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. We affirm that all methods were executed in compliance with the applicable guidelines. Approval for this study was granted by the Experimental Animal Ethics Committee of the Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (reference number: SH9H-2022-A84-1).

Animals
Male Sprague-Dawley (SD) rats, at the age of 7 weeks, were procured and subsequently accommodated in an environment with a room temperature of approximately 20 °C, following a 12-hour light/dark cycle. The sample size was based on Mead's resource equation and was consistent with previous studies. For the analysis of bone formation in T2DM, the SD rats were randomly divided into 2 groups of 5 rats each: (a) control group; (b) T2DM group. For the analysis of the impact of SC-derived exosomes, the SD rats were randomly divided into 3 groups: (a) control group: 1 week after implantation surgery, the peri-implant sites were treated with local injections of saline solution for 5 consecutive weeks; (b) H-exo group: 1 week after implantation surgery, the peri-implant sites were treated with local injections of exosomes derived from high-glucose-stimulated SCs (H-exos) for 5 consecutive weeks; (c) L-exo group: 1 week after implantation surgery, the peri-implant sites were treated with local injections of exosomes derived from low-glucose-stimulated SCs (L-exos) for 5 consecutive weeks. For the analysis of the impact of miR-15b-5p, the SD rats were randomly divided into 4 groups of 5 rats each; 1 week after implantation surgery, the peri-implant sites were treated with different solutions: (a) miR-15b-5p agomir; (b) miR-15b-5p agomir NC; (c) miR-15b-5p antagomir; (d) miR-15b-5p antagomir NC. For the analysis of the therapeutic impact of SC-derived exosomes, the SD rats were randomly divided into 2 groups of 5 rats each: (a) T2DM with L-exo group; (b) T2DM group.

T2DM model
During the study period, the rats in the control group were fed a normal diet every day, while the rats in the T2DM groups were provided with a high-fat diet for a duration of 4 weeks. Following the high-fat diet regimen, the rats received an intraperitoneal injection of 35 mg/kg streptozotocin (STZ) to induce T2DM. Rats with blood glucose levels exceeding 16.7 mmol/L one week after the STZ injection were considered successfully established T2DM models.

Implant procedure
This implantation surgery in rats' maxillae has been reported and evaluated in our previous study, and the procedure was as follows. Seven days prior to the surgical procedure, the rats underwent daily oral infusion of antibiotics, administered at a dosage of 100 µl, consisting of 20 mg kanamycin and 20 mg ampicillin. After the rats were anesthetized by administering ketamine (85–90 mg/kg body weight) and xylazine (5–10 mg/kg body weight) intraperitoneally, the bilateral maxillary first molars were extracted. Sockets were rinsed with saline. After socket rinsing, the implant sites in the palatal root socket area of the extraction site were prepared using a low-speed handpiece with a drill diameter of 1 mm and a speed of 1000 rpm. Continuous irrigation with cooled saline solution was maintained during this process. Subsequently, a self-tapping titanium alloy implant (Ti-6Al-4V with an anodized surface, Baiortho™, China) was inserted and left for transmucosal healing.
Throughout the initial week of the healing period, the rats received a daily antibiotic dose (consistent with the preoperative dose). Regular monitoring of their oral and overall health was conducted, and at the conclusion of the experiment (6 weeks post-implantation surgery), the rats were humanely euthanized by anesthesia overdose.

Micro-CT
The maxillae of the rats were collected, fixed, and subjected to micro-CT scanning (Skyscan 1076, Belgium) at a resolution of 9 μm. A designated volume of interest was outlined, representing the area within a 500 μm radius around the implants, excluding the implant itself. 3D reconstructions of the maxillae were created based on the micro-CT scans. The bone tissue evaluation script then produced the final segmented bone image, from which the following parameters were derived: trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), trabecular number (Tb.N), bone volume fraction (BV/TV), bone mineral density (BMD), bone-implant contact ratio (BIC), and bone surface density (BS/TV).

Cell culture
BMSCs were derived from neonatal male SD rats (age < 14 days). In brief, bone marrow cells were flushed out and collected from the femurs and tibiae of the rats. These cells were then plated in T25 flasks and cultured overnight in a 37 °C incubator with 5% CO2. Subsequently, nonadherent cells were eliminated by rinsing twice with phosphate-buffered saline (PBS). BMSCs were cultured in complete Dulbecco's Modified Eagle Medium (DMEM) containing 25 mM glucose and 4 mM L-glutamine, supplemented with 100 µg/mL streptomycin, 100 U/mL penicillin, and 10% (v/v) fetal bovine serum (Life Technologies). The culture medium was refreshed every 2 days. The rat Schwann cell line RSC96 was kindly provided by the Stem Cell Bank, Chinese Academy of Sciences (Shanghai, China). RSC96 cells were cultured in DMEM (HyClone) supplemented with 4 mM L-glutamine, 5.6 mM glucose (a low glucose level, for normal SC culture), and 10% (v/v) fetal bovine serum (Life Technologies), along with 100 U/mL penicillin and 100 µg/mL streptomycin. The cells were maintained at 37 °C in a 5% CO2 humidified atmosphere, and the medium was refreshed every 2–3 days.

Isolation and identification of SC-derived exosomes
A CCK-8 assay was used to test SC viability in a high-glucose environment. For this assay, 3 × 10^3 cells per well (in 100 µl) were added to 96-well plates and incubated at 37 °C with 5% CO2. After cell adhesion, the cells were treated with three different media (5.6 mM glucose; 55 mM glucose; 5.6 mM glucose + 49.4 mM mannitol) for 48 h. After the treatment period, 10 µl of CCK-8 reagent was added to each well, and the plates were incubated for 1.5 h. The OD values at 450 nm were measured using a plate reader. After excluding interference from the osmotic pressure of 55 mM glucose on the viability of RSC96 cells (Figure ), 10^6 RSC96 cells were seeded in a pre-coated T75 flask and cultured in DMEM medium. Cells were then treated with different concentrations of glucose (low glucose: 5.6 mM; high glucose: 55 mM) for 48 h. The conditioned medium was harvested and underwent centrifugation at 300 g for 10 min, followed by 2000 g for 10 min, to eliminate dead cells and cell debris. Subsequently, the supernatant was centrifuged at 10,000 g for 30 min to eliminate microcellular vesicles. Finally, the supernatant underwent centrifugation at 100,000 g for 70 min to pellet the exosomes and remove contaminating proteins.
After discarding the supernatant, the pellet was washed with cold PBS and subjected to another round of ultracentrifugation at 100,000 g for 70 min (Beckman Coulter, California, USA). The supernatant was cautiously removed, and the pellet was resuspended in 100 µL of sterile PBS, aliquoted, and stored at −80 °C. All centrifugation steps were conducted at 4 °C. The protein concentration of H-exos/L-exos was directly measured using the bicinchoninic acid (BCA) method with the Enhanced BCA Protein Assay Kit (Beyotime, Shanghai, China) according to the manufacturer's instructions, with absorbance measured spectrophotometrically at 590 nm. Three independent loading experiments were performed in each group.

Uptake of exosomes
To determine whether SC-derived exosomes could be taken up by BMSCs, the fluorescent dye PKH67 (green) (Sigma-Aldrich, St. Louis, MO) was used to label exosomes according to the manufacturer's protocol. Briefly, exosomes were extracted from SCs and diluted in PBS to a final concentration of 100 µg/ml, and 2 µl of PKH67 dye was mixed with 250 µl of Diluent C (PKH67 solution). Then, 600 µl of the diluted exosomes and 200 µl of the PKH67 mixture were added to a 1.5-ml centrifugation tube and incubated at room temperature for 5 min. Centrifugation at 750 × g for 2 min was performed to remove the unincorporated dye using exosome spin columns (MW3000; Invitrogen, Vilnius, Lithuania). The labeled exosomes, resuspended in exosome-free DMEM, were incubated with BMSCs for 4 h at 37 °C. Nuclei were stained with DAPI (blue) (Solar Bio, Beijing, China), and F-actin was stained with tetramethylrhodamine isothiocyanate phalloidin (red) (MKBio, Shanghai, China). Images were then captured using a confocal microscope.

High-throughput miRNA sequencing
Cloud-Seq Biotech (Shanghai, China) provided the high-throughput sequencing service. Briefly, the supernatant of SCs was collected and sent to the company. miRNA was extracted from the total RNA in exosomes using the TaqMan ABC miRNA Purification Kit (Thermo Fisher Scientific, Waltham, MA). An Illumina HiSeq 4000 instrument was used to conduct the miRNA sequencing.

Osteogenic differentiation of BMSCs
BMSCs were subjected to osteogenic differentiation induction with osteogenic induction medium (OIM; OriCell, Cyagen Biosciences, Guangzhou, China). The entire induction process lasted 14–21 days. To assess the impact of exosomes on osteogenic differentiation, 25 µg/mL exosomes or an equal volume of PBS was added to the culture medium, with refreshment every three days. Additionally, when cell confluency reached 70–80%, BMSCs were transfected with miR-15b-5p inhibitor, miR-15b-5p mimic, or NC using Lipofectamine 3000 (Invitrogen). The reagents, including siRNA, miR-15b-5p mimic, miR-15b-5p mimic NC, miR-15b-5p inhibitor, and miR-15b-5p inhibitor NC, were synthesized by RiboBio (Guangzhou, China). To evaluate the level of osteogenic differentiation, quantitative reverse transcription-polymerase chain reaction (qRT-PCR), Western blot, alkaline phosphatase (ALP) staining, and alizarin red staining analyses were performed.

Dual-luciferase reporter assay
BMSCs were seeded in 24-well plates for 24 h. Subsequently, dual-luciferase vectors (pGL6-miR-Txnip-Mut-3'UTR, pGL6-miR-Txnip-WT-3'UTR) were co-transfected into the cells with miR-15b-5p mimics or mimic-NC. The luciferase activity was assessed 48 h post-transfection using the Dual-Luciferase Reporter Assay System (Promega). In the end, the results were normalized to Renilla luciferase activity.
qRT-PCR analysis
Total RNA from exosomes or BMSCs was isolated using TRIpure Extraction Reagent (EP013, ELK Biotechnology). cDNA was synthesized through reverse transcription using the EntiLink™ 1st Strand cDNA Synthesis Kit (EQ003, ELK Biotechnology). qRT-PCR for both mRNAs and miRNAs was conducted on a StepOne™ Real-Time PCR System (Life Technologies) utilizing EnTurbo™ SYBR Green PCR SuperMix (EQ001, ELK Biotechnology). The expression levels of mRNAs or miRNAs were normalized and evaluated using the 2−ΔΔCT method.

Western blot analysis
Total protein was isolated from BMSCs or exosomes using RIPA buffer (Aspen) according to the manufacturer's protocol. After determining the protein sample concentrations with a BCA assay (Aspen), equal amounts of protein samples were separated through sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Subsequently, they were transferred to a polyvinylidene fluoride (PVDF) membrane and incubated with 5% bovine serum albumin for 1 h at 25 °C. Following that, the membranes were incubated overnight at 4 °C with primary antibodies against CD81 (1:1000, Abcam, ab219209), CD63 (1:500, Affinity, AF5117), Calnexin (1:1000, Abcam, ab22595), RUNX2 (1:1000, Affinity, AF5186), BMP2 (1:1000, Affinity, AF5163), OCN (1:1000, Affinity, DF12303), TXNIP (1:1000, Affinity, DF7506), and β-actin (1:10,000, Abcam, ab8227). They were then incubated with the appropriate secondary antibodies at 1:2,000 for 30 min. The protein bands were visualized using an ECL reagent (SuperSignal West Femto Substrate; Thermo Fisher Scientific), and the gray values of the bands were analyzed with Image-Pro Plus 6.0 (Media Cybernetics, Bethesda, MD). In this experiment, the exosome biomarker proteins were detected according to the same protocol but without lysis or centrifugation.

Assessment of ALP activity and mineralization
To assess the level of osteogenic differentiation, BMSCs were induced for 14 or 21 days with specific osteogenic induction medium and evaluated with ALP staining or alizarin red staining, respectively. Following the protocol, BMSCs were rinsed three times with PBS, fixed with 4% paraformaldehyde for 20 min, and subsequently stained with ALP or alizarin red solution for 30 min at 25 °C. After the staining process, the BMSCs were washed with PBS and examined under a microscope. Absorbance was then measured at 405 nm or 570 nm.

Histological analysis
The maxillae from rats in the various groups were harvested. The samples were fixed with buffered paraformaldehyde for 48 h and subsequently decalcified with 20% EDTA at 25 °C for 8 weeks. After decalcification, the implant was gently removed from the tissue by unscrewing it counterclockwise with a screwdriver. The samples were embedded in paraffin, longitudinally sectioned, and subjected to H&E, Masson, and Luxol Fast Blue staining for histological analysis. Subsequently, the sections were imaged using a microscope. Immunohistochemistry was performed to detect the expression of Neurofilament (NF), Neuropeptide Y (NPY), and S100 in the peri-implant tissue. Slides of the maxillae were collected and pretreated as previously described. Afterward, the slides were incubated in pepsin (BBI, Shanghai, China) for antigen retrieval and in 3% H2O2 to eliminate endogenous peroxidase. Then, the slides were permeabilized in 0.3% Triton X-100 for 20 min and blocked in 5% BSA (Boster Biological Technology, Pleasanton, CA) for 1 h. The primary antibodies against NF (1:100, Affinity, DF13211), NPY (1:100, Affinity, DF6431), and S100 (1:100, Affinity, AF0251) were diluted in PBS, and the slides were incubated at 4 ℃ overnight.
The next day, the slides were incubated with the secondary antibody (Boster Biological Technology) for 1 h at 37 ℃. After thorough washing, staining was detected with a diaminobenzidine substrate chromogen system (Boster Biological Technology). The slides were washed and counterstained for 5 min with filtered Mayer's hematoxylin solution (Beyotime). After dehydration in alcohol and incubation in xylene, the slides were covered with neutral balsam (Biosharp, Anhui, China). Images were captured under an optical microscope. Immunofluorescence staining was performed to detect the expression of CD63, glial fibrillary acidic protein (GFAP), and S100 in the peri-implant tissue. Cells positive for GFAP/S100 were considered SCs. The slides were collected, deparaffinized, hydrated, permeabilized, and blocked. Antibodies against GFAP (1:100, Abcam, ab7260), CD63 (1:100, Affinity, AF5117), and S100 (1:100, Affinity, AF0251) were diluted in PBS, and the slides were incubated away from light at 4 ℃ overnight. The next day, the slides were incubated with the secondary antibody (Yeasen) for 1 h at 37 ℃. After thorough washing, the slides were stained with DAPI for 5 min and directly imaged under an optical microscope.

Observation of nerve positivity in tissues by immunohistochemistry
Thin slices (5 μm thick) were taken from the central part of the samples. The slices were evaluated in a blinded manner by an experienced histologist under a microscope (CX33, Olympus, Japan). The region of interest was defined as the area within a 500 μm radius around the implants, to encompass the peri-implant region. The number of NF/NPY-positive structures (brown or dark brown) per section within the region of interest was measured. If the number of positive structures around the implant exceeded 5 per section, the sample was recorded as positive. The positivity rate was calculated as the number of positive samples divided by the total number of samples.

Statistical analysis
Values are presented as mean ± SD and were analyzed with IBM SPSS Statistics version 25.0 software (IBM, Armonk, NY; RRID: SCR_002865). All experiments were repeated at least three times. One-way ANOVA followed by the LSD post hoc test was used to analyze differences between independent groups. Fisher's exact test was used to compare positivity rates. A value of p < 0.05 was considered statistically significant.
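For the positivity-rate comparison described above, Fisher's exact test operates on a 2 × 2 table of positive/negative sample counts per group. A minimal Python sketch with hypothetical counts (the study used SPSS; the numbers below are not the study's data):

```python
from scipy.stats import fisher_exact

# Hypothetical example: 4/5 samples positive in controls vs. 1/5 in T2DM
table = [[4, 1],   # control: positive, negative
         [1, 4]]   # T2DM:    positive, negative
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

With only 5 samples per group, the exact test is the appropriate choice, since chi-squared approximations are unreliable at such small counts.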
Male Sprague-Dawley (SD) rats, at the age of 7 weeks, were procured and subsequently accommodated in an environment with a room temperature of approximately 20 °C, following a 12-hour light/dark cycle. The sample size based on Mead’s resource equation and was consistent with previous studies . For analysis of T2DM bone formation, the SD rats were randomly divided into 2 groups, 5 rats in each group, as follows: (a) Control group, (b) T2DM group. For analysis of SCs-derived exosomes impact, the SD rats were randomly divided into 3 groups, as follows: (a) Control group: 1week after implantation surgery, the peri-implant sites were treated with local injection of saline solution for 5 consecutive weeks, (b) H-exo group: 1week after implantation surgery, the peri-implant sites were treated with local injection of high glucose stimulated SC-derived exosomes (H-exos) for 5 consecutive weeks, (c) L-exo group: 1week after implantation surgery, the peri-implant sites were treated with local injection of low glucose stimulated SC-derived exosomes (L-exos) for 5 consecutive weeks. For analysis of miR-15b-5p impact, the SD rats were randomly divided into 4 groups, 5 rats in each group, 1 week after implantation surgery, the peri-implant sites were treated with different solution, as follows: (a) miR-15a-5p angomir, (b) miR-15a-5p angomir NC, (c) miR-15a-5p antagomir, (d) miR-15a-5p antagomir NC. For analysis of SCs-derived exosomes therapy impact the SD rats were randomly divided into 2 groups, 5 rats in each group, as follows: (a) T2DM with L-exo group, (b) T2DM group.
During the study period, the rats in the control group were fed a normal diet, while the rats in the T2DM groups were fed a high-fat diet for 4 weeks. Following the high-fat diet regimen, the rats received an intraperitoneal injection of 35 mg/kg streptozotocin (STZ) to induce T2DM . Rats with blood glucose levels exceeding 16.7 mmol/L one week after the STZ injection were considered successfully established T2DM models.
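As a minimal sketch of the inclusion rule just described, the snippet below applies the 16.7 mmol/L cut-off to blood glucose readings taken one week after STZ injection; the readings themselves are hypothetical.

```python
# Inclusion rule from the text: glucose > 16.7 mmol/L one week post STZ.
GLUCOSE_THRESHOLD_MMOL_L = 16.7

readings = {"rat01": 21.4, "rat02": 15.9, "rat03": 18.2}  # hypothetical values

t2dm_models = sorted(r for r, glc in readings.items() if glc > GLUCOSE_THRESHOLD_MMOL_L)
print(f"Successfully established T2DM models: {t2dm_models}")
```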
This implantation surgery in the rat maxilla has been reported and evaluated in our previous study, and the procedure was as follows . Seven days prior to the surgical procedure, the rats received a daily oral infusion of antibiotics at a dosage of 100 µl, consisting of 20 mg kanamycin and 20 mg ampicillin. After the rats were anesthetized by intraperitoneal administration of ketamine (85–90 mg/kg body weight) and xylazine (5–10 mg/kg body weight), the bilateral maxillary first molars were extracted. Sockets were rinsed with saline. After socket rinsing, the implant sites in the palatal root socket area of the extraction site were prepared using a low-speed handpiece with a 1 mm drill at 1000 rpm, under continuous irrigation with cooled saline. Subsequently, a self-tapping titanium alloy implant (Ti-6Al-4V with an anodized surface, Baiortho™, China) was inserted and left for transmucosal healing. During the first week of healing, the rats received a daily antibiotic dose (consistent with the preoperative dose). Their oral and overall health was monitored regularly, and at the end of the experiment (6 weeks post-implantation surgery), the rats were humanely euthanized by anesthesia overdose.
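For readers reproducing the anesthesia step, the helper below converts the mg/kg doses quoted above (taken at mid-range, 87.5 and 7.5 mg/kg) into injection volumes for a rat of a given weight. The stock concentrations (100 mg/ml ketamine, 20 mg/ml xylazine) are common commercial values assumed for illustration; they are not stated in the text.

```python
def injection_volume_ml(body_weight_kg, dose_mg_per_kg, stock_mg_per_ml):
    """Volume to draw for an intraperitoneal injection at a given mg/kg dose."""
    return body_weight_kg * dose_mg_per_kg / stock_mg_per_ml

weight_kg = 0.30  # hypothetical 300 g rat
print(f"Ketamine (87.5 mg/kg): {injection_volume_ml(weight_kg, 87.5, 100):.3f} ml")
print(f"Xylazine (7.5 mg/kg):  {injection_volume_ml(weight_kg, 7.5, 20):.3f} ml")
```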
The maxillae of the rats were collected, fixed, and subjected to micro-CT scanning (Skyscan 1076, Belgium) at a resolution of 9 μm. A volume of interest was outlined as the area within a 500 μm radius around the implants, excluding the implant itself. 3D reconstructions of the maxillae were created from the micro-CT scans. The bone tissue evaluation script then produced the final segmented bone image, from which the following parameters were derived: trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), trabecular number (Tb.N), bone volume fraction (BV/TV), bone mineral density (BMD), bone-implant contact ratio (BIC), and bone surface density (BS/TV).
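To clarify how a parameter such as BV/TV falls out of the segmented image, the sketch below computes it from a binary bone mask at the stated 9 μm voxel size. The random mask is a stand-in for a real segmentation, and surface-based parameters (BS/TV, BIC) would additionally require a mesh of the bone surface.

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((50, 50, 50)) < 0.3  # hypothetical binary bone segmentation of the VOI
voxel_mm = 0.009                       # 9 um isotropic voxels, as in the scan above

tv_mm3 = mask.size * voxel_mm**3       # total volume of the volume of interest
bv_mm3 = mask.sum() * voxel_mm**3      # bone volume
print(f"BV/TV = {bv_mm3 / tv_mm3:.3f}")
```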
BMSCs were derived from neonatal male SD rats (age < 14 days). In brief, bone marrow cells were flushed out and collected from the femurs and tibiae of the rats. These cells were plated in T25 flasks and cultured overnight in a 37 °C incubator with 5% CO2. Subsequently, nonadherent cells were removed by rinsing twice with phosphate-buffered saline (PBS). BMSCs were cultured in complete Dulbecco’s modified Eagle medium (DMEM) containing 25 mM glucose and 4 mM L-glutamine, supplemented with 100 µg/mL streptomycin, 100 U/mL penicillin, and 10% (v/v) fetal bovine serum (Life Technologies). The culture medium was refreshed every 2 days. Rat Schwann cells (RSC96) were kindly provided by the Stem Cell Bank, Chinese Academy of Sciences (Shanghai, China). RSC96 cells were cultured in DMEM (HyClone) supplemented with 4 mM L-glutamine, 5.6 mM glucose (a low glucose level for normal SC culture), and 10% (v/v) fetal bovine serum (Life Technologies), along with 100 U/mL penicillin and 100 µg/mL streptomycin. The cells were maintained at 37 °C in a 5% CO2 humidified atmosphere, and the medium was refreshed every 2–3 days.
A CCK-8 assay was used to test SC viability in a high-glucose environment. The experimental procedure involved seeding 3 × 10³ cells per well in 100 µl into 96-well plates and incubating them at 37 °C with 5% CO2. After cell adhesion, the cells were treated with three different media (5.6 mM glucose, 55 mM glucose, or 5.6 mM glucose + 49.4 mM mannitol as an osmotic control) for 48 h. After the treatment period, 10 µl of CCK-8 reagent was added to each well, and the plates were incubated for 1.5 h. The OD values at 450 nm were measured using a plate reader. After excluding interference from the osmotic pressure of 55 mM glucose on the viability of RSC96 cells (Figure ), 10⁶ RSC96 cells were seeded in a pre-coated T75 flask and cultured in DMEM. Cells were then treated with different concentrations of glucose (low glucose: 5.6 mM; high glucose: 55 mM) for 48 h. The conditioned medium was harvested and centrifuged at 300 × g for 10 min, followed by 2000 × g for 10 min, to eliminate dead cells and cell debris. Subsequently, the supernatant was centrifuged at 10,000 × g for 30 min to eliminate microvesicles. Finally, the supernatant was centrifuged at 100,000 × g for 70 min to pellet the exosomes and remove contaminating proteins. After discarding the supernatant, the pellet was washed with cold PBS and subjected to another round of ultracentrifugation at 100,000 × g for 70 min (Beckman Coulter, California, USA). The supernatant was cautiously removed, and the pellet was resuspended in 100 µL of sterile PBS, aliquoted, and stored at −80 °C. All centrifugation steps were conducted at 4 °C. The protein concentration of H-exos/L-exos was measured directly by the bicinchoninic acid (BCA) method using the Enhanced BCA Protein Assay Kit (Beyotime, Shanghai, China) according to the manufacturer’s instructions, with absorbance read spectrophotometrically at 590 nm. Three independent loading experiments were performed for each group.
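Because the isolation protocol hinges on the exact spin sequence, it is written out below as data, one tuple per step, so each speed and duration can be checked against the paragraph above; the code simply prints the schedule.

```python
# Differential-centrifugation schedule from the text (all steps at 4 degrees C).
STEPS = [
    (300,     10, "remove dead cells"),
    (2_000,   10, "remove cell debris"),
    (10_000,  30, "remove microvesicles"),
    (100_000, 70, "pellet exosomes"),
    (100_000, 70, "PBS wash and re-pellet"),
]

for speed_g, minutes, purpose in STEPS:
    print(f"{speed_g:>7,} x g for {minutes:>2} min -> {purpose}")
```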
To determine whether SC-derived exosomes could be taken up by BMSCs, the fluorescent dye PKH67 (green) (Sigma-Aldrich, St. Louis, MO) was used to label exosomes according to the manufacturer’s protocol. Briefly, exosomes were extracted from SCs and diluted in PBS to a final concentration of 100 µg/ml, and 2 µl of PKH67 dye was mixed with 250 µl of Diluent C (PKH67 solution) . Then, 600 µl of the diluted exosomes and 200 µl of the PKH67 mixture were added to a 1.5-ml centrifugation tube and incubated at room temperature for 5 min. Centrifugation at 750 × g for 2 min with exosome spin columns (MW3000; Invitrogen, Vilnius, Lithuania) was performed to remove unincorporated dye. The labeled exosomes, resuspended in exosome-free DMEM, were incubated with BMSCs for 4 h at 37 °C. Nuclei were stained with DAPI (blue) (Solar Bio, Beijing, China), and F-actin was stained with tetramethylrhodamine isothiocyanate-phalloidin (red) (MKBio, Shanghai, China). Images were then captured using a confocal microscope.
Cloud-Seq Biotech (Shanghai, China) provided the high-throughput sequencing service. Briefly, the supernatant of SCs was collected and sent to the company. miRNA was extracted from the total RNA in exosomes using the TaqMan ABC miRNA Purification Kit (Thermo Fisher Scientific, Waltham, MA). An Illumina HiSeq4000 device was used to conduct miRNA sequencing.
BMSCs were subjected to osteogenic differentiation induction with osteogenic induction medium (OIM, OriCell, Cyagen Biosciences, Guangzhou, China). The entire induction process lasted 14–21 days. To assess the impact of exosomes on osteogenic differentiation, 25 µg/mL exosomes or an equal volume of PBS was added to the culture medium, with refreshment every three days. Additionally, when cell confluency reached 70–80%, BMSCs were transfected with miR-15b-5p inhibitor, miR-15b-5p mimic, or NC using Lipofectamine 3000 (Invitrogen). The reagents, including siRNA, miR-15b-5p mimic, miR-15b-5p mimic NC, miR-15b-5p inhibitor, and miR-15b-5p inhibitor NC, were synthesized by RiboBio (Guangzhou, China). To evaluate the level of osteogenic differentiation, quantitative reverse transcription-polymerase chain reaction (qRT-PCR), Western blot, alkaline phosphatase (ALP) staining, and alizarin red staining analyses were performed.
BMSCs were seeded in 24-well plates for 24 h. Subsequently, dual-luciferase vectors (pGL6-miR-Txnip-Mut-3′UTR or pGL6-miR-Txnip-WT-3′UTR) were co-transfected into the cells with miR-15b-5p mimics or mimic NC. Luciferase activity was assessed 48 h post-transfection using the Dual-Luciferase Reporter Assay System (Promega). The results were normalized to Renilla luciferase activity.
Total RNA from exosomes or BMSCs was isolated using TRIpure Extraction Reagent (EP013, ELK Biotechnology). cDNA was synthesized through reverse transcription using the EntiLink™ 1st Strand cDNA Synthesis Kit (EQ003, ELK Biotechnology). qRT-PCR for both mRNAs and miRNAs was conducted on a StepOne™ Real-Time PCR System (Life Technologies) using EnTurbo™ SYBR Green PCR SuperMix (EQ001, ELK Biotechnology). The expression levels of mRNA or miRNA were normalized and evaluated using the 2^−ΔΔCt method.
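The 2^−ΔΔCt normalization used here reduces to a few lines of code; the sketch below implements it with hypothetical Ct values (target miRNA against a reference gene, treated sample against control).

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt: dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values, not study data: ddCt = 6.1 - 8.2 = -2.1, so ~4.3-fold up.
print(f"Relative expression: {fold_change(24.1, 18.0, 26.3, 18.1):.2f}")
```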
Total protein was isolated from BMSCs or exosomes using RIPA (Aspen) according to the manufacturer’s protocol. After determining the protein sample concentrations with BCA (Aspen), equal amounts of protein samples were separated through sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Subsequently, they were transferred to a polyvinylidene fluoride (PVDF) membrane and incubated with 5% bovine serum albumin for 1 h at 25 °C. Following that, the membranes were incubated overnight at 4 °C with primary antibodies for CD81 (1:1000, Abcam, ab219209), CD63 (1:500, Affinity, AF5117), Calnexin (1:1000, Abcam, ab22595), RUNX2 (1:1000, Affinity, AF5186), BMP2 (1:1000, Affinity, AF5163), OCN (1:1000, Affinity, DF12303), TXNIP (1:1000, Affinity, DF7506), and β-actin (1:10,000, Abcam, ab8227). Then, they were stained with appropriate secondary antibodies at 1:2,000 for 30 min. The protein bands were visualized using an ECL reagent (SuperSignal West Femto Substrate; Thermo Fisher Scientific), and the gray value of the bands was analyzed by Image-Pro Plus 6.0 (MediaCybernetics, Bethesda, MD). In this experiment, the exosome biomarker proteins were detected according to the same protocol but without lysis or centrifugation.
To assess the level of osteogenic differentiation, BMSCs were induced for 14 or 21 days with specific osteogenic induction medium and evaluated with ALP staining or alizarin red staining, respectively. Following the protocol, BMSCs were rinsed three times with PBS, fixed with 4% paraformaldehyde for 20 min, and subsequently stained with ALP or alizarin red for 30 min at 25 °C. After staining, the BMSCs were washed with PBS and examined under a microscope. Absorbance was then measured at 405 nm (ALP) or 570 nm (alizarin red).
The maxillae from rats in the various groups were harvested. The samples were fixed with buffered paraformaldehyde for 48 h and subsequently decalcified with 20% EDTA at 25 °C for 8 weeks. After decalcification, the implant was gently removed from the tissue by unscrewing it counterclockwise with a screwdriver. The samples were embedded in paraffin, longitudinally sectioned, and subjected to H&E, Masson, and Luxol Fast Blue staining for histological analysis. Subsequently, the sections were imaged using a microscope. Immunohistochemistry was performed to detect the expression of neurofilament (NF), neuropeptide Y (NPY), and S100 in peri-implant tissue. Slides of the maxillae were collected and pretreated as previously described. Afterward, the slides were incubated in pepsin (BBI, Shanghai, China) for antigen retrieval and in 3% H₂O₂ to eliminate endogenous peroxidase. The slides were then permeabilized in 0.3% Triton X-100 for 20 min and blocked in 5% BSA (Boster Biological Technology, Pleasanton, CA) for 1 h. Primary antibodies against NF (1:100, Affinity, DF13211), NPY (1:100, Affinity, DF6431), and S100 (1:100, Affinity, AF0251) were diluted in PBS, and the slides were incubated at 4 °C overnight. The next day, the slides were incubated with the secondary antibody (Boster Biological Technology) for 1 h at 37 °C. After thorough washing, staining was detected with a diaminobenzidine substrate chromogen system (Boster Biological Technology). The slides were washed and counterstained for 5 min with filtered Mayer’s hematoxylin solution (Beyotime). After dehydration in alcohol and incubation in xylene, the slides were covered with neutral balsam (Biosharp, Anhui, China). Images were captured under an optical microscope. Immunofluorescence staining was performed to detect the expression of CD63, glial fibrillary acidic protein (GFAP), and S100 in peri-implant tissue. Cells positive for GFAP/S100 were considered SCs. The slides were collected, deparaffinized, hydrated, permeabilized, and blocked. Antibodies against GFAP (1:100, Abcam, ab7260), CD63 (1:100, Affinity, AF5117), and S100 (1:100, Affinity, AF0251) were diluted in PBS, and the slides were incubated away from light at 4 °C overnight. The next day, the slides were incubated with the secondary antibody (Yeasen) for 1 h at 37 °C. After thorough washing, the slides were stained with DAPI for 5 min and directly imaged under an optical microscope.
Observation of nerve positivity in tissues by immunohistochemistry

Thin slices (5 μm thick) were taken from the central part of the samples. The slices were evaluated in a blinded manner by an experienced histologist under a microscope (CX33, Olympus, Japan). The region of interest was defined as the area within a 500 μm radius around the implants, encompassing the peri-implant region. The number of NF/NPY-positive structures (brown or dark brown) per section within the region of interest was counted. If more than five positive structures were observed per section around an implant, the sample was recorded as positive. The positivity rate was calculated as the number of positive samples divided by the total number of samples.
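The scoring rule above translates directly into code; the sketch below applies the five-structure cut-off to per-sample counts (the counts shown are hypothetical).

```python
def positivity_rate(structure_counts, cutoff=5):
    """A sample is positive if more than `cutoff` NF/NPY-positive structures
    are seen per section; the rate is positives over total samples."""
    positives = sum(1 for n in structure_counts if n > cutoff)
    return positives / len(structure_counts)

counts = [7, 3, 6, 2, 9]  # hypothetical structures per section, one value per sample
print(f"Positivity rate: {positivity_rate(counts):.0%}")
```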
Statistical analysis

Values are presented as mean ± SD and were analyzed with IBM SPSS Statistics version 25.0 (IBM, Armonk, NY; RRID: SCR_002865). All experiments were repeated at least three times. One-way ANOVA followed by the least significant difference (LSD) post hoc test was used to compare independent groups. Fisher's exact test was used to compare positivity rates. A value of p < 0.05 was considered statistically significant.
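A sketch of the analysis pipeline in Python is given below, assuming SciPy as a stand-in for SPSS. The LSD post hoc step is approximated here by unadjusted pairwise t-tests run only after a significant omnibus ANOVA, and all numbers are hypothetical.

```python
from scipy import stats

# Hypothetical measurements for three groups.
ctrl, l_exo, h_exo = [5.1, 5.4, 4.9], [6.0, 6.3, 5.8], [3.9, 4.2, 4.0]
f_stat, p_anova = stats.f_oneway(ctrl, l_exo, h_exo)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:  # LSD-style unadjusted pairwise comparison
    t, p = stats.ttest_ind(l_exo, h_exo)
    print(f"L-exo vs H-exo: t = {t:.2f}, p = {p:.4f}")

# Fisher's exact test on a 2x2 table of positive/negative samples per group.
_, p_fisher = stats.fisher_exact([[4, 1], [1, 4]])
print(f"Fisher's exact test: p = {p_fisher:.4f}")
```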
T2DM condition changes the peri-implant osteogenesis and peri-implant peripheral nerves in T2DM rats

The assessment of the T2DM rat model
Our findings indicated that all rats induced by the high-fat diet and STZ developed T2DM. As illustrated in Figure , the T2DM group displayed significantly increased food intake, water consumption, and urine output compared to the control group. Additionally, the rats in the T2DM group exhibited lower body weight than those in the control group, indicative of typical T2DM symptoms. As presented in Figure , one week post STZ injection, the T2DM group showed significantly elevated random blood glucose compared to the control group, and glucose remained above 16.7 mmol/L throughout the experiment. These results indicated the successful establishment of T2DM rat models with hyperglycemia.
The analysis of peri-implant bone formation

The 3D reconstructions of the maxillae with dental implants are shown in Figure F, demonstrating bone formation around the implants in the different rat groups, with the control group exhibiting the highest levels. Figure A presents the BMD data, indicating that the T2DM group had significantly lower BMD than the control group (p < 0.05). Similarly, Figure B illustrates the BV/TV ratio, where the T2DM group also showed a significantly lower value (p < 0.05). Figure C highlights the BS/TV ratio, revealing a considerable reduction in the T2DM group compared to controls (p < 0.01). In Figure D, the Tb.Sp values indicate that the T2DM group had significantly higher values (p < 0.001), while Figure E shows that the T2DM group exhibited significantly lower Tb.N (p < 0.05). Hematoxylin and eosin (H&E) staining as well as Masson staining were used to examine the formation of new bone around the implant. As illustrated in Figure G, more osteoid tissue surrounded the implant in the control group than in the T2DM group, with more loose connective tissue around the implant in T2DM rats. In the Masson-stained samples (Figure E), new bone stained blue, while mature bone stained red. The mature trabecular bone area was markedly larger in the control group than in the T2DM group. These results indicated that the level of bone formation was lower in T2DM rats than in controls.
The observation of peri-implant peripheral nerves

Histological observation revealed nerve fibers in the peri-implant bone around the implants (Fig. ). NF immunoreactivity (Fig. A) showed that nerve fibers were distributed mainly perivascularly and within bone tissue. NPY immunoreactivity (Fig. B) showed the same nerve distribution. The results indicated a higher nerve positivity rate in the control group compared to the T2DM group, suggesting that there were more nerves surrounding the dental implant in the control group (Fig. C). In addition, Luxol Fast Blue staining indicated that T2DM rats exhibited a reduced number of myelinated nerve fibers in the peri-implant area (Figure ). Immunofluorescence staining showed the expression of CD63 in peri-implant bone tissue (Fig. D). Both immunohistochemistry and immunofluorescence revealed S100-positive regions, indicative of Schwann cells, in the bone and soft tissue surrounding the implants, confirming the presence of Schwann cells (Figure ). Interestingly, CD63 was expressed in GFAP-positive cells, indicating that SCs secrete extracellular vesicles (EVs) in peri-implant bone (Fig. D).
H-exos hinder osteogenesis

H-exos impair peri-implant osteogenesis in a rat model
Exosomes derived from SCs were examined by transmission electron microscopy (TEM), dynamic light scattering (DLS), and Western blotting. TEM images indicated a cup-shaped morphology of the particles (Fig. B). The particle sizes, as determined by DLS analysis, ranged from 30 to 150 nm (Fig. C), and the particle concentration was 4.75 × 10¹⁰ particles/ml (Fig. A). Furthermore, Western blotting confirmed the presence of the specific exosome surface markers CD81 and CD63 (Fig. D). These results provide evidence for the effective isolation of exosomes from SCs. To explore the impact of SC-derived exosomes on in vivo peri-implant osteogenesis, a rat dental implantation model was employed. The rats were treated with PBS, L-exos, or H-exos. For each injection, 100 µg of SC-derived exosomes was dissolved in 50 µl of saline; in the blank control group, 50 µl of saline was injected at the same site, with the dosage corresponding to previous reports . Peri-implant bone formation was evaluated through micro-CT examinations. The 3D reconstructions of the maxillae with the placed dental implants are shown in Fig. E, demonstrating the bone formation around the implants of the rats in the different groups. The results indicated that, compared with the L-exo group, the H-exo group had lower BMD, BV/TV, Tb.N, and Tb.Th (Fig. F-I). Quantitative analysis of the micro-CT data also demonstrated that the BMD and BV/TV values of the L-exo group were significantly higher than those of the PBS group (Fig. F, G). H&E and Masson histology images revealed a noticeable impediment of bone formation around the implant in rats treated with H-exos, in contrast to rats in the other two groups. These findings indicate that H-exos negatively affected peri-implant bone formation in rat models.
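As an aside on the dosing arithmetic, the helper below derives how much exosome stock and saline make up one 100 µg/50 µl injection, given a BCA-measured stock concentration; the concentration used is a hypothetical example, not a measured value from the study.

```python
def injection_mix_ul(stock_ug_per_ul, dose_ug=100.0, final_ul=50.0):
    """Stock volume and saline top-up for one peri-implant injection."""
    stock_ul = dose_ug / stock_ug_per_ul
    if stock_ul > final_ul:
        raise ValueError("Stock too dilute for a 50 ul injection; concentrate first.")
    return stock_ul, final_ul - stock_ul

stock_ul, saline_ul = injection_mix_ul(stock_ug_per_ul=4.0)  # hypothetical BCA result
print(f"Per injection: {stock_ul:.1f} ul stock + {saline_ul:.1f} ul saline")
```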
H-exos hinder BMSCs’ osteogenic differentiation

The exosome uptake assay demonstrated that PKH67-labeled SC-exos could be internalized by BMSCs (Fig. A). Based on the results of the cytotoxicity test, BMSCs were co-cultured with 25 µg/ml L-exos, 25 µg/ml H-exos, or an equivalent volume of PBS, and the protein and gene expression of the osteogenesis-related factors RUNX2, BMP2, and OCN was assessed using Western blotting (Fig. K-M) and qRT-PCR (Fig. G-I). As depicted in Fig. G-M, the protein and mRNA levels of RUNX2, BMP2, and OCN were downregulated in the H-exo group compared to the L-exo group. Furthermore, ALP staining and alizarin red staining revealed that the mineralization proportion increased with L-exos but decreased with H-exos, compared to the PBS group (Fig. C-F). These findings suggest that H-exos impede the osteogenic differentiation of BMSCs, while L-exos promote it.
The identification of differentially expressed miRNAs in L-exos and H-exos

The present study further explored whether the impact of SC exosomes on osteogenesis was linked to their miRNA cargoes. Ultraviolet (UV) treatment served as a non-specific method to impair the RNA molecules present in exosomes . SC-exos treated with UV lost their capacity to regulate osteogenesis (Fig. A). Differentially expressed miRNAs were then screened by high-throughput sequencing of the miRNAs in H-exos and L-exos. The volcano plot showed 62 significantly downregulated and 84 upregulated miRNAs (Fig. B). Cluster analysis of these differentially expressed miRNAs is shown in the heatmap (Fig. C). The heatmap revealed high intra-group consistency, with miRNA IDs displayed at the right end; the closer a miRNA ID lies to the top or bottom, the greater the between-group difference in expression. Enrichment analysis was performed to identify the biological processes and molecular functions in which the differentially expressed miRNAs were enriched. The enriched downregulated miRNAs are shown in Fig. D, and the enriched upregulated miRNAs in Fig. E. Furthermore, correlation analysis was performed on the top 10 upregulated and downregulated differentially expressed miRNAs (Fig. F). miR-15b-5p was found to be the miRNA most strongly correlated with the expression of the other miRNAs, and it was downregulated in H-exos compared with L-exos.
The expression level and function of miR-15b-5p were verified

The expression of miR-15b-5p was verified to be lower in exosomes from the peripheral blood of T2DM rats and from high glucose-stimulated SCs, consistent with the miRNA sequencing results (Fig. A, B). GEO database mining was also performed. The miRNA-seq-based dataset GSE27645 was accessed, including data from peripheral blood samples of 12 T2DM patients and 5 healthy individuals. The volcano plot showed that miR-15b-5p was one of the top 10 differentially expressed miRNAs in the peripheral blood of T2DM patients (Fig. D), and the expression pattern of hsa-miR-15b-5p matched the results in rats (Fig. C). To assess the impact of miR-15b-5p on the osteogenic differentiation of BMSCs, the cells were transfected with miR-15b-5p mimic, miR-15b-5p mimic NC, miR-15b-5p inhibitor, or miR-15b-5p inhibitor NC. As depicted in Fig. H-I, the mRNA levels of RUNX2 and BMP2 were increased in the miR-15b-5p mimic group compared with the other groups. The protein levels of RUNX2 and BMP2 were likewise increased in the miR-15b-5p mimic group (Fig. J-L). Furthermore, as shown by ALP staining, the proportion of mineralization was increased by the miR-15b-5p mimic and inhibited by the miR-15b-5p inhibitor (Fig. G). Alizarin red staining additionally revealed that the proportion of mineralization was increased by the introduction of the miR-15b-5p mimic (Fig. F). These data indicated that the miR-15b-5p mimic promotes the osteogenic differentiation of BMSCs. For the in vivo study, miR-15b-5p agomir and miR-15b-5p antagomir were injected around the implants of the rats to demonstrate the in vivo effect of miR-15b-5p. The injected dose of the agomir was 0.02 nmol and that of the antagomir 0.04 nmol, as recommended by RiboBio (Guangzhou, China). The 3D reconstructions of the maxillae with the placed dental implants are shown in Fig. M, demonstrating the bone formation around the implants of the rats in the different groups. The results indicated that the agomir group had higher BV/TV, BMD, and Tb.Th (Fig. N, O, Q). Quantitative analysis of the micro-CT data also demonstrated that the BMD and BV/TV values of the antagomir group were significantly decreased and the Tb.Sp value significantly increased (Fig. N, P, Q).
miR-15b-5p modulates osteogenic differentiation via targeting TXNIP

To elucidate the mechanism through which miR-15b-5p modulates the osteogenic differentiation of BMSCs, the online tool miRWalk was employed to identify potential targets of miR-15b-5p. GO analysis was then performed, and 25 genes were found in the intersection set of the CC, BP, Reactome, MF, and KEGG annotations (Fig. A, B). Among these 25 genes, TXNIP was closely related to ROS signal transduction, NLRP3 inflammasome activation, and the ensuing inflammation, as shown in the PPI network and KEGG pathway figures; these processes are known to be closely related to the osteogenic differentiation of BMSCs (Fig. C, D). As a result, TXNIP was chosen as the target gene for subsequent investigations. A luciferase reporter assay was employed to assess the interaction between the 3′-UTR of TXNIP and miR-15b-5p. As shown in Fig. F, the luciferase activity of WT-Txnip was decreased by miR-15b-5p overexpression. Additionally, inhibition of miR-15b-5p markedly increased the expression of Txnip at the protein level (Fig. G, H).
L-exos overcome poor bone formation in T2DM rats

Figure A displays the 3D reconstructions of the maxillae with dental implants, illustrating bone formation around the implants in the different rat groups. Notably, the T2DM with L-exo group exhibited enhanced bone formation compared to the T2DM group. Figure B highlights the BV/TV ratio, indicating that the T2DM group had a significantly lower ratio than the T2DM with L-exo group. Similarly, Fig. C presents the Tb.N values, showing a significant reduction in the T2DM group compared to the T2DM with L-exo group. In Fig. D, the Tb.Th measurements reveal a similar trend, with the T2DM group demonstrating significantly lower Tb.Th values than the T2DM with L-exo group. Lastly, Fig. E shows the BMD results, confirming that the T2DM group had significantly lower BMD than the T2DM with L-exo group.
The importance of neural regulation for bone formation has gradually attracted the attention of researchers in recent years. The nervous system influences the activity of bone cells directly or indirectly by releasing neurotransmitters, regulating neuropeptides, and participating in neuroendocrine regulation . Communication between peripheral nerves and bone is not only crucial for bone growth and repair but also plays a key role in maintaining the normal function and homeostasis of bone tissue . Therefore, understanding the impact of peripheral nerve regulation on peri-implant bone formation is of clinical significance for improving the success rate of dental implant treatment and promoting bone healing under T2DM conditions. This study showed that the T2DM condition changed the distribution and function of peripheral nerves in peri-implant bone tissue and revealed that peripheral nerves participate, via SC exosomes, in the inhibition of bone formation under T2DM conditions. In 2020, researchers reported that SC exosomes effectively enhanced the biological activity of a titanium alloy scaffold, confirming that SC exosomes can promote the migration and osteogenic differentiation of BMSCs . BMSC osteogenic differentiation is critical for bone formation around implants. In the context of osseointegration, the successful integration of an implant into the surrounding bone tissue heavily depends on the recruitment and differentiation of BMSCs . SCs from peripheral nerves are sensitive to glucose levels, and studies have shown that hyperglycemia can damage the normal function of Schwann cells . Previous studies revealed the role of exosomes derived from hyperglycemia-stimulated Schwann cells in promoting the development of DPN . However, no research had explored whether exosomes derived from SCs under high glucose conditions participate in the impaired bone regeneration seen in T2DM patients. The present study indicated that exosomes obtained from high glucose-stimulated SCs adversely affected bone regeneration. Specifically, the local delivery of exosomes derived from SCs cultured under high glucose conditions hindered peri-implant bone formation in healthy rats and impeded the osteogenic differentiation of BMSCs. These results provide evidence that SC-derived exosomes play a critical role in regulating bone regeneration. To elucidate the mechanisms by which high glucose-stimulated SC-derived exosomes impede bone regeneration, we performed high-throughput sequencing and bioinformatics analyses of the miRNAs secreted under varying glucose conditions. Notably, our findings indicate a significant downregulation of miR-15b-5p in exosomes from high glucose-stimulated SCs, consistent with data from T2DM patients’ samples in the GEO database and recent studies on T2DM patients . These results strongly indicate that diabetes reduces miR-15b-5p expression, which in turn may contribute to bone repair defects. The validation of these results with human samples further strengthens the hypothesis that targeting the miR-15b-5p pathway could offer a novel therapeutic strategy to enhance bone regeneration in patients with diabetes. Understanding the functional role of miR-15b-5p in this context may lead to new insights into potential interventions that could mitigate the bone repair defects associated with T2DM.
In recent years, miR-15b-5p has been found to reduce damage caused by high glucose, such as podocyte and kidney cell damage, and to inhibit oxidative stress and inflammatory reactions; it is therefore considered an important potential target for the treatment of diabetes . The present study confirmed, for the first time, the role of miR-15b-5p in peri-implant bone formation through the regulation of BMSCs. Beyond BMSCs, recent studies have shown that supplementing miR-15b-5p via magnetic nanoparticles may affect osteoclast activity, inhibiting osteoclast differentiation and thereby altering diabetic bone turnover . This suggests that therapeutic strategies targeting miR-15b-5p could be beneficial in modulating both osteoclast and osteoblast activity and in improving bone health in diabetic patients. MicroRNAs (miRNAs) are small non-coding RNA molecules that play crucial roles in the post-transcriptional regulation of gene expression. MiRNAs achieve their regulatory functions by binding to specific mRNA molecules, thereby modulating the translation or stability of these target transcripts. This regulatory mechanism has profound implications for diverse biological processes, including development, differentiation, apoptosis, and immune responses . This study further screened the target genes of miR-15b-5p through bioinformatics methods and verified that miR-15b-5p can regulate the translation of TXNIP . TXNIP is a potential target for diabetes intervention; drugs targeting TXNIP to treat diabetes have been developed, and clinical trials are under way . TXNIP is a crucial protein involved in cellular redox regulation and various physiological processes. Its central role is evident in the cellular response to oxidative stress, apoptosis, inflammatory reactions, and the maintenance of glucose homeostasis. Under conditions of oxidative stress or high glucose, TXNIP binds to thioredoxin, inhibiting its antioxidant activity and leading to increased oxidative stress and impaired cellular redox balance. In the context of diabetes, TXNIP has gained prominence due to its involvement in glucose metabolism. Elevated glucose levels can upregulate TXNIP expression, contributing to oxidative stress and inflammation. TXNIP is also implicated in the pathogenesis of diabetes-related complications, such as cardiovascular diseases and nephropathy. Moreover, TXNIP is associated with NLRP3 inflammasome activation, a key component of the innate immune response. Its regulatory role in inflammation further underscores its significance in various disease processes. Further research on TXNIP is needed to unveil its intricate involvement in cellular functions and its potential as a therapeutic target for conditions associated with oxidative stress and inflammation. The role of TXNIP in osteogenesis has been reported: inhibition of Txnip in cultured vascular smooth muscle cells has been shown to accelerate bone differentiation and upregulate bone morphogenetic protein (BMP) signaling . Treatment with the BMP signaling inhibitor K02288 can eliminate the inhibitory impact of Txnip on bone differentiation . In addition, TXNIP is an upstream gene of NLRP3, and NLRP3 inflammasome activation is closely related to osteogenic differentiation. Activation of the NLRP3 inflammasome in mesenchymal stem cells enhances adipogenic differentiation and inhibits osteogenic differentiation . NLRP3 can also mediate aseptic inflammation .
In the past, it was believed that peri-implant inflammation in T2DM was caused by distinctive high glucose-induced microbial communities . However, the target gene TXNIP confirmed in the present study strongly suggests that the role of high glucose-mediated aseptic inflammation in this process should not be ignored. TXNIP is expressed in osteoblasts, osteoclasts, and chondrocytes and affects the differentiation and functioning of skeletal cells through both redox-dependent and redox-independent regulatory mechanisms . Therefore, TXNIP is a potential regulatory and functional factor in bone metabolism and a possible new target for the treatment of bone metabolism-related diseases . Building on miRNA-related research, the development of specific miRNA mimetics or inhibitors to regulate the expression of certain proteins has great potential for translation from basic research to clinical application. However, current research has shown that high-dose miRNA mimetics or inhibitors can initiate innate immune responses, causing unwanted immune reactions in the body . In addition, current modification methods are not sufficient to ensure the stability of miRNA during delivery . As natural vesicles, exosomes have good stability and strong targeting ability, unique advantages that make them an appropriate choice for clinical treatment . In the present research, we used L-exos and PBS as two different control groups and were surprised to find that L-exos promoted peri-implant bone formation in healthy rats and also promoted the osteogenic differentiation of BMSCs. It has previously been reported that exosomes from SCs can promote osteogenesis, and the results of the present study confirm this . Further experiments also showed that L-exo injection can overcome abnormal osteogenesis around the implant in T2DM rats. Based on these results, SC extracellular vesicle therapy is expected to become a new method and strategy for improving the osseointegration of dental implants in T2DM patients. Research has shown that, with appropriate biosafety measures, dissection techniques, and culture methods, large numbers of SCs can be obtained from small amounts of adult nerve fragments and continuously and stably expanded for more than six generations . The human gastrocnemial nerve is preferred for obtaining human SCs because of its ease of access and because it can be harvested through minimally invasive surgery without damaging other bodily functions . This cell acquisition and expansion protocol has been approved for clinical trials and SC-based treatment. Given these characteristics of SCs, the implementation of SC exosome therapy is practical and promising.
In conclusion, the present study offers valuable and novel insights into the influence of SC-derived exosomes on the regulation of peri-implant osteogenesis. These findings also underscore the therapeutic potential of miR-15b-5p and SC exosomes in mitigating the poor bone regeneration of T2DM patients.
Below is the link to the electronic supplementary material. Supplementary Material 1: Figure S1. Evaluation of the effect of high-glucose osmotic pressure on the viability of Schwann cells. Supplementary Material 2: Figure S2. Evaluation of the T2DM rat model. Supplementary Material 3: Figure S3. The analysis of peri-implant bone formation. Supplementary Material 4: Figure S4. Luxol Fast Blue staining of peri-implant tissue. Supplementary Material 5: Figure S5. Immunohistochemistry and immunofluorescence staining for S100 in peri-implant bone tissue.
Effect of treatment with dental space maintainers after the early extraction of the second primary molar: a systematic review

Early loss of primary teeth, because of tooth decay or trauma, can affect the primary dentition, the permanent dentition, or both ( , ). It can cause crowding that may affect a child’s self-esteem and quality of life ( ) and lead to changes in the dental arch, such as ectopic eruption of permanent teeth and other malocclusions ( , ). Today, crowding is the most common problem seen by orthodontists, and dentists and patients need guidance on how to intervene to prevent it ( ). The greatest dimensional alterations have been seen after the loss of the second primary molars and have mainly been attributed to the mesial migration of the first permanent molar ( , , ). To prevent this space loss, interceptive treatment with different types of space maintainers (SMs) has been used ( ). Given their frequent use in paediatric dentistry after the early loss of primary molars ( ), it is important to understand the clinical evidence and costs associated with SM, as well as the patient’s own experience ( ). Earlier studies have also investigated interceptive treatment with SM with regard to potential side effects on periodontal health and the presence of caries ( ). However, considerable variety in treatment methods, eligibility criteria, study designs, and research approaches has resulted in outcomes and conclusions that can be conflicting and may sometimes be difficult to interpret and compare. Therefore, an overview of the present knowledge seems important. Earlier literature reviews have investigated the effect of SM in children ( ). However, there are no systematic reviews evaluating the use of SM compared with no treatment after the premature loss of the second primary molar.
The objective of this systematic review was to assess and evaluate, in a structured and evidence-based manner, the existing scientific evidence regarding the clinical effects, side effects, patient satisfaction, and cost effects of interceptive treatment with SM compared with no treatment after the premature loss of the second primary molar in children.
The present systematic review was conducted using the criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) ( ). The study protocol was registered at PROSPERO (registration number CRD42021290130).

Protocol and eligibility criteria
Protocol and eligibility criteria

The problem specification and the criteria for the final search were developed according to the PICO strategy ( ). Eligible study designs were randomized controlled trials, prospective studies, economic evaluations, and non-randomized clinical studies with a defined control group. No restriction on publication year or follow-up duration was applied, and only studies written in English or Swedish with available abstracts were included.
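As a concrete illustration, the PICO elements can be read directly from the stated objective of this review (this breakdown restates the authors' framework and adds no new criteria):

P (Population): children with premature loss of the second primary molar
I (Intervention): interceptive treatment with a space maintainer (SM)
C (Comparison): no treatment
O (Outcomes): clinical effects, side effects, patient satisfaction, and cost effects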
Literature search

The literature search was performed in four databases (last search 30 August 2022; no date restriction): PubMed, Cochrane Central Register of Controlled Trials (CENTRAL), Scopus, and Web of Science. The databases were searched with the following keywords: ((((((((((space maintainers) OR space maintainer) OR space maintenance) OR (Band and loop)) OR (Crown and loop)) OR Nance palatal arch) OR Lower lingual arch) OR Resin space maintainers) OR bonded space maintainer) AND orthodontics). In addition, the reference lists of the relevant included articles were screened manually, and articles with relevant titles were obtained. A further search for grey literature was made in opengrey.eu.
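For readers wishing to re-run or update the PubMed arm of this search programmatically, a minimal sketch using Biopython's Entrez interface is shown below. This is illustrative only and not the workflow the authors used; the e-mail address is a placeholder, and retmax = 2000 is an assumed ceiling chosen to exceed the 1491 records reported.

```python
# Minimal sketch: re-running the PubMed search via the NCBI Entrez API (Biopython).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

query = (
    "((((((((((space maintainers) OR space maintainer) OR space maintenance) "
    "OR (Band and loop)) OR (Crown and loop)) OR Nance palatal arch) "
    "OR Lower lingual arch) OR Resin space maintainers) "
    "OR bonded space maintainer) AND orthodontics)"
)

# retmax is an assumed ceiling large enough to capture all hits (1491 reported here).
handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```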
Extraction and interpretation of data

The extraction and interpretation of data were completed by both authors individually. Information was collected on each report (author, year of publication, country), the study (study design, sample characteristics), the participants (indication for the use of SM; type of dentition, e.g., primary or mixed), the research design and features (sampling mechanism, treatment assignment mechanism, follow-up period, dropouts, complications), and the intervention (type of SM, clinical effect, cost effect, periodontal health, caries, patient satisfaction). In cases of ambiguity, consensus was achieved through discussion between the two authors. Both authors independently analysed the retrieved titles and abstracts against the defined eligibility criteria; if the methodology of an article was unclear, the full text was analysed. Duplicates retrieved from the literature search were excluded. In addition, a hand search was made by manually screening the titles in the reference lists of the finally included articles.
Risk of bias of individual studies

The risk of bias was evaluated using the ROBINS-I tool (Risk Of Bias In Non-randomized Studies of Interventions), under which each included study was rated as having 'low', 'moderate', 'serious', 'critical', or 'unclear' risk of bias ( ). The articles were evaluated independently by both authors; differing opinions were discussed until a common rating was agreed.
Risk of bias across studies

The risk of bias across studies would have been considered if the methodology had been comparable across studies.
Summary of results and statistical analysis

A meta-analysis would have been performed if there had been homogeneity in the study designs and treatments. Relevant data of interest were collected and organized into tables to present the study and patient characteristics of the included studies as well as the effects of SM.
Literature search

Of the 1491 articles retrieved by the electronic searches, 137 abstracts were considered relevant, of which 31 were further analysed in full text. Two articles met our eligibility criteria; see . The searches in CENTRAL, Scopus, and Web of Science, and the screening of reference lists and grey literature, did not yield any additional articles beyond those included from PubMed.
Outcome

The data are summarized in . Both were prospective clinical studies, one conducted in Jordan and the other in Turkey. Both compared treatment with different types of SM against a control group: in one study, the control group consisted of tooth surfaces without SM in the same patients treated with SM; in the other, it consisted of a separate group of patients without SM ( , ). Ethical approval was granted for both studies, by the JUST Institutional Research Board and Ankara University's ethics committee, respectively ( , ). However, different outcomes were measured. Space conditions were measured by studying the arch length and extraction space using lateral cephalograms and study casts ( ). Periodontal status was measured with regard to plaque index, gingival index, and bleeding on probing ( ). The authors concluded that the use of different space maintainers led overall to an increase in periodontal index parameters, plaque index, and the number of microorganisms in the oral cavity ( ). Regarding space changes, lower molar angulation to the mandibular plane increased in all groups, but significance was reached only in patients without SM, whereas the lower incisor inclination to the mandibular plane increased in patients with SM after the extraction of the primary teeth ( ). Neither study included or reported information on the prevention of malocclusions, patient satisfaction, cost-effectiveness, or the presence of caries.
Risk of bias within studies

The overall risk of bias was assessed as 'moderate' in both studies ( , ); see . Both studies included statistical analyses, but both had small sample sizes and did not describe the recruitment of patients. In one study, no power analysis was made, and the groups were divided by alternation of odd and even numbers; the study was therefore assessed to be a quasi-randomized trial ( ). The study by Arikan et al. was classified as a cohort (observational) study; its randomization process was not described ( ). Eligibility criteria were stated in both studies, and the combination of lateral cephalograms, dental pantomograms, and study casts to measure alterations in the dentition and space was considered a relevant method ( ). A clinical examination comprising plaque index according to Silness and Löe ( ), gingival index, and bleeding on probing to measure gingival health was also considered acceptable. Drop-out rates were disclosed in both studies; however, in one study, no further details (e.g., age, gender) were given for the dropouts, nor on how the loss to follow-up was distributed between the groups ( ).
Risk of bias across studies and meta-analysis

Due to the variation in methodology and study design, neither an assessment of the risk of bias across studies nor a meta-analysis could be performed.
Summary of evidence

This systematic review summarized the current evidence regarding the effect of space maintainers after the premature loss of the second primary molar. Two studies, both with a moderate risk of bias, were included ( , ). The studies measured clinical effectiveness, including space loss and periodontal health ( , ). Although there are several studies measuring the effect of SM in general, many of them did not have a control group without SM and/or did not study the effect of SM after the premature loss of the primary second molar. The main findings of the included studies are that treatment with SM seemed to preserve arch length while increasing the inclination of the lower incisors to the mandibular plane ( ). The lower molar angulation in patients without SM was also significantly increased ( ). Treatment additionally caused an increase in plaque accumulation in groups treated with fixed SM compared with patients without SM ( ). No studies that fulfilled the eligibility criteria were found on cost-effectiveness, caries, or patient satisfaction.
Clinical effectiveness

Today, the resources available within the health sector (personnel, time, facilities, equipment, and knowledge) are limited ( ). Hence, failure to analyse the economic aspects of dental health services may lead to unsustainable over-expenditure or a reduction of resources in other areas of healthcare ( ). The studies included in our report did not examine cost-effectiveness, patient satisfaction, or the long-term benefit of using SM. Hence, there is currently insufficient evidence on which dental healthcare providers can base the use of SM in children with premature loss of the second primary molar. In this decision, the survival rate and possible complications of treatment with SM, including cement failure, band breakage, solder breakage, wire breakage, and loss of the appliance, should be considered ( ). Owais et al. investigated complications in groups with a lower lingual holding arch (LLHA) of two different gauges (0.9 and 1.25 mm) of stainless steel (SS). Patients with the 1.25 mm LLHA had more problems than patients treated with the 0.9 mm LLHA regarding cement failures, band breakages, and solder breakages, which the authors explained by the stiffness of the 1.25 mm SS wire ( ). In both treatment groups, the proclination of the lower incisors increased relative to the A-Pog line. These results are in line with earlier findings showing that arch perimeter loss can be reduced, but at the expense of proclination of the mandibular incisors, in patients treated with a lower lingual arch ( ). However, contradictory results with backward tipping of the lower incisors have also been reported ( ): in a study investigating the effect of a lower lingual arch in 23 children, a backward tipping of the lower incisors by 0.51 degrees was observed over a follow-up period of 18 months ( ). The angulation of the lower first permanent molar as a result of the LLHA has also been investigated, and a distal tipping was found in all groups, in agreement with findings reported by others ( , ). Overall, the results from Owais et al. presented in this review showed that both groups treated with SM preserved arch length throughout the study duration ( ). These results contradict an earlier study by Alnahwi et al., measuring space loss following the premature loss of primary second molars, in which space loss was similar in the groups with and without SM ( ). A possible explanation for the differing results is that space loss was measured differently in these studies. Owais et al. used lateral cephalograms, dental pantomograms, and study casts, and SMs were inserted, followed by the extraction of the primary second molar ( ). Alnahwi et al., by contrast, measured space loss using bitewing and periapical radiographs with no information regarding the calibration of the images; in most cases SMs were placed within 2 months after the extraction and, for 10 teeth, between 1 and 2 years after the extraction. According to Macena et al., the major space changes in the dental arches occur during the first 3 months after the extraction of the deciduous molars, indicating that SM should be applied immediately after extraction ( ). Tunison et al. ( ) also highlight the impact of individual occlusal characteristics on space loss.
Besides the location of the primary tooth, it has been shown that space loss is greater in the mandible than in the maxilla, when tooth loss occurs at an earlier age, and in crowded compared with spaced dentitions ( ). Another possible side effect of SM is increased eruption difficulty of the second permanent molar ( ); this effect was not mentioned in the studies included in our report.
Periodontal disease and caries

Previous studies have shown that there may be a correlation between the use of orthodontic appliances and the retention of plaque and the development of gingivitis ( ). Arikan et al. examined changes in the microflora and in parameters including plaque index, bleeding index, pocket depth, and the presence of E. faecalis after the use of SM ( ). It was concluded that both fixed and removable SM can cause an increase in plaque accumulation. Children with fixed appliances showed an increase in plaque and bleeding index compared with patients with removable SM, and the authors suggested that special attention should therefore be given to young patients with fixed appliances. Other studies investigating removable and fixed orthodontic appliances have shown similar results for periodontal parameters such as bleeding on probing and pocket depth, and for periodontal and microbiologic parameters with orthodontic bands compared with a control group ( , ). However, periodontal parameters such as bleeding on probing, plaque accumulation, and gingivitis can be seen as temporary and reversible symptoms of poor oral hygiene ( ). Severe conditions also include loss of the marginal bone ( ); this was not examined in the included studies. Difficulties in maintaining good oral hygiene, and the increase in plaque accumulation, may contribute to demineralization of enamel surfaces in patients with fixed orthodontic bands ( ). Even though plaque accumulation was higher in patients with SM, the potential effect on the development of caries was not investigated ( ). Caries is the most common reason for early extraction of primary teeth ( ), so patients treated with SM may have a history of caries, and previous caries experience is the single strongest predictor of future caries ( ). The potential risk of caries development in patients treated with SM is therefore important to study in the future.
Strengths and limitations

This systematic literature review was conducted according to PRISMA, which fulfils the criteria for repeatability and minimizes the risk of the conclusions being affected by chance or arbitrariness. The included studies showed a clear variation in study design and measured variables, which made a meta-analysis impossible to perform. During the search, several studies were excluded because they lacked a control group without SM. Both included studies had small sample sizes and did not describe the recruitment of patients, and in one of the studies, no power analysis was made ( ). Given the small sample sizes and the absence of a power analysis, there is a risk of low statistical power, so non-significant outcomes may be obtained even where true differences exist. Other limitations were the moderate quality of the included studies and the lack of studies in the fields of patient satisfaction, caries, and cost-effectiveness.
Future research

More and better prospective clinical trials and randomized controlled trials with sufficient sample sizes and control groups are required to determine the effect of treatment after premature extraction of primary second molars. Future research should also include analyses of the costs and side effects of the treatment, as well as patient satisfaction.
The available evidence shows that treatment with SM may preserve arch length, but patients treated with SM also showed an increase in plaque accumulation and in some other periodontal parameters. These outcomes should be interpreted very cautiously, however, owing to the methodological limitations of the included studies. Overall, there is a lack of evidence in the literature regarding the clinical effectiveness, cost-effectiveness, and side effects such as caries and periodontal disease when using SM. Hence, there is currently insufficient scientific evidence on which dental healthcare providers can base the use of SM in children with premature loss of the second primary molar ( ).
Best Vitelliform Macular Dystrophy Natural History Study Report 1 | 2cd31858-56f8-4946-81ef-706047c3375b | 11932931 | Pathologic Processes[mh] | This retrospective cohort study conformed to the tenets of the Declaration of Helsinki and was approved by the Moorfields Eye Hospital ethics committee. All patients included in this database had provided informed consent previously. Patient Identification All patients with a monoallelic variant in BEST1 and a clinical diagnosis of BVMD or a clinical diagnosis of BVMD with at least 1 family member showing positive genetic test results for BEST1 in a tertiary referral center (Moorfields Eye Hospital, London, United Kingdom) were reviewed. The patients were identified using in-house databases (OpenEyes and MagicXPA 3.3; Moorfields Eye Hospital). Subsequently, information was extracted from electronic health care records and physical case notes. Patients with other concurrent ocular pathologic features were excluded. Clinical Data Clinical data extracted included presenting symptoms, best-corrected visual acuity (BCVA), refraction, and slit-lamp biomicroscopy and funduscopy findings. BCVA data at the initial presentation and at the most recent follow-up (final) visit were analyzed. When necessary, Snellen and decimal acuity were converted into logarithm of the minimum angle of resolution (logMAR) values. , The definition for reduced vision of worse than 0.2 logMAR (Snellen equivalent, 20/32) from the United Kingdom school screening was used. Mean annual progression rate for BCVA loss was calculated per eye by subtraction of BCVA at first visit from BCVA at the last visit divided by the specific follow-up period for every patient. Amblyopia was defined clinically according to the American Academy of Ophthalmology as a difference in BCVA of 2 lines or more (0.2 logMAR or more) between eyes. Myopic refractive errors were classified as follows: low myopia, –0.50 diopter (D) to –6.00 D; and high myopia, –6.00 D or less. Hyperopic errors were classified as follows: low, 0.25 D to 2.25 D; moderate, 2.25 D to 5.25 D; and high, 5.25 D or more. , Color or pseudocolor fundus photographs were obtained with either the Optos ultra-widefield camera (Optos PLC) or the TRC-50LA retinal fundus camera (Topcon). Fundus appearance was graded in the previously described stages of BVMD: stage 1, previtelliform; stage 2, vitelliform; stage 3, pseudohypopyon; stage 4, vitelliruptive; and stage 5, atrophy or fibrosis. The presence of unifocal or multifocal fundus changes also was noted. Electrophysiologic Testing Electrophysiologic testing included electrooculography performed according to the standards of the International Society for Clinical Electrophysiology of Vision. A light peak-to-dark trough ratio of 1.5 or less was considered suggestive of BVMD. Genetic Testing and Analysis As part of routine clinical diagnostics, a combination of targeted Sanger sequencing, next-generation sequencing, sequencing panels of retinal dystrophy genes, whole-exome sequencing, and whole-genome sequencing was used to identify variants in the BEST1 gene. All recruited patients were reassessed for their detected variants as described in the (available at www.aaojournal.org ). Genotype–Phenotype Correlation Patients with the most prevalent variants (for which at least 8 patients’ data are available) were selected for genotype–phenotype correlation analysis and were compared for age at onset, age-adjusted BCVA, and distribution of Gass stages. 
Age adjustment of BCVA was required because of different age distributions between the groups and the correlation between BCVA and age described in this cohort. We calculated age-adjusted BCVA by adding or subtracting the mean annual progression rate multiplied by the age difference between the actual age when BCVA was measured and a standardized age of 40 years for every eye. Statistical Analysis Statistical analysis was performed using Prism version 8.0.2 software (GraphPad Software). The threshold for significance for all statistical tests was set at a P value of less than 0.05.
Patient Identification

All patients with a monoallelic variant in BEST1 and a clinical diagnosis of BVMD, or a clinical diagnosis of BVMD with at least 1 family member showing positive genetic test results for BEST1, in a tertiary referral center (Moorfields Eye Hospital, London, United Kingdom) were reviewed. The patients were identified using in-house databases (OpenEyes and MagicXPA 3.3; Moorfields Eye Hospital). Subsequently, information was extracted from electronic health care records and physical case notes. Patients with other concurrent ocular pathologic features were excluded.
Clinical Data

Clinical data extracted included presenting symptoms, best-corrected visual acuity (BCVA), refraction, and slit-lamp biomicroscopy and funduscopy findings. BCVA data at the initial presentation and at the most recent follow-up (final) visit were analyzed. When necessary, Snellen and decimal acuity were converted into logarithm of the minimum angle of resolution (logMAR) values. , The definition of reduced vision as worse than 0.2 logMAR (Snellen equivalent, 20/32) from United Kingdom school screening was used. The mean annual progression rate of BCVA loss was calculated per eye as the BCVA at the last visit minus the BCVA at the first visit, divided by each patient's follow-up period. Amblyopia was defined clinically according to the American Academy of Ophthalmology as a difference in BCVA of 2 lines or more (0.2 logMAR or more) between eyes. Myopic refractive errors were classified as follows: low myopia, –0.50 diopter (D) to –6.00 D; and high myopia, –6.00 D or less. Hyperopic errors were classified as follows: low, 0.25 D to 2.25 D; moderate, 2.25 D to 5.25 D; and high, 5.25 D or more. , Color or pseudocolor fundus photographs were obtained with either the Optos ultra-widefield camera (Optos PLC) or the TRC-50LA retinal fundus camera (Topcon). Fundus appearance was graded using the previously described stages of BVMD: stage 1, previtelliform; stage 2, vitelliform; stage 3, pseudohypopyon; stage 4, vitelliruptive; and stage 5, atrophy or fibrosis. The presence of unifocal or multifocal fundus changes also was noted.
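As a worked illustration of the conversions described above (a minimal sketch, not the study's code; the example values are hypothetical): decimal acuity converts to logMAR as logMAR = −log10(decimal acuity), so Snellen 20/40 (decimal 0.5) corresponds to approximately 0.30 logMAR.

```python
# Minimal sketch of the BCVA conversion and the per-eye progression rate.
# Not the study's actual code; example values are hypothetical.
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 20/40) to logMAR: -log10(decimal acuity)."""
    return -math.log10(numerator / denominator)

def annual_progression(bcva_first: float, bcva_last: float, years: float) -> float:
    """Mean annual BCVA change in logMAR/year (positive = worsening)."""
    return (bcva_last - bcva_first) / years

print(snellen_to_logmar(20, 40))           # ≈ 0.30 logMAR
print(annual_progression(0.30, 0.43, 10))  # 0.013 logMAR/year
```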
Electrophysiologic Testing

Electrophysiologic testing included electrooculography performed according to the standards of the International Society for Clinical Electrophysiology of Vision. A light peak-to-dark trough ratio of 1.5 or less was considered suggestive of BVMD.
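For reference, the light peak-to-dark trough ratio (the Arden ratio) used here is, by the standard electrooculographic definition,

$$\text{Arden ratio} = \frac{\text{EOG amplitude at the light peak}}{\text{EOG amplitude at the dark trough}},$$

so a value of 1.5 or less reflects a severely reduced or absent light rise, as is typical in BVMD.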
Genetic Testing and Analysis

As part of routine clinical diagnostics, a combination of targeted Sanger sequencing, next-generation sequencing, sequencing panels of retinal dystrophy genes, whole-exome sequencing, and whole-genome sequencing was used to identify variants in the BEST1 gene. All recruited patients were reassessed for their detected variants as described in the (available at www.aaojournal.org ).
Genotype–Phenotype Correlation

Patients with the most prevalent variants (those for which data from at least 8 patients were available) were selected for genotype–phenotype correlation analysis and were compared for age at onset, age-adjusted BCVA, and distribution of Gass stages. Age adjustment of BCVA was required because of the different age distributions between the groups and the correlation between BCVA and age described in this cohort. For every eye, age-adjusted BCVA was calculated by adjusting the measured BCVA by the mean annual progression rate multiplied by the difference between the age at measurement and a standardized age of 40 years.
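Read literally, this projects each eye's BCVA to the standardized age of 40 years along the cohort's mean progression rate. The sketch below shows one plausible reading of that calculation; it is not the authors' code, and the input values are hypothetical.

```python
# Sketch of the age adjustment described above: project BCVA (logMAR) to a
# standardized age of 40 years using the mean annual progression rate.
# One plausible reading of the method; values are hypothetical.
STANDARD_AGE = 40.0

def age_adjusted_bcva(bcva: float, age_at_measurement: float, rate: float) -> float:
    # Younger than 40: add expected future progression; older: subtract it.
    return bcva + rate * (STANDARD_AGE - age_at_measurement)

print(age_adjusted_bcva(0.30, 25.0, 0.013))  # 0.495 logMAR projected to age 40
print(age_adjusted_bcva(0.60, 55.0, 0.013))  # 0.405 logMAR projected to age 40
```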
Statistical Analysis

Statistical analysis was performed using Prism version 8.0.2 software (GraphPad Software). The threshold for significance for all statistical tests was set at a P value of less than 0.05.
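The tests reported in the Results (Welch's unpaired t test, paired t test, Pearson correlation, chi-square) are standard. For orientation, a minimal SciPy equivalent is sketched below; this is not the Prism workflow actually used, and the arrays are hypothetical placeholders.

```python
# Minimal SciPy equivalents of the tests reported below (the study used Prism).
import numpy as np
from scipy import stats

group_a = np.array([0.28, 0.31, 0.25, 0.40])  # hypothetical logMAR values
group_b = np.array([0.62, 0.55, 0.70, 0.48])

# Unpaired t test with Welch's correction (unequal variances).
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Paired t test (e.g., baseline vs. final BCVA in the same eyes).
t_paired, p_paired = stats.ttest_rel(group_a, group_b)

# Pearson correlation (e.g., BCVA vs. age at presentation).
r, p_corr = stats.pearsonr(group_a, group_b)

print(t, p, t_paired, p_paired, r, p_corr)
```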
Patient Characteristics

Two hundred twenty-two patients (127 male patients [57.2%]) from 141 pedigrees met the genotype and phenotype inclusion criteria. One patient was excluded from analysis of clinical findings and imaging after having a central retinal artery occlusion consecutively in both eyes before the first visit. One eye was excluded from BCVA analysis after retinal detachment with macular involvement, and 1 eye was excluded from BCVA analysis while having a corneal ulcer. Thirteen eyes were excluded from BCVA analysis because of amblyopia, and for 3 patients, BCVA at baseline was decreased because of their young age (related to ability to comply with testing), and they exhibited improved BCVA of more than 0.2 logMAR at subsequent visits. We identified 374 patients from an electronic patient letter database with a presumed diagnosis of BVMD who were not included in this cohort because they did not meet the genetic inclusion criteria. A proportion of these patients presumably did not have BVMD resulting from BEST1, including those who in fact may have acquired disease or may have vitelliform maculopathy resulting from one of many other genes. Genetic testing availability was historically limited, and some patients were lost to follow-up before genetic testing could be performed. Patients seen in nongenetic clinics may not have been offered or had access to genetic testing, and some patients or their families declined genetic testing. For patients who underwent testing, failure to identify a sequence variant in BEST1 also led to exclusion from this study. Overall, the cohort of 222 patients from pedigrees with a likely disease-causing sequence variant represents 37.2% of all identified patients with a presumed diagnosis of BVMD.

Age at Presentation and Symptoms of Onset

Age at presentation was documented for 213 patients (96.0%). Mean age ± standard deviation (SD) at presentation was 26.8 ± 19.1 years (range, 1.3–84.8 years), with most patients presenting in childhood or early adulthood ( A). At presentation, 131 patients (61.5%) demonstrated a deterioration of vision, 26 patients (12.2%) were asymptomatic and had been referred because of family history or incidental findings on annual examination, 10 patients (4.7%) reported distorted vision, and 4 patients (1.9%) reported the perception of a scotoma. For 33 patients (15.5%), no initial symptoms were documented. summarizes a complete list of the presenting symptoms. No patient demonstrated acute angle-closure glaucoma at presentation. Six patients (2.8%) with chronic angle closure underwent prophylactic peripheral yttrium–aluminum–garnet laser iridotomy, 5 of them bilaterally and 1 unilaterally.

Visual Acuity and Refraction

Two hundred twelve patients had a documented BCVA for at least 1 eye at presentation. Mean ± SD BCVA at presentation was 0.37 ± 0.47 logMAR (Snellen equivalent, mean, 20/47; range, –0.18 to 2.28 logMAR [Snellen equivalent, 20/13–20/3811]) for the right eye and 0.33 ± 0.42 logMAR (Snellen equivalent, mean, 20/43; range, –0.18 to 2.28 logMAR [Snellen equivalent, 20/13–20/3811]) for the left eye. Baseline BCVA was highly variable among patients, but no significant interocular difference was found ( t = 0.72; P = 0.47, paired t test). Data from both eyes were pooled and plotted against age ( A). A statistically significant weak correlation was found between BCVA and age at presentation ( r = 0.33; P < 0.0001, Pearson correlation coefficient).
Refraction data from 235 eyes of 119 patients were included in the analysis. The spherical equivalent was calculated, and refractive errors were classified. Thirty-four eyes (14.5%) had high hyperopia, 67 eyes (28.5%) had moderate hyperopia, and 84 eyes (35.7%) had low hyperopia. Eleven eyes (4.7%) were emmetropic, and 39 eyes (16.6%) exhibited low myopia. No patient had high myopia. Of the 213 patients with clinical information, amblyopia was diagnosed in 13 patients (6.1%), with 1 patient successfully treated with occlusion therapy. The cause of amblyopia was strabismus in 8 patients and refractive error in 3 patients, and was not documented for 2 patients.

Funduscopic Findings and Color Fundus Photography

Funduscopic description or color fundus photographs were available for 418 eyes from 209 patients. One hundred twenty-eight eyes (30.6%) exhibited yellow vitelliform lesions, followed by 78 eyes (18.7%) with atrophic changes and 49 eyes (11.7%) with fibrotic changes. Forty-eight eyes (11.5%) showed only mild pigmentary changes, and 43 eyes (10.3%) had a vitelliruptive appearance. Twenty-three eyes (5.5%) demonstrated subretinal fluid on funduscopy, and 22 eyes (5.3%) showed a pseudohypopyon appearance. Twenty-one eyes (5.0%) were described without any pathologic changes, and 6 eyes (1.4%) exhibited retinal hemorrhages. In 7 patients (3.3%), morphologic changes were limited to 1 eye, whereas the fellow eye did not exhibit any changes. Of 397 eyes exhibiting changes, 339 eyes (85.4%) showed unifocal features, whereas in 58 eyes (14.6%), multifocal changes were observed. Of 31 patients with multifocal changes, 27 patients (87.1%) exhibited bilateral multifocal disease. (available at www.aaojournal.org ) presents a list of the peripheral retinal findings.

Choroidal Neovascularization

Of the 213 patients with clinical information, a total of 37 patients (17.3%) had received a clinical diagnosis of choroidal neovascularization (CNV), of whom 28 patients (13.1%) had a unilateral occurrence and 9 patients (4.2%) were affected bilaterally over a mean course of follow-up of 8.0 years (range, 0–55 years). Mean ± SD BCVA at the last follow-up was 0.44 ± 0.42 logMAR (Snellen equivalent, mean, 20/55; range, 0.00–2.28 logMAR [Snellen equivalent, 20/20–20/3811]) for eyes with a diagnosis of CNV (mean age at last follow-up, 34.4 years) and 0.47 ± 0.52 logMAR (Snellen equivalent, mean, 20/59; range, –0.20 to 3.00 logMAR [Snellen equivalent, 20/13–20/20 000]) for eyes without a diagnosis of CNV (mean age at last follow-up, 42.8 years), with no significant difference between the groups ( t = 0.37; P = 0.71, unpaired t test with Welch's correction). Of the 46 eyes with a diagnosis of CNV, 24 eyes (52.2%) were treated with at least 1 intravitreal injection of an anti–vascular endothelial growth factor (VEGF) agent. Mean ± SD BCVA at the last follow-up was 0.28 ± 0.25 logMAR (Snellen equivalent, mean, 20/38; range, 0.00–0.78 logMAR [Snellen equivalent, 20/20–20/121]) for eyes treated with an anti-VEGF agent and 0.62 ± 0.48 logMAR (Snellen equivalent, mean, 20/83; range, 0.00–2.28 logMAR [Snellen equivalent, 20/20–20/3811]) for eyes not treated with an anti-VEGF agent, with a significantly better mean BCVA in the group that received anti-VEGF therapy ( t = 3.0; P = 0.005, unpaired t test with Welch's correction).
Mean ± SD BCVA at the time of CNV diagnosis was available for 35 eyes and did not reveal a significant difference between groups: eyes treated with an anti-VEGF agent, 0.60 ± 0.27 logMAR (Snellen equivalent, mean, 20/80; range, 0.16–1.22 logMAR [Snellen equivalent, 20/29–20/332]) versus eyes not treated with an anti-VEGF agent, 0.79 ± 0.47 logMAR (Snellen equivalent, mean, 20/123; range, 0.10–1.68 logMAR [Snellen equivalent, 20/25–20/957]; t = 1.4; P = 0.18, unpaired t test with Welch's correction). Mean age at CNV occurrence was similar in both groups: eyes treated with an anti-VEGF agent, 25.5 ± 16.6 years (range, 6.0–58.8 years) versus eyes not treated with an anti-VEGF agent, 27.5 ± 21.7 years (range, 4.3–73.1 years; t = 0.35; P = 0.73, unpaired t test with Welch's correction). Time from the diagnosis of BVMD to CNV occurrence was also similar in both groups: eyes treated with an anti-VEGF agent, 5.9 ± 10.0 years (range, 0–40.0 years) versus eyes not treated with an anti-VEGF agent, 8.9 ± 15.1 years (range, 0–63.1 years; t = 0.81; P = 0.42, unpaired t test with Welch's correction). Eighteen eyes that did not receive an anti-VEGF agent had a documented reason why no treatment was administered: 7 eyes (38.9%) had fibrotic changes or chronic edema, so the treating clinician considered there to be no potential for improvement with therapy; 5 eyes (27.8%) received a diagnosis before anti-VEGF therapy was available; 4 eyes (22.2%) did not reveal significant CNV activity at the time of diagnosis; 1 eye (5.6%) had an extrafoveal location of the CNV; and in 1 eye (5.6%), the parents of the patient declined the treatment and opted for observation. The total number of injections was available for 20 eyes and ranged from 1 to 11, with a mean ± SD of 2.95 ± 2.50 injections per eye. Two adverse events after injection were documented: a small vitreous hemorrhage that resolved spontaneously, and transient vision loss with photopsia without any abnormality on ophthalmologic examination.

Longitudinal Analysis of Visual Acuity

One hundred seventy-two patients had longitudinal data for VA with a minimum follow-up of 12 months and a mean ± SD follow-up of 9.69 ± 9.09 years (range, 1.00–55.75 years). For the right eye, a significant difference ( P < 0.001, paired t test) was found between mean ± SD VA of 0.36 ± 0.44 logMAR (Snellen equivalent, mean, 20/46) at baseline and 0.50 ± 0.57 logMAR (Snellen equivalent, mean, 20/63) at latest follow-up. This was also the case for the left eye ( P < 0.001, paired t test), with a mean ± SD BCVA of 0.33 ± 0.39 logMAR (Snellen equivalent, mean, 20/43) at baseline and 0.43 ± 0.46 logMAR (Snellen equivalent, mean, 20/54) at latest follow-up. The mean annual progression rate was 0.013 logMAR (95% confidence interval, 0.004–0.022 logMAR) for the right eye (equating to 0.65 Early Treatment Diabetic Retinopathy Study [ETDRS] letters/year) and 0.009 logMAR (95% confidence interval, –0.002 to 0.020 logMAR) for the left eye (equating to 0.45 ETDRS letters/year).
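The ETDRS equivalence quoted above follows the standard relationship between logMAR and ETDRS letter scores (0.1 logMAR = 1 chart line = 5 letters, hence 50 letters per 1.0 logMAR):

$$\Delta\,\text{letters/year} = 50 \times \Delta\,\text{logMAR/year}, \qquad 50 \times 0.013 = 0.65, \quad 50 \times 0.009 = 0.45.$$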
Longitudinal Analysis of Gass Stages

Longitudinal analysis of Gass stages was performed in 239 eyes from 124 patients with a minimum follow-up of 3 months. Mean ± SD age at baseline was 32.2 ± 21.3 years (range, 1.2–80.1 years), and mean ± SD follow-up was 8.3 ± 8.1 years (range, 0.3–43.1 years). Of 124 patients, 74 patients (59.7%) did not exhibit a change in Gass stage, with a mean ± SD follow-up of 6.4 ± 6.6 years. Gass stage changed in 50 patients in at least 1 eye, with a mean ± SD follow-up of 11.2 ± 9.4 years. At baseline, 27 eyes (11.3%) were in the previtelliform stage (stage 1), which dropped to 21 eyes (9.2%) at the last visit (mean ± SD follow-up, 6.5 ± 5.4 years). The vitelliform stage (stage 2) was observed in 71 eyes (29.7%) at baseline, declining to 37 eyes (15.5%) at the last follow-up (mean ± SD follow-up, 9.8 ± 9.8 years). The pseudohypopyon stage (stage 3) was found in 24 eyes (10.0%) at baseline and in 19 eyes (8.0%) at the last visit (mean ± SD follow-up, 6.3 ± 5.9 years). Vitelliruptive changes (stage 4) were diagnosed in 33 eyes (13.8%) at baseline, increasing to 46 eyes (19.3%) at the last visit (mean ± SD follow-up, 6.3 ± 5.7 years). Similarly, atrophic or fibrotic changes (stage 5) increased from 84 eyes (35.1%) at baseline to 115 eyes (48.1%) at the last follow-up (mean ± SD follow-up, 9.2 ± 8.6 years). In summary, the frequency of stages 1, 2, and 3 decreased, and the frequency of stages 4 and 5 increased, from baseline to the last visit ( B).

Electrooculography

Electrooculography was available for 244 eyes from 122 patients. Two hundred twenty-three eyes (91.4%) exhibited a light peak-to-dark trough ratio of 1.5 or less or did not show any light rise, thereby meeting the diagnostic criteria for BVMD. Sixteen eyes (6.6%) exhibited a light peak-to-dark trough ratio of more than 1.5 and less than 1.85, and 5 eyes (2.0%) showed a light peak-to-dark trough ratio of 1.85 or more, which is considered the lower end of the normal range.

Stratification According to Age at Onset

Based on the first time that BCVA was reduced to 0.2 logMAR or more (Snellen equivalent, 20/32), we separately assessed patients with adult-onset disease (≥ 18 years of age) and childhood-onset disease (< 18 years of age). Forty patients (22.5%) were classified as having childhood-onset disease and 138 patients (77.5%) as having adult-onset disease. Visual acuity for both groups was plotted against age ( B), and linear regression did not reveal a significant difference between the lines of best fit ( P = 0.09), although a trend toward a slower decline of BCVA was found in the childhood-onset group compared with the adult-onset group. In contrast, 21 patients (52.5%) in the childhood-onset group received a diagnosis of CNV, whereas 24 patients (17.4%) in the adult-onset group received a diagnosis of CNV, indicating a lower rate of CNV in the adult-onset group ( z = 4.50; P < 0.0001, chi-square test). A CNV diagnosis in the childhood-onset group was made at a mean ± SD age of 12.0 ± 4.8 years (range, 4.3–26.1 years), significantly lower ( t = 6.5; P < 0.0001, unpaired t test with Welch's correction) than in the adult-onset group, with a mean ± SD age at diagnosis of 39.2 ± 17.5 years (range, 13.2–73.1 years).

Genetic Characterization

In total, 69 monoallelic variants were identified in BEST1. Forty-seven variants had been reported previously, and 22 variants were previously unreported. The variants comprised 64 missense variants, 1 frameshift deletion, 1 frameshift duplication, 1 in-frame duplication, and 2 intronic variants. Thirty-five recurrent variants were detected in multiple patients, and 34 unique variants were each detected in a single pedigree. Four variants were classified as pathogenic, 47 as likely pathogenic, and 18 as of uncertain significance.
The localization of the identified BEST1 variants in the gene domains is illustrated in . The detailed results of in silico molecular genetic analysis are presented in (available at www.aaojournal.org ), and evolutionary conservation for the detected variants is shown in (available at www.aaojournal.org ). The most prevalent variants were c.652C>T, p.(Arg218Cys) (16/444 alleles; 3.6%); c.653G>A, p.(Arg218His) (12/444 alleles; 2.7%); c.728C>T, p.(Ala243Val) (11/444 alleles; 2.5%); c.892T>G, p.(Phe298Val) (8/444 alleles; 1.8%); c.37C>T, p.(Arg13Cys) (7/444 alleles; 1.6%); c.288G>C, p.(Gln96His) (7/444 alleles; 1.6%); c.914T>C, p.(Phe305Ser) (7/444 alleles; 1.6%); and c.90G>C, p.(Lys30Asn) (5/444 alleles; 1.1%).

Genotype–Phenotype Correlation

The 3 most prevalent variants were analyzed for genotype–phenotype correlation: p.(Arg218Cys), p.(Arg218His), and p.(Ala243Val). The mean ± SD age at onset was 21.5 ± 15.5 years (range, 5.0–47.3 years) for p.(Arg218Cys), 28.2 ± 16.0 years (range, 7.3–58.8 years) for p.(Arg218His), and 50.6 ± 20.7 years (range, 13.5–84.8 years) for p.(Ala243Val). Multivariant analysis revealed a significant difference between the variants ( P = 0.0007, analysis of variance), showing a later onset for p.(Ala243Val) compared with p.(Arg218Cys) ( P = 0.0006, t test with Tukey correction) and p.(Arg218His) ( P = 0.012, t test with Tukey correction). Mean ± SD age-adjusted BCVA was 0.43 ± 0.35 logMAR (Snellen equivalent, mean, 20/54) for p.(Arg218Cys), 0.47 ± 0.43 logMAR (Snellen equivalent, mean, 20/59) for p.(Arg218His), and 0.13 ± 0.34 logMAR (Snellen equivalent, mean, 20/27) for p.(Ala243Val). Multivariant analysis revealed a significant difference among the variants ( P = 0.01, analysis of variance), showing a better age-adjusted BCVA for p.(Ala243Val) compared with p.(Arg218Cys) ( P = 0.03, t test with Tukey correction) and p.(Arg218His) ( P = 0.01, t test with Tukey correction). For patients harboring p.(Arg218Cys) (28 eyes; mean ± SD follow-up, 7.1 ± 7.0 years), the frequency of Gass stage 1 dropped from 21.4% at baseline to 14.4% at the last visit, and similarly for stage 2, from 14.2% to 10.7%. The occurrence of stage 3 rose from 10.7% to 17.9%, that of stage 4 declined from 17.8% to 10.7%, and that of stage 5 increased from 35.7% to 46.4%. Patients harboring p.(Arg218His) (20 eyes; mean ± SD follow-up, 2.9 ± 2.7 years) exhibited a stable frequency of Gass stage 1 of 5.0% at baseline and at the last visit. Stage 2 declined from 45.0% to 30.0%, whereas stage 3 became more frequent, from 10.0% to 20.0%. Stage 4 decreased from 10.0% to 0.0%, and stage 5 increased from 30.0% to 45.0%. In contrast to both of the above variants, patients harboring p.(Ala243Val) (16 eyes; mean ± SD follow-up, 7.6 ± 5.5 years) showed a stable and high frequency of Gass stage 1 of 25.0% at baseline and at the last visit. Stage 2 occurred in 50% of patients at baseline and fell to 12.5% at the last visit. No patient was classified as having stage 3 disease at baseline, whereas at the last follow-up, 6.25% of patients were so classified. The occurrence of stage 4 disease increased from 12.5% to 43.8%, whereas the low rate of stage 5 disease of 12.5% remained stable from baseline to the last visit.
In summary, a higher frequency of stages 1 and 2 disease was found in patients with p.(Ala243Val) than in patients with p.(Arg218Cys) and p.(Arg218His), whereas stage 5 disease occurred more frequently with the latter variants, corroborating the decreased severity of p.(Ala243Val) compared with p.(Arg218Cys) and p.(Arg218His).
Two hundred twenty-two patients (127 male patients [57.2%]) from 141 pedigrees met the genotype and phenotype inclusion criteria. One patient was excluded from analysis of clinical findings and imaging after having a central retinal artery occlusion consecutively in both eyes before the first visit. One eye was excluded from BCVA analysis after retinal detachment with macular involvement, and 1 eye was excluded from BCVA analysis while having a corneal ulcer. Thirteen eyes were excluded from BCVA analysis because of amblyopia, and for 3 patients, BCVA at baseline was decreased because of their young age (related to ability to comply with testing), and they exhibited improved BCVA of more than 0.2 logMAR at subsequent visits. We identified 374 patients from an electronic patient letter database with a presumed diagnosis of BVMD who were not included in this cohort because they did not meet the genetic inclusion criteria. A proportion of these patients presumably did not have BVMD resulting from BEST1 , including those who in fact may have acquired disease or may have vitelliform maculopathy resulting from one of many other genes. Historical limitations have restricted the availability of genetic testing, as well as instances of loss to follow-up before genetic testing could be administered. Patients seen in nongenetic clinics may not have been offered or had access to genetic testing, and some patients or their families declined genetic testing. For patients who underwent testing, the failure to identify a sequence variant in BEST1 also led to exclusion from this study. Overall, the cohort of 222 patients from pedigrees with a likely disease-causing sequence variant represents 37.2% of all identified patients with a presumed diagnosis of BVMD.
Age at presentation was documented for 213 patients (96.0%). Mean age ± standard deviation (SD) at presentation was 26.8 ± 19.1 years (range, 1.3–84.8 years), with most patients presenting in childhood or early adulthood ( A). At presentation, 131 patients (61.5%) demonstrated a deterioration of vision, 26 patients (12.2%) were asymptomatic and had been referred because of family history or incidental findings on annual examination, 10 patients (4.7%) reported distorted vision, and 4 patients (1.9%) reported the perception of a scotoma. For 33 patients (15.5%), no initial symptoms were documented. summarizes a complete list of the presenting symptoms. No patient demonstrated acute angle-closure glaucoma at presentation. Six patients (2.8%) with chronic angle closure underwent prophylactic peripheral yttrium–aluminum–garnet laser iridotomy, 5 of them bilaterally, 1 of them unilaterally.
Two hundred twelve patients had a documented BCVA for at least 1 eye at presentation. Mean ± SD BCVA was 0.37 ± 0.47 logMAR (Snellen equivalent, mean, 20/47; range, –0.18 to 2.28 logMAR [Snellen equivalent, 20/13–20/3811]) for the right eye and 0.33 ± 0.42 logMAR (Snellen equivalent, mean, 20/43; range, –0.18 to 2.28 logMAR [Snellen equivalent, 20/13–20/3811]) for the left eye at presentation. Baseline BCVA was highly variable among patients, but no significant interocular difference was found ( t = 0.72; P = 0.47, paired t test). Data from both eyes were pooled and plotted against age ( A). A statistically significant weak correlation was found between BCVA and age at presentation ( r = 0.33; P < 0.0001, Pearson correlation coefficient). Refraction data from 235 eyes of 119 patients were included in the analysis. The spherical equivalent was calculated, and refractive errors were classified. Thirty-four eyes (14.5%) were found to have high hyperopia, 67 eyes (28.5%) were found to have moderate hyperopia, and 84 eyes (35.7%) were found to have low hyperopia. Eleven eyes (4.7%) were emmetropic and 39 eyes exhibited low myopia (16.6%). No patient was found to have high myopia. Of the 213 patients with clinical information, amblyopia was diagnosed in 13 patients (6.1%), with 1 patient successfully treated with occlusion therapy. The cause for amblyopia was strabismus in 8 patients, refractive error in 3 patients, and not documented for 2 patients.
Funduscopic description or color fundus photographs were available for 418 eyes from 209 patients. One hundred twenty-eight eyes (30.6%) exhibited yellow vitelliform lesions, followed by 78 eyes (18.7%) with atrophic changes and 49 eyes (11.7%) with fibrotic changes. Forty-eight eyes (11.5%) showed only mild pigmentary changes and 43 eyes (10.3%) were found to have a vitelliruptive appearance. Twenty-three eyes (5.5%) demonstrated subretinal fluid on funduscopy and 22 eyes (5.3%) showed a pseudohypopyon appearance. Twenty-one eyes (5.0%) were described without any pathologic changes and 6 eyes (1.4%) exhibited retinal hemorrhages. In 7 patients (3.3%), morphologic changes were limited to 1 eye, whereas the unaffected eye did not exhibit any changes. Of 397 eyes exhibiting changes, 339 eyes (85.4%) showed unifocal features, whereas in 58 eyes (14.6%), multifocal changes were observed. Of 31 patients with multifocal changes, 27 patients (87.1%) exhibited bilateral multifocal disease. (available at www.aaojournal.org ) presents a list of the peripheral retinal findings.
Of the 213 patients with clinical information, a total of 37 patients (17.3%) had received a clinical diagnosis of choroidal neovascularization (CNV), of whom 28 patients (13.1%) had a unilateral occurrence and 9 patients (4.2%) were affected bilaterally over a mean course of follow-up of 8.0 years (range, 0–55 years). Mean ± SD BCVA at the last follow-up was 0.44 ± 0.42 logMAR (Snellen equivalent, mean, 20/55; range, 0.00–2.28 logMAR [Snellen equivalent, 20/20–20/3811]) for eyes with a diagnosis of CNV (mean age at last follow-up, 34.4 years) and 0.47 ± 0.52 logMAR (Snellen equivalent, mean, 20/59; range, –0.20 to 3.00 logMAR [Snellen equivalent, 20/13–20/20 000]) for eyes without a diagnosis of CNV (mean age at last follow-up, 42.8 years), with no significant difference between the groups ( t = 0.37; P = 0.71, unpaired t test with Welch’s correction). Of the 46 eyes with a diagnosis of CNV, 24 eyes (52.2%) were treated with at least 1 intravitreal injection of an anti–vascular endothelial growth factor (VEGF) agent. Mean ± SD BCVA at the last follow-up was 0.28 ± 0.25 logMAR (Snellen equivalent, mean, 20/38; range, 0.00–0.78 logMAR [Snellen equivalent, 20/20–20/121]) for eyes that were treated with an anti-VEGF agent and 0.62 ± 0.48 logMAR (Snellen equivalent, mean, 20/83; range, 0.00–2.28 logMAR [Snellen equivalent, 20/20–20/3811]) for eyes that were not treated with an anti-VEGF agent, with a significantly better mean BCVA in the group that received anti-VEGF therapy ( t = 3.0; P = 0.005, unpaired t test with Welch’s correction). Mean ± SD BCVA at the time of diagnosis of CNV was available for 35 eyes and did not reveal a significant difference between groups: eyes treated with an anti-VEGF agent, 0.60 ± 0.27 logMAR (Snellen equivalent, mean, 20/80; range, 0.16 to 1.22 logMAR [Snellen equivalent, 20/29–20/332]) versus eyes not treated with an anti-VEGF agent, 0.79 ± 0.47 logMAR (Snellen equivalent, mean, 20/123; range, 0.10–1.68 logMAR [Snellen equivalent, 20/25–20/957]; t = 1.4; P = 0.18, unpaired t test with Welch’s correction). Mean age at CNV occurrence was similar in both groups: eyes treated with an anti-VEGF agent, 25.5 ± 16.6 years (range, 6.0–58.8 years) versus eyes not treated with an anti-VEGF agent, 27.5 ± 21.7 years (range, 4.3–73.1 years; t = 0.35; P = 0.73, unpaired t test with Welch’s correction). CNV occurrence since the diagnosis of BVMD also was similar in both groups: eyes treated with an anti-VEGF agent, 5.9 ± 10.0 years (range, 0–40.0 years) versus eyes not treated with an anti-VEGF agent, 8.9 ± 15.1 years (range, 0–63.1 years; t = 0.81; P = 0.42, unpaired t test with Welch’s correction). Eighteen eyes that did not receive an anti-VEGF agent had a documented reason why no anti-VEGF treatment was administered: 7 eyes (38.9%) had fibrotic changes or chronic edema, hence the treating clinician considered that there was no potential for improvement with therapy; 5 eyes (27.8%) received a diagnosis before anti-VEGF therapy was available; 4 eyes (22.2%) did not reveal significant CNV activity at the time of diagnosis, 1 eye (5.6%) had an extrafoveal location of the CNV, and in 1 eye (5.6%), the parents of the patient declined the treatment and opted for observation. The total number of injections was available for 20 eyes and ranged from 1 to 11 injections, with a mean ± SD of 2.95 ± 2.50 injections per eye. 
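The group comparisons reported in this paragraph (for instance, treated vs. untreated CNV eyes) rely on the unpaired t test with Welch's correction, which does not assume equal variances. A minimal sketch of that test, with placeholder arrays because the study's per-eye raw values are not published:

```python
from scipy import stats

# Placeholder BCVA values (logMAR) at last follow-up; illustrative only,
# not the study's data.
bcva_anti_vegf = [0.10, 0.30, 0.00, 0.48, 0.52]   # eyes treated with anti-VEGF
bcva_observed = [0.70, 1.00, 0.30, 0.18, 0.92]    # untreated eyes

# Welch's t test: equal_var=False applies the unequal-variance correction
t_stat, p_value = stats.ttest_ind(bcva_anti_vegf, bcva_observed, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```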
Two adverse events after injection were documented: a small vitreous hemorrhage that resolved spontaneously and transient vision loss with photopsia without any abnormality on ophthalmologic examination.
One hundred seventy-two patients had longitudinal BCVA data with a minimum follow-up of 12 months and a mean ± SD follow-up of 9.69 ± 9.09 years (range, 1.00–55.75 years). For the right eye, a significant difference ( P < 0.001, paired t test) was found between the mean ± SD BCVA of 0.36 ± 0.44 logMAR (Snellen equivalent, mean, 20/46) at baseline and 0.50 ± 0.57 logMAR (Snellen equivalent, mean, 20/63) at the latest follow-up. This was also the case for the left eye ( P < 0.001, paired t test), with a mean ± SD BCVA of 0.33 ± 0.39 logMAR (Snellen equivalent, mean, 20/43) at baseline and 0.43 ± 0.46 logMAR (Snellen equivalent, mean, 20/54) at the latest follow-up. The mean annual progression rate was 0.013 logMAR (95% confidence interval, 0.004–0.022 logMAR) for the right eye, equivalent to 0.65 Early Treatment Diabetic Retinopathy Study (ETDRS) letters/year, and 0.009 logMAR (95% confidence interval, –0.002 to 0.020 logMAR) for the left eye, equivalent to 0.45 ETDRS letters/year.
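The letter-score equivalents use the standard ETDRS relationship of 5 letters per 0.1-logMAR line, that is, 0.02 logMAR per letter; the reported rates then follow as a simple division:

```latex
\[
\text{letters per year} \;=\; \frac{\Delta\,\text{logMAR per year}}{0.02},
\qquad
\frac{0.013}{0.02} = 0.65, \qquad \frac{0.009}{0.02} = 0.45 .
\]
```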
Longitudinal analysis of Gass stages was performed in 239 eyes from 124 patients with a minimum follow-up of 3 months. Mean ± SD age at baseline was 32.2 ± 21.3 years (range, 1.2–80.1 years), and mean ± SD follow-up was 8.3 ± 8.1 years (range, 0.3–43.1 years). Of 124 patients, 74 patients (59.7%) did not exhibit a change in Gass stage, with a mean ± SD follow-up of 6.4 ± 6.6 years. The Gass stage changed in at least 1 eye in 50 patients (40.3%), with a mean ± SD follow-up of 11.2 ± 9.4 years. At baseline, 27 eyes (11.3%) were in the previtelliform stage (stage 1), which dropped to 21 eyes (9.2%) at the last visit (mean ± SD follow-up, 6.5 ± 5.4 years). The vitelliform stage (stage 2) was observed in 71 eyes (29.7%) at baseline, with a decline to 37 eyes (15.5%) at the last follow-up (mean ± SD follow-up, 9.8 ± 9.8 years). The pseudohypopyon stage (stage 3) was found in 24 eyes (10.0%) at baseline and in 19 eyes (8.0%) at the last visit (mean ± SD follow-up, 6.3 ± 5.9 years). Vitelliruptive changes (stage 4) were diagnosed in 33 eyes (13.8%) at baseline, with an increase to 46 eyes (19.3%) at the last visit (mean ± SD follow-up, 6.3 ± 5.7 years). Similarly, atrophic or fibrotic changes (stage 5) increased from 84 eyes (35.1%) at baseline to 115 eyes (48.1%) at the last follow-up (mean ± SD follow-up, 9.2 ± 8.6 years). In summary, the frequency of stages 1, 2, and 3 decreased, whereas that of stages 4 and 5 increased from baseline to the last visit ( B).
Electrooculography was available for 244 eyes from 122 patients. Two hundred twenty-three eyes (91.4%) exhibited a light peak-to-dark trough ratio of 1.5 or less or did not show any light rise, thereby meeting the diagnostic criteria for BVMD. Sixteen eyes (6.6%) exhibited a light peak-to-dark trough ratio of more than 1.5 and less than 1.85, and 5 eyes (2.0%) showed a light peak-to-dark trough ratio of 1.85 or more, which is considered the lower end of the normal range.
Based on the first time that BCVA was reduced to 0.2 logMAR or more (Snellen equivalent, 20/32), we separately assessed patients with adult-onset disease (≥ 18 years of age) and childhood-onset disease (< 18 years of age). Forty patients (22.5%) were classified as having childhood-onset disease and 138 patients (77.5%) were classified as having adult-onset disease. Visual acuity for both groups was plotted against age ( B), and linear regression did not reveal a significant difference between the lines of best fit ( P = 0.09), although a trend of a slower decline of BCVA was found in the childhood-onset group compared with the patients in the adult-onset group. In contrast, 21 patients (52.5%) in the childhood-onset group received a diagnosis of CNV, whereas 24 patients (17.4%) from the adult-onset group received a diagnosis of CNV, suggesting a lower rate of CNV in the adult-onset group ( z = 4.50; P < 0.0001, chi-square test). A CNV diagnosis in the childhood-onset group was made at the mean ± SD age of 12.0 ± 4.8 years (range, 4.3–26.1 years), significantly lower ( t = 6.5; P < 0.0001, unpaired t test with Welch’s correction) than in the adult-onset group, with mean ± SD age at diagnosis of 39.2 ± 17.5 years (range, 13.2–73.1 years).
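The onset-group difference in CNV rates can be reproduced from the published counts alone. A minimal sketch of the two-proportion z test, whose squared statistic equals the uncorrected chi-square for the 2 × 2 table:

```python
import math

# CNV diagnoses by onset group, as reported above
cnv_child, n_child = 21, 40     # childhood onset (52.5%)
cnv_adult, n_adult = 24, 138    # adult onset (17.4%)

p1, p2 = cnv_child / n_child, cnv_adult / n_adult
p_pool = (cnv_child + cnv_adult) / (n_child + n_adult)

# Pooled standard error of the difference in proportions
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_child + 1 / n_adult))
z = (p1 - p2) / se
print(f"z = {z:.2f}")  # ~4.50, matching the reported statistic
```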
In total, 69 monoallelic variants were identified in BEST1 . Forty-seven variants were reported previously, and 22 variants were previously unreported. The variants comprised 64 missense variants, 1 frameshift deletion, 1 frameshift duplication, 1 in-frame duplication, and 2 intronic variants. Thirty-five recurrent variants were detected in multiple patients and 34 unique variants were detected in a single pedigree. Four variants were classified as pathogenic, 47 as likely pathogenic, and 18 as of uncertain significance. The localization of the identified BEST1 variants in the gene domains is illustrated in . The detailed results of in silico molecular genetic analysis are presented in (available at www.aaojournal.org ) and evolutionary conservation for the detected variants is shown in (available at www.aaojournal.org ). The most prevalent variants were c.652C>T, p.(Arg218Cys) (16/444 alleles; 3.6%); c.653G>A, p.(Arg218His) (12/444 alleles; 2.7%); c.728C>T, p.(Ala243Val) (11/444 alleles; 2.5%); c.892T>G, p.(Phe298Val) (8/444 alleles; 1.8%); c.37C>T, p.(Arg13Cys) (7/444 alleles; 1.6%); c.288G>C, p.(Gln96His) (7/444 alleles; 1.6%); c.914T>C, p.(Phe305Ser) (7/444 alleles; 1.6%); and c.90G>C, p.(Lys30Asn) (5/444 alleles; 1.1%).
The 3 most prevalent variants were analyzed for genotype–phenotype correlation: p.(Arg218Cys), p.(Arg218His), and p.(Ala243Val). The mean ± SD age at onset was 21.5 ± 15.5 years (range, 5.0–47.3 years) for p.(Arg218Cys), 28.2 ± 16.0 years (range, 7.3–58.8 years) for p.(Arg218His), and 50.6 ± 20.7 years (range, 13.5–84.8 years) for p.(Ala243Val). Comparison across the 3 variants revealed a significant difference ( P = 0.0007, analysis of variance), showing a later onset for p.(Ala243Val) compared with p.(Arg218Cys) ( P = 0.0006, t test with Tukey correction) and p.(Arg218His) ( P = 0.012, t test with Tukey correction). Mean ± SD age-adjusted BCVA was 0.43 ± 0.35 logMAR (Snellen equivalent, mean, 20/54) for p.(Arg218Cys), 0.47 ± 0.43 logMAR (Snellen equivalent, mean, 20/59) for p.(Arg218His), and 0.13 ± 0.34 logMAR (Snellen equivalent, mean, 20/27) for p.(Ala243Val). Comparison across the 3 variants again revealed a significant difference ( P = 0.01, analysis of variance), showing a better age-adjusted BCVA for p.(Ala243Val) compared with p.(Arg218Cys) ( P = 0.03, t test with Tukey correction) and p.(Arg218His) ( P = 0.01, t test with Tukey correction). For patients harboring p.(Arg218Cys) (28 eyes; mean ± SD follow-up, 7.1 ± 7.0 years), the frequency of Gass stage 1 dropped from 21.4% at baseline to 14.4% at the last visit, and similarly for stage 2, with a drop from 14.2% to 10.7%. The occurrence of stage 3 rose from 10.7% to 17.9%, that of stage 4 declined from 17.8% to 10.7%, and that of stage 5 increased from 35.7% to 46.4%. Patients harboring p.(Arg218His) (20 eyes; mean ± SD follow-up, 2.9 ± 2.7 years) exhibited a stable frequency of Gass stage 1 of 5.0% at baseline and at the last visit. Stage 2 declined from 45.0% to 30.0%, whereas stage 3 became more frequent, from 10.0% to 20.0%. Stage 4 decreased from 10.0% to 0.0%, and stage 5 increased from 30.0% to 45.0%. In contrast to both of the above variants, patients harboring p.(Ala243Val) (16 eyes; mean ± SD follow-up, 7.6 ± 5.5 years) showed a stable and high frequency of Gass stage 1 of 25.0% at baseline and at the last visit. Stage 2 occurred in 50% of the patients at baseline and fell to 12.5% at the last visit. No patient was classified as having stage 3 disease at baseline, whereas at the last follow-up, 6.25% of patients were classified as having stage 3 disease. The occurrence of stage 4 disease increased from 12.5% to 43.8%, whereas the low rate of stage 5 disease of 12.5% remained stable from baseline to the last visit. In summary, a higher frequency of stage 1 and 2 disease was found in patients with p.(Ala243Val) compared with patients with p.(Arg218Cys) and p.(Arg218His), whereas stage 5 disease occurred more frequently with the latter variants, corroborating the decreased severity of p.(Ala243Val) compared with p.(Arg218Cys) and p.(Arg218His).
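These three-way comparisons correspond to a one-way ANOVA followed by post hoc pairwise tests. A minimal sketch with placeholder samples (per-patient ages at onset are not published); note that scipy.stats.tukey_hsd requires a recent SciPy release:

```python
from scipy import stats

# Placeholder age-at-onset samples per variant (illustrative only)
onset_r218c = [12.0, 18.5, 25.0, 30.2, 21.8]   # p.(Arg218Cys)
onset_r218h = [20.1, 28.4, 35.7, 26.3, 30.5]   # p.(Arg218His)
onset_a243v = [45.0, 52.3, 60.1, 38.9, 56.7]   # p.(Ala243Val)

# One-way ANOVA across the three variant groups
f_stat, p_value = stats.f_oneway(onset_r218c, onset_r218h, onset_a243v)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

# Post hoc pairwise comparisons with Tukey's correction
print(stats.tukey_hsd(onset_r218c, onset_r218h, onset_a243v))
```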
The current study systematically described the detailed molecular, clinical, and morphologic characteristics associated with BVMD both cross-sectionally and longitudinally over a broad range of ages. This cohort of 222 patients from 141 families represents the largest published series to date to undergo detailed clinical characterization and genotype–phenotype investigation.
Clinical Presentation

The clinical characteristics of this cohort largely are in keeping with those in reports from smaller series in the literature. Most of the patients demonstrated a deterioration of central vision at presentation, followed by referral because of incidental findings or a positive family history of BVMD. The reported mean BCVA of 0.35 logMAR at presentation (mean age, 26.8 years) is similar to that in a published Chinese cohort (n = 87; mean age, 31.8 years) with a mean BCVA of 0.42 logMAR. The correlation of age and BCVA at presentation, as well as the slow annual progression rate of visual deterioration (<1 ETDRS letter/year) found in this cohort, largely are in keeping with previous reports describing a worse BCVA in older patients, with a rather slow progression rate in earlier stages of the disease. Similarly, the increase in frequency of more advanced stages (vitelliruptive and atrophy or fibrosis) at the last visit compared with baseline corroborates the existing literature. Unilateral presentation with morphologic changes limited to 1 eye occurred rarely in this cohort and was reported previously for BEST1 variants causing BVMD, as well as for adult-onset vitelliform macular dystrophy caused by variants in IMPG2 .
Refractive Error and Amblyopia

Most of the cohort (78.7% [185/235 eyes]) showed a refractive error in the hyperopic spectrum, a markedly higher proportion than the 4.8% reported in a population-based cohort and in agreement with previous BVMD reports. During embryonic development, impaired retinal pigment epithelium function resulting from alterations in BEST1 is anticipated to affect choroidal thickness and to disrupt scleral growth, causing the high rate of hyperopic refractive errors. Although retinal elevation resulting from subretinal deposits or fluid seems plausible as an additional cause of hyperopia, no correlation has been reported between the degree of hyperopia and vitelliform lesion height estimated by central subfield thickness. Furthermore, similar refractive errors in both eyes of individuals exhibiting asymmetric vitelliform lesions, and persistence of hyperopia after regression of the subretinal lesions in flat atrophic retinas, have been observed. The rate of amblyopia in this cohort (6.1% [13/213 patients]) is higher than the national average (3.6%) described in a cohort of children from the United Kingdom, which may be associated with the higher frequency of refractive errors in BVMD. These findings stress the importance of consistent management of the highly prevalent refractive errors in BVMD to avoid the development of amblyopia, in addition to the visual impairment derived from the retinal phenotype. Similar to BVMD, autosomal recessive bestrophinopathy (ARB) usually presents with hyperopic refractive error, but it is associated more often with angle closure, with a previously reported rate of 28.6% compared with 2.8% in the present cohort.

Alterations in BEST1 have been implicated in a spectrum of impaired ocular development, including reports of nanophthalmos, microcornea, early-onset cataract, and posterior staphyloma. A reason for the higher rate of angle closure in ARB may be the distinct subcellular protein quality control, leading to different protein degradation processes in BVMD and ARB. For autosomal recessive variants, rapid degradation in the endoplasmic reticulum has been observed, whereas dominant variants were able to escape endoplasmic reticulum-associated degradation, leading to slower disintegration via an endolysosomal pathway. This provides an explanation for the described dominant-negative effects of most genetic alterations causing BVMD, but it also could explain the more severe phenotype of ARB, because of the diminished protein levels resulting from rapid degradation in patients with biallelic or compound heterozygous variants causing ARB.
Choroidal Neovascularization

The occurrence of CNV in 17.3% of this cohort (37/213 patients) is higher than most previously described rates of clinically diagnosed CNV, ranging from 1.7% to 9.0%. Of note, a recent study with OCT angiography showed a substantially higher CNV rate of 50.4%, suggesting systematic underdiagnosis of CNV in BVMD. This can be attributed to the difficulty of identifying CNV within BVMD, because subretinal fluid and preexisting subretinal deposits are features of the underlying disease and are not associated exclusively with CNV. Comparing the mean BCVA at the last visit of eyes with and without a diagnosis of CNV did not reveal a significant difference in this cohort (0.44 logMAR vs. 0.47 logMAR). Although it has been reported that patients often retain a relatively good BCVA after the occurrence of CNV, this finding also might be in keeping with the hypothesis of an underdiagnosis of CNV, especially in earlier stages of the disease, driving progression and leading to a high number of patients in more advanced stages with undetected CNV as a cause. This hypothesis also is supported by our finding that the rate of CNV diagnosis was lower in the adult-onset group (17.4%) than in the childhood-onset group (52.5%). Comparing the outcomes of CNV with and without anti-VEGF treatment, we observed a mean BCVA at the last follow-up of 0.28 logMAR in the treated group and 0.62 logMAR in the observed group. This beneficial effect of anti-VEGF treatment corroborates previously published research, but selection bias in the present cohort has to be considered, with some patients not receiving anti-VEGF treatment despite a diagnosis of CNV because of, for example, severe atrophic or fibrotic changes, or both. Given the low number of injections needed per eye, the low rate of reported adverse events, and the beneficial outcome of treated patients in this cohort, we recommend administering anti-VEGF agents in patients with BVMD and secondary CNV in the presence of any active CNV, while advising the patient of potential concurrent vision-limiting features such as subretinal fibrosis or atrophy that could limit BCVA recovery.
Molecular Genetics

Sixty-nine BEST1 variants, including 22 novel variants, were detected in the current large cohort study. More than 90% were missense variants, in keeping with findings in a Chinese cohort (32/37 variants [86.5%]), and these missense variants are located in the highly conserved N-terminal half of the protein, as described previously. These findings are consistent with the hypothesized disease mechanism of dominant-negative effects. Interestingly, some of the variants detected in the current BVMD study also have been identified in ARB: c.37C>T, p.(Arg13Cys); c.302C>T, p.(Pro101Leu); and c.889C>T, p.(Pro297Ser). Patients with biallelic BEST1 variants also can exhibit a phenotype similar to BVMD, whereas those with the same variants in a heterozygous state may not manifest the same clinical phenotype. Reports exist of families with semidominant inheritance, in which biallelic variants caused a severe BVMD or ARB phenotype and a monoallelic variant caused mild BVMD. Further functional studies, such as of chloride conductance, cellular localization, and stability, may reveal the exact functional effect and disease mechanism of each variant. The prevalent variants identified in the current study show different clinical characteristics. More severe phenotypes were observed for p.(Arg218Cys) and p.(Arg218His) and a milder phenotype for p.(Ala243Val), as previously reported for p.(Arg218Cys) in comparison with p.(Ala243Val) in a smaller series. Although p.(Ala243Val) is localized in the intramembrane domain of the protein, functional analyses showed intact trafficking of BEST1 to the plasma membrane. However, the chloride ion current has been found to be impaired, at 10% of wild-type. Furthermore, cotransfection of p.(Ala243Val) with wild-type did not impair the ion current of the wild-type channel in the dominant-negative way that has been described for other variants. Interestingly, for the most prevalent BEST1 variant in this cohort, p.(Arg218Cys), mutant allele-specific gene editing restored calcium-activated chloride channel activity in human induced pluripotent stem cell-derived retinal pigment epithelium, indicating that gene augmentation therapy might be effective for these patients.
Study Limitations

Limitations of this study include the retrospective design, the absence of a control group, variability in follow-up duration, and the lack of a standardized assessment protocol. Different genetic testing protocols were applied, and familial segregation analysis was not completed in all pedigrees. This is the largest molecularly confirmed cohort to date, yet a larger cohort is needed to correlate structural and functional measures and to assess the progression rate of each BEST1 variant reliably.
This comprehensive analysis of clinical and genetic data of patients with BVMD contributes valuable insights for prognosis and genetic counseling and aids clinical trial design. Furthermore, the well-characterized cohort serves as a valuable resource for patient stratification in upcoming clinical trials, as well as for further natural history investigations. The slow disease progression in this cohort indicates a broad therapeutic window before advancement into atrophic or fibrotic stages, especially for the milder variant p.(Ala243Val). Conversely, the incidence of CNV in young patients identified here may underline the potential benefits of initiating treatment at a relatively early age.
Lipid profile and non-alcoholic fatty liver disease detected by ultrasonography: is systemic inflammation a necessary mediator?

Non-alcoholic fatty liver disease (NAFLD) encompasses a spectrum of diseases, including simple non-alcoholic fatty liver (NAFL), non-alcoholic steatohepatitis (NASH), and cirrhosis. NAFLD has become the leading cause of chronic liver disease globally and represents a major health problem because of its ability to cause liver-related complications such as cirrhosis and hepatocellular carcinoma. Studies showed that an unhealthy lifestyle, particularly a high-fat diet, was associated with an increased risk of hyperlipidaemia and NAFL, while the latter is recognized as the most common type of chronic liver disease linked to the development of cirrhosis of the liver and premature death. Targeted policy on the primary prevention of NAFLD may be informed by a better understanding of the association between the highly prevalent abnormal lipid profile and the risk of NAFL, as well as of the possible mechanisms. Previous animal and human epidemiological studies have suggested that hypertriglyceridemia is an important risk factor for NAFLD. Plasma levels of total cholesterol (TC) were also found to be positively correlated with NAFLD risk among adults, whilst only non-HDL-cholesterol was identified as an important risk factor for NAFLD in adolescents. Although the biological mechanism is still elusive, the “two-hit hypothesis” is considered the main pathogenetic process of NAFL, involving hepatocellular inflammation induced by the toxic effects of excessive lipids in the liver, while the intrahepatic accumulation of lipids itself further aggravates the hepatic burden and causes fibrosis. However, epidemiological studies on the association between lipid profile and NAFL have been limited, as most previous studies focused on a single lipid index, and few have depicted a whole picture of the impacts of different lipid indices on the risk of NAFL. Recent studies showed that chronic systemic inflammation is a key hallmark of metabolic dysfunction-associated fatty liver disease, and C-reactive protein (CRP), an important index of chronic systemic inflammation, has been linked to NAFLD in recent studies, but whether CRP also mediates the association between dyslipidaemia and NAFL is not known. Therefore, we conducted this study to examine the associations of NAFL with lipid profiles, including total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C), to evaluate which lipid index best predicts the risk of NAFL, and to explore possible mechanisms.
Study design and population

The detailed study design has been described previously. Briefly, the prospective cohort study was initiated from May 2013 to October 2015, with the original purpose of investigating the health impact of shift work. All participants were recruited from five enterprises in South China (machinery and semiconductor manufacturing, printing, electric power, and petroleum industries). Abdominal ultrasounds were performed by two experienced sonographers using a B-ultrasound machine (model WED-9618C, Shenzhen Zhongke New Materials Technology Co., Ltd), and images were captured in a standard fashion with the subject in the supine position and the right arm raised above the head. Blood draw and US examination were conducted on the same morning. After excluding data with missing lipid profiles and abdominal ultrasound findings, 4047 individuals remained. Furthermore, we excluded 31 female workers because of the uneven gender distribution in these enterprises; thus, a total of 4016 male workers were included in the present study. Among the 4016 individuals, 1673 were tested for plasma CRP levels ( Supplementary Figure 1 ). Information on each participant's sociodemographic characteristics (age, gender, education, marital status, height, weight, waist circumference), lifestyle (smoking, physical activity, alcohol drinking), and shift work was collected using a standardized questionnaire. Alcohol drinking was defined as drinking at least once per week for more than half a year. Physical exercise was defined as exercising >3 times/week for at least 20 min per session. Smoking (current or former) was defined as having ever smoked at least one cigarette per day for more than half a year. The health examinations were performed by certified physicians and nurses. Body mass index (BMI) was calculated as body weight (kg) divided by the square of height (kg/m²). In the present study, we used ultrasonography (US) for the diagnosis of NAFLD; US is a safe, non-invasive, and cost-effective imaging method for assessing patients with suspected NAFLD, and a large systematic review with meta-analysis showed that it allows reliable and accurate detection of moderate-to-severe fatty liver compared with histology. Considering its low cost, safety, and accessibility, ultrasound is likely the imaging technique of choice for screening for fatty liver in population settings. Ultrasonographic diagnosis of NAFL was defined as the presence of a diffuse increase in fine echoes in the liver parenchyma compared with those in the kidney or spleen parenchyma. The present study adheres to the Declaration of Helsinki, and written informed consent for participation was obtained.
Biochemical measurements

Fasting blood samples were drawn in tubes, separated into serum and plasma, and stored at −80 °C until analysis. The plasma concentrations of fasting plasma glucose (FPG), TC, TG, HDL-C, and LDL-C were determined in a clinical laboratory using a blood biochemical analyzer. Abnormal values for each lipid index were defined according to the following thresholds: TC ≥ 6.2 mmol/L, TG ≥ 1.69 mmol/L, HDL < 1.03 mmol/L, and LDL > 3.3 mmol/L; dyslipidaemia was considered present if any of these lipid indices was abnormal. Blood glucose was considered abnormal when fasting glucose was ≥ 6.1 mmol/L. Plasma CRP concentrations were measured using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (R&D Systems, Minneapolis, MN, USA). All tests were performed in duplicate according to the manufacturer's instructions.
Statistical analyses

Sociodemographic and socioeconomic characteristics of the participants are reported as mean (standard deviation [SD]) for continuous variables and as number (percentage) for categorical variables. Differences in these characteristics between subjects with and without NAFL were tested using the independent t-test and the χ² test, respectively. Multivariable unconditional logistic regression was used to calculate the odds ratio (OR) of NAFL in relation to the lipid profile (TC, TG, HDL-C, and LDL-C) after adjusting for covariates (age, marital status, education, smoking status, alcohol consumption, leisure-time exercise, BMI, waist circumference, shift work, and glucose level). Stratified analyses were performed according to the adjusted variables. Logistic regression was used to estimate the association between CRP levels and NAFL, and generalized linear regression was used to examine the association between CRP levels and lipid profiles. Restricted cubic spline regression with four knots (at the 5th, 35th, 65th, and 95th percentiles) was used to examine the dose-response relationship between the lipid profile and NAFL, with the minimum value of the lipid index as the reference. The R package 'mediation' (V.4.4.5) was used to analyze the mediating effect of CRP in the relationship between lipid profile and NAFL. Receiver operating characteristic (ROC) curves were further explored to identify the best lipid predictor of NAFL, with comparisons made using the Z-test. All statistical analyses were carried out using SAS (version 9.4; SAS Institute, Inc., Cary, North Carolina, USA) or R software version 4.0.5 (R Core Team 2020). Statistical significance was determined at two-sided p < 0.05.
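As a concrete illustration of the main model, the sketch below fits the adjusted logistic regression in Python rather than SAS/R; the file and column names are hypothetical placeholders, since the cohort data are not public:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per worker; column names are illustrative, not from the dataset.
df = pd.read_csv("cohort.csv")

# Multivariable logistic regression of NAFL (0/1) on abnormal TG,
# adjusted for the covariates listed above.
model = smf.logit(
    "nafl ~ tg_abnormal + age + C(marital) + C(education) + smoking"
    " + alcohol + exercise + bmi + waist + shift_work + glucose",
    data=df,
).fit()

# Odds ratio and 95% CI for abnormal TG
or_tg = np.exp(model.params["tg_abnormal"])
ci_lo, ci_hi = np.exp(model.conf_int().loc["tg_abnormal"])
print(f"OR = {or_tg:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```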
Characteristics of the study participants

Among the 4016 male workers, 829 (20.64%) were diagnosed with NAFL. As shown in , the mean age was 32.08 years (SD: 8.155). The mean values of TG, TC, HDL, and LDL were 1.29 ± 0.937 mmol/L, 4.74 ± 0.929 mmol/L, 1.34 ± 0.284 mmol/L, and 2.48 ± 0.652 mmol/L, respectively. There were significant differences in age, BMI, marital status, drinking, exercise, smoking, waist circumference, glucose, TG, TC, HDL, and LDL levels between subjects with and without NAFL (all p < 0.05), while no significant differences were observed for education and shift work ( p = 0.558 and 0.360, respectively).
Associations of lipid profile and NAFL

Associations between the lipid profile and NAFL are shown in . Compared with individuals with a normal lipid profile, those with an abnormal lipid profile had a higher prevalence of NAFL (OR = 2.27, 95% CI: 1.85-2.79 for TG; OR = 1.45, 95% CI: 1.03-2.04 for TC; OR = 1.56, 95% CI: 1.21-2.02 for HDL; OR = 1.65, 95% CI: 1.25-2.18 for LDL; OR = 2.28, 95% CI: 1.87-2.77 for dyslipidaemia) after adjusting for potential confounders. Dose-response relationships with NAFL were observed for TG and HDL levels ( ). Stratified analyses of the lipid profile and NAFL are shown in . Abnormal TG was associated with NAFL in all subgroups except those with abnormal glucose levels, and dyslipidaemia was associated with NAFL in all subgroups except those with low education levels.
Mediation analyses

Supplementary Table 1 shows the associations of the lipid profile with CRP; TC and LDL were significantly related to CRP (β = 0.70, 95% CI: 0.13-1.28 for TC; β = 1.28, 95% CI: 0.67-1.88 for LDL) after adjusting for potential confounders. CRP was associated with increased odds of NAFL (OR = 1.04, 95% CI: 1.00-1.09) ( Supplementary Table 2 ). However, no significant mediating effect of CRP was found on the association between an abnormal lipid profile and NAFL (all p > 0.05) ( ).
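The mediation analysis was run in R with the 'mediation' package; an analogous sketch in Python uses statsmodels' Mediation class. Column names and the file name are hypothetical, and the covariate set is abbreviated for illustration:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

df = pd.read_csv("cohort.csv")  # hypothetical file with the study variables

# Outcome model: NAFL (0/1) on the mediator (CRP), exposure (abnormal TC),
# and covariates
outcome_model = sm.GLM.from_formula(
    "nafl ~ crp + tc_abnormal + age + bmi", df, family=sm.families.Binomial()
)
# Mediator model: CRP on the exposure and covariates
mediator_model = sm.OLS.from_formula("crp ~ tc_abnormal + age + bmi", df)

med = Mediation(outcome_model, mediator_model, "tc_abnormal", "crp").fit(n_rep=500)
print(med.summary())  # ACME (mediated) and ADE (direct) effects
```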
ROC analyses

The AUCs of the specific lipid indices (TG, TC, HDL, and LDL) for predicting NAFL were all significantly greater than 0.5; among them, TG (AUC = 0.754, 95% CI: 0.736-0.771) had a significantly higher AUC and Youden index (cut-off 1.12 mmol/L) than the other indices ( and Supplementary Figure 2 ).
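The ROC-based choice of TG and its cut-off can be illustrated with the Youden index J = sensitivity + specificity − 1, which is maximized to obtain the optimal threshold (1.12 mmol/L for TG in this study). A minimal sketch with simulated placeholder data, since the cohort data are not public:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated placeholder data: y = NAFL status, tg = triglycerides (mmol/L)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
tg = rng.normal(1.1 + 0.6 * y, 0.5)

auc = roc_auc_score(y, tg)
fpr, tpr, thresholds = roc_curve(y, tg)

# Youden index J = tpr - fpr; the optimal cut-off maximizes J
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}, cut-off = {thresholds[best]:.2f} mmol/L, J = {j[best]:.2f}")
```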
Approximately 1 in 3 people in the US have NAFLD, and in Asian populations such as Japan, Korea, and China, the prevalence of NAFLD is approximately 25%–45% and continues to increase as a result of the westernization of dietary habits, decreased physical activity, and increasing obesity. In the present study, 20.64% of male workers were diagnosed with NAFL, which is lower than the prevalence of 24.81% in males reported in a meta-analysis of 48 studies in mainland China. The lower prevalence of NAFL among our study subjects may be related to their younger age (80.40% aged 20–40 years) and relatively healthier behaviors (such as less drinking, less smoking, and more exercise); the use of US for the diagnosis of fatty liver, which may miss patients with early fatty liver, may also have contributed. In addition, the TG, TC, and LDL levels in our sample were lower, and the HDL level higher, than those of US males in 2018. We found that an abnormal lipid profile was linked to an increased prevalence of NAFL, despite the study being conducted among people with a relatively healthy lifestyle. NAFLD is a clinicopathological condition characterized by significant lipid deposition. High fat consumption has been associated with an increased risk of dyslipidaemia, causing accumulation of lipids in the liver, which in turn increases the amounts of free fatty acids (FFAs), free cholesterol (FC), and other harmful lipid metabolites and triggers toxic effects in the liver, consequently resulting in NAFL and the development of NAFLD. The hepatic accumulation of TG in lipid droplets has been revealed to be a prerequisite for the development of NAFLD, which may explain our findings that TG was associated with NAFL in a dose-response manner and that abnormal TG was significantly and stably associated with NAFL, in line with previous studies. Furthermore, we found that TG is a more suitable index to predict the risk of NAFL based on ROC curves; however, this is a less studied area that deserves further research to seek supportive evidence from cohort studies. The underlying mechanism linking abnormal lipids with NAFL may involve insulin resistance, which makes adipose tissue resistant to the antilipolytic effect of insulin, leading to TG breakdown and the formation of free fatty acids and glycerol, which are taken up by the liver, where TG then accumulates. Moreover, higher insulin levels modulate hepatic lipid metabolism by increasing TG synthesis, which promotes steatosis, lipotoxicity, and progressive liver injury. At present, a multifactorial pathogenesis has been postulated; however, the pathogenesis of NAFLD is still not completely understood. Using ultrasonography to diagnose NAFLD requires expertise and specific instrumentation that are generally not available in the general clinic; thus, specific risk factors based on biochemical examination would contribute to estimating the impact of fat accumulation on NAFL. Both previous mechanistic studies and our study indicate that lipid measures in blood plasma could serve, to some extent, as surrogates for the fat level in the liver and as suitable indices for monitoring liver health. Inflammatory cytokines and adipose tissue cytokines have been considered significant factors contributing to the development and progression of NAFLD.
For instance, C-reactive protein (CRP), interleukin-1β (IL-1β), interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and intercellular adhesion molecule-1 (ICAM-1) have been reported to be positively associated with higher risks of NAFLD. Consistent with a previous study in which CRP was found to be significantly associated with non-alcoholic steatohepatitis and hepatic fibrosis, our study also indicated that CRP was significantly associated with NAFL; however, we did not find a mediating effect of CRP on the association between lipid profile and NAFL, which needs to be examined in future studies. In addition, the roles of the above-mentioned inflammatory cytokines and adipose tissue cytokines in the association between lipid profile and non-alcoholic fatty liver disease also need to be assessed in the future. Several studies have shown that obesity is the most important risk factor for simple steatosis. In our study, a significantly increased OR of NAFL was demonstrated among subjects with higher BMI and waist circumference. One possible reason may be that obesity is also a key risk factor for dyslipidaemia, and the joint effect of obesity and abnormal blood lipid levels could increase the risk of NAFL. A previous study showed that the hypertriglyceridemic-waist phenotype carried the highest risk of NAFLD, which also supports our results. Higher risks of NAFL with abnormal lipid indices were observed in shift workers, providing further evidence of the harmful effect of shift work on NAFLD. These findings suggest that shift workers should pay more attention to their blood lipid indices to mitigate the disease burden related to abnormal lipid profiles. In addition, our results showed that an abnormal lipid profile was linked with the prevalence of NAFL in the normal-glucose group but not in the abnormal-glucose group, although a higher, non-significant OR was observed in the abnormal-glucose group; an association cannot be ruled out given the relatively small sample size. Considering that NAFLD and type 2 diabetes are common medical conditions that regularly co-exist in the real world and may act synergistically to drive adverse outcomes, blood glucose should also be considered alongside blood lipid profiles. The findings of this study highlight the importance of lipid biomarkers for the noninvasive diagnosis of NAFL, and TG may be a more suitable lipid index to predict the prevalence of NAFL. These findings may have important public health implications for the early diagnosis of and intervention in NAFL. In addition, our study suggests that CRP is not the main mechanism in the pathway between blood lipids and the risk of NAFL, although CRP was associated with both blood lipids and the prevalence of NAFL. However, some limitations of this study should be acknowledged. The observational cross-sectional design based on the cohort's baseline survey limits inferences about temporality and causality. Moreover, we could not distinguish simple steatosis from non-alcoholic steatohepatitis owing to data limitations. In addition, reliance on ultrasound for the diagnosis of fatty liver may miss a significant proportion of early-stage fatty infiltration; only male workers were included in the present study; and the lack of information on participants' medications, including antihyperlipidemic drugs, may also have lowered the observed prevalence of NAFLD. Further comprehensive analyses with more detailed information should be conducted in the future.
Our study demonstrates that abnormal blood lipid levels are positively associated with NAFL and that TG tends to be the better predictor of NAFL. Abnormal lipids are more likely to exert a direct effect on NAFL than to act through a mediating effect of CRP. More human epidemiological and animal studies should be conducted to confirm our findings and explore the related biological mechanisms.
Overcoming the translational crisis of contemporary psychiatry – converging phenomenological and spatiotemporal psychopathology

Psychiatry made enormous scientific progress in the last 50 years in various fields, including genetics and brain imaging. However, some authors have recently declared a 'crisis of contemporary psychiatry', because both disciplines failed to deliver the promised results for daily clinical application in diagnosis and therapy. While some authors suggest developing computational psychiatry more strongly to resolve the 'crisis of mechanisms', others propose that there is a lack of appreciation of the subjective component of psychopathological symptoms (Fig. ). The subjective component is, for instance, taken into focus in Phenomenological Psychopathology (PP), which targets first-person experience of time and space, self, body and world as the subjective core of psychopathological symptoms and/or related cognitive-affective function. However, this leaves open how first-person experience, including the psychopathological symptoms, links to third-person observations of the brain's neural activity changes.

The main claim of our perspective is to show how to address this gap in our current knowledge. We propose that PP provides an ideal stepping stone to address both the crisis of subjectivity and the crisis of mechanism. For that, PP needs to converge with what has recently been introduced as Spatiotemporal Psychopathology (STPP): the key assumption here is that brain, experience and symptoms are intimately connected through their shared basic spatiotemporal disturbance as a 'common currency'. Such a merger of PP with STPP can address both the crisis of subjectivity and the crisis of mechanisms. This shall be illustrated by the experience of space and time as well as their possibly related neuronal correlates in schizophrenia (SZ), mood (uni- and bipolar) and anxiety disorders (AD). The two sections on space and time organization in both experience and brain share the same structure: first, we introduce the historical and phenomenological origins of the spatiotemporal concepts as such; second, we discuss those EEG and MRI studies in SZ, bipolar disorder (BD), major depressive disorder (MDD) and AD that conjointly examined subjective experience of space or time including their neuronal correlates (see supplement for the methodological approach). We hypothesize that the altered experience of space and time in psychiatric disorders can be mapped onto correspondingly altered spatiotemporal dynamics of the brain's neural activity. This might provide the currently elusive link between brain, first-person experience and psychopathological symptoms that makes it possible to develop differential-diagnostic markers for clinical practice.
Subjective experience of space

Among the historical authors who used PP to examine the subjective experience of space are Karl Jaspers (1883–1969), Hans Sexauer (unknown), Ludwig Binswanger (1881–1966), Klaus Conrad (1905–1961), and Paul Matussek (1919–2003), among others. For instance, Jaspers described an SZ patient who experienced infinite space ; Jaspers speaks of “space with an atmosphere“ . An SZ patient of Sexauer reported: “If you put the sofa cushions in disorder, everything is blurred - the laws as well. It’s just such a hustle and bustle in the whole world. The points of the compass, as you learned them as a child - north is no longer, as it used to be, the North Pole, the cold. Today, the differences are no longer like that, you can swap them around like a spinning top. I just orient myself to what’s here in the room around me” [ p. 814]. Binswanger described a patient who, “lying in bed, sees and feels how a piece of the railroad body, which is located some distance below his window, comes up into his room and penetrates into his head. There is palpitation, fear that the light of life will go out, and a severe headache in the forehead due to the track penetrating into the brain. The sick person is perfectly oriented in the oriented space, he knows that the railroad body lies down there and remains lying; at the same time, however, he sees it coming up and is fully aware of this spatial discrepancy, explicitly declaring that it is ‘something so stupid, so stupid’ that one knows one thing and still experiences the other” [ p. 634]. Interestingly, the patient experiences a unity with the space surrounding him. Space is divided here into a normal and a pathologically altered space, because the experience of the appearance of the rails and their penetration into his head also takes place in this space. Conrad described the development of SZ as a gradually progressive process that begins with an essential change and a considerable fragmentation and constriction of the overall psychic field against the background of an indefinite “imminence” or “alienation” [ p. 41]. The space around the patient feels unsafe and threatening. The patient does not know what is going on and is extremely confused. Conrad called this anxious state of tension “trema”. Later on, Matussek described three Ganzeigenschaften (English: general characteristics) of perceived objects in the space surrounding the patients, namely the structure, the whole nature or quality (German: Ganzbeschaffenheit oder –qualität), and the essence (German: Wesenseigenschaft) [ p. 293]. According to Matussek, the essential disturbance in the experience of SZ patients lies in the observation of altered properties of an object, a situation, or an event in the space surrounding them: it is not a change of the considered object or event in and by itself, that is, of its basic structure or quality, but its increased and expanded emergence within an abnormally experienced spatial context [ p. 287]. Further, a 45-year-old male patient treated at the Central Institute of Mental Health (CIMH) described his disturbed spatial experience as follows: “Things merge, it’s like being stuck. Other people were too close to me and threatened me.” Another 27-year-old female SZ patient treated at CIMH stated: “I felt at that time as if I were in another room, in the basement. This room was set up exactly like the room I was actually in, except that in my imagination it was underground. When I left the room straight ahead, I thought I had climbed a flight of stairs.” Finally, a 26-year-old female patient treated at CIMH reported on her experience of dissolving boundaries between her body and the space surrounding her: “This foreign feeling is often there, although I know the place or way well. This happens mainly when I feel a sensory overload. I then feel like a ping-pong ball. I feel a physical restlessness and tension and have lost my orientation. […] The body boundaries of the skin dissolve. It’s alternating big and small, pulsating like ray movements or vibrations. […] The walls are getting closer and closer to me - exponentially. It doesn’t come to a bang, though, but feels like it’s right in front of it and feels like I’m being crushed by the walls. When I change position, the feeling subsides. I don’t know exactly how long these experiences last, probably between 10 min and half an hour. […] I step on a border and enter another area that is not mine and is foreign to me.”

Together, both the historical and contemporary patient cases illustrate that SZ experience is characterized by an abnormal blurriness and fragmentation of the spatial boundaries between self, body and world. This serves as the basis for many of the psychopathological symptoms such as delusions, ego disturbances and aberrant orientation in the environment. In addition to qualitative phenomenological studies, a recent study by Stanghellini et al. examined abnormal space experience (ASE) in a large sample of SZ patients. This study showed that the experience of blurring spatial boundaries between self, body and environment is a key feature of the puzzling metamorphosis of the SZ lifeworld. Hence, this study empirically confirmed what Jaspers, Sexauer, Conrad and Matussek had described in more qualitative terms. What the above authors all describe in similar terms is that the spatial separation of one’s own person from other people or objects no longer exists in SZ. In particular, SZ patients have the experience of being penetrated/invaded by other people and objects in the personal/peripersonal space around them. There is a dissolution of spatial boundaries between SZ patients, other persons and objects; this can lead to the development of delusions, specifically delusions of alien control, as well as of self- or ego-disturbances (passivity phenomena) . More broadly speaking, this reflects the confusion of internally- and externally-oriented cognition that is typical for SZ . Besides ego-disturbances in terms of passivity phenomena, SZ patients also experience porous borders between self and other persons that lead to difficulties in their social cognition, including mindreading and the misinterpretation of facial emotions. Due to the threatening nature of such experiences, SZ patients tend to withdraw socially and avoid any interaction with others. A 33-year-old male patient treated at CIMH stated: “In the acute phase, I retreat into a shell to protect myself as things blur and boundaries are lost.”

Are these spatial experiences specific to psychosis and SZ? To address this, we turn to spatial experience in mood disorders. On the one hand, MDD patients experience a constriction of their perceived sensorimotor space , their movements are slowed, and other people and objects are difficult for them to reach . Fuchs speaks of “a gap between the body and its surroundings” . Stanghellini also offers a similar explanation when he claims that patients with melancholia often experience their body “as an obstacle between the self and the world” . A 28-year-old female patient with MDD treated at CIMH described her disturbed spatial experience similarly: “It is as if I am sometimes beside myself. The environment then seems colorless and pale and the distance to other people becomes huge, I feel rejected and ignored.” In MDD, contact with other people and the environment seems far away, leading to social isolation. Another MDD patient treated at CIMH described her experience as follows: “In the first breakdown, I felt like I was heavy as lead and something pushed me into bed. Others seemed far away and I felt isolated and distanced from my surrounding.” On the other hand, manic BD patients tend to have the feeling - in the context of delusions of grandiosity - that everything is within their reach . The patient has the impression of having all the possibilities in the world; everything seems within reach. One’s own radius of action is enormously extended due to the increased drive, euphoric mood and sleep disturbances (the day seems longer). According to Binswanger, “The world is too small for this being in expansion […] and distances become smaller” . Fuchs postulated that “the relation of person and space is characterized by centrifugal dispersion and dedifferentiation“ . In relation to the changed experience of space, manic patients also experience a metamorphosis of their lived body. Stanghellini described a manic patient who experienced his body turning upside down and expanding in space: “Just the way he was, then, with his body slightly leaning forwards and his head out, and constantly on the tips of his toes; all knotted up in a sort of total spasm, his jaw locked into a lockjaw and his face muscles all rigid, a slow and never-ending process of ‘turning upside down’ started up. […] But the biggest relief came from noting how his brain, finally freed into the open air, could fill up a much bigger space than what had been reserved for it ‘right side up’ inside the cranial cavity.” [ , pp. 139–140]. This case report impressively illustrates the so-called ‘maniacal corporeality’ [ , pp. 139–140]. The lived body gains an increased fluidity, flexibility, and mobility. These are the characteristics that lead to the assumption that these patients have unlimited possibilities in their repertoire of action and behavior.

Taken together, the experience of space clearly differs between SZ and MDD/BD. SZ patients experience fragmentation with a blurring of the spatial boundaries between self, world and body, while MDD and BD patients experience an abnormal constriction or extension of their lived space. Furthermore, the fact that spatial experience can be disturbed in different ways in SZ and MDD/BD provides evidence that space experience can support differential-diagnostic considerations. Pending further quantitative studies with the development of proper psychometric scales, this renders space experience a strong candidate for clinical differential diagnosis.

Spatial measures of the brain’s topography and its neuro-computational mechanisms

Unfortunately, we could not identify a single MRI study that investigated the neural mechanisms of aberrant spatial experience in psychiatric disorders. Therefore, we concentrated on MRI studies that focused on the spatial changes/features of brain network dynamics in SZ. But what exactly does a spatial change in network dynamics mean? Recent analytical approaches have made it possible to measure rapid shifts in activity across different networks (see also ). A recent study by Wang et al. examined the changes in dynamic brain states present in SZ patients and found that three states - co-activated brain areas (e.g. different activation patterns of the fronto-parietal control network, sensorimotor areas, visual cortex, insula and default network) - occurred less often in SZ patients than in healthy controls (HC), even though the spatial maps of these states appeared similar between the two groups ; this suggests specific dynamic changes in an otherwise similar network topography. Another MRI study by Iraji et al. examined spatial dynamics within and between brain networks in SZ. The authors concluded that the brain reorganizes its various networks at different spatial scales, including shorter and longer ones; at the macro level, this is expressed in dynamic changes in the variations of the spatial coupling among networks and their functional domains (cognition, affect, sensorimotor, etc.). Interestingly, a very recent resting-state fMRI study by Pan et al. examined the dynamic reconfiguration of the brain, i.e., the dynamic spatial interactions/changes between particular brain regions, for diagnostic purposes. This study proposed a spatiotemporal dynamic functional connectivity method for the diagnosis of SZ , achieving a significantly higher classification accuracy (81.82%) than other computational methods. A similar resting-state fMRI study by Kottaram et al. corroborated these findings and showed that the combination of both spatial and temporal dynamics of functional connectivity is able to predict diagnostic status with high accuracy, exceeding 90%. Together, these findings suggest dynamic changes in the network topography of SZ. Going beyond functional networks by taking a global approach to the brain, Yang et al. investigated how fMRI global signal activity is represented in single regions and networks, i.e., global signal topography (see also Zhang and Northoff for review). SZ patients showed a converse pattern of global signal topography in sensorimotor (low in SZ) and higher-order associative (high in SZ) regions compared to HC. Given that sensorimotor and higher-order regions are associated with externally- and internally-oriented cognition, respectively [ – ], the reversal in their global signal topography in SZ may contribute to these patients’ experience of blurring spatial boundaries between internal self and external world. Correspondingly, a review/meta-analysis demonstrated that SZ patients show decreased neural changes (mainly EEG) during the transition from internal prestimulus activity to external task-related activity – which entails a lack of distinction of the external task from ongoing internal cognitions. What is experienced as the blurring of spatial boundaries between the internal self and the external world may thus be mediated by a corresponding confusion of internal (prestimulus, higher-order regions) and external (task-related, sensorimotor) activity on the neural level of the brain’s intrinsic topography and its dynamics (that is, its changes over time). Accordingly, the spatial-topographic (and temporal-dynamic) confusion of internal and external activity/events may be shared by both experiential and neural levels as their “common currency” in SZ. Different topographic changes are observed in MDD and BD . Here, the brain’s topographic organization is shifted towards its inside, that is, towards its default-mode network at the expense of the sensorimotor regions, in depressed states of MDD and BD, while the opposite can be observed in manic states of BD [ – ]. Does such an inside/inwards constriction or outside/outwards extension of the brain’s topography correspond to the subjects’ experience of a restricted (depression) or extended (mania) subjective space? Future studies combining neural and phenomenological measures are warranted to test this hypothesis. These will also show whether the currently observed differences in brain topography, including their relationship to the experience of space, can serve as candidate biomarkers for the differential diagnosis of SZ vs MDD vs BD (Fig. ).
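As an illustration of what a "spatial change in network dynamics" can mean computationally, the following is a minimal sketch, assuming parcellated resting-state fMRI time series, of the sliding-window dynamic functional connectivity underlying the studies above; the window length, step size and simulated data are assumptions for demonstration only. In practice, the resulting windowed matrices are typically clustered (e.g., with k-means) into recurring "brain states" whose occupancy can then be compared between patients and controls.

```python
import numpy as np

def sliding_window_fc(ts, win=40, step=5):
    """Dynamic functional connectivity via sliding-window correlation.

    ts: array of shape (n_timepoints, n_regions), e.g. parcellated
    resting-state fMRI time series. Returns an array of shape
    (n_windows, n_regions, n_regions), one connectivity matrix per window.
    """
    n_t, _ = ts.shape
    mats = [np.corrcoef(ts[s:s + win].T) for s in range(0, n_t - win + 1, step)]
    return np.stack(mats)

# Hypothetical data: 300 volumes, 10 regions of simulated signal.
rng = np.random.default_rng(1)
ts = rng.standard_normal((300, 10))
dfc = sliding_window_fc(ts)
print(dfc.shape)  # (53, 10, 10): one region-by-region matrix per window
```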
Subjective experience of time

The examination of anomalous time (and space) experience in psychiatry was initiated in the 1920s by Eugène Minkowski (1885–1972), Ludwig Binswanger and Erwin Straus (1891–1975), among others. In particular, both Minkowski and Binswanger postulated aberrant time-space experience as a “basic disturbance” (= “trouble générateur”, a term coined by Eugène Minkowski) in psychiatric disorders. According to Straus , time experience is a medium of subjective experience in general. This said, all experiences, thoughts, actions and emotions/affects of psychiatric patients are dependent on changes in temporal experience in terms of form and content (for an overview see also ). More recently, Stanghellini et al. conducted a semi-quantitative investigation of the subjective experience of time in SZ and MDD. This study showed temporal fragmentation in SZ, characterized by disruptions in time flow, déjà vu, and premonitions about the self and the world . Relying on earlier and current phenomenological studies, the authors assumed a basic disturbance in the articulation or synthesis of time in SZ. A 26-year-old female SZ patient treated at CIMH described her fragmented time experience as follows: “In my first acute phase, I was only concerned with the current delusions. The past and the future did not play a role, there is then only the current urge. I also had the impression that several things were happening at the same time. Further, I felt daily cracks in time that I did not experience as continuous. […] I experienced time as if it were in individual blocks, some lasting only a few minutes and running like a program. Getting up, brushing my teeth, and having a cigarette in the morning all belong together. After that, there’s a break. Then making the bed, coffee and getting dressed. Sometimes there’s a crack from one minute to the next.” In SZ, time does not flow properly but is fragmented into individual scenes, like a cartoon film whose various snapshots are not connected at all. Importantly, these experiences are specific to SZ, as they are not reported by MDD patients. Instead, MDD patients rather experience a stagnation of time with reduced flow and dynamics . A 53-year-old female MDD patient treated at CIMH stated: “You are stuck in the past and almost petrified, brooding how it could come so far with no perspective for the future. It is associated with feelings of guilt and time passes very slowly for you.” This case shows that the experience of stagnating time is characteristic of MDD patients, as it is not observed in SZ. This amounts to a basic disturbance in the conation or inhibition of time in MDD (= the dynamic of time ) as distinguished from the altered construction or synthesis (= fragmentation) of time in SZ. How about the distinction between MDD and depressed BD? A recent study investigated subjects’ experience of the timing of their thoughts, i.e., thought dynamics, in MDD and BD by letting subjects report the changes in their internally- and externally-oriented thoughts . This study demonstrates that internally-oriented thoughts lasted longer, exhibited slower frequency, and showed less power (as calculated from the power spectrum of the time series of thought-content changes) in MDD compared to HC. Importantly, this pattern was partly similar (long duration of internally-oriented thoughts) and partly distinct (normal power of thought) in depressed BD patients – this again indicates the differential-diagnostic relevance of the experience of time, now with regard to the changes of one’s thought contents over time, that is, thought dynamics. The differential-diagnostic relevance of time experience for SZ as distinct from MDD and BD is further supported by the STEP (Scale for Space and Time Experience in Psychosis) . Importantly, particular STEP items, like the experience of temporal fragmentation and premonitions, are indeed specific to SZ as distinguished from both an HC and an affective group . Together, these studies show the feasibility of time experience as a differential-diagnostic marker for SZ vs MDD vs BD. Importantly, this goes along with the distinction of different dimensions of time such as continuity, speed/duration, prediction, and perspective [ , , ]. Continuity of time refers to the experience of the flow of time, which is disrupted in SZ . Time speed refers to the experience of a certain duration, which is often overestimated by AD patients while it is underestimated, and thus abnormally slowed down, in MDD . Time prediction refers to the experience of uncertainty/certainty about future changes and their time points, which is typically disturbed in AD patients . Finally, time perspective refers to the experience of the relationship of past, present and future, which is often abnormal in MDD (experience of a predominant past) and mania (experience of a predominant future) [ , , , ].

Temporal brain measures and neuro-computational mechanisms

How can the features of the subjective experience of time serve as a template for investigating the brain’s neural and computational mechanisms? The brain exhibits spontaneous activity (as distinct from task-evoked activity) which constructs its own inner time as distinguished from the outer time of the environment . As postulated in STPP, the brain’s spontaneous activity constructs its own ongoing inner time with temporal features that are more or less analogous to the ones that characterize the experience of time, namely continuity/flow, speed/duration, perspective and prediction. Given that analogous temporal features are shared by both brain and experience as their “common currency” , the different features or patterns of the experience of time may serve as an ideal starting point or stepping stone for searching for the neuro-computational mechanisms in the brain’s spontaneous activity with analogous temporal features. Specifically, the experience of the continuity of time and its temporal flow is closely related to the timing of both phase and amplitude in neural activity, as these can be measured and distinguished in EEG. SZ patients show temporal imprecision in the millisecond range in both phase and amplitude of their neural activity as measured with EEG. The neural findings of temporal imprecision in phase synchrony or entrainment to external stimuli [ , , ], in particular, are more or less specific to SZ as distinct from MDD and BD . On the psychological level, this is further supported by analogous temporal imprecision in the millisecond range in temporal behavior and time estimation . Finally, these neural, behavioral, and psychological findings of temporal imprecision are well in line with the experience of temporal fragmentation and premonitions reported by the SZ patients above: these two experiential features may result from an analogous disruption of the continuity of time and its temporal flow on the neural/computational level, based on temporal imprecision. Yet another dimension of time is speed. The data clearly show that MDD subjects experience time as too slow and stagnant (see above), which is manifest in their emotions, thoughts, perceptions, and movements. This is complemented by an analogous speed disturbance on the neural level. The global spontaneous activity is too slow in MDD (and also depressed BD), shifting its power towards the slower end of the infraslow power spectrum (0.01 to 0.1 Hz) as measured with fMRI [ , , ]. That goes along with decreased neural sensitivity to especially fast negative stimuli in both the motor cortex and the DMN, whose reduced stimulus-evoked activity correlates with the degree of psychomotor retardation . Accordingly, instead of temporal imprecision with disruption of temporal continuity as in SZ, MDD can rather be characterized by a disturbance of time speed, as these patients are too slow in their neural, experiential and cognitive-behavioral activity. Finally, time prediction is yet another dimension of time. This concerns how a previous time point can predict the possible changes and events at a future time point, which is closely related to the experience of certainty/uncertainty. AD patients typically suffer from the experience of uncertainty . On the neural side, one can observe desynchronization among different regions (showing decreased functional connectivity), especially in the cortical midline structures (CMS) and default-mode network, during both rest and task states . If the regions are not properly synchronized with each other, one can no longer temporally predict from one region’s neural activity to another, which, on the computational level, has been described as uncertainty . However, future studies are necessary to connect such lack of temporal prediction on the neural level with both computational uncertainty and the experience of temporal uncertainty, characterized by the inability to predict future events, in AD patients. Taken together, we can see how the different dimensions of the experience of time, including their disorder-specific changes in SZ, MDD, BD and AD, are accompanied by more or less corresponding neural and computational changes in the temporal features of the brain’s neural activity (Fig. ). This provides the first steps towards what has been described as “Computational phenomenology” or, as we would extend it, “Spatiotemporal computational phenomenology”. Specifically, the experience of time can serve as a template to guide neuro-computational investigation – this allows extending PP beyond experience to the brain. One can link those first-person experiential features of time that allow for clinical differential diagnosis of SZ, MDD, BD and AD (see above) to specific third-person observable measures of temporal dynamics in the neuro-computational mechanisms of the brain (and also in simulated models). Clinically, we can then use these experiential markers of time in conjunction with more or less corresponding neuro-computational markers of time, i.e., dynamics, in the differential diagnosis of SZ, MDD, BD and AD.
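To illustrate how "temporal imprecision in phase synchrony or entrainment" can be quantified, here is a minimal sketch of inter-trial phase coherence (ITPC, closely related to the phase-locking value) computed from stimulus-locked EEG epochs via band-pass filtering and the Hilbert transform. The 40 Hz entrainment paradigm, sampling rate and simulated jittered trials are assumptions for demonstration; reduced ITPC under larger per-trial timing jitter is the kind of effect reported for SZ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_phase_coherence(trials, fs, band=(38.0, 42.0)):
    """Inter-trial phase coherence (ITPC) in a narrow frequency band.

    trials: array (n_trials, n_samples) of stimulus-locked EEG epochs.
    Returns ITPC per sample in [0, 1]; values near 1 indicate precisely
    aligned phase across trials, low values indicate temporal imprecision.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Hypothetical 40 Hz entrainment paradigm: 60 trials, 1 s epochs at 500 Hz,
# with per-trial timing jitter standing in for temporal imprecision.
rng = np.random.default_rng(2)
fs = 500
t = np.arange(0, 1, 1 / fs)
jitter = rng.normal(0, 0.004, size=(60, 1))  # ~4 ms timing jitter per trial
trials = np.sin(2 * np.pi * 40 * (t + jitter)) + rng.standard_normal((60, fs))
itpc = inter_trial_phase_coherence(trials, fs)
print(itpc.mean())  # larger jitter -> lower mean ITPC
```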
PP has a long history of identifying disturbances in space and time experience, a line of work currently being developed further in SZ and MDD . The concept of the basic disturbance implies that we need to look for common spatiotemporal features that are fundamental to, and manifest across, the various symptom domains, e.g., motor, sensory, affective, cognitive, and social. As pointed out in our examples, different disorders like SZ, MDD, BD and AD display different kinds of basic spatiotemporal disturbances in both their subjective experience and neural activity; this, as we suggest, can be used in clinical differential diagnosis (Fig. ). Use of spatiotemporal experience in clinical diagnosis requires quantitative measures and tools. Having so far focused mainly on individual qualitative reports, PP needs to specify the subjective experience of space and time in a more granular and quantitative way. For instance, we may want to ask patients in our clinical interviews to self-report and assess their experience of time (and space) in the depressed state or when they are manic. Going beyond qualitative (or semi-qualitative) accounts, this requires the use of systematic, quantitative clinical scales (and ideally also behavioral tests) for the differential experience of space and time in SZ, MDD, BD and AD. There are already several validated instruments in English and German which fully or at least partially assess the altered spatial and temporal experience of patients with psychiatric disorders. These include the semi-structured qualitative and semi-quantitative psychometric interview EASE (Examination of Anomalous Self-Experience) , the semi-structured interview EAWE (Examination of Anomalous World-Experience) , and the STEP scale (Scale for Space and Time Experience in Psychosis) . Interestingly, the STEP confirmed the experience of blurriness and fragmentation of spatial boundaries in SZ and was able to show differences between the spatial experience of SZ and MDD patients . That might contribute to resolving the crisis of subjectivity by rendering its investigation more scientific, i.e., quantitative, without losing the subjective-experiential core of psychopathological symptoms. Further, PP needs to connect the first-person experience of space and time to more or less analogous spatiotemporal features in the brain’s neural activity, i.e., its topography and dynamics, as observed from the third-person perspective. We assume that altered space and time experience (as the bottommost layer of mental disorders, signifying their basic disturbance) is directly related to analogous changes in the spatiotemporal configurations of the brain’s neural activity – abnormal neuronal topography and dynamics translate into corresponding abnormalities in mental topography and dynamics . The neural and computational mechanisms driving the neural activity changes, including their manifestation in the various psychopathological symptoms, are then identified as spatiotemporal mechanisms. In the future, a combination of different methods (e.g., phenomenological, psychological/STEP-based and neurobiological/MRI topography- and dynamics-based) might allow us to distinguish patients with SZ from those with MDD, BD and AD on the basis of their basic spatiotemporal disturbance in both brain and subjective experience – this would contribute to resolving the crisis of mechanisms.
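As a sketch of how such a combination of methods could be operationalized, the following hypothetical Python example concatenates phenomenological scale scores (e.g., STEP items) with neural summary measures and estimates cross-validated diagnostic classification; the feature sets, group labels and random stand-in data are assumptions, not results from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical multimodal features per patient: phenomenological scale
# scores (e.g. STEP items) concatenated with neural summary measures
# (e.g. global-signal topography or dynamic-FC state occupancy).
rng = np.random.default_rng(3)
n = 120
scale_scores = rng.normal(0, 1, (n, 25))   # assumed 25 scale items
neural_feats = rng.normal(0, 1, (n, 10))   # assumed 10 imaging summaries
X = np.hstack([scale_scores, neural_feats])
y = rng.integers(0, 3, n)                  # assumed labels: 0=SZ, 1=MDD/BD, 2=AD

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# With random stand-in data this stays near chance (~0.33); with real
# multimodal data the same pipeline estimates diagnostic separability.
print(cross_val_score(clf, X, y, cv=5).mean())
```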
In sum, we are encountering a translational crisis of contemporary psychiatry, as both neurobiological/neurocomputational and phenomenological approaches to mental disorders seem to resist translation into clinical practice. We propose that PP provides an ideal starting point and template for resolving this crisis by converging its investigation of spatiotemporal experience with the spatiotemporal, i.e., topographic and dynamic, investigation of the brain proposed in STPP, which allows basic research to be translated into clinical practice.
Primary well differentiated hepatic liposarcoma in a meerkat
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Navigating the new normal in ophthalmology

Universal Precautions, Extended Work Hours, Day Care Surgeries and Reduced Throughput are the New Normal

The All India Ophthalmological Society (AIOS) and Indian Journal of Ophthalmology (IJO) have been providing timely guidelines to ophthalmologists to deal with COVID-19. The essence of it all is the unfailingly meticulous practice of physical distancing, hygiene, and universal precautions to protect the workforce and the patients. The central government guidelines currently allow us (except in designated containment zones) to routinely see patients and perform even non-emergency surgeries, including cataracts. Specific measures to be taken at the point of entry, waiting room, out-patient clinic, procedure room and operation theater constitute the new normal in our ophthalmology practice. If you are a facility that admits patients for surgery, it is time to move over completely to daycare surgeries. The price that we pay to regain and maintain patient volume would be an increased financial burden to enforce the safety norms, extended work hours, and reduced throughput to maintain optimal physical distancing.
Tele-ophthalmology is the New Normal

Although telemedicine technology has existed for over a decade, it has been sparsely used. We have very quickly rediscovered the power of virtual care during the lockdown. This issue of IJO carries a lucid description of a gamified teleconsultation model. As we understand that specialty triage of new patients can be performed remotely and that several patients with chronic ophthalmic diseases need not physically visit us for periodic evaluation and prescription refills, we can build the now-legalized routine practice of teleophthalmology seamlessly into our existing systems. Incorporation of simple, light, robust, scalable and fast cloud-based electronic medical records (EMR) and hospital management systems integrated with teleophthalmology would provide access to medical records at any point of consultation and minimize touch-points, while retaining complete control over clinical documentation and helping us provide an accurate prescription over teleconsultation. The visionary leadership of AIOS has seized the opportunity to constitute a well-represented teleophthalmology and EMR committee, which will very soon release its recommendations and roll out several free-to-use platforms. Virtual care at scale would release chair time in clinical practice to be used for the patients who truly benefit from it, and thus help the cause of physical distancing.
A Lean (but not so Mean) Strategy is the New Normal

Financial prudence is the key to staying afloat in troubled times. Today's new normal for healthcare businesses includes dealing with workforce management, sick workers, disrupted supply chains, cash flow crunches, uncertain compliance obligations, and the mechanics of incorporating new government programs. The keys to success are preparation, agility, accurate data collection, and a willingness to harvest good ideas. Cost containment without impacting the quality of care, recovery of financial dues, work efficiency, remodeling and multi-tasking of the workforce, and deferring needless expenses are some sound strategies for financial prudence. It may, however, be counter-productive to aggressively furlough or terminate essential employees and cut back much on their remuneration, unless that remains the only option for an organization to survive. The economic challenges are very real, and these actions may be unavoidable for some organizations. However, cutting costs only to preserve profits may serve to sink us further into recession. Pre-crisis profit targets have been overtaken by events that none of us could have reasonably anticipated, and, understandably, we will miss those targets. It is commendable that several charitable non-governmental organizations have not resorted to pruning employee remuneration and are volitionally concentrating on retaining talented employees. This may be a wise managerial strategy in the long term, for a grateful employee is often a loyal employee, and troubled times logically yield a higher gratefulness quotient. Optimal financial prudence with a heart may be the new mantra to stay afloat and thrive again.
Online Teaching and Assessment is the New Normal

It is estimated that the training of about 80% of ophthalmology residents has been adversely affected by the lockdown during the pandemic. Even when we open up, it may not be safe enough to conduct physical lectures and bedside teaching in larger groups. As we have already discovered, online teaching can be a gratifying experience and can considerably fill in for lost learning opportunities. As the opportunity-driven enthusiasm of the organizers of the 'me-too' webinars wanes, it is time to roll out robust, standardized, well-curated, curriculum-based, interactive, year-long online teaching modules by skillful teachers. The online avatar of the popular annual postgraduate education program iFocus is all set to debut in the next few weeks. The AIOS Ophthalmic Education, Training, and Evaluation Committee envisages the production of high-quality online teaching modules with the incorporation of standard assessment tools.
Physical Conferences Morphing into an Immersive Online Experience is the New Normal

Physical ophthalmology conferences are neither safe nor feasible under the circumstances. Uncertainty prevails as to when it will again be safe for large groups to gather, learn, browse ophthalmic products, and socially interact. Several prominent physical meetings have transformed into online versions; the World Ophthalmology Congress in June 2020 is the first such meeting we will witness. Technologically, it is feasible to create an immersive and engaging virtual conference environment with several simultaneous session halls and a walk-through trade exhibition with an option to buy online. This may be the way to go until it is safe to meet again. A hybrid version with an optimal combination of physical and virtual meetings may develop over time.
VUCA is an acronym, first used in 1987, describing the Volatility, Uncertainty, Complexity, and Ambiguity of general situations, and it fits perfectly with the way the world is today, thoroughly overwhelmed by the current COVID-19 pandemic. The concept of VUCA naturally progresses to a smart leadership response, VUCA 2.0, incorporating Vision, Understanding, Courage, and Adaptability, which may show us a way out of this crisis, alive and kicking. Let us wish each other a successful rebooting to Ophthalmology 2.0. See you on the other side.

“Humankind is now facing a global crisis. Perhaps the biggest crisis of our generation. The decisions people and governments take in the next few weeks will probably shape the world for years to come. They will shape not just our healthcare systems but also our economy, politics, and culture. We must act quickly and decisively. We should also take into account the long-term consequences of our actions. When choosing between alternatives, we should ask ourselves not only how to overcome the immediate threat, but also what kind of world we will inhabit once the storm passes. Yes, the storm will pass, humankind will survive, most of us will still be alive — but we will inhabit a different world.” - Yuval Noah Harari
|
Evaluating incident learning systems and safety culture in two radiation oncology departments | d266aba7-b4ac-4dc7-bdfc-0a534f40495c | 9163481 | Internal Medicine[mh] |
Approximately 50% of Australian cancer patients receive radiation therapy. The pathway from decision-to-treat to completion of a treatment course is highly complex, increasingly so due to continuously developing advanced technologies and techniques. There are many inter-linking clinical and technical process steps for creating individualised treatment plans and delivering treatment over multiple fractions. Radiation oncology involves a multidisciplinary team (MDT) of radiation oncologists (ROs), radiation oncology medical physicists (ROMPs) and radiation therapists (RTs), with support from oncology nurses and allied health professionals. The MDT work together for high-quality treatment and patient care, while also minimising risk. However, each activity/step, or interface between steps or information transfer point, has potential for error. Therefore, radiation oncology is subject to detailed quality management and control. Nevertheless, errors may occur. Errors and near misses are termed 'incidents', that is, unintentional events or unwanted changes from the normal intended process, which can potentially cause an adverse event. Further categorisation designates actual incidents or near-miss incidents. Actual incidents in radiation oncology are primarily errors where the dose delivered to a patient deviates from the prescribed dose or plan, with or without a clinically measurable effect. A near-miss incident is caught before any incorrect dose is delivered. Near misses can be identified during the pre-treatment preparation/planning phases or at quality assurance (QA) checkpoints and rectified before a patient's treatment begins. They can also be identified while the patient is on the treatment couch, immediately before each treatment delivery, by final QA checks or image guidance procedures. The proportion of treatments where an actual incident has occurred is considered relatively low within radiation oncology. For incidents where deviations are large enough to trigger mandatory reporting into national reporting systems, rates have been estimated at 0.2% per course, and for those having clinically significant consequences, estimates are one or two orders of magnitude lower. Nevertheless, given the potential consequences of actual incidents, radiation oncology facilities deploy comprehensive QA programs and Incident Learning Systems (ILSs), alongside promoting a positive safety culture (SC). The Australian and New Zealand bi-national radiation oncology practice standards (ROPS) recommend robust QA and incident management as requirements to mitigate risk. Internationally, radiation oncology departments have reported various local ILSs to support this. For robust incident evaluation, an ILS should include incident reporting, investigations (e.g. root cause analysis), data tracking, visualisation and practical feedback. It should also guide quality improvement (QI) and QA practices to guard against similar errors recurring. ILSs that analyse reported data to identify QI areas provide a more proactive and effective response to incident management than reactive changes to individual isolated reported incidents. Departmental ILSs that meet the reporting categories and needs of radiation oncology can help reduce error rates and provide appropriate data analysis and practical QI guidance.
ILSs specifically designed for radiation oncology, coupled with continuous assessment and improvement, have reduced the occurrence of significant incidents, provided QI insight and facilitated proactive approaches to quality and safety management. ILS success relies on the department's SC. Many studies have shown negativity and frustration from frontline staff regarding incident reporting and learning, more so than from management staff. Negative responses have focussed more on unsatisfactory approaches to investigations, corrective actions and learning from incident reports than on completing report forms. Staff show further frustration when ILSs are challenging or time-consuming to use. Departmental SC can affect attitudes to an ILS, its utilisation and feedback to staff. As part of ongoing quality assessment and improvement, staff in the closely linked radiation oncology departments of two local health districts (LHDs) were surveyed to determine current perceptions and understanding of the ILS in place and of departmental SC. At the survey time, the departments had multiple reporting systems, at three levels:

(i) In-house/departmental ILS, including paper-based reporting forms for actual and near-miss incidents as well as departmental non-radiation incidents; RO morbidity and mortality QA review meetings; RT senior staff meetings discussing all incident reports; and ROMP error discussion meetings.

(ii) Organisational: an LHD-level electronic platform, the Incident Information Management System (IIMS), for all actual incident and near-miss reports of any type.

(iii) Mandatory bodies: higher mandatory reporting at specified dose deviation levels; to the hospital/department Radiation Safety Officer (RSO) for dose deviations over 5%, and/or the NSW Environment Protection Authority (EPA) for deviations greater than 10%, as per ROPS recommendations, statewide incident management policies and EPA legislation.

Incident classification followed the ROPS recommendations. The departments had similar processes and protocols around incident reporting, monitoring, learning and QI. Each professional discipline held monthly quality/safety meetings separately, with cross-MDT discussion and collaboration for learning occurring mainly at inter-group senior (management) levels. The aims of the survey were to identify staff understanding and use of the ILSs, any barriers to reporting and any needs for process change or departmental learning, as well as perceptions of SC. The findings prompted a QI project in one of the LHDs to evaluate and improve the ILS.
Survey of ILS and SC

Current understanding of ILSs and attitudes towards SC were evaluated via an anonymous online survey, distributed to radiation oncology professionals (all ROs, RTs and ROMPs) within the radiation oncology departments of the two LHDs, Western Sydney (WSLHD) and Nepean Blue Mountains (NBMLHD). The survey took less than 15 min to complete. The project received ethics approval from each LHD's human research and ethics committee. Survey 'gatekeepers' were RTs who distributed the survey invitation to all RTs and ROMPs at WSLHD in October 2018 and to all NBMLHD staff in December 2019. Distribution to WSLHD ROs was via RO administrative staff in February 2019. The survey was open for 2 weeks; gatekeepers sent reminder emails with current response rates on days 7 and 12. Participants were informed that completing the survey indicated consent to participate in the study. The survey content was based on radiation oncology SC surveys from the literature. Eleven staff across the MDT initially piloted the survey before it was sent out on a larger scale. The survey captured occupation, years worked and role level; the last two were largely removed from analysis to ensure anonymity. The REDCap electronic data capture tools hosted by The University of Sydney were used to collect anonymous responses and manage the study data. Responses were exported to IBM SPSS for quantitative analysis, with results compared and summarised using descriptive statistics. Open-ended questions were evaluated to derive any key themes.
Characteristics of respondents

Invitations were sent to 150 radiation oncology professional staff across WSLHD and NBMLHD; 95 (63%) responded, with similar overall response rates for each LHD (65% and 57% respectively). For the professional cohorts, response rates were 71%, 67% and 34% for RTs, ROMPs and ROs respectively. One respondent was excluded due to partial survey completion. The distribution of the 94 participants was 73% RTs, 15% ROMPs and 12% ROs, while the staffing distribution at that time was 65%, 14% and 21% respectively. Survey results between the two LHDs were not significantly different using a Z test for two populations (P < 0.05), so they were combined for analysis to further protect anonymity.
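The between-LHD comparison rests on a standard two-proportion Z test. For readers who want to reproduce this kind of check, a minimal sketch follows, using only the Python standard library; the counts are illustrative placeholders, not the study data.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided Z test for equality of two proportions.

    x1, x2: number of 'positive' responses in each group
    n1, n2: group sizes
    Returns (z statistic, two-sided p value).
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Illustrative only: e.g. 30/45 'yes' answers in one LHD vs 18/30 in the other.
z, p = two_proportion_z_test(30, 45, 18, 30)
print(f"z = {z:.2f}, p = {p:.3f}")  # a non-significant p supports pooling the LHDs
```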
Knowledge of incident reporting systems

Respondents showed various levels of ILS understanding and utilisation across the MDT. Table shows the differences and similarities between the professional groups. Overall, 97% of respondents were aware of incident reporting. Those not familiar were all in their first year of employment, as RT interns or RO registrars. The LHDs utilised three levels of reporting, as above: in-house, organisational and mandatory. The largest respondent group (49%) acknowledged two systems in use. However, the professional cohorts differed in which systems they identified. The largest responses among RTs (41%) and ROMPs (43%) identified dual systems at the departmental and organisational levels. The RO cohort primarily stated organisational only (36%) or organisational and mandatory reporting (18%). Only 10% of respondents acknowledged all three. All cohorts indicated high confidence in categorising errors once identified (Table ), with ROMPs having the strongest confidence (71%) for both category and sub-categorisation.

Utilisation and barrier perception of the current system

Overall, 51% of respondents had reported an incident to one or more systems in the six months before the survey (Table ). Of the 46 staff who had not reported, 74% stated they had not observed or identified an error. The rest had identified an error but not reported it, mostly because they had escalated to more senior staff who investigated and reported, or because another team member had completed the incident report. Two respondents were aware of some near-miss incidents that were not reported, and only one respondent indicated they chose not to report. Overall, 37% of respondents perceived no barriers within the current system. However, 59 staff did perceive one or more barriers (Table ). The most significant stated barrier was the time it takes (31%), followed by lack of knowledge/understanding of the system and its use (22%) and difficulty of access (20%). Potential improvements to safety culture were indicated by the 18% of respondents who stated a barrier related to fear of adverse action, and the 5% who did not see a benefit to reporting. These results show the most significant barriers were related to the system rather than to departmental SC.

Preferences for feedback and learning

Respondents ranked their preferences for learning and feedback (Table ). The preferred method overall was an all-staff MDT meeting with either mandatory or open attendance. Next was for selected staff to attend incident meetings, that is, representatives from different work areas/groups who report back to others; 54% acknowledged this as the primary method currently utilised across the departments. The RT and RO cohorts had similar ranking preferences for learning options. ROMPs showed some differences, such as a higher preference for newsletters to disseminate relevant information and a lower preference for in-service training.

Safety culture and learning capacity

The majority of staff (66%) felt encouraged to report, with 60% feeling comfortable reporting (Table ). Most respondents (69%), though this perception was weakest in the ROMP group, thought their department practised a no-blame culture (Table ). Sixteen respondents stated their department did not practise a no-blame culture, eight having either personally received, or witnessed others receive, adverse action, with two others declining to answer. Regarding assigning cause and blame between staff and processes, 37% of respondents gave this as 50%/50% staff/process, followed by 29% at 25%/75%. The majority of respondents (71%) gave positive responses on departmental learning capacity from reported incidents, with the RO cohort having the strongest perception.

Qualitative results

Four areas of free-text answers were possible for questions related to (1) barriers to reporting, (2) no-blame culture, (3) blame association to staff versus system/process and (4) open comment around the survey. The thematic analysis highlighted four key themes: (1) SC issues, (2) blame is situationally dependent, (3) QI and learning weaknesses and (4) current department-level and hospital-level reporting system deficits. Respondents mentioned SC fourteen times. Overall, these responses supported a departmental no-blame culture, but some, mainly from RTs at frontline and management levels, indicated that some staff blame others and gossip. Blame assignment to staff versus process was identified as situationally influenced, for example actual incident versus near miss, or perceived laziness of the staff member(s) involved. Nine open responses indicated inadequacies in the current reporting system, with 11 perceiving weakness in QI and learning. The main concerns were that the more general organisational reporting systems do not fit radiation oncology needs, and that reports disappear into the system with no feedback or learning. Others indicated that the analysis and learning do not focus on the actual error-causing problems, and perceived that education is often not prioritised, with minimal preventive measures to mitigate future risk.

Quality improvement project

The survey findings were used to inform a QI project actioned in one LHD to evaluate and improve its current departmental (level i) ILS. An MDT QI project team was established to design, create and develop an electronic reporting system to suit departmental needs. This was guided by recommendations from the literature, ROPS, and the barriers and other factors identified in the survey.

Application of survey findings to improve local ILS

Electronic report development

A new customised electronic departmental-level reporting system was locally developed on the Varian Aria™ oncology information system (Varian Medical Systems, Palo Alto, CA, USA) to improve usability and support enhanced analysis and departmental learning and feedback. Integration into Aria™ was designed to reduce the following barriers: the time it takes to report, lack of understanding of the system and its use, and difficult system access. It also enabled increased communication to appropriate staff and report extraction into Microsoft Excel™ and Microsoft Power BI™ (business intelligence/data analytics software) to improve data tracking and visualisation.

Report analysis changes

A dedicated MDT incident triage team was established and trained to support management.
This provided a coordinated, centralised, structured and rapid approach to analysis and recommendations from reports.

Feedback and learning

Relevant meeting structures and attendance were changed, with MDT representatives in all meetings to ensure shared learning and discussion across professions. This increased communication with and between staff. Reports were easily available to all, since they were in a readily accessible electronic system. Quick feedback loops were introduced for staff involved in observed errors or barrier detection point failures, and for urgent process changes. During COVID-19 restrictions, a newsletter was used to provide regular feedback when meetings were not feasible.

Focussed education on how and what to report

A three-month pilot phase tested the new system within WSLHD's smaller campus to ensure that access and use were straightforward. The system was then implemented across the whole LHD. Mandatory training for all staff included how to use the new system and what to report on. A protocol non-compliance category was introduced to report near-miss errors which had evaded one or more barriers before being found within QA pathways before the first treatment fraction. This was to strengthen knowledge of any systematic QA weaknesses.
This work identified perceptions of SC and ILSs and barriers to reporting incidents in the radiation oncology departments of two Australian LHDs. The departments surveyed participated in joint tumour stream QA meetings and had similar in-house ILSs. Most responding ROs worked across both LHDs, whereas only a few ROMPs and no RT staff did. The results show varied knowledge and understanding of the complete incident reporting systems, their structure and the associated learning. By profession, the survey response rate was highest for RTs, followed by ROMPs, in both LHDs. Although errors and incidents can occur at any point in the patient pathway, detection is highest at treatment delivery and at QA checkpoints, performed most frequently by RTs, followed by ROMPs. Therefore, their greater use of reporting may have contributed to higher response rates. RO staff primarily received reports either classified as actual incidents, with incorrect radiation delivery, or when high-risk near-miss incidents had been reported and discussed. Overall, RT and ROMP response rates were high compared to other literature-reported surveys of medical professionals; for example, Cook et al. presented median response rates of 59% for postal surveys of healthcare professionals. Cunningham et al. found a lower response rate (35%) for physicians. In the current work, professional cohorts showed differences in using and understanding the various ILSs (Table ). RTs and ROMPs predominantly identified the departmental and organisational systems, with ROs primarily identifying organisational and mandatory reporting. This reflected each group's main use of the systems, with RTs and ROMPs being predominantly involved in in-house and organisational reporting and in-house QA meetings for feedback and learning. ROs primarily used the organisational reporting systems and, when necessary, provided information for reports to mandatory bodies. The in-house system's reporting capacity extends more widely and is more specific to radiation oncology needs than the organisational system. However, each serves different but overlapping purposes with other capabilities. Hence, the departments used both in parallel. Differences in each cohort's use of reporting systems and feedback and learning loops may have influenced their different responses. The RO and ROMP groups more frequently acknowledged mandatory reporting. These groups are heavily involved in investigations and decisions, alongside the RT management teams. When dose deviations requiring such reporting occur, ROMPs quantify the dose deviations, and prescribing ROs determine whether there are any clinical consequences for the patient; ROs also provide decision-making around any changes to the patient plan or symptom management and facilitate open disclosure to the patient. Fewer of the RT cohort mentioned mandatory reporting, which occurs after the initial report and involves investigations primarily completed by ROMPs, RT management and RO collaboration. All professional groups had limited awareness of the overall ILS processes, systems and intended purposes, indicating potential learning opportunities. Each cohort only showed a strong understanding of the system(s) most utilised by their group. The differences indicated potential for improved learning across the departments to give a more interdisciplinary, collaborative approach to the overall ILS. Some findings indicated potential areas for departmental education and learning around the description and categorisation of errors.
Increased reporting accuracy should increase data reliability and enable more robust trend analysis from reported incidents. The question 'state the name of the reporting system/s you know of in use in the department' had few respondents (14%) stating any mandatory bodies. However, mandatory bodies were then mentioned in open answers by more staff, indicating staff have potentially interpreted the initial question as 'within the department', that is, internal systems and not external reporting. This might have been better worded as ' … that are used by the department'. Reporting to any ILS was predominantly by RT staff followed by ROMPs, supporting the literature finding that frontline staff are more likely to detect incidents or errors at QA checkpoints. It is positive that staff are willing to report when they are aware of an error, with only one respondent stating they chose not to and two indicating that sometimes near misses were not reported. Overall, 63% of respondents perceived one or more barriers to reporting within the current systems. The leading stated reason was the time involved, then lack of knowledge/understanding of the system or its use, or difficulty of access. Obstacles related to negative SC, such as fear of negative actions or not seeing the benefit of reporting, had lower responses. It was promising that barriers related to the structure, format and use of the current ILSs rather than to SC awareness among staff or the perception of how the department treats safety issues. Ford et al. noted that electronic ILSs, customised to radiation oncology, reduce reporting barriers. ILS success is related to appropriate resources and utilisation, partnered with staff understanding and confidence that the SC is just and equitable. Respondents' perceptions of SC and learning leaned more towards positive SC than negative; for example, two-thirds felt encouraged to report and comfortable reporting. A no-blame culture was perceived by most respondents, with 73% not having witnessed or received adverse action. Two-thirds of respondents perceived cause and blame after an error to be attributed 25–50% to staff and 50–75% to processes. This further supports a no-blame culture. Our findings are similar to those of Bolderston et al. Overall, the responses support a positive SC and the ability to learn. However, as some respondents perceived a blame culture or negative SC, there is still room for improvement. The thematic review of the free-text answers provided insight into some of the issues indicated in the quantitative results. There was a strong focus on developing changes to the ILS regarding feedback and learning pathways. Changes included developing an in-house electronic reporting system that fitted the scope of reporting needed within radiation oncology and provided fast, consistent data analysis. This was developed and implemented in the ensuing QI project. The newly developed electronic reporting system is integrated onto the Aria™ platform, with rapid collaborative MDT triaging to manage reports in real time. This enables more effective education and reminders to staff when needed, ensuring continuous feedback between monthly meetings. Meetings are now MDT-based rather than separated by profession and are open for anyone to attend. Hence, feedback and learning pathways now have a more collaborative approach, with increased knowledge of all systems in each group.
The meeting purpose is to review themes, discuss learning opportunities from reports and potential areas for improvement where process areas were regularly failing or of high risk, and provide recommendations for working parties to consider. The overall goal was to improve feedback loops to the whole MDT, to help promote continued and increasing SC and learning, and to assist in QI decision-making. Furthermore, now that all data capture is electronic, it is more comprehensive and consistent, and visualisation is improved. This helps to provide information on departmental educational needs to increase learning, or to identify when QI projects or QA checkpoint modifications are needed. In developing the new in-house system, the need to reduce the three most significant perceived barriers to ILS use was considered: the lack of knowledge/understanding of available systems, difficult access, and the time required to report. A clear structure was created to identify which system to report to, in-house only or in-house and hospital-wide. The in-house system was designed to be easy to access, quick to fill in, and to readily provide all relevant patient data, reducing the number of data fields a reporter needs to complete compared to the previous paper-based reporting forms. The incident triage team reviews in-house system reports within 24 h of creation. When a report (near miss or actual incident) requires reporting to hospital systems, the team coordinates this with all relevant information, including ROMP dosimetry reports, to assist the initial reporter. This has reduced the time involved and increased staff understanding of how to use and report into the hospital-wide system. The in-house system has streamlined the ROMP portion of dose difference evaluation and reporting and provided clearer and faster pathways when an incident is RSO or EPA reportable. Thus, the survey helped to understand the departmental SC and how staff perceive and use the current ILSs, which guided decisions around the design of and changes to the ILS and the change implementation process. It has also provided support for changes to learning and educational needs across the MDT to ensure a comprehensive, collaborative and open approach to ILSs, reduce barriers, increase reporting and strengthen the positive SC. The new system has been in use across the whole WSLHD for seven months, and for ten months at the pilot campus. There are plans to survey again later to evaluate accumulated experience and the need for any further QI.
A survey of perceptions of SC and understanding of ILSs established a baseline within two LHDs. In one LHD, the results led to a QI project that significantly improved the ILS. Major changes were implemented to aspects of reporting and to the feedback and learning portions of the ILS, as the survey had highlighted barriers to reporting and areas to improve feedback and learning across the department. The study findings provide a reference for future evaluations of ILSs and SC that may identify continued improvements as the impact of the changes continues to be assessed, including further regular surveys, review of data accuracy in reports and trend analysis of incidents.
There are no conflicts of interest.
No funding supported this project.
Ethics approval for WSLHD was granted by the QA/QI Committee on 15 February 2019 (approval 1812-02). Ethics approval for NBMLHD was granted by the Appolo Committee on 15 November 2019 (approval 19-42(A)).
|
Optimizing microwave ablation planning with the ablation success ratio | edf06b8a-b177-40eb-bccd-ddd91f04aba3 | 11947081 | Surgery[mh] |
Primary and secondary liver malignancies such as hepatocellular carcinoma (HCC) and colorectal liver metastases (CRM) are among the most common tumor diseases worldwide. In addition to surgical resection, minimally invasive thermal ablation procedures are potentially curative treatment options if tumor size and location are suitable. In clinical practice, hepatic microwave ablation (MWA) has been established alongside other thermoablative procedures such as radiofrequency ablation (RFA). With MWA, higher temperatures are reached in the ablation center, and therefore larger and more uniform ablations are achieved compared to other in situ procedures. MWA is also less susceptible to the cooling effects of naturally occurring liver vessels than RFA, as the microwaves deposit energy directly into the tissue rather than relying on heat diffusion. However, studies have shown that vascular cooling occurs in MWA as well, suggesting that further research is required. The disadvantage of thermal ablations such as MWA is that there is no postinterventional histopathologic confirmation equivalent to the 'R0 situation' after surgical tumor resection. Technical success can only be evaluated indirectly by imaging techniques such as contrast-enhanced computed tomography (CECT), contrast-enhanced ultrasound (CEUS) or magnetic resonance imaging (MRI). Variability in imaging protocols across institutions can lead to inconsistencies in assessing the actual ablation volume, potentially resulting in over- or underestimation. Additionally, the absence of direct visibility in postinterventional imaging may make it difficult to detect small tumor residues. Moreover, cooling effects make prediction of ablation success particularly difficult (Fig. ). Therefore, MWA needs to be planned beforehand as accurately as possible in clinical routine. Software-based numerical simulations are utilized to estimate ablation size. However, the number of variables influencing MWA, such as manufacturer data, liver tissue properties (tumor, cirrhosis, hepatic steatosis, etc.), liver vessels with possible cooling effects, as well as tumor localization and size, presents a major challenge in accurately predicting ablation size. Although several navigation software systems that enable patient-specific calculation of MWA are available outside of research projects, there is an unmet clinical need for a simplified, robust and easily applicable prediction algorithm. We aimed to develop a score, the Ablation Success Ratio (ASR), which specifies the probability of ablation success in relation to tumor size based on real ablation data. Usually, only absolute values for the expected ablation size, depending on the selected ablation parameters, are provided by the manufacturer. However, it has been shown that ablation sizes are subject to fluctuation due to the cooling effect of liver vessels. Eventually, the overall goal for the ASR is to take into account natural variations in ablation size by retrospectively incorporating patients' MWA. Beforehand, an ex vivo validation of the ASR is necessary. The aim of this study was to introduce and evaluate a new score (ASR) for the prediction of hepatic microwave ablation, considering vascular cooling effects, using a standardized ex vivo experimental setup.
A total of 148 microwave ablations were performed in ex vivo porcine livers. Twenty-two ablations were repeated due to naturally occurring large liver vessels (n = 10), technical errors resulting in automatic ablation termination (n = 3) and the ablation extending beyond the liver sample (n = 9). Consequently, 126 ablations yielding 1498 individual slices were evaluated. A qualitative, quantitative (ablation volume) and semi-quantitative analysis of these ablations has already been published: we could show that, although a cooling effect around the vessel occurred macroscopically in almost all ablations with perfusion, a decrease in ablation volume was detected only at the maximum flow rate of 500 ml/min at an antenna-to-vessel distance (A-V distance) of 2.5 mm (p = 0.002). In all other test series, no difference in ablation volume was observed between ablations with and without perfusion of the glass tube (p > 0.05). Therefore, a sole assessment of the ablation volume seems insufficient to determine the extent of vascular cooling effects. We therefore further analyzed three-dimensional ablation radii (r3D) in this study as an additional parameter to examine cooling effects in MWA. In contrast to the ablation volume, vascular perfusion had an impact on the minimal ablation radius in all three test series (Table ). In particular, the position of the vessel within the ablation had an influence on the 3D minimal ablation radius (r3Dmin). A radius reduction already occurred at the lowest flow rates (≥ 1 ml/min) when the vessel was localized at the ablation edge. In contrast to the minimum ablation radius, vascular perfusion and vessel position in relation to the ablation center had no influence on the maximum ablation radius. The three-dimensional regularity index (RI) was used to describe ablation geometry (Fig. ). The RI for ablations without vessel perfusion (0 ml/min) was around 0.6. This indicates that ablations already had an ellipsoid shape even without any influence of vascular cooling. The RI decreased further with increasing flow rates and was about 0.4 at the maximum flow rate of 500 ml/min. MWA are therefore already non-circular without any cooling effects. However, vascular cooling has an additional impact on ablation shape and consequently must be taken into account when planning MWA.
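The RI formula is not restated at this point in the text; the sketch below assumes one common formulation, the ratio of the minimal to the maximal three-dimensional radius, so that a perfect sphere gives RI = 1 and cooled, irregular ablations give smaller values. The sampled radii are illustrative, not measured data.

```python
import numpy as np

def regularity_index(surface_radii_mm: np.ndarray) -> float:
    """Regularity index of an ablation, assumed here to be r3Dmin / r3Dmax
    (1.0 = perfect sphere; smaller values = more irregular geometry)."""
    return float(surface_radii_mm.min() / surface_radii_mm.max())

# Illustrative radii (mm) from the antenna position to sampled points on
# one ablation surface, e.g. digitised from the evaluated slices.
radii = np.array([14.8, 15.2, 13.9, 9.1, 8.7, 12.4, 15.0, 10.2])
print(f"r3Dmin = {radii.min():.1f} mm, r3Dmax = {radii.max():.1f} mm, "
      f"RI = {regularity_index(radii):.2f}")  # ~0.6, i.e. an ellipsoid shape
```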
Ablation success ratio

Figure shows the ASR derived from the results of these ex vivo experiments. Ablation success is shown in relation to ablation size (mm). The x-axis represents a hypothetical tumor, which corresponds to the tumor to be ablated in clinical routine. Usually, an ablation consists of two zones: the inner White Zone (WZ; immediate cell death) and the Red Zone (RZ; partial cell death), which transitions into native liver tissue. For this reason, both the WZ and RZ are shown in Fig. a/b. Figure a shows the ablation results without perfusion (0 ml/min), while Fig. b shows the ablations with perfusion (1–500 ml/min). As expected, the RZ is larger than the WZ in both plots. In the experiments without a cooling effect, the ASR was 100% up to a hypothetical tumor diameter of 20 mm in the RZ and 16 mm in the WZ. In contrast, the tumor diameter at which a safe ablation (ASR = 100%) can be assumed decreased considerably for the test series with perfusion (RZ: 12 mm, WZ: 7 mm). The ASR also varied depending on the A-V distance (Fig. c). Ablation success was noticeably lower when the vessel was located near the ablation margin (5 mm). As the distance between the vessel and the ablation border increased (10 mm), the ASR improved accordingly. In summary, our experimental ex vivo trial in native porcine liver (100 W, 5 min) showed that safe ablation was possible for tumors with a diameter of up to 12 mm (RZ) or 7 mm (WZ), respectively, when there was vascular perfusion. In the absence of liver perfusion (corresponding to an intraoperative Pringle maneuver), safe ablation extended to tumor diameters of 20 mm (RZ) or 16 mm (WZ). MWA of larger tumors must be considered critically and should be assessed individually depending on the vicinity of the tumor to larger hepatic vessels. Safety distances around the tumor additionally need to be regarded in clinical practice when planning MWA.
Thermal ablations such as microwave ablation (MWA) are influenced by vascular cooling effects. The extent of these cooling effects, and therefore the exact ablation volume, is difficult to predict. The aim of this study was the introduction and first evaluation of an innovative prediction score, the Ablation Success Ratio (ASR), for the planning of hepatic microwave ablation. We demonstrated that the ASR indicates ablation size while taking vascular cooling effects into account in a standardized ex vivo setting. Depending on the applied ablation system and the selected ablation parameters, clinicians may use the ASR in the future to decide whether complete tumor ablation is feasible. After surgical resection of hepatic tumors, complete excision is confirmed by histological analysis. This is not possible in MWA due to the in situ approach. Instead, technical success is determined indirectly using ultrasound, CT or MRI imaging. Precise pre-therapeutic treatment planning plays a particularly important role, as imaging modalities are limited in their accuracy. Ablation procedures are usually planned based on recommendations provided by the manufacturer. Depending on the ablation system and tumor diameter, specific ablation parameters are selected. These manufacturer's specifications normally refer to ablations that were performed under ideal conditions (ex vivo, with the absence of vascular cooling effects). However, individual blood vessels in the liver, which transfer thermal energy away from the ablation site and therefore lead to vascular cooling effects, are not considered. For this reason, the manufacturer's specifications tend to overestimate ablation size in patients. This may result in incomplete ablation and thus tumor recurrence. Therefore, the vascular cooling effects of the patient-specific liver vasculature in relation to the antenna position must be considered when planning MWA. Software-based numerical simulations are able to calculate ablation size in advance, including cooling effects, liver tissue properties as well as the ablation system and energy parameters. However, these simulations require high computational power and are too time-consuming for daily use in clinical practice. Neither the manufacturer's specifications nor numerical simulations currently seem suitable for reliably anticipating ablation size in clinical routine. For this reason, a simplified method that indicates ablation success is necessary. The ASR is intended to close the gap between the simplified manufacturer's recommendations and the complex numerical simulations. It is supposed to provide the clinician with a practical tool for predicting ablation success. Due to vascular cooling effects, ablations in vivo are rarely round but often irregularly shaped. Although the minimum and maximum diameters of an ablation are often specified for retrospective assessment, this information is insufficient for ablation planning due to the irregular ablation shape. For the clinical user, a fixed area around the antenna must be defined in which a safe ablation can be assumed. The antenna can then be placed in such a way that complete tumor ablation is ensured. Such an area is represented by the three-dimensional minimum radius (r3Dmin) of an ablation, which is consequently the basis of the ASR. The intended ablation size is set in relation to the number of MWA that have been performed, resulting in a probability of ablation success rather than an absolute value.
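Read this way, the ASR for a hypothetical tumor diameter is the share of recorded ablations whose minimal 3D radius would still have covered a centrally punctured tumor of that size plus any safety margin. A minimal sketch of that computation follows, assuming each ablation is summarised by its r3Dmin and that the antenna sits in the tumor center; the radii are illustrative, not the study data.

```python
import numpy as np

def ablation_success_ratio(r3d_min_mm: np.ndarray,
                           tumor_diameter_mm: float,
                           safety_margin_mm: float = 0.0) -> float:
    """ASR (%) for a hypothetical tumor diameter.

    An ablation counts as a success if its minimal 3D radius covers the
    tumor radius plus a safety margin:
        r3Dmin >= tumor_diameter / 2 + safety_margin
    The ASR is the percentage of recorded ablations meeting this criterion.
    """
    required_radius = tumor_diameter_mm / 2.0 + safety_margin_mm
    return 100.0 * float(np.mean(r3d_min_mm >= required_radius))

# Illustrative minimal radii (mm) from a series of ablations performed at
# one fixed setting (same device, power and duration).
r3d_min = np.array([10.4, 9.8, 11.2, 8.9, 10.0, 7.6, 9.5, 10.9])

for d in (7, 12, 16, 20):  # hypothetical tumor diameters (mm)
    print(f"d = {d:2d} mm -> ASR = {ablation_success_ratio(r3d_min, d):5.1f} %")
# In clinical planning, a 5-10 mm safety margin would be added, e.g.:
print(ablation_success_ratio(r3d_min, 12, safety_margin_mm=5.0))
```

Because the curve is an empirical proportion, it automatically absorbs the natural scatter of ablation sizes, including vascular cooling, for the specific device and parameter set that produced the data.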
By definition, the ASR ranges from 0 to 100%. For example, the ASR for very small tumors is close to 100%, and this value decreases with increasing tumor diameter. If required, a safety distance of 5–10 mm can be added. For an initial validation, the ASR was developed using a standardized ex vivo model in this study. For clinical application of the ASR, a large number of ablations needs to be evaluated. This could be implemented by retrospectively assessing minimal ablation radii of patients based on CT and MRI data, taking into account the respective ablation system, the applied energy input and the exposure duration. The quality of the ASR thereby increases with the number of ablations analyzed. The ASR is close to clinical reality, as it takes variation in ablation size due to cooling effects and liver tissue properties into consideration. Additionally, subgrouping for special cases, such as perivascular tumors or tumors near the liver capsule, as well as ablations with a Pringle maneuver, can be included. However, it must be considered that the ASR is only applicable for the selected ablation setting (ablation system, ablation time, applied energy) and must therefore be determined anew for all other cases with different ablation parameters. Eventually, an individual ablation simulation that incorporates tumor location, vessel vicinity as well as the tissue properties of the patient is the desired goal. Until this is implementable in clinical routine, preinterventional ablation planning should be based on ablation success derived from a retrospective analysis of real ablations rather than on manufacturer's specifications. In the present experimental ex vivo setting, safe ablations (ASR = 100%) were possible up to 20 mm (RZ) or 16 mm (WZ), respectively, in the absence of liver perfusion, and up to 12 mm (RZ) and 7 mm (WZ) with preserved liver perfusion. These results are not sufficient for clinical application, especially if a safety margin of 5 mm around the tumor is added. However, low energy parameters (ablation power: 100 W; ablation time: 5 min) and thus small ablation sizes were chosen in this experimental setup so that the ablations could be performed in the narrow porcine liver. This approach was reasonable, as we did not want to investigate absolute ablation size but the ASR in relation to vascular cooling effects. In clinical application, larger ablations can be expected because higher ablation parameters are used. Therefore, our experimental ablation sizes should not be directly transferred to MWA in patients. Furthermore, the classification of an ablation into WZ and RZ is only applied macroscopically and histologically. In clinical routine, this classification plays a subordinate role, as MWA is primarily evaluated using imaging techniques, which do not permit a color- or structure-based distinction between WZ and RZ. Accuracy is further constrained by spatial and contrast resolution to approximately 2–3 mm, depending on the imaging modality used. However, studies have shown that there is close conformity between the RZ and the ablation detected in CECT. Since complete cell death is uncertain in the RZ, it is essential to include values for both WZ and RZ in experimental studies to accurately assess MWA. Our experiments showed a regularity index (RI) of approximately 0.6 for ablations performed without vessel perfusion (0 ml/min), indicating an ellipsoidal rather than perfectly round ablation shape, even in the absence of vascular cooling.
In clinical studies, the RI is derived from CT or MRI measurements, where WZ and RZ cannot be clearly distinguished . This often results in RI values closer to 1.0, as the RZ is included in the measurement , . Unlike other studies, we focused exclusively on the WZ, which generally conforms to the shape of the ablation probe, leading to a more elongated appearance. Consequently, our RI values are lower than those of other research groups and cannot be directly translated to clinical practice. Intrinsic factors inherent to the ablation process, such as uneven energy distribution of the ablation device or different thermal properties of the liver tissue (cirrhosis, tumor, neoadjuvant therapy, etc.), may contribute to lower RI values , . Additionally, it must be noted that a large number of ablations is required for the ASR to increase in quality and to reduce the uncertainty of values due to naturally occurring variations in tissue properties. The implementation of a database with the help of software programs seems advisable in this case.

Further limitations of the experimental setup include the use of a glass tube instead of natural liver vessels, the absence of a liver tumor, and a macroscopic ablation analysis instead of imaging techniques. The use of a glass tube as a vessel is well established in experimental studies , , . Glass has thermal properties similar to those of blood vessels and is therefore a suitable substitute in ex vivo models . In our study, only one vessel was utilized to induce cooling effects, so no conclusion can be drawn regarding the effects of very large vessels or of the complex vascular architecture present in clinical practice. We observed a greater decrease in ablation success when the vessel was situated at the ablation margin. As the outer ablation margin is characterized by the lowest energy density, it is particularly vulnerable to the cooling effect. Since the ASR automatically considers vascular cooling effects, an application of the ASR to in vivo and/or clinical ablations seems possible.

Although tumor models for HCC exist, we used native ex vivo porcine liver for an initial validation of the ASR for ethical and cost reasons . Moreover, blood-perfused tissue models are commonly utilized for evaluating MWA , , . The physical characteristics of human and porcine liver as well as of tumor tissue are known, so that a translation to a tumor model is feasible with the aid of numerical simulations . This study focused on healthy liver tissue that was analyzed immediately after MWA, limiting the ability to assess long-term changes in ablation zones. Studies indicate that the inner red zone progressively becomes non-viable . Long-term studies are essential to better understand how ablation zones change over time. Our experiments were conducted at room temperature. Due to the higher thermal gradient compared to body temperature, cooling effects may be more pronounced. Previous studies on radiofrequency ablation (RFA) have shown that macroscopic results at room and body temperature are generally comparable . Lastly, we solely assessed MWA macroscopically in our study, according to Mulier et al. . In clinical routine, the ASR will be based on real ablations and therefore depend on an evaluation using imaging modalities. It has to be considered that digital ablation assessment is affected by artifacts, hemorrhage, cooling effects and tissue edema .
Tissue shrinkage caused by dehydration and protein denaturation above 60 °C may additionally lead to an underestimation of ablation size – . In our study, a macroscopic approach was deliberately chosen to develop and test the ASR under standardized conditions. Applying the ASR to imaging techniques in an in vivo setting is the preferable next step.
The ASR is a promising tool for preinterventionally assessing ablation success in MWA, taking into account tumor size, cooling effects of natural liver vessels and ablation parameters. For use in clinical practice, the ASR should be based on a retrospective evaluation of real patient ablations.
Definition of the ablation success ratio (ASR)

The primary aim of this study is the establishment and validation of the methodology of the ASR in a standardized ex vivo experimental setup. The Ablation Success Ratio (ASR) is designed to predict the probability of achieving complete ablation for a given tumor diameter. Thermal ablations often result in irregularly shaped areas of tissue destruction. The ASR focuses solely on the minimum three-dimensional ablation radius (r 3Dmin ), which defines the spherical zone of tissue that has been completely ablated. This radius is crucial for planning an effective MWA. After an ex vivo validation (presented in this study), the goal is that the ASR will in the future be derived from a retrospective analysis of real patient data (r 3Dmin ) using standardized ablation zone measurements , . It will specifically examine cases where identical ablation parameters (MWA system, energy settings) were used. This data-driven approach ensures that the ASR reflects actual clinical outcomes. The ASR represents the percentage of these analyzed MWA cases in which the ablation area exceeded the planned target area required for complete tumor ablation. An ASR of 100% indicates that all analyzed ablations were larger than the target, signifying a "safe" ablation (Fig. ). Conversely, an ASR of 50% suggests that only half of the ablations were larger than the target, warranting a more critical assessment of the planned procedure.

The ASR determines the success of an ablation with a planned ablation diameter $x$ intended to treat a tumor with a specific diameter and is calculated as follows. Given $n_{total}$ (the total number of ablations analyzed) and $r_{3Dmin}$, the diameter of each minimal ablation ($d_{min}$) is:

$$d_{min} = 2\,r_{3Dmin}$$

The percentage of ablations $n_{+}$ with diameters greater than $x$ is derived by:

$$ASR = \frac{n_{+}}{n_{total}} \times 100$$

where $n_{+}$ is the number of ablations for which $d_{min} > x$. The number $n_{+}$ can be formally expressed as a sum over all $n_{total}$ ablations:

$$n_{+} = \sum_{i=1}^{n_{total}} \mathrm{Heaviside}\left(2\,r_{3Dmin,i} - x\right)$$

where the Heaviside function is the step function defined as:

$$\mathrm{Heaviside}(z) = \begin{cases} 1 & \text{if } z > 0 \\ 0 & \text{if } z \le 0 \end{cases}$$

The accuracy of the ASR will increase with the number of ablations analyzed ($n_{total}$). This method allows the calculation of the percentage of ablations exceeding the target size (ASR), facilitating a quantitative analysis of the effectiveness and efficiency of MWA relative to the desired target size.

Validation of the ASR in an ex vivo study

The aim of the experimental setup was to evaluate the ASR in relation to different positions of hepatic vessels with respect to the ablation zone under standardized conditions. All experiments were conducted in an established ex vivo model with native porcine livers obtained from an abattoir (Brandenburg, Germany) within six hours after slaughtering , , . To induce standardized vascular cooling effects, a perfused glass tube was used as "liver vessel". Seven different vascular flow rates were evaluated. MWA were performed in a custom-made aiming device that enabled the insertion of the vessel (glass tube) and exact positioning of the microwave antenna at three different distances into the liver (Fig. ). After MWA, the ablations were cut in half and directly snap frozen. A 3D ablation evaluation was then carried out to validate the ASR. Details of the exact experimental setup are described below.

Microwave ablation

The Emprint™ MWA system (Covidien, Boulder, CO, USA) with a 2.45 GHz generator was used for all experiments. An antenna with a shaft length of 20 cm and an active tip length of 25 mm (Emprint™, Covidien, Boulder, CO, USA) was used. Internal cooling of the antenna was secured with saline solution at a continuous flow rate of 60 ml/min. A glass tube with an inner diameter of 3 mm and an outer diameter of 5 mm simulated a natural liver vessel , , . This vessel was connected to a peristaltic pump (flow rates ≤ 5 ml/min: Minipuls ® 3, Abimed, GILSON, USA; flow rates ≥ 10 ml/min: Watson-Marlow™ 323E/D, Bredel Pumps, Falmouth, Cornwall, England). A custom-made aiming device made from acrylic glass ensured parallel placement of the microwave antenna (A) and vessel (V). Three different A-V distances were analyzed: 2.5, 5.0 and 10.0 mm. Seven different flow rates were tested for each of the three A-V distances for 5 min at 100 W: 0, 1, 2, 5, 10, 100 and 500 ml/min (Fig. ), resulting in twenty-one different ablation settings ( n = 6 ablations for each setting). Ablations were performed at room temperature. After MWA, ablations were halved along the maximum cross-sectional diameter, which was expected at the center of the active zone of the antenna. Following tissue preparation with Tissue Tek ® O.C.T.™ (Sakura Finetek Germany GmbH, Staufen, Germany), ablations were snap frozen with liquid nitrogen and stored at -80 °C. A cryostat (CryoStar™ NX70 Cryostat, ThermoFisher Scientific, Waltham, USA) was used to cut slices with a defined layer thickness of 50 μm from the respective ablation halves. Every 2 mm, the exposed plane was photographed next to a millimeter scale so that the corresponding plane could be included in the consecutive evaluation.

Ablation analysis

Images of all ablation slices were macroscopically analyzed with custom-made software (MWANecrosisMeasurement, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany). First, the software calibrated the images using the photographed millimeter paper. Subsequently, the "white zone" (WZ) and "red zone" (RZ) were outlined manually based on the color differences of the ablated tissue compared to native liver parenchyma . The WZ is defined by irreversibly damaged tissue and represents the area of the ablation in which complete tumor destruction is expected , . It is macroscopically identified by its beige/grey color. Adjacent to the WZ is the reddish colored RZ, where tissue destruction is incomplete and tumor recurrence may occur . Based on the manual outline, the software then computed the minimum (r min ) and maximum (r max ) radius of the WZ and RZ for each ablation layer/segment (2D) (Fig. ).

3D ablation radii

Using the preceding two-dimensional (2D) analysis, the minimum and maximum radii of the entire ablation (three-dimensional, r 3D ) were subsequently calculated. The starting point for the calculation was the ablation center (C), defined by the antenna insertion point of the ablation plane with the largest ablation diameter. Next, the Pythagorean theorem was used to calculate the distance between the ablation center (C) and the minimum radius of the respective ablation plane (d CR1 , d CR2 , …, d CR20 ). This distance represents the hypotenuse in the Pythagorean theorem. The distance of the respective ablation plane from the ablation center (d 0mm , d 2mm , …, d 20mm ) and the corresponding minimum ablation radius (r min1 , r min2 , …, r min20 ) were used as the catheti of the right-angled triangle. Thus, the following formula was obtained:

$$d_{CR,x} = \sqrt{d_{x}^{2} + r_{min,x}^{2}}$$

The minimum radius of the entire ablation volume was then approximated as the smallest of all previously calculated distances: r 3Dmin = min (d CR1 , d CR2 , …, d CR20 ).

Regularity index (RI)

Based on the previous results, the 3D minimum and maximum ablation radii were used to calculate a 3D regularity index (RI) of an ablation . With the help of the RI, the ablation geometry was analyzed. The RI was defined as the quotient of the 3D minimum and maximum radius (RI = r 3Dmin / r 3Dmax ). Values close to 1.0 correspond to an almost spherical ablation geometry, whereas values < 1.0 indicate ellipsoidal or irregular ablation shapes.

Statistical analysis

Statistical analysis was performed using SPSS (IBM SPSS Statistics, version 29 for Windows, Armonk, USA). Data are expressed as median (minimum–maximum). The Kruskal-Wallis test was applied for analyzing multiple independent samples, while the Mann-Whitney U test was used for the comparison of two independent samples. A Bonferroni correction was included due to multiple testing; therefore, the level of significance was set to p ≤ 0.008. p values between 0.008 and 0.05 were not considered statistically significant but were interpreted as a trend.
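To make the computations in this section concrete, the following minimal Python sketch — our illustration, not code from the study; all function and variable names are hypothetical — derives r 3Dmin from per-plane measurements via the Pythagorean relation above, computes the regularity index, and evaluates the ASR for a planned ablation diameter x.

```python
import math

def r3d_min(plane_offsets_mm, r_min_per_plane_mm):
    """Minimum 3D radius of one ablation.

    plane_offsets_mm:    distance of each evaluated plane from the
                         ablation center C (here 0, 2, ..., 20 mm).
    r_min_per_plane_mm:  minimum 2D ablation radius measured in each plane.

    The distance from C to the nearest boundary point of a plane is the
    hypotenuse d_CR = sqrt(d^2 + r_min^2); r_3Dmin is the smallest of
    these distances over all planes.
    """
    return min(math.sqrt(d ** 2 + r ** 2)
               for d, r in zip(plane_offsets_mm, r_min_per_plane_mm))

def regularity_index(r_3d_min_mm, r_3d_max_mm):
    """RI = r_3Dmin / r_3Dmax: ~1.0 spherical, < 1.0 ellipsoidal/irregular."""
    return r_3d_min_mm / r_3d_max_mm

def ablation_success_ratio(r3dmin_values_mm, planned_diameter_mm):
    """Percentage of analyzed ablations whose minimal diameter 2*r_3Dmin
    exceeds the planned target diameter x (the Heaviside criterion above)."""
    n_plus = sum(1 for r in r3dmin_values_mm
                 if 2 * r > planned_diameter_mm)
    return 100.0 * n_plus / len(r3dmin_values_mm)

# Hypothetical example: r_3Dmin (mm) of six ablations performed with
# identical parameters; planned ablation diameter x = 18 mm.
r3dmin = [9.8, 10.4, 8.9, 11.2, 9.1, 10.0]
print(ablation_success_ratio(r3dmin, 18.0))  # -> 83.3 (5 of 6 exceed x)
```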
Metabolomics approach reveals key plasma biomarkers in multiple myeloma for diagnosis, staging, and prognosis | 00f8d36a-42fc-479a-804e-f4e6416c5072 | 11800462 | Biochemistry[mh] | Multiple myeloma (MM) ranks as the second most prevalent hematological malignancy, characterized by the accumulation of plasma cells in the bone marrow, leading to bone destruction and marrow failure . Globally, approximately 155,688 new cases of MM are diagnosed each year , with male patients accounting for 54.3% (estimated range 70,924–94,910). The median age at diagnosis is 69 years, with 37%, 26%, and 37% of MM patients falling under the age categories of under 65, between 65 and 74 years, and over 75 years, respectively . Despite a 5-year survival rate of 56% , MM is deemed incurable due to its recurrent relapsing course, necessitating diverse treatment options . Over the past few decades, numerous efforts have focused on identifying markers related to the pathogenesis, diagnosis, and risk stratification of MM patients, including serum β 2 -microglobulin (β 2 M), lactate dehydrogenase (LDH), creatinine (Cr), and genomic features t(4;14), t(14;16), del17p, et al. . These markers have significantly improved the diagnosis and treatment of MM. Despite advancements in early molecular diagnostics, most patients are still diagnosed in intermediate or late disease stages . Existing prognostic stratification methods, such as the revised International Staging System (RISS), face limitations in predicting actual clinical outcomes due to the high heterogeneity of MM . Concurrently, patients with newly diagnosed MM (NDMM) are typically treated with multi-drug chemotherapy based on bortezomib or lenalidomide, but only a subset of them derive clinical benefits from this treatment . Therefore, there is a pressing need to develop biomarkers or algorithms that can facilitate early diagnosis and risk prediction in NDMM patients, as well as identify chemo-sensitive patients benefiting from targeted therapy. Metabolic reprogramming, a hallmark in non-solid tumors like lymphoma , MM , and leukemia , has spurred the use of metabolomics, a systemic tool focusing on endogenous metabolites. Metabolomics has significantly contributed insights into cancer biology, aiding in the understanding of molecular disease bases and uncovering new pathways for diagnosis, classification, and treatment . Previous studies primarily concentrated on MM diagnosis, revealing potential utility in differentially profiling key metabolites such as choline, creatinine, leucine, tryptophan, and valine to discriminate MM patients from healthy controls . Only few studies have identified biomarkers for both diagnosis and progress monitoring , or for treatment response and prognosis in MM patients . However, none of these studies have revealed abnormal metabolites that can be used to simultaneously assess diagnosis, grading, and therapeutic response of MM patients simultaneously. To address these gaps, we aimed to identify potentially minimally-invasive metabolic biomarkers for MM diagnosis, severity, and treatment response, advancing our understanding of MM-associated metabolites. Ultra-performance liquid chromatography coupled with high-resolution Orbitrap mass spectrometry, Q Exactive TM (UPLC-HRMS) was used to profile the samples of NDMM patients, on-chemotherapy MM patients, healthy controls and MM cells. 
Principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were used to characterize differences in the metabolomics data. Finally, we examined MM-induced metabolite alterations across the entire disease course, including diagnosis, severity, and treatment response, offering insights into pathogenesis and new prognostic factors for MM.
Patients and samples

A total of 176 plasma samples from 166 subjects, including 133 MM participants and 33 healthy volunteers, were used in this study. They were obtained from Beijing Chao-yang Hospital between 2019 and 2021, and each patient provided written informed consent before participation. The diagnosis and response criteria were based on the International Myeloma Working Group (IMWG) diagnostic criteria . Symptomatic MM patients were newly diagnosed and received chemotherapy with a bortezomib-based regimen for one to six cycles, with an average of four to six cycles. For patients who achieved a complete response (CR) or partial response (PR), the regimen was repeated for two to four cycles, or autologous stem cell transplantation was completed as consolidation therapy. Peripheral blood samples of 2–4 mL were collected at preset time points, including baseline and during chemotherapy (on-CT). Longitudinal time points were taken at every cycle starting from the first on-CT time point. Blood was collected in EDTA-coated tubes, and plasma was separated from the blood by centrifuging samples at 1500× g for 5 min within 2 h of collection. The plasma was then stored at -80 ℃ until metabolite extraction and analysis.

Cell culture

The Epstein-Barr virus (EBV)-transformed human B lymphocyte line KM932 and the human myeloma cell lines (HMCLs) RPMI-8226, AMO-1, MM.1R, MM.1S and LAMA-84 were purchased from the Cell Resource Center, IBMS, CAMS/PUMC. All cells were cultured in RPMI-1640 medium containing 10% fetal bovine serum under 5% CO 2 at a constant temperature of 37 °C. These cells are mycoplasma-free.

Sample preparation

For metabolite extraction from plasma samples, 50 µL of plasma from each patient was mixed with a mixture of methanol and acetonitrile (v/v, 1:1) containing 200 ng/mL propranolol and tolbutamide as internal standards (IS). The mixture was vortexed for 1 min and centrifuged at 15,000 ×g for 10 min at 4 °C. A 200 µL aliquot of the resulting supernatant was subsequently transferred to a new tube and kept at 4 °C until LC-MS analysis. For metabolite extraction from cell samples, each cell sample was subjected to a mixture of methanol and water (v/v, 1:1) at a 1:10 weight-to-volume (w/v) ratio. The cell sample was vigorously vortexed for 1 min, followed by 30 min of ultrasonic treatment. Subsequently, the mixture was centrifuged at 15,000 ×g for 10 min at 4 °C to separate the supernatant from the cellular debris. A 50 µL aliquot of the supernatant was then transferred to a new tube, and 450 µL of a methanol-acetonitrile mixture (v/v, 1:1) containing 200 ng/mL propranolol and tolbutamide as internal standards was added. The mixture was vortexed for 1 min and centrifuged at 15,000 ×g for 10 min at 4 °C. A 200 µL aliquot of the final supernatant was transferred to a fresh tube and stored at 4 °C until further analysis by LC-MS.

Mass spectrometry analysis

The LC-HRMS analyses were conducted on an Ultimate 3000 LC system coupled with a Q Orbitrap mass analyzer (Q Exactive, Thermo Fisher Scientific, USA). Chromatographic separation was performed on an ACQUITY BEH C18 column (Waters, 2.1 × 50 mm, 1.7 μm) at a flow rate of 0.25 mL/min, maintained at 30 ℃. Mobile phase A consisted of water with 0.1% formic acid and 2.5 mmol/L ammonium formate, while mobile phase B was acetonitrile. The gradient conditions were as follows: 0–1.0 min, 95% A; 1.0–5.0 min, 95–40% A; 5.0–8.0 min, 40–0% A; 8.0–11.0 min, 0% A; 11.0–14.0 min, 0–40% A; 14.0–15.0 min, 40–95% A; 15.0–18.0 min, 95% A. The spectrometric settings for positive/negative ion modes were as follows: scan mode, full MS over the m/z range of 70–1050; resolution, 70,000; spray voltage, 3.0 kV; capillary temperature, 350 ℃; S-lens RF, 50; full MS/dd-MS2 at a resolution of 17,500; AGC target, 1e5; maximum IT, 50 ms; NCE, 20, 40, 60.

Metabolite identification and statistical analysis

Metabolite identification was performed via Compound Discoverer 3.3 (Thermo Fisher, CA, USA). The identification criteria involved exact mass, retention time, fragmentation spectra and isotopic pattern. An in-house library and the online library mzCloud were utilized for this purpose. The final output data included the compound name and peak area. Pattern recognition analyses, including principal component analysis (PCA), orthogonal partial least squares discriminant analysis (OPLS-DA), and topology analysis, were carried out to identify key metabolic features via MetaboAnalyst 5.0 ( https://www.metaboanalyst.ca/ ). Differentially abundant metabolites were screened using a variable importance in projection (VIP) > 1.0 and p < 0.05. Metabolic pathway analysis was conducted via the "MS Peaks to Pathways" module of MetaboAnalyst 5.0. Significant pathways were computed on the basis of spectral features with an impact greater than 0.1. SPSS 16.0 (Armonk, New York, USA) was used for t-test analysis. All data are expressed as mean ± standard deviation (mean ± SD). One-way ANOVA was used to analyze differences between multiple groups, and Tukey's test was employed for pairwise comparisons. * denotes P < 0.05, ** denotes P < 0.01, and *** denotes P < 0.001.
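As an illustration of the screening criteria just described (VIP > 1.0 and p < 0.05), the following minimal Python sketch — our own, not the authors' pipeline; the table and variable names are hypothetical, and the VIP scores are assumed to come from an externally fitted OPLS-DA model — filters a samples-by-metabolites peak-area table into a list of differentially abundant metabolites with log2 fold changes, i.e., the inputs of a volcano plot.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_metabolites(peak_areas: pd.DataFrame,
                       groups: pd.Series,
                       vip: pd.Series) -> pd.DataFrame:
    """Select differentially abundant metabolites (VIP > 1.0, p < 0.05).

    peak_areas: samples x metabolites table of peak areas.
    groups:     per-sample labels, e.g. "NDMM" or "HC" (same index).
    vip:        per-metabolite VIP scores from a previously fitted
                OPLS-DA model (computed externally).
    """
    mm = peak_areas.loc[groups == "NDMM"]
    hc = peak_areas.loc[groups == "HC"]
    rows = []
    for met in peak_areas.columns:
        _, p = stats.ttest_ind(mm[met], hc[met])      # two-sample t-test
        log2_fc = np.log2(mm[met].mean() / hc[met].mean())
        rows.append({"metabolite": met, "VIP": vip[met],
                     "p": p, "log2FC": log2_fc})
    res = pd.DataFrame(rows)
    return res[(res["VIP"] > 1.0) & (res["p"] < 0.05)].sort_values("p")
```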
Metabolic fingerprint of plasma from newly diagnosed MM patients

To analyze the metabolic changes between newly diagnosed MM patients and the healthy control (HC) group, untargeted metabolomics analysis was carried out using UPLC-Orbitrap-MS in both ESI positive (ESI+) and negative (ESI-) ion modes. PCA, an unsupervised model, was performed to reveal differences in the metabolic profiles of samples across groups. The workflow of our study is depicted in Fig. . The PCA score plots clearly demonstrated a distinct separation between the NDMM and HC groups (Fig. A). For a more detailed analysis of the metabolic profiling discrepancies between the NDMM and HC groups, we employed the supervised pattern recognition method OPLS-DA. The OPLS-DA score plot illustrated a marked separation between the NDMM and HC groups (Fig. D). To screen potential biomarkers, VIP values were calculated for each metabolite using the OPLS-DA models. Meanwhile, fold change (FC) and p values were obtained by assessing the magnitude and statistical significance of the variations between groups, respectively. Intriguingly, our analysis yielded a total of 70 differentially abundant metabolites (VIP > 1.0 and p < 0.05) when comparing the NDMM group with the HC group. To offer a more intuitive visualization, we constructed a volcano plot of log2 FC against -log10 (p-value). Highlighted within this plot are 23 pivotal metabolites, which not only exhibited remarkable statistical significance ( p < 0.0001) but have also been frequently implicated in MM (Fig. B). To provide a visual representation of the differentially expressed metabolites identified through LC-MS analysis, a heatmap was generated (Fig. C). These data confirmed that significant differences in metabolites exist between NDMM patients and the HC group.

Performance of plasma metabolites for the early diagnosis of MM patients

Given that the majority of MM patients are diagnosed in advanced stages, early detection significantly improves the likelihood of successful MM treatment. In contrast to the traditional bone marrow aspiration method used for screening, a blood-based test is minimally-invasive and relatively cost-effective. Thus, we examined the clinical application value of metabolite detection in MM diagnosis, with a particular emphasis on early MM. To evaluate the discriminatory capacity of the 70 aforementioned differentially abundant metabolites significantly altered in the MM cohort, ROC analyses were employed to calculate the area under the curve (AUC). Among these metabolites, 6 exhibited high diagnostic value in distinguishing plasma samples of the MM group from those of the HC group. As illustrated in Fig. , the AUC values for pyroglutamic acid, arginine, lactic acid, choline, acetylcholine, and leucine were 0.999, 0.999, 0.994, 0.984, 0.935, and 0.852, respectively. These values clearly indicated the ability to differentiate early MM patients from healthy controls.

Metabolic biomarkers associated with disease risk in MM patients

To gain precise insights into biomarkers associated with the severity of the disease, patients with NDMM were categorized into R-ISS-I ( n = 5), R-ISS-II ( n = 19), and R-ISS-III ( n = 8) groups according to the IMWG diagnostic criteria . First, we identified statistically differential metabolites ( p < 0.05) between R-ISS-III group samples and healthy group samples, with FC exceeding 2.0 or falling below 0.5. Second, the abundance of metabolites needed to be positively or negatively correlated with risk. As depicted in Fig. A, the levels of acetylcholine significantly increased, whereas the levels of lactic acid and leucine markedly decreased in the R-ISS groups (R-ISS-I, R-ISS-II, R-ISS-III) compared with those in the healthy group. Notably, the substantial increase in acetylcholine and decrease in lactic acid and leucine in the R-ISS I/II/III groups compared with the healthy group align with the previous findings indicating that the levels of acetylcholine, lactic acid, and leucine allow discrimination between NDMM patients and healthy controls. Furthermore, the panel of acetylcholine/lactic acid/leucine could serve as both diagnostic and risk biomarkers for MM. These findings underscore the potential of metabolomics for biomarker discovery, enabling more precise and accessible early detection of MM.

Metabolic biomarkers associated with chemotherapy sensitivity in MM patients

We identified two subtypes among MM patients undergoing 3 cycles of chemotherapy: the chemo-sensitive group (CSG) and the chemo-insensitive group (CIG). Correlation analysis between the aforementioned 70 candidate metabolites and 7 common clinical indexes was further performed (Fig. B). The clinical variables included hemoglobin (Hb), M protein, clonal plasma cells, immature plasma cells, plasma cells, serum creatinine (Scr) and lactate dehydrogenase (LDH). The level of lactic acid was significantly positively correlated with Hb and negatively correlated with Scr, immature plasma cells and plasma cells. On the other hand, the abundance of leucine was positively correlated only with Hb, whereas cholesterol sulfate was negatively correlated with the level of M protein. Furthermore, in MM patients undergoing chemotherapy for both 3 and 4 cycles, the levels of lactic acid, leucine, and cholesterol sulfate were significantly greater in the CSG than in the CIG (Fig. C and D). Notably, lactic acid and leucine demonstrated utility for diagnosis, severity assessment, and prediction of chemotherapy sensitivity (Fig. E).

Metabolic pathway analysis

The metabolic networks, based on the statistically and functionally integrated metabolomics data, were visualized via Cytoscape software. The observed state of the identified metabolites in NDMM is shown in the metabolic networks (Fig. ). To explore the metabolic pathways involved in MM development, the differentially abundant metabolites were subjected to metabolic pathway enrichment analysis via the MetaboAnalyst online tool. As depicted in Fig. and Supplementary Fig. , differentially abundant metabolites were significantly enriched in multiple metabolic pathways, including central carbon metabolism in cancer, choline metabolism in cancer, the citrate (TCA) cycle, glycerophospholipid metabolism, valine, leucine and isoleucine biosynthesis, sphingolipid metabolism, phenylalanine metabolism, purine metabolism, arginine and proline metabolism, thermogenesis, and biosynthesis of amino acids. Glycolysis and the TCA cycle, as the two major energy metabolism pathways, attracted our attention. Additionally, we observed a substantial reduction in lactic acid levels, a marker associated with glucose metabolism, in patients with MM.

Metabolic reprogramming of MM cells

Untargeted metabolomics studies were conducted on the human MM cell lines RPMI-8226, AMO-1, MM.1R, MM.1S, and LAMA-84 in comparison with the control group, KM932, a human B cell line. The PCA score plots clearly demonstrated a distinct separation between the MM cells and the control group (Fig. A). The OPLS-DA score plot likewise illustrated a marked separation between the MM cells and the control group (Fig. B). Eighty differentially abundant metabolites (VIP > 1.0, |log2 FC| > 1, and p < 0.05) were identified in MM cells versus the control group (Fig. C). In keeping with the metabolic analyses of MM patients, human MM cells showed perturbation of 3 of the same metabolic pathways: the citrate (TCA) cycle, sphingolipid metabolism, and glycerophospholipid metabolism (Fig. D).

Lactic acid and leucine are downregulated in MM cells

A total of 14 differential metabolites were identified as overlapping between MM cells and MM patients, as depicted in Fig. A. To provide a visual representation of these 14 differential metabolites, a heatmap was generated (Fig. B). This heatmap substantiated the significant disparities in metabolite profiles between MM cells and the control group. Intriguingly, the marked decrease in lactic acid and leucine in MM cells is consistent with the prior finding that the plasma levels of these metabolites allow discrimination between MM patients and healthy controls. Moreover, Fig. C highlights a remarkable decline in the levels of the 2 crucial metabolites, lactic acid and leucine, in the majority of MM cells compared with the control group. These findings underscore the potential of lactic acid and leucine as diagnostic biomarkers for MM.
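The per-metabolite ROC screening used in these results can be sketched in a few lines. The following minimal Python example — our illustration, not the authors' code; the table and variable names are hypothetical — ranks candidate metabolites by AUC, treating markers that decrease in MM (such as lactic acid and leucine) symmetrically with those that increase.

```python
from sklearn.metrics import roc_auc_score

def rank_by_auc(peak_areas, labels, candidates):
    """Rank candidate metabolites by diagnostic AUC.

    peak_areas: samples x metabolites table (e.g., a pandas DataFrame).
    labels:     1 for MM samples, 0 for healthy controls.
    candidates: metabolite names to evaluate.

    AUC is made direction-invariant (max of AUC and 1 - AUC), because a
    marker that is decreased in MM is as informative as one that is
    increased.
    """
    aucs = {}
    for met in candidates:
        auc = roc_auc_score(labels, peak_areas[met])
        aucs[met] = max(auc, 1.0 - auc)
    return dict(sorted(aucs.items(), key=lambda kv: -kv[1]))

# Hypothetical usage:
# rank_by_auc(df, y, ["pyroglutamic acid", "arginine", "lactic acid"])
```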
Early detection and intervention play crucial roles in enhancing the clinical outcomes of MM patients , emphasizing the pressing need to identify potential minimally-invasive biomarkers. Considering the intra-tumoral heterogeneity and systemic changes induced by hematological malignancy, the analysis of blood samples offers insights into the overall phenotype of MM. Moreover, in comparison to invasive detection methods such as bone marrow puncture or biopsy, an ideal biomarker for MM clinical diagnosis should exhibit optimal sensitivity and specificity when obtained from patients through minimally invasive means, such as blood. Prior studies have explored metabolic biomarkers of MM in blood , indicating that molecular predictive classifiers could offer valuable insights for future targeted MM therapy. However, these studies focused primarily on identifying and describing the metabolic landscape of MM on the basis of single or specific clinical factors.

In this retrospective metabolomics analysis of MM, we sought to address three key questions: (i) which patients could be screened early, (ii) which patients could be precisely staged, and (iii) which patients could benefit from chemotherapies. Notably, we observed a consistent decline in lactic acid and leucine levels in the plasma of MM patients at diagnosis, staging, and prognosis assessment, suggesting that these metabolites are potential plasma biomarkers associated with active MM disease. Strikingly, the same phenomenon was also observed in vitro in MM cells. To the best of our knowledge, this is the most comprehensive analysis demonstrating the extent of metabolic reprogramming in MM. Importantly, the samples included in this study are real-world samples without strict enrollment criteria, demonstrating the robustness of our analysis.

Following univariate analyses, we identified a biomarker panel that includes lactic acid and leucine. Lactic acid is often considered a marker of the "Warburg effect" in tumor cells . Warburg's observation revealed that, unlike most normal cells, tumor cells tend to ferment glucose to lactate even in the presence of sufficient oxygen to support mitochondrial oxidative phosphorylation. The acidic microenvironment formed by lactic acid is conducive to the rapid growth and distant metastasis of tumor cells. Wiled et al. reported that lactate is also supplied to cancer cells from the surrounding environment, referring to this phenomenon as the reverse Warburg effect. Our observation of a decline in lactate levels in MM patients and cells suggests that the reverse Warburg effect might be applicable to the microenvironment in MM, as previously reported . Furthermore, in patients with MM who achieve complete remission, the increase in lactate concentration is particularly pronounced . Notably, an elevated level of lactate dehydrogenase (LDH), the enzyme catalyzing the conversion of pyruvate to lactate, serves as a marker of poor prognosis at the time of MM diagnosis. An increase in LDH is associated with worse overall survival (OS), worse progression-free survival (PFS), aggressive disease, and a higher tumor burden . Correspondingly, our research demonstrates that a decreased lactate level is closely related to MM initiation and progression, and it is expected to become an important biomarker for the clinical diagnosis and treatment of MM in the future.

Another biomarker, leucine, one of the branched-chain amino acids (BCAAs), plays crucial roles in the body.
Compared with the HC group, MM patients presented lower levels of leucine, and a similar downward trend was also observed in MM cells in vitro. In another related study, the amino acid profiles of MM showed relatively low concentrations of leucine . Furthermore, the concentration of essential amino acids, especially leucine, was significantly decreased in MM patients , which is consistent with our findings. Therefore, leucine appears to be a potential biomarker that should be evaluated in future studies addressing the diagnosis, staging, follow-up, prognosis, and treatment of MM. Interestingly, a recent study identified genes related to lactic acid and BCAA metabolism as potential prognostic biomarkers independently associated with the overall survival of MM patients , which strongly supports our conclusion. Overall, we hypothesize that diagnostic value may be improved by combining the examination of clinical indicators with lactate and leucine levels.

Metabolic pathways constitute a highly organized network of sequential chemical reactions in an organism, playing a vital role in maintaining the energy and material balance necessary for life processes. Deeper insights into how these abnormalities disrupt normal metabolic pathways will aid in better prevention, diagnosis, and treatment of associated diseases. Gonsalves et al. reported that BCAA metabolism, tryptophan metabolism, phospholipid metabolism, and nucleotide turnover were potentially affected by MM, whereas Chanukuppa et al. reported alterations in pyrimidine metabolism, purine metabolism, amino acid metabolism, nitrogen metabolism, sulfur metabolism, and the citrate cycle. Wei et al. reported significant serum metabolic disorders in 46 pairs of pre- and post-therapy MM patients, specifically in the arginine, proline and glycerophospholipid pathways . These findings confirm the vital role of certain metabolites and metabolic pathways in the pathogenesis of MM.

The citric acid cycle (TCA cycle), also known as the Krebs cycle, is a crucial biochemical pathway that occurs in the mitochondria of eukaryotic cells. It generates energy in the form of ATP by coupling the breakdown of substrates to the phosphorylation of ADP. The TCA cycle is a highly regulated process that responds to changes in nutrient availability and energy demand . It can be upregulated during periods of high energy demand or downregulated during periods of nutrient abundance or energy excess. We found that isocitric acid, malic acid, pyruvic acid, and cis-aconitic acid in the TCA cycle were significantly downregulated in MM patients, possibly suggesting that the TCA cycle was inhibited. This finding is consistent with Warburg's suggestion that even when oxygen is sufficient, tumor cells rely on massive glucose uptake, converting it to lactate for energy .

Lipids serve not only as essential components of cell membranes and energy storage systems but also as crucial signaling molecules that regulate biological processes under both normal and diseased conditions. In our study, we observed significant disturbances in choline metabolism, glycerophospholipid metabolism, and sphingolipid metabolism in MM patients. Choline, an integral part of acetylcholine synthesis and a precursor of phospholipid synthesis , undergoes transformation into phosphocholine, which is coupled with diacylglycerol to form phosphatidylcholine (PCs), a major component of cell membranes. Therefore, choline is considered to reflect the intensity of cell membrane synthesis.
Our findings revealed a significant increase in choline and acetylcholine in the MM group compared with those in the HC group, whereas the abundances of PC(16:1(9Z)/20:3(8Z,11Z,14Z)), PC(18:1(9Z)/20:4(5Z,8Z,11Z,14Z)), and PC(16:1(9Z)/22:5) were notably decreased in the MM group compared with those in the HC group. High choline uptake and downregulated PCs are believed to lead to the hydrolysis necessary for forming lipid messengers, responsible for the replication of clonal plasma cells and tumor dissemination. Members of the lysophosphatidylcholines (LPCs), including LysoPC(16:0), LysoPC(18:2(9Z,12Z)), LysoPC(18:0), LysoPC(20:4(8Z,11Z,14Z,12Z)), LysoPC(18:1(11Z)), LysoPC(16:1(9Z)), LysoPC(20:3(5Z,8Z,11Z)), and LysoPC(20:2(11Z,14Z)), exhibited decreased levels in MM patients, consistent with previous research. LPCs are crucial LDL/bioactive lipids that contribute to the inflammatory impact of oxidized LDL on endothelial cells. They are involved in inflammatory stimuli, and promote the release of IL-6 and other inflammatory factors, ultimately contributing to the development and progression of MM. Sphingolipids (SPs), another family of bioactive lipids with a structural role in the plasma membrane, have products of their metabolism (sphingosine, sphingosine-1-phosphate, ceramides, ceramide-1-phosphate) that play crucial roles in MM migration and adhesion, survival and proliferation, as well as angiogenesis and invasion. In our study, most SPs exhibited significantly decreased levels in plasma, suggesting that SP hydrolysis can be part of the systemic metabolic regulation/reprogramming of MM. Amino acids play an essential role in the synthesis of various biomolecules necessary for cell proliferation. Moreover, targeting amino acid metabolism has been proposed as a potential cancer therapy, highlighting the importance of amino acid metabolism in cancer. Alterations in plasma amino acid profiles are relatively common in MM processes. Consistent with previous findings, we also discovered the main perturbed amino acid metabolism pathways in MM plasma, including valine, leucine, and isoleucine biosynthesis, phenylalanine metabolism, and arginine and proline metabolism. Leucine, valine and isoleucine, the branched-chain amino acids (BCAAs), are crucial for human life and are particularly involved in stress, energy and muscle metabolism. BCAAs follow different metabolic routes, with valine exclusively contributing to carbohydrates (glycogenic), leucine solely to fats (ketogenic) and isoleucine being both a glucogenic and a ketogenic amino acid. The catabolism of valine begins with the removal of the amino group by transamination, producing alpha-ketoisovalerate, an alpha-keto acid, which is converted to isobutyryl-CoA through oxidative decarboxylation by the branched-chain alpha-ketoacid dehydrogenase complex. This is further oxidized and rearranged to succinyl-CoA, which can enter the TCA cycle. The elevated level of valine in MM patients may be due to the inhibited TCA cycle, as previously demonstrated. Arginine and proline metabolism was significantly enriched in MM. Furthermore, our findings revealed notable elevations in both arginine and creatinine levels, accompanied by a conspicuous decline in proline levels. This pattern of alterations underscores the intricate metabolic shifts occurring within the MM group, particularly focusing on arginine metabolism, which has garnered significant attention in prior research.
Arginine, a fundamental amino acid, plays a pivotal role in the urea cycle, serving as a precursor for protein synthesis, polyamine production, creatine synthesis, and nitric oxide (NO) biosynthesis. Arginine deprivation has been demonstrated to have a direct pro-survival effect on myeloma cells, with potential therapeutic implications. Renal failure is a frequent clinical feature in MM patients. Creatinine, a key end product of arginine and proline metabolism, is transported to the kidneys via blood plasma, and serum creatinine is commonly used as an indicator of renal function. Previous studies have reported an obvious up-regulation of serum creatinine and arginine levels in MM patients compared with healthy controls, which is consistent with our findings. The elevated levels of creatinine in the plasma of MM patients may be attributed to impaired renal function during the progression of MM, thereby impeding the elimination of toxins. Hydrovalerylcarnitine, butyrylcarnitine, L-acetylcarnitine, and L-carnitine, which play key roles in thermogenesis and fatty acid oxidation (FAO), were significantly elevated in MM patients compared with healthy controls. Acylcarnitines play a primary role in the FAO process within mitochondria, converting fatty acids into energy. When the body requires energy, fatty acids undergo β-oxidation reactions, breaking down into shorter groups and eventually transforming into acetyl-CoA to enter the tricarboxylic acid cycle, generating substantial energy for the cell. Carnitine and acetylcarnitine have been recognized as novel biomarkers for active diagnosis, relapse, and mediators of disease-associated pathologies in MM. Additionally, carnitine may enhance plasma cell immunoglobulin (Ig) secretion, promoting B lymphocytes to differentiate into plasma cells and participate in antibody-mediated immune responses. Therefore, the increased levels of plasma carnitine and, to a greater extent, acetylcarnitine and hydrovalerylcarnitine in MM patients could entail increased lipid oxidation in highly metabolically active myeloma cells. In conclusion, a deeper understanding of the metabolic profiles of MM could aid in identifying cases resistant to specific agents, preventing repetitive errors and cumulative toxicity, and exploring new experimental strategies for these cohorts. Despite these insights, the study has several limitations. While we summarized differentially abundant metabolites and explored their value in MM, more functional validation in vivo and in animal models is necessary. Additionally, the patient sample size is relatively small; validation and further investigation in a larger, independent cohort are warranted to better comprehend the mechanisms of MM. Further research is essential to support these results and verify the underlying biological functions of key amino acid metabolites through large-scale and mechanistic studies.
Below is the link to the electronic supplementary material. Supplementary Material 1: Supplementary Fig. 1. Fold change (FC) and VIP values of differentially abundant metabolites found in plasma from newly diagnosed multiple myeloma patients compared with healthy controls. Red columns (log2 FC > 0) represent metabolites up-regulated in early multiple myeloma; blue columns (log2 FC < 0) represent down-regulated metabolites; the larger the circle, the higher the VIP value. The differential pathways were as follows: citrate cycle, choline metabolism in cancer, glycerophospholipid metabolism, sphingolipid metabolism, valine, leucine and isoleucine biosynthesis, phenylalanine metabolism, purine metabolism, arginine and proline metabolism, and thermogenesis
Whole Slide Imaging Versus Microscopy for Primary Diagnosis in Surgical Pathology

Over a 14-month period (July 2015 to September 2016), a blinded randomized noninferiority study comparing microscopy with WSI for primary diagnosis in surgical pathology was conducted at 4 institutions in the United States (2 academic centers and 2 commercial laboratories; the latter included an independent hospital-based pathology practice). Investigators from pathology departments at multiple other institutions were actively involved in planning, study design, execution, and data analysis. The study protocol was approved by Institutional Review Boards (IRBs) at all participating institutions.

Screening and Enrollment

Each participating institution was assigned a set of organ systems from which to enroll cases, for a total of 20 organ systems (Table ). Only formalin-fixed paraffin-embedded surgical pathology cases were enrolled. Frozen sections and cases received in consultation were excluded. Target enrollment for each organ system and case type was predefined, based on discussions with the United States Food and Drug Administration (FDA), and was intended to reflect routine clinical practice while enriching for more difficult malignant cases. As an example, for colorectal cases, the enrollment target was 150 cases, including 50 benign/inflammatory biopsies, 50 biopsies of adenomas, 40 endoscopic biopsies of adenocarcinoma, and 10 adenocarcinoma resections. Cases were excluded if they met any of the following exclusion criteria: (1) slides for a case were not available at the site, (2) control slides for immunohistochemistry or special stains were not available, (3) slides selected did not match any subtype of the organ for which the case was selected, (4) clinical information available to the sign-out pathologist in the pathology requisition form could not be obtained, (5) selected slides contained indelible markings, (6) more than one case was selected for a patient, (7) the case consisted of frozen section slides only, or (8) the case consisted of gross specimens only. The most common reason for not including a screened case was that the target enrollment number for that specific diagnosis was met. For example, once the target of 120 consecutive benign core biopsies of prostate was reached, subsequent benign core biopsies were not enrolled. By this process, 12,338 cases were screened by 8 enrollment pathologists from 4 centers until the enrollment target of 2000 cases (3405 slides) was reached. These cases were submitted for scanning and subsequent review. The inclusion criteria specified that the interval between accession of cases and selection into the study was to be at least 1 year. Cases were reviewed for enrollment in the study by 2 "enrollment pathologists" per institution. One of these individuals reviewed a list of consecutive pathology reports from organ systems assigned to that center and flagged cases for retrieval of glass slides. All glass slides for each case were reviewed (screened) by the enrollment pathologist. For biopsies, the enrollment pathologist selected key slides required for diagnosis, including hematoxylin and eosin and immunohistochemical stains. In addition, for resections, representative slides required for diagnosis and staging were selected, including negative or positive lymph nodes and margins.
The second enrollment pathologist (validating enrollment pathologist) then reviewed all slides selected by the first enrollment pathologist to ensure that diagnostic material reflecting the original diagnosis was present. The original diagnosis made in the course of routine patient care by the pathologist signing out the case (baseline diagnosis) was considered the reference standard.

Slide Scanning

A study coordinator compiled all cases selected by the enrollment pathologists and submitted them for digital scanning at participating sites using the Philips IntelliSite Pathology Solution (Philips, the Netherlands), which includes a scanner, an image management system and a display. A study technician was trained to scan slides using appropriate calibration and quality control measures. All slides were scanned as WSI for digital review using the Philips IntelliSite Pathology Solution. Of the 2000 cases submitted for scanning, 8 (0.4%) were excluded for the following reasons: slide size did not meet scanner specifications (4 cases), no tissue was detected by scanner on any one of the slides selected for the case (2 cases), more than one case was selected for the patient (1 case), or slides were broken or damaged (1 case). This process yielded a "full analysis set" of 1992 cases (99.6% of enrolled cases).

Randomization

Original glass slides from all cases included in the study (full analysis set) were randomized and deidentified. Randomization was performed within an Electronic Data Capture (EDC) system provided by the manufacturer (eCaseLink Document Solutions Group, Malvern, PA). Original surgical pathology numbers were obscured and replaced by a study identifier (barcode label) by the study coordinator. Cases were then placed in random order and divided into batches of 20 cases, each of which contained a random mix of cases from various organ systems.

Interpretation of Microscopy and Whole Slide Images by Reading Pathologists

Randomized and deidentified slides from each case were presented for interpretation to 16 board-certified "reading pathologists" (4 at each center) different from the 8 enrollment pathologists whose role was described previously. Each reading pathologist followed standard training including self-familiarization with the WSI viewer. In order to represent the breadth of potential users of WSI, reading pathologists were selected to represent a variety of expertise, practice types (academic vs. nonacademic, generalists vs. subspecialists), subspecialty training and years of experience. Reading pathologists interpreted cases enrolled from their center only, blinded to the reference standard diagnosis. All cases were interpreted by 2 modalities. The first (microscopy) involved viewing glass slides using a microscope, identical to the practice of routine surgical pathology. Each pathologist viewed glass slides in their office using their own microscope. The second method (WSI) involved viewing scanned digital images on a high-resolution monitor without the use of a microscope. Reading pathologists interpreted cases in batches of 20. After a batch of 20 cases was reviewed, the same pathologist was given a separate batch of 20 cases for review by the other modality. For example, a pathologist who interpreted cases 1 to 20 using microscopy might be assigned cases 71 to 90 for review using WSI, followed by cases 41 to 60 using a microscope, and so on. This process was repeated for all 16 reading pathologists until all assigned cases were viewed by each pathologist.
After a wash-out period of at least 4 weeks, all cases were arranged in random order and interpreted a second time by the same reading pathologists using the other modality (ie, cases initially interpreted by microscopy were interpreted by WSI and vice versa). The wash-out period differed from case to case depending on its order in the randomly arranged cases. The mean wash-out period per pathologist ranged from 38.7 to 81.8 days. The minimum wash-out period was 27 days and the maximum was 143 days. At least 2 workstations, each with a 27-inch monitor, were provided to each participating site and located in a room simulating a clinical practice environment. The diagnosis for each case was entered electronically into the EDC electronic database by each reading pathologist. Staging parameters on cases requiring staging were entered on paper using templates that incorporated key elements of CAP synoptic templates for each organ. The time that a pathologist either opened or closed a case in the EDC system was logged. Reading pathologists were allowed to freely consult textbooks and other literature online, whether using microscopy or WSI. Identical clinical information was provided to reading pathologists for both modalities. Information regarding prior diagnoses on the same patient was not provided. Reading pathologists were not allowed to request recuts or any additional special stains beyond those already provided, or to consult with other pathologists. The randomization process ensured that the order in which cases were presented to the reading pathologist for microscopic interpretation was different than the order for WSI interpretation. Each diagnosis by a reading pathologist on a case (whether by WSI or microscopy) was termed a "read." As each case from any participating institution was interpreted twice by 4 reading pathologists, there were 8 "reads" per case, not including the original sign-out diagnosis.

Adjudication Phase

The diagnosis rendered by the original pathologist who signed out the case in the course of routine patient care using a microscope was considered the reference standard. A central panel of 3 "adjudication pathologists" independently determined the level of concordance between microscopic and WSI diagnoses and the reference standard. The adjudication panel did not include any of the enrollment pathologists or reading pathologists, and was selected from institutions different than the 4 centers that participated in enrollment and reading. Each adjudication pathologist had at least 10 years of relevant experience. Two adjudication pathologists were provided a list of paired diagnoses, blinded to method of diagnosis (microscopy or WSI), reading pathologist and participating site/institution. Adjudication pathologists did not view glass slides for any case. Using an Adjudication Charter for each organ system, adjudication pathologists placed each pair of diagnoses into one of 3 categories: concordant, minor discordant or major discordant. In keeping with widely accepted definitions, a major discordance was defined as a difference in diagnosis that would be associated with a difference in patient management. In case of a disagreement between the 2 adjudication pathologists on the level of concordance between 2 diagnoses, the third adjudication pathologist served as a tie-breaker. The primary endpoint of the study was the difference between major discordance rates for microscopy and WSI by comparison with the reference standard.
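As a minimal illustration of this adjudication rule (the function name and return convention are mine, not the study's), the logic reduces to a two-rater decision with a third rater breaking ties:

```python
def adjudicate(rater_a: str, rater_b: str, rater_c: str) -> str:
    """Each rating is 'concordant', 'minor discordant' or 'major discordant'.
    The first two adjudicators decide when they agree; otherwise the
    third adjudicator's rating settles the pair of diagnoses."""
    return rater_a if rater_a == rater_b else rater_c
```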
The study design is summarized in Figure .
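As a complement to that figure, the reading design can be summarized programmatically. The sketch below is a simplified reconstruction under stated assumptions (batch size 20, strict modality alternation in the first round, full re-randomization with flipped modality in the second round); the actual assignment rules implemented in the EDC system are not published.

```python
import random

def reading_plan(case_ids, batch_size=20, seed=1):
    """Round 1: shuffle cases into batches and alternate modality per batch.
    Round 2 (after the wash-out): re-shuffle all cases and flip each case's
    modality so every case is read once by microscopy and once by WSI."""
    rng = random.Random(seed)
    cases = list(case_ids)
    rng.shuffle(cases)
    batches = [cases[i:i + batch_size] for i in range(0, len(cases), batch_size)]
    round1 = [(("microscopy", "WSI")[i % 2], batch) for i, batch in enumerate(batches)]
    # Flip modality per case for the second round.
    flipped = {c: ("WSI" if m == "microscopy" else "microscopy")
               for m, batch in round1 for c in batch}
    round2_cases = list(flipped)
    rng.shuffle(round2_cases)
    round2 = [(flipped[c], c) for c in round2_cases]
    return round1, round2
```

Re-shuffling between rounds is what guarantees that the order of cases for WSI interpretation differed from the order for microscopic interpretation, as required by the protocol.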
A total of 1992 cases (3390 slides/3390 images) were included in the full analysis set, of which 923 slides (27%) were either immunohistochemical stains or special stains. The range of slides examined was 1 to 16 slides per case. Ten cases had 10 or more slides per case. Scanning performance is shown in Table . In the first scan of these 3390 slides, the Philips IntelliSite Pathology Solution was able to automatically detect an issue, such as no tissue or label detection, for 77 slides (2.3%). The images from 70 slides (2.1%) did not pass the image quality check by the scanning operator for slide-related issues such as prior ink markings, broken slides or debris on the slide. For 55 images (1.6%) the scanning technician identified an out of focus image (54 images, 1.6%) or missing tissue (1 image, 0.03%). In the second scan (in cases where this was required), the Philips IntelliSite Pathology Solution was able to automatically detect an issue for 21 slides (0.6%). The images from 7 slides (0.2%) did not pass the image quality check by the scanning operator for slide-related issues. For 22 images (0.6%), the scanning technician identified the image to be out of focus for 21 images and found "venetian blinds" at high magnification for 1 image. For clinical study operational reasons, slides were rescanned a maximum of 5 times before they were enrolled into the study. Reading times were derived from available system data. Reading time was defined as the time it took a pathologist to open the case, read all available information, diagnose the case and enter the diagnosis in the system. Approximately 94% of reads were completed within 30 minutes of reading time. Reading times longer than 30 minutes were considered not to reflect the actual reading time, since such instances generally occurred due to external factors; for example, the reader opened the case, was interrupted during the read, and forgot to close the case, resulting in an incorrect log. For exploratory analyses, and assuming that this would have the same effect on microscopic reads as on WSI reads, it was decided to include only times shorter than 30 minutes for the analysis. The mean reading time for microscopy was 78 seconds and the mean reading time for WSI was 84 seconds. The mean difference between the reading time for WSI and the reading time for microscopy was 6 seconds, with a 95% confidence interval (CI) of 0.03 to 0.12 minutes. One site in the study performed a detailed analysis of reading times, which is being published in a separate manuscript. The number of cases by organ system, diagnosis and specimen type is shown in Table . For 1992 cases, a total of 15,936 reads (1992×8) was expected. However, 11 reads (7 by microscopy, 4 by WSI) were excluded as reading pathologists selected "no diagnosis" for a variety of reasons, yielding a total of 15,925 reads (7961 by microscopy, 7964 by WSI). The major discordance rate between microscopy and the reference standard was 4.6% (364/7961 reads) and the major discordance rate between WSI and the reference standard was 4.9% (393/7964 reads). The difference in major discordance rates for WSI and microscopy was 0.4%, with a derived 2-sided 95% CI of (−0.30% to 1.01%). As the upper limit of this CI was less than the prespecified noninferiority threshold of 4%, WSI was considered noninferior to microscopy, meeting the primary objective of the study.
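For readers who want to reproduce the primary-endpoint arithmetic, a simple Wald-type interval on the difference of two proportions comes very close to the reported figures; the study's "derived" CI may have used a method that additionally accounts for repeated reads per case, which is not detailed here.

```python
from math import sqrt

def noninferiority_check(x_wsi, n_wsi, x_micro, n_micro, margin=0.04, z=1.96):
    """Difference in major-discordance proportions (WSI minus microscopy)
    with a Wald 95% CI; noninferior if the upper limit is below the margin."""
    p_w, p_m = x_wsi / n_wsi, x_micro / n_micro
    diff = p_w - p_m
    se = sqrt(p_w * (1 - p_w) / n_wsi + p_m * (1 - p_m) / n_micro)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), hi < margin

diff, (lo, hi), noninferior = noninferiority_check(393, 7964, 364, 7961)
print(f"{diff:.2%} ({lo:.2%} to {hi:.2%}), noninferior: {noninferior}")
# -> 0.36% (-0.30% to 1.02%), noninferior: True
```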
Major Discordance Rates by Organ System: Microscopy Versus Reference Standard and WSI Versus Reference Standard

For each organ system, major discordance rates between microscopy and the reference standard, and between WSI and the reference standard are listed in Table . For cases from the peritoneum, gallbladder, appendix and soft tissue, there were no major discordances between either microscopy or WSI and the reference standard. For stomach and lymph node cases, discordance rates were very low (<1%) with both modalities. For most other organ systems/tissues, discordance rates between both modalities and the reference standard ranged from 1% to 4.9%. Major discordance rates between microscopy and the reference standard were highest (≥5%) for pathology of the brain, gynecologic tract, liver/bile ducts, urinary bladder, and prostate. These were very similar to the levels of discordance between WSI and the reference standard, with the exception of liver/bile duct cases, where major discordance rates for WSI were lower than microscopy. Of all organs/organ systems included in the study, prostate showed the highest major discordance rates, which were seen with microscopy (11.3%) as well as WSI (12%). Overall, in 157/7596 reads (2%), there was a major discordance between WSI and the reference standard in cases where microscopy was concordant with the reference standard. In 127/7566 reads (1.6%), there was a major discordance between microscopy and the reference standard in cases where WSI was concordant with the reference standard. Differences between major discordance rates for microscopy and major discordance rates for WSI by organ system are shown in Table and depicted in Figure . For 4 organ systems, there was no difference between major discordance rates for the 2 modalities (peritoneum, gallbladder, appendix, soft tissue). WSI major discordances were slightly higher (<1%) in stomach, skin, brain, colorectum, gastroesophageal junction, and prostate. The major discordance rate for WSI was ≥1% higher than the major discordance rate for microscopy in endocrine, neoplastic kidney, gynecologic, and urinary bladder pathology. These 4 organs/organ systems were selected for detailed analysis (see below). WSI major discordance rates were ≥1% lower than the major discordance rate for microscopy in liver/bile duct, salivary gland, and (peri)anal pathology. These organs/organ systems, where microscopy performed worse than WSI, were not subjected to additional analysis.

Endocrine Pathology: Detailed Analysis

This analysis was based on paired reads, that is, one read by microscopy and one read by WSI for the same case by the same pathologist. Since each case was read twice by 4 pathologists, there were 4 paired reads per case. Of 400 paired reads on 100 cases in endocrine pathology, there were 9 reads in which WSI was judged to show a major discordance with the reference standard while the corresponding microscopic read was not (Table ). Details of the diagnosis in these cases are provided in Table . Most of these occurrences (7) involved thyroid pathology. Six were caused by under-diagnosis and one by over-diagnosis of papillary thyroid carcinoma using WSI. There was only one occurrence each in adrenal pathology and pancreatic pathology. There were no cases in which 3 pathologists or all 4 pathologists made a major discordant diagnosis compared with the reference standard by WSI but a concordant (or minor discordant) diagnosis by microscopy.
There was only 1 case in which 2/4 readers made a major discordant diagnosis by WSI but a concordant diagnosis by microscopy (case 0849, Table ). The remaining occurrences were random errors in which a single pathologist (1/4) made a major discordant diagnosis by WSI but a concordant (or minor discordant) diagnosis by microscopy; in each of these instances, the remaining 3 pathologists made the same diagnosis by WSI and microscopy.

Neoplastic Kidney Pathology: Detailed Analysis

Of 200 paired reads from 50 cases in neoplastic kidney pathology, only 4 featured a major discordance between WSI and the reference standard when microscopy was concordant (or minor discordant) with the reference standard (Table ). There were no cases in which 3/4 or 4/4 readers made a discordant diagnosis compared with the reference standard by WSI but a concordant/minor discordant diagnosis by microscopy. There was only 1 case in which 2/4 readers made a discordant diagnosis by WSI but a concordant diagnosis by microscopy (case 1095, Table ). The 2 other occurrences were random errors involving only a single pathologist.

Urinary Bladder Pathology: Detailed Analysis

There were 396 paired reads from 99 cases involving pathology of the urinary bladder, of which 20 featured a major discordance between WSI and the reference standard in the face of no major discrepancy between microscopy and the reference standard (Table ). These involved interpretation of benign bladder biopsies in 5, carcinomas in biopsies or transurethral resections in 3, noninvasive carcinomas in biopsies or transurethral resections in 4, and carcinoma in a resected specimen in 1. There were no consistent problem areas where WSI caused diagnostic difficulties for all 4 readers (or even 3/4 readers). There was only 1 case in which the WSI diagnosis of 2 (of 4) readers was judged as a major discordance when the corresponding microscopic diagnosis was concordant or minor discordant (case 0276, Table ).

Gynecologic Pathology: Detailed Analysis

Of 600 paired reads from 150 cases in gynecologic pathology, 19 paired reads involved a major discordance between WSI and the reference standard when microscopy was concordant or showed only a minor discordance (Table ). Most involved endometrial biopsies (8), malignant diagnoses in the ovary (6), and cone biopsies or loop electrosurgical excision procedure excisions of the cervix (4). There were 3 cases in which 3 (of 4) pathologists made a major discordant diagnosis compared with the reference standard by WSI but a concordant or minor discordant diagnosis by microscopy (Table , cases 0062, 0361, 0418). In case 0062, which featured an ovarian tumor, 3 pathologists diagnosed carcinoma by microscopy while making a benign or less aggressive diagnosis on WSI. In case 0361 (endometrial biopsy), 3 pathologists made a more aggressive diagnosis on WSI and a benign diagnosis by microscopy. In case 0418, grading of dysplasia was more aggressive on microscopy than on WSI. As in the other organ systems where a detailed case-by-case analysis was performed, there were no consistent problem areas. Overall, in the entire study set (1992 cases), there were only 3 cases (all in gynecologic pathology, discussed in the prior paragraph) where 3 of 4 pathologists made a major discordant diagnosis by WSI while making a concordant (or minor discordant) diagnosis by microscopy.
There was not a single case in the study in which all 4 pathologists made a major discordant diagnosis by WSI while making a concordant (or minor discordant) diagnosis by microscopy.
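The case-by-case consistency analysis above amounts to counting, for each case, how many of its 4 readers were major-discordant by WSI while concordant (or only minor-discordant) by microscopy. A hypothetical sketch of that tally follows; the record layout is assumed for illustration and is not taken from the study database.

```python
from collections import defaultdict

def wsi_only_major_discordances(reads):
    """reads: iterable of (case_id, reader_id, modality, category) tuples,
    where category is the adjudicated 'concordant', 'minor' or 'major'.
    Returns {case_id: number of readers major-discordant by WSI only}."""
    pairs = defaultdict(dict)
    for case_id, reader_id, modality, category in reads:
        pairs[(case_id, reader_id)][modality] = category
    tally = defaultdict(int)
    for (case_id, _), cats in pairs.items():
        if cats.get("WSI") == "major" and cats.get("microscopy") in ("concordant", "minor"):
            tally[case_id] += 1
    return tally  # a count of 4 would flag a systematic WSI failure mode
```

Under this tally, no case in the study reached a count of 4, and only 3 cases (all gynecologic) reached 3, consistent with random individual error rather than a modality-specific defect.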
The major question that validation studies of WSI seek to answer is whether a pathologist will make the same diagnosis on the same case using WSI as they would by microscopy. For this purpose, a WSI diagnosis that is "correct" is as satisfactory as a WSI diagnosis that is "incorrect," as long as the same diagnosis is made by microscopy. Reflecting this principle, the 2013 CAP guidelines state that "validation studies should establish diagnostic concordance between digital and glass slides for the same observer." In keeping with these guidelines, our study was designed primarily to measure variability between the same pathologist(s) for the same case using 2 different modalities. To the best of our knowledge, this is the largest validation study performed in the United States comparing WSI and microscopy for primary diagnosis in surgical pathology. It is also the largest series worldwide in terms of number of reads, and the second-largest series worldwide in terms of cases. In this study, several measures were taken to accurately assess intraobserver variability and mitigate the risk of bias, including selection bias and recall bias. These measures included selection of consecutive cases, inclusion of a validation pathologist to validate cases selected by the enrollment pathologist, randomization of reading order, division of cases evenly into batches, randomization of cases between reads, alternation of reading modalities by batch (ie, a batch of microscopy cases was followed by a batch of WSI cases on a different day), blinding of reading pathologists to the reference standard diagnosis, and adjudication of concordance by pathologists different from reading pathologists. Many of these measures were either not considered in prior studies or were not specified in published protocols. Table lists the largest studies that have compared microscopy and WSI for primary diagnosis in surgical pathology using cases from a variety of organ systems, with adequate reporting of major discrepancy rates. A major difference between these studies and the current study is in the number of times a study case was interpreted (read) specifically for the study after the original sign-out. In 2 prior studies, each reading pathologist interpreted each case only once during the study (either by WSI or microscopy in Bauer et al; by WSI only in Snead et al), which was compared with the original sign-out diagnosis. In contrast, in the current study, each reading pathologist interpreted each case twice during the study after the original sign-out diagnosis. Hence, although the study by Snead and colleagues included a larger number of cases (3017 vs. 1992), the total number of reads performed during their study (excluding the original sign-out diagnosis) was lower (3017 vs. 15,925). The study by Snead and colleagues was most similar to the current study in terms of scope and size, but the design of the 2 studies differed in the stringency of measures taken to reduce bias. For example, adjudication pathologists were different from the reading pathologists in the current study and were selected from institutions different from the reading pathologists, whereas reading pathologists (participating pathologists) were included in the adjudication panel (steering group) by Snead and colleagues. In both studies, however, the difference in major discrepancy rates for WSI and microscopy was reassuringly low (0.7% vs.
0.4%), supporting the contention that these methodologies are essentially equivalent for rendering a primary diagnosis in surgical pathology. We were also able to report rates of interobserver variability (rate of major discordance between WSI and reference standard, or between microscopy and reference standard). In surgical pathology, interobserver variability is greatest in diagnostically challenging cases, and mainly serves to highlight known problem areas where agreement between observers is suboptimal, even among experts. These problems are compounded when general surgical pathologists interpret cases that are difficult even for subspecialists, and when subspecialists interpret cases that they do not sign out in their highly subspecialized practices, as for some pathologists in this study. It is important to emphasize that reading pathologists in this study were not permitted to use standard procedures that would be available in "real-life" settings (and were possibly available to the pathologist who originally signed out the case, creating the reference standard diagnostic benchmark), such as obtaining recuts or deeper levels, comparing the case with prior specimens, ordering additional special stains, showing difficult cases to colleagues or obtaining extradepartmental consultation. Given the effort expended in recent years to validate WSI, its many potential benefits are worth reemphasizing. WSI is already being used clinically in some centers for providing consultations on difficult cases to pathologists at remote locations, providing frozen section interpretations at distant sites, conducting slide conferences and tumor boards with participants at off-site hospitals, performing proficiency testing/quality assurance, decreasing problems associated with retrieval of glass slides from physical storage sites for comparison to current cases, eliminating problems with loss of staining quality over time or loose cover slips, and using scanned images for semiquantitative image analysis (eg, HER2/neu, estrogen receptors, Ki-67). In the realm of education, the ability of WSI to be "in many places at once" obviates the need to physically transport glass slides, allows for greater flexibility in interacting locally with medical students, residents, fellows, and faculty, and facilitates educational uses such as multicenter conferences, teaching conferences at remote sites, and global pathology education. Virtual atlases containing hundreds of educational digital images can be viewed or annotated any time and from anywhere. Links to WSI can be provided within journal articles, greatly increasing the educational value of the images provided. The use of WSI also eliminates the need for providing glass slides and recuts to students for educational purposes and ensures that every student views the same image. The reader is referred to reviews that address these issues in greater detail. Digital pathology also has the potential to underpin more advanced approaches to image analysis of tissues to provide quantitative data at the point of scanning that can support case selection, prioritization and diagnostic evaluation of tissues to support tumor grading, biomarker measurement, patient stratification, immuno-oncology and precision medicine.
The wide variety of cases included in this series allowed us to perform a detailed analysis of major discordance rates by organ system in order to determine if there were specific organ systems, specimen types, or diagnostic categories where WSI was consistently inferior to microscopy. Although we did identify a few organ systems where the major discordance rate for WSI (vs. reference standard) was slightly higher than the major discordance rate for microscopy (vs. reference standard), a case-by-case analysis revealed no consistent vulnerabilities for WSI when compared with microscopy. As 4 pathologists interpreted each case by both modalities after the original sign-out diagnosis, one would expect that if there were a consistent technical problem that precluded an accurate diagnosis with WSI, it would manifest as major discordances between WSI and the reference standard for a given case but concordant diagnoses between microscopy and the reference standard on the same case. Further, one would expect this to occur with all 4 pathologists who viewed the case. For example, if identification of nuclear features of papillary thyroid carcinoma was a consistent Achilles’ heel of WSI, one would expect that all 4 pathologists would misinterpret cases of papillary thyroid carcinoma by WSI while making the correct diagnosis by microscopy. Instead, our analysis showed that even in the most problematic areas (eg, thyroid pathology), there was not even a single case where all 4 pathologists consistently erred when using WSI while making the correct diagnosis by microscopy. These findings lend additional support to the contention that cases where pathologists make an incorrect diagnosis by WSI and a correct diagnosis by microscopy represent random error by individual pathologists rather than a systematic or technical problem attributable to the use of WSI. It is important to note that although this manuscript focuses heavily on potential vulnerabilities of WSI, there were also areas where microscopy performed worse than WSI (Table ). The choice to not subject these areas to the same degree of scrutiny as WSI was made since potential areas of vulnerability of WSI are of greater concern to pathologists. The strengths of this study include the multicenter, blinded, randomized design, the inclusion of a wash-out period, representation of both academic pathologists as well as pathologists based in commercial laboratories, reading of cases by pathologists who were not experts in the organ systems they were assigned to interpret, and the inclusion of margins and lymph nodes in many cases with resected tumors, closely simulating real-life settings. The ability to recall cases (memory bias)—a major concern in any intraobserver variability study—was minimized by using a large number of cases, consecutive cases with many “routine” diagnoses, a wash-out period, and randomizing the reading order. In summary, this study demonstrates that WSI is noninferior to microscopy for the purpose of making a primary diagnosis in surgical pathology. This conclusion applies across a wide range of organ systems, sampling methods, specimen types, stains, and practice settings. Our findings have the potential to significantly alter the workflow of surgical pathologists in coming years and pave the way for a purely digital workflow analogous to the process currently used by radiologists.
Peri-implant clinical profile and subgingival yeasts carriage among cigarette-smokers with peri-implant mucositis

The oral microbiota is a complex ecosystem that includes a diverse array of microorganisms, and yeasts are among the normal inhabitants. Yeasts exist as symbiotic inhabitants in the oral flora, and oral yeasts carriage (OYC) refers to the presence of various yeast species in the oral cavity, with Candida albicans (C. albicans) being the most prevalent species. The OYC is usually assessed using traditional methods such as the concentrated-oral-rinse-culture technique; however, these microbes have also been identified in subgingival biofilm (SB). Habitual tobacco smoking is a potential risk factor for alterations in the oral microbiota, including an increased prevalence of Candida species. Under such circumstances, these commensal microbes can transform into opportunistic pathogens. Elevated yeast carriage is frequently implicated in the onset and advancement of oral mucosal conditions such as candidiasis; nevertheless, scientific evidence indicates that oral yeasts (OY) might be a factor in the development of peri-implant diseases, specifically peri-implant mucositis (PM) and peri-implantitis. During initial phases, peri-implant diseases are restricted to soft tissues, and this condition is identified as PM. The PM is characterized by the presence of gingival erythema, gingival bleeding (GB), increased probing depth (PD), and the absence of radiographic crestal bone loss (CBL). However, when not promptly diagnosed and treated, the inflammatory condition of the soft tissues intensifies, ultimately posing a threat to the osseous tissues surrounding the implant, leading to peri-implantitis. Souza et al. demonstrated that implant surfaces can be colonized by OY, particularly Candida species. Similarly, Aldosari et al. assessed the subgingival yeasts colonization (SYC) among patients with PM. This study confirmed the presence of yeasts in SB among all PM patients. Furthermore, it has been suggested that SYC fosters the growth of pathogenic bacteria such as Porphyromonas gingivalis (P. gingivalis), Treponema denticola, Prevotella intermedia and Aggregatibacter actinomycetemcomitans, resulting in heightened virulence of yeasts and subsequent damage to the soft tissues. It is noteworthy that cigarette-smoking is a classical risk-factor of periodontal and peri-implant diseases including PM; and aside from its detrimental impact on oral mucosal and periodontal/peri-implant tissues, nicotine (a major component in tobacco) significantly influences the oral microbiome by modifying the growth, attachment, and biofilm formation of pathogenic microorganisms, including yeasts. Experimental results by Haghighi et al. showed that nicotine possibly influences the pathogenic traits of yeast, which include aspects such as hyphal growth, biofilm formation, and expression of genes associated with virulence. The present observational clinical investigation is based on the hypothesis that SYC is higher in cigarette-smokers with PM in contrast to non-smokers with and without PM. The purpose of this investigation was to assess the peri-implant clinical profile and SYC among cigarette-smokers with PM.
Ethical guidelines

The research protocol underwent rigorous review and received approval from an independent ethical committee, confirming its compliance with ethical standards. Participation in the study was entirely voluntary, and all individuals provided informed consent before inclusion. Throughout the study, participants were accorded the freedom to ask questions and seek clarifications regarding any aspect of the research. Importantly, all participants retained the autonomy to withdraw from the study at any stage without incurring any penalties or negative consequences. This commitment to voluntary participation, informed consent, and participant autonomy underscores our dedication to upholding the highest ethical standards in clinical research. The ethical approval was granted by the Institutional Review Board at the Riyadh Elm University, Riyadh, Saudi Arabia (Registration No. FRP/2024/543).
Protocol for patient eligibility for inclusion

The following inclusion criteria were implemented: (a) individuals aged at least 18 years; (b) self-reported cigarette-smokers (individuals who reported smoking at least one cigarette daily for the past 12 months); (c) self-reported non-smokers (individuals who reported never having used any form of combustible and/or non-combustible nicotinic product); (d) individuals with at least one dental implant in function for the past 180 days; and (e) individuals diagnosed with peri-implant mucositis. Exclusion from the present study was based on the following: (a) dual smokers (individuals smoking cigarettes and also using other forms of combustible nicotinic products such as cigars, pipes, waterpipes, etc.); (b) individuals who self-reported systemic conditions, including cardiovascular diseases, metabolic disorders such as obesity and diabetes mellitus (DM), renal and/or hepatic diseases, and respiratory conditions such as chronic obstructive pulmonary disease, those self-reporting viral infections such as coronavirus disease-19 and acquired immune deficiency syndrome/HIV infection, and individuals with oral/systemic malignancy; (c) pregnant and/or nursing females; (d) individuals using smokeless tobacco products; (e) individuals who reported current or past use of antibiotics, antifungal medications, cancer therapy, steroids, and/or non-steroidal anti-inflammatory drugs within the past three months; and (f) individuals diagnosed with peri-implantitis.
Definition of peri-implant mucositis and peri-implant health

The presence of the following characteristics was used to define PM: peri-implant GB, coupled with signs such as redness, swelling, or suppuration, and without any associated CBL. A healthy peri-implant status was defined as the absence of peri-implant gingival redness, swelling, GB, and/or pus discharge, along with the absence of any other indicators of inflammation.
Groups

Study participants were categorized into four groups: Group-1, cigarette-smokers with PM; Group-2, cigarette-smokers without PM; Group-3, non-smokers with PM; and Group-4, non-smokers without PM.
Questionnaire and evaluation of dental records

The principal investigator administered a questionnaire to all participants, collecting relevant information on the duration and daily frequency of cigarette smoking (pack-years [PY]), age, gender, any familial history of smoking, and the most recent visit to a dentist and/or dental hygienist. Participants were also asked about their daily frequencies of toothbrushing and flossing. Among cigarette smokers, sub-classification was performed into three subgroups: light-smokers (up to 20 PY), moderate-smokers (20.1–40.0 PY), and heavy-smokers (more than 40 PY). The following information was collected by the principal investigator from the individuals' digital dental healthcare records: (a) implant length; (b) implant diameter; (c) implant insertion torque; (d) depth of insertion (crestal or subcrestal); (e) implant abutment connection (platform switching); (f) jaw location (maxilla and/or mandible); (g) implant surface characteristic (moderately rough or smooth); (h) mode of implant prosthesis retention (screw or cement); and (i) segment of the jaw in which the implant was placed: implants replacing missing central incisors, lateral incisors, and/or canines were categorized as positioned within the "anterior" region of the jaw, and implants replacing missing premolars and/or molars were categorized as positioned within the "posterior" region of the jaw (supplementary file attached).
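As a point of reference, pack-years are conventionally computed as packs smoked per day multiplied by years of smoking (one pack = 20 cigarettes). A minimal sketch of this calculation and of the sub-classification thresholds quoted above is given below; the function and variable names are illustrative and are not part of the study protocol.

```python
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    # Standard definition: (cigarettes per day / 20 per pack) * years smoked.
    return (cigarettes_per_day / 20.0) * years_smoked

def smoker_subgroup(py: float) -> str:
    # Thresholds as used in this study: light (up to 20 PY),
    # moderate (20.1-40.0 PY), heavy (more than 40 PY).
    if py <= 20.0:
        return "light-smoker"
    if py <= 40.0:
        return "moderate-smoker"
    return "heavy-smoker"

# Example: one pack daily for 26 years -> 26 PY -> moderate-smoker.
py = pack_years(cigarettes_per_day=20, years_smoked=26)
print(py, smoker_subgroup(py))
```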
Clinical and radiologic investigations

All clinical and radiologic investigations were performed before the microbial investigations. Peri-implant indices, namely the modified plaque index (mPI), modified gingival index (mGI), and PD, were meticulously gauged by a calibrated and blinded investigator (kappa score 0.84) on four surfaces using a graded probe (UNC, Hu-Friedy, Chicago, IL, United States). The CBL on both the mesial and distal surfaces of the implants was quantified through digital radiographs (Planmeca Romexis Intraoral X-Ray, Planmeca OY, Helsinki, Finland). This measurement involved determining the linear distance from the implant-abutment interface to the alveolar crest. The CBL assessments were conducted by a calibrated and blinded investigator (kappa score 0.88) and documented in millimeters.
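Examiner calibration in this study is summarized by kappa scores (0.84 and 0.88). As a hedged illustration of how such an agreement statistic is obtained, the sketch below computes Cohen's kappa on hypothetical duplicate ratings; the data are invented for demonstration and do not come from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical duplicate mGI ratings (0-3) of ten implant sites scored
# twice by the same examiner during a calibration exercise.
first_pass  = [0, 1, 2, 2, 3, 0, 1, 1, 2, 3]
second_pass = [0, 1, 2, 3, 3, 0, 1, 2, 2, 3]

kappa = cohen_kappa_score(first_pass, second_pass)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.73 here; values >= 0.81 indicate near-perfect agreement
```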
Collection of subgingival oral biofilm samples

The SB samples were obtained in accordance with a previously outlined protocol. Briefly, patients were comfortably seated in a dental chair, and the SB collection procedure was explained in a clear and accessible manner. Participants were encouraged to seek clarification or pose questions before the commencement of SB sample collection. To isolate the peri-implant tissues, cotton rolls were utilized, and any supragingival plaque was delicately removed using sterile plastic hand curettes (Implan Prophy® Plastic Dental-Instrument-System-Kit, Tess Corporation, WI, USA). Subsequently, SB samples were gathered employing sterile plastic curettes (Implan Prophy® Plastic Dental-Instrument-System-Kit, Tess Corporation, WI, USA). The curette was gently inserted into the buccal and lingual peri-implant pockets, ensuring thorough contact with the subgingival area. Careful attention was given to minimizing trauma to the surrounding tissues and preventing bleeding during sample retrieval. The collected SB samples were then carefully placed into a sterile plastic container equipped with a lid and containing phosphate-buffered saline (PBS). All samples were subjected to further analysis within 30 min of collection.
Assessment of subgingival yeasts colony-forming units

The determination of subgingival yeast colony-forming units followed the procedure outlined in previous studies. In summary, samples underwent vortexing at 1,000 rpm for 10 min, and the resulting pellet was re-suspended in 1 ml PBS. Subsequently, a sample volume of 20 µl was extracted and evenly streaked using a sterile glass spreader across duplicate Sabouraud's Dextrose Agar plates for culture. The plates were then incubated at 37 °C, and after 48 h, the Candida colonies were enumerated to calculate the colony-forming units per ml (CFU/ml) of SB. These investigations were performed by a trained, calibrated (kappa score 0.88), and blinded investigator.
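A minimal sketch of the plate-count arithmetic implied by this protocol follows, assuming the duplicate plates are averaged and no serial dilution beyond the 1 ml resuspension is applied (the text does not state a dilution factor); the colony counts are hypothetical.

```python
def cfu_per_ml(colony_counts, plated_volume_ml=0.02, dilution_factor=1):
    # CFU/ml = mean colony count across duplicate plates, divided by the
    # plated volume (20 ul = 0.02 ml), scaled by any dilution factor.
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / plated_volume_ml

# Example: duplicate plates showing 34 and 38 colonies from an undiluted sample.
print(cfu_per_ml([34, 38]))  # -> 1800.0 CFU/ml
```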
Sample-size estimation (power analysis) and statistical analyses

Power analysis was conducted utilizing G*Power version 3.0.10 (Franz Faul, Universität Kiel, Germany). The sample-size determination indicated that 18 individuals per group would yield 88% power to detect a true difference of 2 mm in probing depth (PD), the primary outcome variable, between cigarette-smokers and non-smokers. This calculation was based on a two-tailed comparison with an alpha value of 0.05. Group comparisons were executed using one-way analysis of variance with Bonferroni post hoc tests. Logistic regression analysis was employed to assess the correlation between SYC, measured in colony-forming units per milliliter (CFU/ml), and variables such as age, gender, pack-years, clinicoradiographic parameters, and the duration for which implants were in function. The threshold for statistical significance was set at P < 0.05.
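The standard deviation assumed in the G*Power calculation is not reported; it can, however, be back-calculated from the stated inputs (n = 18 per group, power = 0.88, two-sided alpha = 0.05). The sketch below does this with statsmodels as a plausible reconstruction, not the authors' actual computation.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the standardized effect size d implied by the reported design.
analysis = TTestIndPower()
d = analysis.solve_power(nobs1=18, power=0.88, alpha=0.05,
                         ratio=1.0, alternative='two-sided')
print(f"Implied standardized effect size d = {d:.2f}")

# A true 2 mm difference in PD then corresponds to an assumed common SD of:
print(f"Implied common SD = {2.0 / d:.2f} mm")
```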
Study participants

During the patient screening phase, invitations were extended to 127 individuals for participation in the current study, comprising 98 males and 29 females. Eleven males with self-reported DM and seven dual-smokers were subsequently excluded. None of the invited females (n = 29) opted to participate, and the reasons for their non-participation were not disclosed. Consequently, a total of 80 male individuals proceeded to sign the informed consent form. These individuals were divided into four groups as follows: Group-1, cigarette-smokers with PM (n = 20); Group-2, cigarette-smokers without PM (n = 19); Group-3, non-smokers with PM (n = 21); and Group-4, non-smokers without PM (n = 20). These results are illustrated in Fig. .
Characteristics of patient cohort

A total of 80 male individuals agreed to participate in the present investigation and signed the informed consent form. Twenty, 19, 21, and 20 individuals were included in groups 1, 2, 3, and 4, respectively. There was no difference in mean age among the groups. In groups 1 and 2, cigarette-smokers had smoking histories of 25.9 ± 12.1 and 11.5 ± 2.5 pack-years, respectively, and 60% and 15.8% of the individuals were moderate smokers, respectively. A family history of smoking was reported more often by individuals in groups 1 and 2 (75% and 68.4%, respectively) than by individuals in groups 3 and 4 (19.04% and 15%, respectively). Toothbrushing twice daily was performed more often by individuals in groups 2 and 4 (84.2% and 85%, respectively) than by individuals in Group-1 (15%). In Group-3, all individuals reported brushing their teeth once daily. None of the individuals in groups 1 and 3 flossed the interproximal spaces, whereas 78.9% and 80% of the individuals in groups 2 and 4, respectively, performed interproximal flossing once daily. In groups 1, 2, 3, and 4, participants had last visited a dentist/dental hygienist 3.1 ± 1.6, 0.8 ± 0.2, 3.5 ± 1.5, and 0.7 ± 0.2 years ago, respectively. These results are shown in Table .
Implant-related characteristics

All implants were platform-switched, placed at bone level, and had moderately rough surfaces. All implants were loaded with cement-retained restorations, with diameters ranging from 4.1 to 4.8 mm and lengths from 10 to 14 mm. The total numbers of implants in groups 1, 2, 3, and 4 were 20, 19, 21, and 20, respectively. In groups 1, 2, 3, and 4, 12, 10, 12, and 10 implants were in the maxilla, respectively, with the remainder located in the mandible. In all groups, most of the implants were in the posterior jaws in the regions of missing premolars or molars. In all groups, the implants had been inserted at insertion torques ranging from 30 to 35 Ncm and had been in function for 3.15 ± 0.9, 0.7 ± 0.5, 1.7 ± 0.8, and 5.2 ± 1.4 years in groups 1, 2, 3, and 4, respectively, as shown in Table .
Clinical and radiographic peri-implant parameters

The mPI was significantly higher in Group-1 than in Group-2 (P < 0.05) and Group-4 (P < 0.05). The mPI was also significantly higher in Group-3 than in groups 2 (P < 0.05) and 4 (P < 0.05). The mGI was significantly higher in Group-3 than in groups 1 (P < 0.05), 2 (P < 0.05), and 4 (P < 0.05). The PD was significantly higher in Group-1 than in Group-2 (P < 0.05) and Group-4 (P < 0.05). The PD was also significantly higher in Group-3 than in groups 2 (P < 0.05) and 4 (P < 0.05). There was no statistically significant difference in mesial and distal CBL among the groups (Table ).
Isolation and colony-forming units of yeasts in the subgingival biofilm

Yeasts were isolated from the SB of 100%, 36.8%, 66.7%, and 20% of individuals in groups 1, 2, 3, and 4, respectively. The CFU/ml were significantly higher in Group-1 than in groups 2 (P < 0.05) and 4 (P < 0.05). The CFU/ml were also significantly higher in Group-3 than in groups 2 (P < 0.05) and 4 (P < 0.05) (Table ).
Correlation between smoking pack-years, probing depth, interproximal flossing and oral yeasts colony-forming units

In Group-1, the yeast CFU/ml in the SB were statistically significantly correlated with smoking pack-years (P < 0.001) and peri-implant PD (P < 0.01). In Group-2, neither smoking pack-years nor peri-implant PD showed a statistically significant correlation with yeast CFU/ml in the SB (Fig. ). In Group-3, there was a statistically significant correlation between peri-implant PD and yeast CFU/ml in the SB (P < 0.01), whereas in Group-4 there was not (Fig. ). In Group-2, there was a statistically significant correlation between daily flossing of interproximal spaces and yeast CFU/ml in the SB (Fig. ); no such correlation was found in groups 1, 3, and 4. There was no correlation between yeast CFU/ml in the SB and age, family history of smoking, implant dimensions (diameter and length), duration for which implants were in function, mPI, mGI, or CBL in any group.
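As a hedged illustration of the regression approach described in the methods, the sketch below fits a logistic model with yeast isolation status (present/absent) as the outcome, which is one reasonable reading of the analysis; the data are simulated and the coefficients are arbitrary, so neither the numbers nor the variable names reflect the study's actual dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 80  # total number of participants, as in this study

# Simulated per-patient covariates (purely illustrative values).
pack_years = rng.gamma(2.0, 8.0, n)   # smoking exposure
pd_mm = rng.normal(3.5, 1.0, n)       # peri-implant probing depth

# Simulated yeast isolation status, made more likely with heavier exposure.
linpred = -6.0 + 0.08 * pack_years + 1.0 * pd_mm
yeast_isolated = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([pack_years, pd_mm]))
model = sm.Logit(yeast_isolated, X).fit(disp=0)
print(model.params)  # intercept and coefficients for pack-years and PD
```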
A clinical investigation by Canabarro et al. examined the potential connection between SYC and the severity of periodontal disease and showed that the CFU/ml of OY were significantly higher in SB samples collected from patients with periodontitis than in those collected from individuals with a healthy periodontal status. The present results are consistent with those findings, as the CFU/ml of yeasts in the SB were significantly higher among cigarette-smokers and non-smokers with PM (Group-1 and Group-3) than among non-smokers without PM (Group-4). A variety of mechanisms may contribute in this context. Nicotine, a major component of tobacco, has been shown to have immunomodulatory effects. Studies have proposed that chronic nicotine exposure suppresses the immune response, creating an immunocompromised state that allows opportunistic microbes, including yeasts, to thrive and colonize the oral mucosa. Moreover, nicotine-induced changes in the oral epithelium may create microlesions and/or disruptions that provide entry points for yeast species to adhere and establish infections. In a laboratory-based investigation focusing on titanium surfaces, the virulence of OY (predominantly C. albicans) within mixed-species biofilms that included P. gingivalis and Streptococcus sanguis was assessed. The findings revealed that when coexisting with such pathogenic bacteria, C. albicans exhibited an elevated proportion of hyphae and an upregulation of hydrolytic enzymes. The study concluded that, in conjunction with pathogenic bacteria found in oral biofilms, C. albicans expresses virulence factors that could potentially contribute to the development of peri-implant diseases. Furthermore, according to Nagler, nicotine jeopardizes salivary flow rates and composition, thereby compromising the ability of saliva to inhibit microbial growth and contributing to an environment favorable for yeast colonization. Nevertheless, it is important to note that the relationship between nicotine and OYC is complex, and further research is needed to elucidate the underlying mechanisms. Additionally, other components of tobacco smoke and their interactions with nicotine may also contribute to the observed effects on oral and peri-implant tissues. An intriguing observation was noted in Group-2, where individuals, despite being cigarette smokers, exhibited healthy peri-implant tissues with significantly lower CFU/ml in the SB compared to Group-1, as outlined in Table . One plausible explanation for this discrepancy may be the duration of the cigarette-smoking habit in these two groups: individuals in Group-1 had a smoking history of approximately 26 pack-years, while those in Group-2 had a significantly shorter history of nearly 12 pack-years. The shorter duration of smoking among individuals in Group-2 could potentially account for their lower yeast CFU in the SB. Another factor worth considering is the duration for which the implants were functional: in Group-1, the implants had been in function for approximately 3 years, whereas in Group-2, the duration was markedly shorter at around 8 months. This discrepancy might explain the absence of peri-implant disease and the lower yeast CFU/ml observed in Group-2.
It is noteworthy, however, that a substantial majority (at least 80%) of individuals in Group-2 adhered to a diligent oral hygiene routine, with twice-daily brushing and, in nearly 79% of this group, daily flossing of interproximal spaces. Furthermore, individuals in Group-2 appeared to visit oral healthcare providers and receive routine dental prophylaxis more regularly than individuals in groups 1 and 3. The possible contribution of these factors to maintaining peri-implant health in these patients cannot be overlooked. The results of the logistic regression analysis indicated a significant increase in subgingival yeast CFU/ml among individuals in Group-2 who did not engage in daily flossing of interproximal spaces compared to those within the same group who did. Despite these positive oral hygiene practices, it is essential to emphasize that the maintenance of routine oral hygiene should not be construed as justification for the continued use of nicotinic products. In a prior investigation, Krishnan et al. reported an elevated presence of SYC in the SB among individuals with periodontitis, regardless of their smoking status. Building upon this research, the current study establishes a statistically significant correlation between SYC, smoking duration (pack-years), and peri-implant PD. These observations are corroborated by the present findings of a significant increase in SYC within the SB among both smokers and non-smokers with peri-implant mucositis (groups 1 and 3, respectively); in both of these groups, a robust statistical correlation was identified between peri-implant PD and yeast CFU/ml in the SB. This suggests that individuals classified as moderate and heavy cigarette smokers face an elevated risk of developing peri-implant diseases and of hosting elevated colonies of pathogenic microbes, including yeast species, within the SB, compared with their counterparts who are light smokers or non-smokers. It is strongly recommended that community-based initiatives focusing on anti-tobacco measures and oral health promotion be conducted regularly to educate the public about the adverse impacts of smoking on oral, periodontal, peri-implant, and overall health. Emphasizing the advantages of consistent oral hygiene practices is crucial for fostering a superior oral health-related quality of life.

Limitations

One limitation of the current study lies in the exclusive inclusion of male participants.
Previous research has indicated variations in oral yeast carriage between males and females with periodontal inflammation, with postmenopausal females demonstrating increased oral yeast colonization compared to males. Consequently, it is postulated that females may exhibit higher SYC counts in peri-implant sulci than males in the context of peri-implant diseases. Additionally, the identification of yeast species was not undertaken in the present study. This decision was influenced by resource limitations that impeded more extensive analyses, such as polymerase chain reaction and DNA sequencing for yeast species identification. However, it is anticipated that most yeast species in the SB were C. albicans, given its prevalence as the most commonly isolated yeast species from the oral cavity in previous investigations. Furthermore, immunocompromised individuals, such as diabetic patients, were excluded from the current investigation. It has been reported that peri-implant inflammatory conditions are worse and SYC is higher in diabetic than in systemically healthy individuals. Therefore, it is likely that the CFU/ml of SYC in the SB of diabetic smokers is higher than in non-diabetic smokers and non-smokers. Additional robust and adequately powered studies are warranted to empirically test these hypotheses.
Conclusion

Peri-implant soft-tissue inflammatory parameters are worse, and SYC is higher, in moderate smokers than in light smokers with PM and non-smokers without PM.
Below is the link to the electronic supplementary material. Supplementary Material 1
Impact of COVID-19 pandemic, national lockdown, and unlocking on an apex tertiary care ophthalmic institute

This retrospective comparative study was conducted at the apex ophthalmic centre of the country. The study adhered to the tenets of the Declaration of Helsinki, and at all times during the study, precautions recommended by well-established national societies were followed to prevent cross-infection. These included reduction of the workforce, regular sanitization, active involvement of community ophthalmologists and infectious disease specialists, provision of adequate personal protective equipment for staff, entry-point screening of patients for temperature and signs and symptoms of COVID-19, and constant supervision in waiting halls to minimize overcrowding and maintain recommended social distancing norms. Routine outpatient department (OPD) consultations, refraction, elective major and minor surgeries, and donor cornea retrieval were downsized to limit patient mobility. Community surveys and regular screening camps conducted in schools and for diabetic patients were also halted temporarily. Patients with pre-booked appointments were traced with video- and audio-based teleconsultations, triaged as recommended by the largest ophthalmic community of India, and advised to seek appropriate medical help whenever deemed necessary. Specialty clinics at our center, including those pertaining to retina, cornea, lens, oculoplasty, squint, glaucoma, ocular oncology, low vision aids, neuro-ophthalmology, contact lenses, and pediatric ophthalmology, were also suspended for the time being, and all patients requiring specialty clinic referral were managed by clinicians experienced in the respective fields at the same time. Patients who had been operated on before the closure of routine services were followed up separately, as they needed multiple visits. As soon as their condition stabilized, they were managed by video-consultations, with hospital visits planned only if needed. From May 26, 2020, onwards, the OPD was open for online appointments, and from June 8, 2020, onwards, even walk-in patients were allowed. Individuals presenting to emergency services were admitted only if they had an imminent risk of vision loss. Donor eye collection was resumed from July 3, 2020, onwards based on the All India Ophthalmological Society guidelines. A retrospective review of electronic medical records of all patients presenting to the ophthalmology department between March 25, 2020, and July 15, 2020, was performed. These data were compared with the analogous data of the previous year, that is, from March 25, 2019, to July 15, 2019. Data were also compared between March 25, 2020, and May 30, 2020 (lockdown) and between June 1, 2020, and July 15, 2020 (unlock). The data assessed represented routine OPD services (new patients only), emergency OPD services (new patients only), routine inpatient department (IPD) services, emergency IPD services, investigational laboratories, teleconsultation, and eye bank services. Patients who needed multiple visits (postoperative follow-up, trauma cases, etc.) were counted as single patients. Parameters evaluated were age, gender, presenting complaints, final diagnosis, treatment advised, and surgical interventions. The data were compared statistically, taking all gazetted and restricted holidays and Sundays into consideration. Continuous variables were expressed as mean (± standard deviation) or median (range).
A P-value < 0.005 was deemed statistically significant.
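The specific statistical test used for these year-on-year comparisons is not stated. The sketch below shows one plausible way to compare mean daily new-patient volumes between matched periods after excluding Sundays and holidays, using simulated counts that mirror the reported means (978 ± 109/day in 2019 vs 71 ± 19/day in 2020); it is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated daily routine-OPD counts for working days only
# (Sundays and gazetted/restricted holidays assumed already excluded).
opd_2019 = rng.normal(978, 109, size=90).round()
opd_2020 = rng.normal(71, 19, size=90).round()

t, p = stats.ttest_ind(opd_2019, opd_2020, equal_var=False)  # Welch's t-test
decline = 100 * (opd_2019.mean() - opd_2020.mean()) / opd_2019.mean()
print(f"Mean decline: {decline:.1f}% (P = {p:.2g})")
```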
Outpatient department

The total number of routine outpatient visits decreased by 97.14% (978 ± 109/day vs 71 ± 19/day, P < 0.001). As demonstrated in , patient inflow increased from May 26, 2020, onwards, but the recovery rate was nowhere near that of the previous year (P < 0.001). The median age of presentation fell to 29 years (0–78 years) compared to 55 years (0–92 years) in the previous year. Also noted were a 4.7% rise in the total proportion of males (61.51% vs 66.21%) and a 0.47% rise in the total proportion of children (14.75% vs 15.22%) this year. Emergency outpatient visits declined by 35.25%. During both years, around two-thirds of patients were <40 years of age, and the median age of presentation to the emergency OPD fell only marginally, from 29 years (10 days to 85 years) last year to 27 years (11 days to 78 years) this year. The percentage of children presenting to the emergency department decreased from 34.28% to 30%, while the total representation of males increased significantly, from 59.97% to 73.5%. Surprisingly, the number of registered medicolegal cases increased by 22.22%, with a sharp rise in cases of physical assault by a known person .
Inpatient department

Routinely, the ophthalmic department of the center is a 310-bed hospital with an occupancy rate of 80% and an average patient stay of 5 days. During the study period, however, routine and emergency ward admissions decreased by 95.18% and 61.66%, respectively (P < 0.001). The numbers of government-offered employment health scheme beneficiaries and medicolegal cases seeking admission also decreased, by 95.97% and 66.66%, respectively.
Surgical data

While elective surgeries were postponed for eight weeks at our center and dropped by 98.18%, emergency surgeries decreased by 58.81%. The percentages of emergency outpatient cases needing admission and surgical intervention were 34.69% (1179/3398) and 27.86% (947/3398) last year vs 20.54% (452/2200) and 12.59% (277/2200) this year, respectively.
Eye bank-related

The number of donor corneas collected decreased by 99.61% (P < 0.001), while the number of emergency therapeutic keratoplasties decreased by 92.39% (P < 0.001). Despite the restart of donor cornea retrieval services, only one pair of tissue was collected, from a 3-year-old child who succumbed to a road-traffic accident. The 14 therapeutic grafts performed during this crisis were undertaken mainly with glycerin-stored donor tissues.
Cause-specific distribution

Table and Fig. represent the cause-specific distributions of emergency outpatient, inpatient, and surgical cases, respectively. Mechanical trauma, microbial keratitis, and conjunctivitis were the most common reasons for presentation to the emergency OPD. However, the incidence of trauma decreased by 41.75%, while those of microbial keratitis and conjunctivitis increased by 1.25 times and 2 times, respectively. While most retinal disorders and sanitizer-based chemical injuries (alcohol and hypochlorite) increased in proportion, the incidence of endophthalmitis, lens drop, and postoperative complications lessened (owing to the decreased number of routine ophthalmic surgeries) during the COVID-19 pandemic. Traumatic globe injury was the most common indication for emergency admissions (54.08% vs 62.35%) and surgeries. Admissions for microbial keratitis, primary adult glaucoma, and postoperative complications decreased, while those for various other causes, particularly retinal disorders, increased.
The majority of previous studies have focused on the effects of the COVID-19 pandemic and lockdown on ophthalmic care provided by private institutions of the country. We presently discuss its impact on eye care provided by the apex institute of the nation, a government-funded multispecialty hospital providing good-quality, affordable health facilities to all economic classes besides serving COVID-19 patients. To the best of our knowledge, the present study is the first of its kind to depict the results of the process of unlocking on ophthalmic care. We noted a dramatic fall in the total hospital outpatient load (by 97%), particularly among the elderly (median age decreased from 55 years to 29 years) and systemically comorbid patients, during the entire period, attributed majorly to the complete lockdown and good public compliance, with families keeping their frail and elderly members from undertaking unnecessary hospital visits. However, an unexpected delay in the recovery of patient load, despite the initiation of unlocking, was attributed partly to the location of our institute, the partial lockdown and restricted air and train travel in geographically surrounding hotspots, and the guarded provision of e-passes to the general public for interstate travel, and partly to the suspected practice of patients preferring ophthalmic institutions not affiliated with COVID-19 care for fear of acquiring a systemic infection during these critical times. While all this exercise may reduce fatalities in a vulnerable group, an increased incidence of blindness throughout the country is expected in the future as a result of delayed ophthalmic care in this group worsening the severity of their ocular conditions. Emergency eye-care services shrank by two-thirds at our center, and a sudden influx of younger males was witnessed owing to the abrupt closure of routine outpatient services. Fortunately, more than two-thirds of these cases were benign, and only 12% needed emergency surgical intervention. In total, cases of mechanical trauma decreased, most probably due to indoor stay, supervised child play, and the limited functioning of transport facilities and industries. However, this was contrary to recent studies by Hamroush et al. and Bapaye et al., who reported a spike in cases of mechanical ocular trauma during the lockdown period. The occurrence of previously unseen mechanical injuries with plumbing instruments and electric-repair devices among homemakers in our study signifies the impending danger associated with the dearth of adequate professional services during the current times. Additionally, the ill effects of increased indoor stay on the incidence of ocular surface disorders, myopia, and antecedent amblyopia due to exaggerated electronic media usage, combined with reduced routine eye care (refraction) during these times, need to be determined. For instance, increased instances of violence among known people due to amplified home-stay are suggested by the unexpected 22.22% rise in medicolegal cases in our study. Although most of these were trivial and managed conservatively, there remains an urgent need to educate the general public about the importance of patience, peace, and harmony during these tough times. Whether the increased proportion of conjunctivitis in our study was a subtle manifestation of COVID-19 infection remains doubtful because of the lack of coexistent systemic complaints and the low positivity rate of conjunctival swabs for SARS-CoV-2.
All conjunctivitis patients were managed conservatively and followed up telephonically until resolution. Whereas only one case of chemical injury with a sanitizer (surgical spirit), in a health-care worker, was reported last year, this year we encountered three cases of sanitizer-associated chemical injury (alcohol and hypochlorite) in the general public. Although none of these were grievous, it is imperative to educate the general public about the safe use of these substances to prevent serious ocular surface disorders. Some useful steps include closing the eyes while pressing the nozzle, keeping the sanitizer below eye level, applying it in a well-ventilated room, keeping sanitizers out of the reach of children, administering sodium hyaluronate-based lubricants, and encouraging the use of soap and running water for hand cleansing in individuals with pre-existing ocular surface disease. Donor eye collection suffered a major setback during this pandemic owing to confusion regarding guidelines on tissue harvesting. However, contrary to expectations, even with the initiation of unlocking and the resumption of eye retrieval services, voluntary donations remained nil, indicating that normalcy in this direction is still very far from achievement. The manner in which glycerin-stored corneas proved sight-saving during this crisis emphasizes the importance of incorporating long-term donor storage methods in eye banking to battle similar situations if they arise in the future. Also, ophthalmologists might consider adopting non-donor-dependent and virus-transmission-free methods, such as autologous scleral patch grafts, Tenon's patch grafts, conjunctival flaps, auto-keratoplasty, artificial corneas, and 3D-bioprinted corneas, for dealing with urgent and elective keratoplasties in the future. Mirroring the results of other recent studies, teleconsultations served as an effective method of triaging >93% of patients as low-risk, thereby limiting their unwanted hospital-based evaluation. However, teleophthalmology has its own medical and medicolegal limitations in evaluating the underprivileged, children, and subjects with posterior segment disorders. Yet, if utilized appropriately, this technology can serve as a useful aid in decreasing viral transmission while simultaneously catering to the general public. The effect of the current situation on the psychological wellbeing of health-care workers is profound and inexpressible. Ongoing research, academic activities, and blindness-eradication programs have been disrupted, thereby halting newer drug trials, and, contrary to expectations, the process of unlocking has served only minimally in re-establishing them. Minimal patient exposure has jeopardized resident learning, and steps such as rotational clinical postings, online classes, simulator-based surgical practice, and extension of tenure have been undertaken to facilitate their learning. Financial repercussions of the national economic backlash may be expected in the form of suboptimal government expenditure on newer blindness-eradication policies. This, compounded by hampered revenue collection in our government-funded institute secondary to declining patient inflow, may further worsen the delivery of long-term, high-quality ophthalmic care to all economic sections of society, more so to the poor and the deprived.
We have learned tremendously from the present situation and the entire hospital administration and functioning are being re-structured towards capacity-building to cater to the accumulating patient backlog without compromising safety. Guidelines recommended by national societies are being actively incorporated to assure equal and equitable delivery of good-quality eye-care to patients from all financial backgrounds.
To conclude, the impact of COVID-19 on ophthalmic care served by government-funded institutes is profound and should not be overlooked if the underprivileged are to be protected from succumbing to the present situation. Contrary to expectations, the lifting of the pandemic-associated lockdown may serve only minimally in improving access to ophthalmic services in its initial phases, and normalization may take more time than expected. Effective and efficient policies must be planned by both governmental and nongovernmental organizations to deal with the surge of ocular problems in the coming time, and appropriate utilization of newer technology, particularly telemedicine, can aid in providing optimal-quality, affordable eye care to all sections of society.
Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
A new arthroscopic repair technique for triangular fibrocartilage complex using an intracapsular suture: an outside-in transfer all-inside repair

The triangular fibrocartilage complex (TFCC) is the most important stabilizer of the distal radioulnar joint and acts as a shock absorber across the ulnocarpal joint. The TFCC is composed of the fibrocartilaginous disk; the dorsal and palmar ligaments spanning the radius and ulna; the ulnocarpal ligaments; a meniscal homolog; and the subsheath of the ulnar extensor of the wrist, whose critical stabilizing component inserts directly into the ulna, either deep in the fovea (ligamentum subcruentum) or at the base of the styloid. Traumatic injury to the TFCC, such as axial loading of an ulnarly deviated wrist or disruption of normal ulnar variance, is a frequent cause of ulnar-sided pain and wrist disability. TFCC injury usually leads to complaints of activity-related ulnar-sided wrist pain during movements of wrist rotation, which may be accompanied by grip weakness, instability, clicking at presentation, and weakness in pronation and supination. Imaging examinations, including radiography, magnetic resonance imaging (MRI), and MRI with arthrography, are necessary to evaluate TFCC injury. Recent developments in MRI, including high-resolution two-dimensional/three-dimensional sequences and 3-T field strength, may improve the detection of TFCC injuries that are difficult to evaluate on routine sequences. Arthroscopy is the most accurate means by which to diagnose TFCC injury, irrespective of location. The Palmer classification system, the most widely used scheme, classifies TFCC injuries into type I (traumatic tear) and type II (degenerative tear) according to the location and chronicity of the tear. Palmer type 1B tears represent traumatic peripheral tears of the TFCC from its ulnar insertion and tend to be the most amenable to surgical repair when conservative treatment is ineffective. A treatment-oriented classification system focusing especially on Palmer type 1B was proposed by Atzei, which subdivides Palmer type 1B peripheral tears into five classes as follows: class 1, repairable distal tears; class 2, repairable complete tears; class 3, repairable proximal tears; class 4, non-repairable tears; and class 5, tears associated with distal radioulnar joint (DRUJ) arthritis. Atzei advocated that class 1 tears should be sutured and that class 2 and 3 tears, which are associated with DRUJ instability, require TFCC reattachment to the fovea. Class 1 tears are currently repaired arthroscopically, and arthroscopic TFCC foveal repair techniques for class 2 and 3 tears have already been introduced. However, complications such as subcutaneous suture knotting and injury to the extensor carpi ulnaris tendon or the sensory branch of the ulnar nerve still occur with arthroscopic repair of Palmer type 1B TFCC tears. The goal of surgeons has always been to reduce surgical risk by developing new methods that promote TFCC tear healing while reducing complications. Here, we report an arthroscopic repair technique for the treatment of Palmer type 1B Atzei class 1 TFCC tears. This technique can be used to suture the TFCC tissue without incorporating the capsule and subcutaneous tissue, thereby preserving the normal biomechanics of the meniscus during motion and reducing complications.
This retrospective study was conducted at the Hospital of Chengdu University of Traditional Chinese Medicine. A total of 38 patients with Palmer type 1B TFCC injuries treated between August 2017 and June 2020 were enrolled in this study cohort. Patients were positioned for standard wrist arthroscopy using a standard traction apparatus. The 3–4, 4R, and 6R portals were routinely established to probe and evaluate TFCC tears. This new repair technique for Palmer type 1B Atzei class 1 TFCC tears requires the needle of a 10-mL sterile syringe, an arthroscopic retriever, a knot pusher, and suture material. A 2.7-mm 30° arthroscope was introduced through the 3–4 portal to visualize the TFCC tear, and a probe was inserted through the 6R portal to check and assess the tear parameters of the TFCC, including the site, size, pattern, stability, tissue quality, and associated pathology in the wrist joint. Before suturing, the tear site of the TFCC was debrided through the 6R portal with a 2.9-mm full-radius motorized shaver until bleeding, to remove proliferative synovial tissue and encourage healing. Group A was then sutured with the conventional outside-in technique, and group B was sutured with the outside-in transfer, all-inside repair technique (Additional file ). First, on the skin near the ulnar side of the TFCC tear, the needle of a 10-mL sterile syringe penetrated the skin, subcutaneous tissue, articular capsule, and finally the radial surface of the proximal tear of the TFCC, exiting on the articular cavity surface of the radial side of the torn TFCC (Fig. ). Second, a No. 2 polydioxanone (PDS) suture was threaded through the needle and introduced into the wrist joint; the suture tip was then retrieved through the 6R portal with a grasper (Fig. ). Third, the needle was withdrawn carefully along the suture to the ulnar surface of the proximal tear of the TFCC, taking care not to cut the suture while retreating the needle. After the puncture direction was adjusted, the needle was reinserted upward through the ulnar surface of the proximal tear of the TFCC, exiting on the articular cavity surface of the ulnar side of the torn TFCC; the procedure was performed carefully to avoid cutting the suture with the needle tip. The suture end was folded into the needle tip and pulled out through the 6R portal with a grasper. At this point, the outside-in repair technique was successfully converted into an all-inside knotting technique. Next, after confirming that both limbs of the suture penetrated the tear site of the TFCC, the suture was slowly tensioned under arthroscopic visualization to obtain fine reduction of the torn TFCC. The Samsung Medical Center sliding knot technique was used to tie the knot, which was positioned on the synovial side to avoid articular cartilage erosion of the scaphoid and lunate during wrist motion. After the knot was tied, its tension was carefully checked with a probe under arthroscopic visualization. A second knot, or more, was made in a similar way if necessary to achieve a stable TFCC repair. Finally, repeated wrist motion through the full range was performed to assess the stability of the suture.
A total of 38 patients with Palmer type 1B TFCC injuries, including 21 males and 17 females, were enrolled in this retrospective study. The demographics of these patients on admission are shown in Table . There were 17 cases in group B, with an average age of 31.88 ± 7.03 years and an average operation time of 52.88 ± 4.78 min, and 21 cases in group A, with an average age of 29.29 ± 8.30 years and an average operation time of 52.19 ± 4.88 min (Table ). As shown in Table , group A had more suture reactions than group B (P = 0.024). There was no significant difference in VAS score or Mayo score between the two groups before the operation or at 3 and 6 months after the operation (P > 0.05) (Table ).
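The paper reports P = 0.024 for the suture-reaction comparison without naming the test; a minimal sketch, assuming a two-sided Fisher's exact test on counts derived from the reported percentages (28.57% of 21 = 6 patients in group A; 0% of 17 = 0 in group B), reproduces that value:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = group A (outside-in), group B (outside-in transfer, all-inside);
# columns = suture reaction present, suture reaction absent.
table = [[6, 15],
         [0, 17]]
_, p = fisher_exact(table, alternative="two-sided")
print(f"P = {p:.3f}")  # -> P = 0.024, matching the reported value
```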
In this study, the incidence of thread knots in group A (28.57%) was significantly higher than that in group B (0%), and the difference was statistically significant (P = 0.024). There was no significant difference in VAS score or modified Mayo wrist function score between the two groups (P > 0.05). This new treatment was as effective as previously reported arthroscopic techniques and had the advantages of requiring no additional incision and decreasing the risk of operation-related complications . A new arthroscopic intracapsular suture repair technique, called outside-in transfer, all-inside repair, was used for Palmer type 1B Atzei class 1 TFCC tears in this study. This method is a modification of the technique for meniscal tears introduced by Wang et al. in 2019 . This arthroscopic repair technique provides several advantages over other reported repair techniques. First, it is easy to accomplish because it starts with the outside-in technique and then transfers to an all-inside technique using the needle of a 10-mL sterile syringe, which is cheaper than other instruments. Second, it allows the use of a vertical mattress suture, which is useful for alignment of the edges of the TFCC tear and promotes healing. Third, the suture knots can be placed without an additional skin incision and positioned inside the joint instead of subcutaneously, avoiding irritation of the skin and injury to the dorsal branch of the ulnar nerve and the extensor carpi ulnaris tendon (Fig. ). Finally, this technique is easier to perform than other techniques. Several suture techniques are available for Palmer type 1B Atzei class 1 TFCC tears. The currently used arthroscopic repair techniques for class 1 tears include the inside-out, outside-in, and all-arthroscopic techniques [ – ]. An outside-in technique using 2 needles guiding 2 sutures to repair the TFCC was first described by Zachee et al. . Both Trumble et al. and Skie et al. advocated an inside-out technique for Palmer type 1B TFCC repair using 2–0 meniscal repair sutures . Although these techniques have been modified, they require an extra incision to tie the sutures subcutaneously, which confers a risk of injury to the extensor carpi ulnaris tendon or the sensory branch of the ulnar nerve. Furthermore, the subcutaneous suture knot can cause skin problems and even septic arthritis [ – ]. A study of an all-arthroscopic outside-in repair technique for the TFCC in fresh-frozen cadaveric wrists showed that the PDS knot lay 4.6 mm from the dorsal branch of the ulnar nerve, which may therefore be injured by the knot, and that the extensor carpi ulnaris tendon was also injured . Another cadaver study showed that the mean minimum distance between the suture and the dorsal branch of the ulnar nerve was 1.9 mm with the inside-out technique . In a previous study, Bayoumy reported that 37 patients with TFCC tears were treated with the arthroscopic outside-in repair technique; two patients developed complications, namely dorsal ulnar nerve neurapraxia in one patient and weakness in extension of the little finger in the other . The goal of surgeons has always been to reduce the risk of surgery and complications as much as possible by developing new methods that increase TFCC tear healing and reduce complications. In our consecutive treatment series, none of the patients had complications such as skin problems, injury of the dorsal branch of the ulnar nerve, or injury of the extensor carpi ulnaris tendon (Additional file ).
As in knee arthroscopy, an all-inside technique should be fast and safe to use and avoid the disadvantages of the other techniques. A study by Conca et al. in 2003 described an all-inside repair technique for Palmer type 1B TFCC tears using a small suture hook and three portals . Böhringer et al. used a meniscus fastener fixation system to repair Palmer 1B TFCC tears . A novel all-inside approach for Palmer type 1B TFCC tears using a spinal needle and no additional incision was introduced and described by Lee et al. . Kuremsky et al. assessed the safety of an all-inside arthroscopic TFCC repair technique in 13 above-the-elbow human cadaver specimens; the results showed that the all-inside technique was safe in terms of proximity to important structures. However, that technique had a significant drawback: the intra-articular working space in the wrist is so narrow that the range of manipulation with the suture devices through the portal was restricted. Our technique is simpler than the other all-inside techniques, requiring only one portal and no special equipment.
In conclusion, the outside-in transfer, all-inside repair technique is suitable for Palmer type 1B Atzei class 1 TFCC tears. We recommend this technique as a useful alternative to the conventional methods of repairing Palmer type 1B TFCC tears.
Additional file 1. Video of the outside-in transfer, all-inside repair technique.
|
Mechanism of | 9a32fb97-e965-4327-a0b0-8b7ccb090f4f | 11212632 | Microbiology[mh] | Photinia × fraseri Dress, a small evergreen tree or shrub, prefers warm, moist environments and exhibits vibrant colors in direct light . Predominantly found in Southeast Asia, Eastern Asia, and North America, it is widely cultivated across various provinces in China for garden greening, so its disease susceptibility has important economic implications . In 2019, a major disease of P. × fraseri was found in Nanjing, Jiangsu Province, China . In the early stage of infection, the infected leaves exhibited small, round, light reddish-brown spots that gradually expanded into round lesions with light gray centers and brown edges . After a series of verification steps, the disease was identified as being caused by fungi of the genus Colletotrichum . This genus is known for attacking the roots, stems, leaves, flowers, and fruits of various plants globally, leading to decreased agricultural product quality and substantial economic losses . In recent years, Colletotrichum has been found on a variety of crops and plants in many places around the world; for example, it was found on holly in Zhejiang Province, China, in 2018 , and on litchi in Guangzhou in 2020 . Research on the prevention and control of Colletotrichum infection is therefore urgently needed. In recent years, biological control has attracted widespread attention due to its environmental friendliness and safety , and scientists have focused on the use of antagonists and their active substances. In 1996, Brevibacillus, a genus of rod-shaped gram-positive bacteria, was established as a separate genus . Previous studies have shown Brevibacillus to be a broadly effective biocontrol bacterium: it has inhibitory effects on many resistant fungi and can also control some hymenopterans . B. brevis, as a biocontrol strain, has great research potential in different fields. Brevibacillus species are omnipresent in agricultural soils and can secrete structurally diverse secondary metabolites with broad antibiotic spectra . Brevibacillus spp. are among the plant growth-promoting rhizobacteria (PGPR) used as biofertilizers or biopesticides on different crops and against a variety of soil-borne and foliar pathogens . Using genome mining, many antimicrobial compounds have been discovered, such as the antimicrobial cyclic lipopeptides produced by Brevibacillus laterosporus . Complete genome sequencing technology has good application prospects for uncovering the genome sequence information of unknown bacteria and exploring critical functional genes . In this study, we sampled the rhizosphere soil of healthy P. × fraseri plants, screened the soil bacteria, and obtained a bacterium, TR-4, with excellent biocontrol efficacy. TR-4 showed an excellent inhibitory effect on Colletotrichum and was identified as Brevibacillus brevis. This study further investigated the control of C. siamense by B. brevis and laid the groundwork for future research on whether B. brevis can colonize P. × fraseri. The biocontrol ability of B. brevis was also assessed from the aspect of endogenous hormones, and the results showed that TR-4 is a biocontrol bacterium with excellent inhibitory activity against Colletotrichum pathogens.
Experimental material The C. siamense strain was obtained from the Laboratory of Forest Protection, Nanjing Forestry University, Nanjing, Jiangsu Province, China, and was deposited in the China Forestry Microbial Strain Preservation and Management Center under preservation number CFCC54215. The strain was cultured on PDA medium in an incubator at 25 °C. The bacterial strains were isolated from the rhizosphere soil of healthy P. × fraseri, cultured on NA medium and incubated at 30 °C. The TR-4 fermentation broth was prepared by adding a TR-4 seed culture to 100 ml of LB liquid medium and incubating at 30 °C for 3 days, to an OD600 of approximately 5. The P. × fraseri plants used in the experiment were two-year-old seedlings obtained from Yaping Nursery in Nanjing, which were transplanted and grown at 28 °C under natural light. Isolation and screening of antagonistic bacteria A total of 50 g of soil was collected from five randomly selected points in the rhizosphere soil of healthy P. × fraseri plants and serially diluted with sterile water to concentrations of 10−1, 10−2, 10−3, 10−4 and 10−5 . The dilutions were spread on LB solid medium and incubated at 30 °C for 3 days. Single strains were isolated and labeled on the basis of colonies with different morphological and color characteristics. Colonies were counted, and the inhibitory effect of the isolated antagonistic bacteria on C. siamense was determined using the plate antagonism method. The antagonistic effect on fungal mycelium was expressed as percentage growth inhibition (% GI), calculated as [1 − (experimental/control)] × 100% (a worked sketch follows this section). Data were obtained from three independent experiments. Effect of TR-4 fermentation broth on the germination of C. siamense spores A C. siamense spore suspension (10^6 spores/ml) and TR-4 fermentation liquid were combined with 500 µl of 0.1% aqueous glucose solution in 2-ml sterile centrifuge tubes, and the fermentation liquid concentration was adjusted to the EC50 (median effective concentration), 10 EC50, or EC90, with a final volume of 500 µl. LB liquid medium was used instead of the TR-4 broth as a control. The tubes were incubated in a dark incubator at 25 °C, and 5 µl was sampled from each tube every 12 h and placed on a glass slide until the control spores had fully germinated. Spore germination was observed under a Zeiss microscope. In vivo antagonism experiment The experimental samples were divided into three groups. In the first group (the biocontrol group), 10 µl of C. siamense spore suspension (10^6 spores/ml) was inoculated first, and the TR-4 fermentation liquid was applied 24 h later to observe the control effect of TR-4 on C. siamense on the leaves of P. × fraseri. In the second group (the protection group), plants were sprayed with TR-4 fermentation liquid, allowed to dry completely, and then inoculated with the C. siamense spore suspension to observe whether TR-4 could help the plants resist C. siamense infection on the leaves. The third group (the control group) was inoculated with only the C. siamense spore suspension. Each experiment was repeated 3 times.
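As a minimal sketch of the growth-inhibition formula above; the colony diameters are hypothetical values chosen only for illustration:

```python
def growth_inhibition(experimental_mm: float, control_mm: float) -> float:
    """Percentage growth inhibition: % GI = [1 - (experimental / control)] x 100."""
    return (1 - experimental_mm / control_mm) * 100

# Hypothetical plate-antagonism reading: a 30 mm colony facing TR-4
# versus an 85 mm unchallenged control colony.
print(f"{growth_inhibition(30, 85):.1f}% GI")  # -> 64.7% GI
```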
Molecular identification of TR-4 The universal primers 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-CGGCTACCTTGTTACCAC-3′) for the bacterial 16S rRNA gene were used for PCR amplification . The PCR mixture was as follows: 25 µL of 2× Taq PCR Master Mix, 2 µL of forward primer, 2 µL of reverse primer (primer working solution concentration, 10 µM), 2 µL of template DNA (extracted with Vazyme's DNA extraction kit), and ddH2O to a total volume of 50 µL. The PCR program was as follows: predenaturation at 94 °C for 5 min; 30 cycles of denaturation at 94 °C for 1 min, annealing at 58 °C for 1 min, and extension at 72 °C for 2 min; and a final extension at 72 °C for 10 min. The PCR products were sequenced by Nanjing Bioengineering. After NCBI BLAST comparison, MEGA7 software was used for sequence analysis, and the neighbor-joining (NJ) method was used to construct a phylogenetic tree. Analysis of TR-4 secretions Determination of siderophores Iron is a micronutrient widely found in the Earth's crust; a small amount of iron is necessary for plants, and iron deficiency is a plant nutrient disorder. In the environment, iron forms hydrated iron oxides, resulting in a low concentration of free iron and reduced bioavailability. The CAS test solution is a bright blue complex consisting of chrome azurol S, cetyltrimethylammonium bromide, and iron ions. When the iron ions in the blue test solution are removed by siderophores secreted by microorganisms, the CAS test solution changes from blue to orange, so CAS liquid medium can be used to detect siderophore production by microorganisms . The absorbance (As) of the supernatant after centrifugation was measured at 630 nm and zeroed against double-distilled water as a control. Blank medium was mixed with an equal amount of CAS test solution, and its absorbance was taken as the reference value (Ar). The assay was performed according to the CAS assay kit instructions. Determination of cellulase activity (3,5-dinitrosalicylic acid method) Cellulase hydrolyzes cellulose to produce cellobiose, glucose and other reducing sugars, which reduce the nitro groups in 3,5-dinitrosalicylic acid to orange amino compounds; a colorimetric method was used to determine the amount of reducing compounds generated and thereby indicate enzyme activity . The experimental method followed the cellulase assay of Song et al. Determination of chitosanase activity The modified Schales method was used to determine enzyme activity. Its principle is that soluble chitosan undergoes enzymolysis and releases reducing sugars, which react with the Schales reagent to change color. With N-acetylglucosamine (NAG) as the standard sugar, the absorbance of the reducing sugars was determined with a spectrophotometer at 420 nm. The amount of enzyme that releases 1 µmol of NAG per minute is defined as one unit of activity (U) . Microscopic analysis of the inhibitory effect of strain TR-4 on C. siamense Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) samples were obtained from fresh PDA plates. The samples were divided into two categories: (1) no inhibition of fungal growth (the control group); and (2) obvious inhibition of fungal growth (the experimental group).
The samples were fixed with 2.5% glutaraldehyde and 1% osmium tetroxide at room temperature, dehydrated with ethanol, critical-point dried, sputter-coated with gold, and observed with a scanning electron microscope (JEM 2100) . The TEM samples were prepared by a similar method. The C. siamense samples were cut into 2 × 3 mm slices, placed in 2.5% glutaraldehyde solution and fixed at room temperature for 5–6 h, washed 5 times with 0.1 M phosphate buffer (PBS, pH 7.2), fixed with 1% osmium tetroxide for 1.5 h, washed 8 times with PBS, dehydrated with ethanol and acetone, embedded in Spurr resin, sectioned at 10 µm, stained, and observed under a transmission electron microscope. Data analysis In this research, all experiments were carried out in triplicate and repeated three times to obtain accurate and reliable data. A completely randomized design was used for the greenhouse experiment, and the data were examined with analysis of variance (ANOVA) followed by least significant difference (LSD) tests at p < 0.05 using the DPS v9.5 statistical software package.
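The paper states that this analysis was run in DPS v9.5; the Python code below is only a minimal sketch of the same procedure (one-way ANOVA followed by Fisher's LSD at p < 0.05). The group names and triplicate colony diameters are hypothetical, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate colony diameters (mm) for three treatments.
groups = {"control": [85.0, 83.0, 86.0],
          "EC50":    [42.0, 40.0, 44.0],
          "EC90":    [12.0, 11.0, 13.0]}
samples = [np.asarray(v) for v in groups.values()]
f_stat, p_anova = stats.f_oneway(*samples)

if p_anova < 0.05:  # proceed to pairwise LSD tests only when the ANOVA is significant
    k = len(samples)
    n_total = sum(s.size for s in samples)
    # Pooled within-group mean square (MSE), the error term shared by all LSD tests.
    mse = sum(((s - s.mean()) ** 2).sum() for s in samples) / (n_total - k)
    a, b = samples[0], samples[1]  # e.g., control vs EC50
    t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / a.size + 1 / b.size))
    p_lsd = 2 * stats.t.sf(abs(t), df=n_total - k)
    print(f"ANOVA p = {p_anova:.4g}; control vs EC50 LSD p = {p_lsd:.4g}")
```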
Soil sample collection and screening of antagonistic bacteria A total of 68 soil bacteria were isolated from the five soil dilutions. Antagonism tests of the 68 isolates against C. siamense showed that 13 strains had an inhibitory effect on C. siamense. The strain with the strongest inhibitory effect was selected as the experimental strain and named TR-4. In vitro antagonistic experiment and inhibitory effects of TR-4 on C. siamense spore germination To further determine the inhibitory effect of TR-4 on C. siamense, fermentation-broth concentrations of 0.001 µl/ml, 0.01 µl/ml, 0.1 µl/ml, 1 µl/ml and 10 µl/ml were tested in C. siamense cultures . According to DPS v9.5 analysis, the average EC50 was 0.0022 ± 0.0013 µl/ml, and the average EC90 was 0.8115 ± 0.1024 µl/ml . Moreover, to verify whether strain TR-4 has an antagonistic effect on C. siamense spore germination, C. siamense spores were treated with sterile TR-4 filtrate at final concentrations of the EC50, 10 EC50 and EC90. The spore germination rate of C. siamense treated with the TR-4 fermentation filtrate was significantly lower than that of the control group within 12 h. After 48 h, the spore germination rate of the control was 98%, while that of spores treated with the TR-4 fermentation filtrate remained significantly lower ( and ). The mycelial length and number of mycelial branches after spore germination were also significantly lower than those in the control group, and the differences did not decrease with time. As in other studies, TR-4 reduced the germination rate of C. siamense spores and thus reduced their pathogenicity. In vivo antagonism experiment The in vivo antagonistic experiments revealed that TR-4 had a significant inhibitory effect on C. siamense and a protective effect on plants. In the first group (the biocontrol group), the lesions did not expand after the TR-4 broth was sprayed, and the control effect was remarkable. In the second group (the protection group), the effect was less pronounced than in the first group, but some inhibition was still detected. The results showed that TR-4 can also directly inhibit C. siamense growth on the leaves of P. × fraseri plants . In this study, the disease phenotype and severity in the biocontrol group were significantly lower than those in the control group, indicating that TR-4 suppressed the incidence of C. siamense disease in P. × fraseri. Molecular identification of TR-4 The 16S rRNA sequence of TR-4 was analyzed. PCR amplification and sequencing revealed that the 16S rRNA gene was 1430 bp long. The 16S rRNA nucleotide sequence of TR-4 was deposited in the GenBank database under accession number OP658963.1 . The 16S rRNA sequence of TR-4 was compared against the National Center for Biotechnology Information (NCBI) database and shared 99% homology with the 16S rRNA gene sequence of B. brevis ON014586. A phylogenetic tree was constructed using MEGA 6 software, and strain TR-4 was identified as B. brevis . Analysis of TR-4 secretions In the siderophore determination, according to the formula [(Ar − As)/Ar] × 100%, the relative siderophore content was 88.46 ± 2.08%.
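A worked example of the relative siderophore content formula used above; the OD630 readings for Ar and As are hypothetical values chosen only to reproduce the reported 88.46%:

```python
def siderophore_content(a_reference: float, a_sample: float) -> float:
    """Relative siderophore content: [(Ar - As) / Ar] x 100 (%)."""
    return (a_reference - a_sample) / a_reference * 100

# Hypothetical OD630 readings: Ar = blank medium + CAS, As = culture supernatant + CAS.
print(f"{siderophore_content(1.000, 0.1154):.2f}%")  # -> 88.46%
```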
In the determination of cellulase activity, the absorbance of the color-developing solution was measured, and a standard curve was drawn with absorbance on the vertical axis and glucose content on the horizontal axis. Linear regression was used to construct the reducing-sugar standard curve, giving y = 0.16822x + 0.0073 (R2 = 0.8802). The absorbances of the blank tube and the sample were 0.0264 and 0.0485, corresponding to glucose contents of 0.1135 and 0.2449 according to the standard curve equation (these figures can be verified by inverting the standard curve, as sketched after this section). The results showed that strain TR-4 produces cellulase and that its cellulase-producing ability is very strong. In the chitosanase activity assay, performed according to the experimental method, the result was 2.15 µmol of reducing sugars, corresponding to 0.215 U of enzyme activity. Microscopic analysis of the inhibitory effect of strain TR-4 on C. siamense Under scanning electron microscopy, the normal mycelia of C. siamense were evenly distributed, smooth and plump. After treatment with TR-4, mycelial growth was abnormal: the mycelial shape was deformed and the surface was shrunken. Under transmission electron microscopy, the mitochondria, ribosomes, vacuoles, cell walls and cytoplasm of normal C. siamense cells were clearly visible. After treatment with strain TR-4, the cells of C. siamense showed obvious changes and damage: the cell wall was transparent, the organelles disappeared, and the vacuoles were deformed .
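A minimal check of the arithmetic above, inverting the reported standard curve and applying the stated enzyme-unit definition; the 10-minute reaction time used in the chitosanase conversion is an assumption, since the text does not state it:

```python
# Invert the reported standard curve y = 0.16822 x + 0.0073
# (y = absorbance, x = glucose content) to recover glucose from absorbance.
def glucose_from_absorbance(y: float, slope: float = 0.16822, intercept: float = 0.0073) -> float:
    return (y - intercept) / slope

print(round(glucose_from_absorbance(0.0264), 4))  # blank tube  -> 0.1135
print(round(glucose_from_absorbance(0.0485), 4))  # sample tube -> 0.2449

# Chitosanase: one unit (U) releases 1 umol of NAG per minute, so the reported
# 2.15 umol of reducing sugars -> 0.215 U is consistent with a 10-minute
# reaction (an assumption; the reaction time is not given in the text).
print(2.15 / 10)  # -> 0.215 U
```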
Many previous studies have shown that B. brevis is a good biocontrol bacterial strain. Among 11 potential biocontrol bacteria tested for their ability to control various rice diseases, the best was B. brevis strain 1Pe2 . According to previous studies, B. brevis has significant effects on the tea pathogens Gloeosporium-sinae-sinensis , Elsinoe leucospira , Phyllosticta theaefolia , Fusarium sp. and Cercospora theae, among others, indicating that it has inhibitory effects on many pathogenic fungi and is a biocontrol strain of great research value . These findings demonstrate that B. brevis has a broad antimicrobial spectrum. In another experiment, it was suggested that yeasts, including non-viticultural yeasts, inhibited fungal mycelial growth through metabolites, laminarin-degrading enzymes, nutrient competition, inhibition of fungal spore germination, shortening of germ tubes, and antifungal volatiles . Therefore, a spore germination test was used here to verify the efficacy of the biocontrol bacterium. A previous study verified the biocontrol activity of antagonistic bacteria against C. gloeosporioides on ripe olive fruits: the bacteria reduced the incidence and severity of C. gloeosporioides, lowering its incidence on fruit by 50–90% . Iron is an essential element for the growth of C. siamense , and siderophores produced by B. brevis can prevent the absorption of iron by C. siamense; siderophore production is therefore used as an index of biocontrol bacteria . In the natural environment, Fe2+ is easily oxidized to Fe3+, so at natural pH, iron exists mostly as ferric oxide and ferric hydroxide, two insoluble and very stable polymers that are difficult for organisms to use. Siderophores, specialized iron-chelating agents, meet the microbial nutrient requirement for iron by mobilizing, absorbing, and transporting insoluble iron. Cellulases degrade cellulose to produce glucose. Chitosanases are a class of glycoside hydrolases with high catalytic activity toward chitosan but almost no hydrolytic activity toward chitin; they can convert high-molecular-weight chitosan into low-molecular-weight functional chitosan oligosaccharides . Both chitosan and cellulose are structural components of the cell walls of insects, crustaceans and fungi; thus, it can be concluded that chitosanase and cellulase can break down the cell walls of insects and fungi . As reported previously, biocontrol bacteria can secrete cell wall-degrading enzymes that destroy the cell walls of plant pathogens and reduce their pathogenicity. In this study, the cellulase and chitosanase of TR-4 were quantitatively measured. In the study by Maiti, significant similarity was detected between a B. brevis enzyme and the M42 aminopeptidase/endoglucanase of the CelM family using high-performance liquid chromatography and mass spectrometry . In a study of the inhibition of Monilinia fructicola by Bacillus methylotrophicus, the inhibited mycelia and spores were abnormally shaped under SEM, and under TEM the cell wall was transparent, the organelles disappeared, and the intracellular vacuoles were deformed, similar to the results of this study . Another study used SEM to characterize a B. brevis strain with antifungal, anticancer and larvicidal properties . Similarly, in this study, the cellulase and chitosanase secreted by TR-4 decomposed the cell wall of C. siamense, thereby inhibiting the growth and pathogenicity of the fungus.
In the present study, strain TR-4, screened from soil, showed a significant inhibitory effect on C. siamense and was identified as B. brevis by 16S rRNA sequencing. The experimental results showed that the inhibitory effect of 0.01 µl/ml TR-4 reached 90%, the inhibition rate of C. siamense spore germination by TR-4 reached 95%, the relative siderophore content was 88.46 ± 2.08%, the glucose content from the cellulase assay was 0.2449, and the chitosanase assay yielded a final reducing-sugar content of 2.15 µmol. Scanning and transmission electron microscopy showed that TR-4 caused leakage of C. siamense cell contents and induced cell death. It can be concluded that TR-4 can be used as a biocontrol bacterium for more in-depth studies. This study lays the foundation for the subsequent exploration of TR-4 and provides a basis for the research and development of natural control agents.
10.7717/peerj.17568/supp-1 Supplemental Information 1 Supplemental Figures 10.7717/peerj.17568/supp-2 Supplemental Information 2 Spore germination data processing There were an average of 50 spores in the 40× objective field. 10.7717/peerj.17568/supp-3 Supplemental Information 3 Colony diameter and inhibition rate
|
One-stage anterior focus debridement, interbody bone graft, and anterior instrumentation and fusion in the treatment of short segment TB | 5e29093a-db50-4df6-956b-5abb327b6e75 | 9771206 | Debridement[mh] | In recent years, the incidence of spinal tuberculosis (TB) has been increasing again due to population growth, human immunodeficiency virus infection, diabetes mellitus, drug resistance, et cetera. Spinal tuberculosis is 1 of the most common types of extrapulmonary tuberculosis, ranking first among all forms of bone and joint tuberculosis. It is a severe spinal disease that frequently causes kyphotic deformity, neurologic deficit and even spinal cord compression. In particular, spinal tuberculosis with paraplegia or incomplete paralysis accounts for 10% to 46% of all spinal tuberculosis cases. If spinal tuberculosis is not diagnosed and treated in time, it may result in serious complications: the disease forms abscesses, sequestra, and tuberculous granulation tissue, which enter the spinal canal and compress the spinal cord, causing nerve damage or even paraplegia. Tuberculous spondylitis (Pott's disease), a common extrapulmonary manifestation of tuberculosis, typically presents with back pain, tenderness, paraparesis/paraplegia, and various constitutional symptoms. Although there have been great advances in anti-TB drug treatment, surgical management is still critical. On the basis of combination chemotherapy, active surgical treatment has been widely accepted; it can effectively shorten the treatment cycle, promote cure, reduce morbidity, and improve quality of life. In most cases, TB destroys the load-bearing area at the front of the spine, also known as the anterior column, and is often accompanied by paravertebral abscess formation. Destruction of the anterior column not only alters the biomechanics and stability of the spine but also increases the risk of progression of kyphosis and paraplegia. It has been reported that anterior focus debridement combined with interbody bone graft is a classic surgical procedure for the treatment of spinal TB. It reaches the lesion site directly with a larger operative field, allowing complete removal of the lesions. Moreover, anterior surgery fully exposes the lesions so that they can be completely removed to relieve compression of the spinal cord, and the bone graft can correct kyphotic deformity and reestablish spinal stability. Therefore, anterior surgery is particularly suitable for spinal TB with paraplegia or incomplete paralysis, especially Pott's paralysis in short segment thoracic TB. In this study, we investigated the clinical efficacy of 1-stage anterior focus debridement, interbody bone graft, and anterior instrumentation and fusion in the treatment of short segment thoracic TB with paraplegia or incomplete paralysis.
2.1. Subjects We confirmed that all methods were carried out in accordance with relevant guidelines and regulations. This study was approved by the Institutional Ethics Committee for Medical Scientific Research of Xi'an Chest Hospital on April 7, 2020 (approval no. 2020-S0022). Written informed consent was obtained from all patients. From September 2013 to March 2017, 16 patients with short segment thoracic vertebral tuberculosis with paraplegia or incomplete paralysis who underwent surgery in our hospital were enrolled in the study. Inclusion criteria were as follows: patients with thoracic tuberculosis with destruction of 1 or 2 segments of the vertebral body; patients with a mild kyphotic deformity (Cobb angle < 25°); patients with intermittent back pain caused by spinal instability; an unsatisfactory response to anti-tuberculosis treatment; patients without contraindications to anterior thoracotomy; and patients who had developed paraplegia or incomplete paralysis. Exclusion criteria were as follows: patients with 3 or more thoracic vertebral lesions; patients without paraplegia or incomplete paralysis; patients with severe kyphotic deformity (Cobb angle > 25°); patients with contraindications to anterior thoracotomy; and patients who could not tolerate one-lung ventilation. 2.2. The diagnosis criteria The diagnosis of spinal tuberculosis was guided by laboratory examinations (anemia, hypoproteinemia, T-SPOT, tuberculosis antibody, erythrocyte sedimentation rate [ESR], and C-reactive protein [CRP]), imaging (spinal X-ray films, computed tomography, and magnetic resonance imaging) and patients' symptoms (local pain and percussion pain accompanied by fever, night sweats, and neurological dysfunction). All diagnoses were confirmed by postoperative pathology examinations. 2.3. Preoperative procedure All patients received at least 2 to 4 weeks of first-line anti-tuberculous treatment (rifampicin 0.45 g, isoniazid 0.4 g, pyrazinamide 1.5 g, and ethambutol 0.75 g) before the operation. Supportive therapy and symptomatic treatment were provided during hospitalization. The doses of anti-TB drugs were appropriately increased in patients with tuberculosis in other parts of the body or in patients weighing more than 50 kg. For patients with paraplegia or incomplete paralysis, surgery was performed as early as possible. 2.4. Operative technique Informed consent for surgery was signed by the patients. Patients received tracheal intubation with a double-lumen tube and were placed in the left lateral decubitus position. The lung on the operative side was collapsed intraoperatively. A standard right anterior posterolateral surgical incision was performed. The skin, subcutaneous tissues, right latissimus dorsi muscle and pectoralis major muscle were dissected layer by layer. The upper rib corresponding to the diseased vertebral body on the right side was exposed, and a segment of rib was stripped, cut, and harvested as autograft. The chest was opened with a thoracotomy, the right lung was collapsed, and the right thoracic spine was exposed. On inspection, the right diseased vertebral body and its anterior fascia showed obvious swelling and abnormal color. The paravertebral abscess was aspirated with a syringe to obtain a culture specimen. The anterior fascia of the diseased vertebral body was incised longitudinally, and the segmental vessels were ligated.
Caseous (cheese-like) material, necrotic granulation tissue, dead bone fragments and other diseased tissues were completely removed, while normal vertebral bone was retained, and the compression of the spinal cord was completely relieved. The wound was washed repeatedly with normal saline, and streptomycin (1 g) was administered locally. Autogenous bone grafting with a titanium cage strut, combined with an anterior vertebral screw-plate internal fixation system, was used to restore the normal spinal curvature and reconstruct spinal stability (Figs. – ). A drainage tube was placed postoperatively, and the collected specimens were sent for culture and pathological examination. 2.5. Postoperative care After the operation, postural drainage was adopted, and complete lung expansion was confirmed by radiography. The drainage tube was removed when the 24-hour drainage volume was < 50 mL. Postoperative anti-TB treatment, anti-infection treatment, electrocardiogram monitoring and other comprehensive care were provided, and patients continued HREZ chemotherapy. Nutritional support was provided for patients with postoperative anemia, low serum albumin levels, or loss of appetite. Patients were allowed to get out of bed 2 weeks after the operation. After discharge, anti-TB therapy was maintained for 18 to 24 months. 2.6. Evaluation of clinical outcomes All patients were examined clinically and radiologically at 1, 3, 6 and 12 months after the operation and at the last follow-up. The ESR and CRP levels were important indexes used to evaluate the activity of short segment thoracic vertebral tuberculosis with paraplegia or incomplete paralysis. Preoperative and postoperative Frankel grade, kyphotic Cobb angle, and bony fusion were recorded to evaluate symptom changes (Table ). The kyphotic angle was defined according to the literature: on the lateral X-ray, it is the angle formed by 2 lines, one joining the antero-superior and postero-superior corners of the vertebra above the lesion and the other joining the antero-inferior and postero-inferior corners of the vertebra below the lesion (a coordinate-geometry sketch of this definition follows this section). Postoperative radiographs were used to assess the level of bony fusion according to the radiologic criteria of Bridwell. 2.7. Statistical analysis SPSS statistical software (IBM Corp.) was used to perform the statistical analysis. Differences between preoperative and postoperative indicators were analyzed using the independent-samples t test. P < .05 was considered statistically significant.
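A minimal sketch of the kyphotic-angle definition above, computing the angle between the two endplate lines from corner coordinates; the function and the (x, y) coordinates are hypothetical illustrations, not measurements from the study:

```python
import numpy as np

# Angle between the superior endplate line of the vertebra above the lesion
# (p1 -> p2) and the inferior endplate line of the vertebra below it (q1 -> q2).
def kyphotic_angle(p1, p2, q1, q2):
    u, v = np.subtract(p2, p1), np.subtract(q2, q1)
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical corner coordinates (x, y) digitised from a lateral radiograph:
print(f"{kyphotic_angle((0, 10), (30, 14), (2, -40), (32, -48)):.1f} deg")  # ~22.5 deg
```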
We confirmed that all methods were carried out in accordance with relevant guidelines and regulations. This study was approved by the Institutional Ethics Committee for Medical Scientific research of Xi’an Chest Hospital at April 7th, 2020 (Approval No.: 2020-S0022). Written informed consent were obtained from all patients. From September 2013 to March 2017, 16 patients with short segment thoracic vertebrae tuberculosis with paraplegia or incomplete paralysis who underwent surgery in our hospital were enrolled in the study. Inclusion criteria were as follows: patients with thoracic tuberculous with destruction of 1 or 2 segments of the vertebral body; patients with a mild kyphotic deformity (Cobb angle < 25º); patients with intermittent back pain caused by spinal instability; the effect of anti-tuberculosis treatment was not ideal; patients without the contraindications of anterior thoracotomy; patient who had developed paraplegia or incomplete paralysis. Exclusion criteria were as follows: patients with 3 or more thoracic vertebral lesions; patient who did not have paraplegia or incomplete paralysis; patients with severe kyphotic deformity (Cobb angle > 25º); patients with the contraindications of anterior thoracotomy; and patients who cannot tolerate 1 lung ventilation.
The diagnosis criteria of spinal tuberculosis was guided by the laboratory examinations (anemia, hypoproteinemia, T-spot, tuberculosis antibody, erythrocyte sedimentation rate [ESR], and C-reactive protein [CRP]), imaging (spinal X-ray films, computed tomography, and agentic resonance imaging) and patients’ symptoms (local pain and percussion pain accompanied with fever, night sweats, and neurological dysfunction). All diagnoses were confirmed by postoperative pathology examinations.
All patients received at least 2 to 4 weeks of first-line anti-tuberculous treatment (rifampicin 0.45 g, isoniazid 0.4 g, pyrazinamide 1.5 g, and ethambutol 0.75 g) before operation. Supporting therapy and symptomatic treatment were conducted when the patients were hospitalized. The doses of anti-TB drugs were appropriately increased in patients with tuberculosis in other parts of the body, or in patients weighing more than 50 kilograms. For patients with paraplegia or incomplete paralysis, surgery should be performed as early as possible.
Informed consent for surgery was signed by the patients. Patients who received tracheal intubation with a double lumen tube were in the left decubitus position. The side lobe was collapsed intraoperatively. A standard right anterior posterolateral surgical incision was performed. Skin and subcutaneous tissues and the right latissimus dorsa muscle and pectoralis major muscle were dissected layer by layer. The upper rib corresponding to the diseased vertebral body on the right side was exposed, and some of the ribs were stripped, cut, and snipped as autograft. The chest was opened with a thoracotomy, the right lung was collapsed, and the right thoracic spine was exposed. The right diseased vertebral body and anterior fascia were examined with obvious swelling and abnormal color. The paravertebral abscess was aspirated with a syringe as a culture specimen. The anterior fascia of the diseased vertebral body was cut longitudinally, and segmental vessels were ligated. Cheese-like substance, necrotic granulation tissues, dead bone particles and other lesions were completely removed, while normal vertebral bone tissues were retained. The compression of the spinal cord was completely relieved. The wound was washed repeatedly with normal saline, and streptomycin (1 g) was administered. Autogenous bone and graft fusion with a titanium cage strut combined with an anterior vertebral screw-plate internal fixation system were used to recover the normal spinal curvature and reconstruct the spine stability (Figs. – ). The drainage tube was placed postoperatively, and culture specimens were sent for pathological examination.
After the operation, the postural drainage was adopted, and complete lung expansion was confirmed by radiography. The drainage tube was pulled out when the 24-hour drainage flow was < 50 mL. Postoperative anti-TB, anti-infection treatment, electrocardiogram monitoring and other comprehensive treatment were provided. Patients continued to perform HREZ chemotherapy. Nutritional support was provided in patients with postoperative anemia, low serum albumin levels, or loss of appetite. Patients were required to get out of bed for 2 weeks after the operation. After discharge, anti-TB therapy was maintained for 18 to 24 months.
All patients were examined clinically and radiologically at 1, 3, 6 and 12 months after the operation and at the last follow-up. The levels of ESR and CRP were the important indexes to evaluate the activity of short segment thoracic vertebrae tuberculosis with paraplegia or incomplete paralysis. Preoperative and postoperative Frankel Grade, kyphotic Cobb angle, and bony fusion were recorded to evaluate the symptom changes (Table ). The definition of Kyphosis Angle was referred to the following literature. According to the lateral X-ray, the kyphotic angel was the angle formed by 2 lines obtained by joining the antero-superior and postero-superior corners of the above lesions, and the antero-inferior and postero-inferior corners of the vertebral below lesion. Postoperative radiographs were conducted to assess the bony fusion level using the radiologic criteria of Bridwell.
SPSS statistical software (IBM Corp.) was used to perform statistical analysis. The differences between the preoperative and postoperative indicators were analyzed using the independent sample t test. P < .05 was considered statistically significant.
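For readers who want to reproduce this style of pre/post comparison outside SPSS, a minimal Python sketch is shown below; the ESR values are illustrative, not the study data:

```python
from scipy import stats

esr_pre = [72, 95, 48, 110, 60, 85, 77, 64]   # hypothetical preoperative ESR (mm/h)
esr_post = [40, 55, 30, 70, 38, 52, 45, 41]   # hypothetical 4-week postoperative ESR (mm/h)

# Independent-sample t test, as reported in the paper; for paired pre/post
# measurements on the same patients, stats.ttest_rel is the usual alternative.
t_stat, p_value = stats.ttest_ind(esr_pre, esr_post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < .05
```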
3.1. Basic clinical characteristics
There were 7 males and 9 females, aged 23 to 74 years (mean 46.3 ± 14.5 years). Preoperative images showed vertebral body destruction, intervertebral space collapse, paravertebral abscess and intraspinal invasion. There was 1 case of thoracic 5/6 vertebral destruction, 2 cases of thoracic 6/7, 2 of thoracic 7/8, 4 of thoracic 8/9, 5 of thoracic 9/10 and 2 of thoracic 10/11. All patients successfully underwent the operation. During follow-up, paraplegia or incomplete paralysis improved significantly in all patients, and chest and back pain was alleviated or even disappeared within 1 to 6 months postoperatively. Patients were followed up for 24 to 48 months (mean 35.6 ± 9.6 months). There was no recurrence among the 16 patients. All achieved bony spinal fusion within 4 to 8 months after surgery, as assessed by spinal X-ray and/or computed tomography.
3.2. The levels of ESR and CRP were decreased after operation
The preoperative ESR and CRP levels in the 16 patients were 72.6 ± 27.5 mm/h and 75.7 ± 25.9 mg/L, respectively. They decreased significantly to 46.6 ± 24.1 mm/h and 41.6 ± 15.0 mg/L at 4 weeks after surgery, and further to 15.9 ± 4.6 mm/h and 4.7 ± 2.0 mg/L at the final follow-up (Table ).
3.3. Neurologic functions and symptoms were improved by the operation
Neurological symptoms in all patients manifested as complete paraplegia or incomplete paralysis, lower limb weakness, and chest sensory disturbance with numbness and paresthesia. Neurological function was evaluated by the Frankel classification, and all patients improved to varying degrees. Among patients with preoperative deficits, neurologic status recovered to normal in 3 patients with grade A, 4 with grade B, and 4 with grade C; and improved to grade D in 3 patients with grade A, 1 with grade B, and 1 with grade C. Moreover, the thoracic kyphosis angle improved from 15.0 ± 3.4° preoperatively to 9.1 ± 1.9° postoperatively. At the final follow-up, the loss of correction was only 0.6°, still a significant improvement over the preoperative measurements (Table ).
3.4. Complications
There were 3 cases of postoperative intercostal neuralgia, 4 of electrolyte disturbance, and 6 of anemia and hypoproteinemia; all were relieved after symptomatic treatment. No patient had pneumothorax or cerebrospinal fluid leak. Wounds healed without chronic infection or sinus formation, and there were no instrumentation-related complications. All patients had only a small amount of pleural effusion after surgery, which was absorbed by 3 months after the operation.
Vertebral body lesions caused by tuberculosis often lead to kyphosis, paravertebral abscesses, and even progressive neurological impairment. Neurological impairment is more common in thoracic tuberculosis than elsewhere in spinal tuberculosis because the spinal canal is narrower, the thoracic region has a physiological kyphosis, and biomechanical forces are greater in the thoracolumbar region. Godlwana et al reported that thoracic spinal tuberculosis accounts for 52% of all spinal tuberculosis cases with neurological deficits. Therefore, patients who met these characteristics were enrolled in this retrospective study. Our results indicated that Pott's paraplegia or incomplete paralysis was primarily caused by nerve compression from epidural abscesses, tuberculous debris, necrotic intervertebral discs, and caseous and granulomatous tissue. It has been reported that even when patients with spinal tuberculosis actively receive anti-tuberculosis treatment, 3% to 5% still show severe progression followed by paraplegia or incomplete paralysis, and surgical intervention is necessary for these patients. Various surgical techniques have been used to treat spinal tuberculosis, but there are few studies on short-segment thoracic tuberculosis with paraplegia or incomplete paralysis. We found the anterior approach well suited to patients with short-segment thoracic tuberculosis accompanied by paraplegia or incomplete paralysis, although several severe complications related to anterior surgical approaches to the thoracic spine have been reported. Anterior debridement and bone grafting allow surgeons to treat thoracic spinal tuberculosis with paraplegia or incomplete paralysis directly and thoroughly, and are more favorable for biomechanical reconstruction. Anterior internal fixation can maintain spinal stability and correction of kyphosis when tuberculosis invades short thoracic segments. The anterior approach also gives the surgeon direct access to the lesion and a more spacious, direct field of vision, simplifying the operative procedure. In our study, we performed 1-stage anterior focus debridement, interbody bone grafting, and anterior instrumentation and fusion for short-segment thoracic tuberculosis. The operations went smoothly, there were no serious postoperative complications, and all patients obtained satisfactory functional restoration and recovery over a follow-up of up to about 48 months. We consider the main indications for 1-stage anterior surgery to be: the damaged portion of the vertebra is located in the anterior and middle columns of the spine; an abscess or bone mass invades the anterior wall of the spinal canal, causing compression resulting in paraplegia or incomplete paralysis; intractable thoracic pain; laminectomy has been performed, precluding posterior bone graft fusion; and the number of damaged vertebrae is < 3. Patients who did not meet these conditions were excluded. Most spinal tuberculosis operations are reportedly performed with autologous rib, autologous iliac, or allograft fusion. However, ribs are inherently slender and do not provide adequate stability to the anterior column, because of plastic deformation and the small surface area of contact with the adjacent normal vertebral bodies.
Many recent studies have reported that titanium mesh cages offer reliable spinal reconstruction, high bone fusion rates, good maintenance of the sagittal profile, and few implant-related problems. In particular, Zhang et al reported that titanium mesh filled with autologous rib achieves satisfactory short-term results and offers several advantages. During the operation, we cut autogenous rib into particles and packed them into the titanium mesh, which was then implanted between the remaining thoracic vertebrae, providing immediate stability and good compression force. The larger load-bearing surface of the titanium mesh provides stable interfacial strength, and it is mechanically strong enough to prevent loss of height across the fused motion segment. In our study, all patients received a titanium mesh cage, and all achieved bone fusion. Anterior spinal surgery may carry the disadvantages of greater surgical trauma and more complications, including vascular and visceral injury and chylous leakage, and subsidence of titanium mesh cages has been reported in anterior column reconstruction after anterior spine surgery. However, none of these problems occurred during surgery or follow-up in our study. This may reflect our years of experience in thoracic surgery and our proficiency in open-chest procedures, as well as the limited extent of destruction and lesser injury in short-segment vertebral tuberculosis. Our results showed no serious postoperative complications, and no atelectasis or aggravation of pulmonary tuberculosis. All patients had only a small amount of pleural effusion after surgery, which was absorbed by the third postoperative month. There were 3 cases of postoperative intercostal neuralgia, 4 of electrolyte disturbance, and 6 of anemia and hypoproteinemia, all relieved after symptomatic treatment. There was no pneumothorax and no cerebrospinal fluid leak. Wounds healed without chronic infection or sinus formation. There were no instrumentation-related complications, which may reflect our experience as a specialist tuberculosis hospital. Nevertheless, cure of tuberculosis still relies on formal chemotherapy. The 16 patients in our study had sturdy implants and favorable outcomes: a titanium cage supplemented by autologous or allograft bone achieved satisfying results, and at the last follow-up all patients had recovered well without implant breakage or displacement or recurrence of kyphosis. All patients achieved bone fusion, relief from pain, and neurological recovery or significant improvement. This study also has several limitations. First, it is a retrospective study with a relatively short follow-up period, which may affect the reliability of the evaluation. Second, few eligible cases were included and large-sample clinical studies are lacking, which may introduce a degree of bias. In summary, 1-stage anterior focus debridement, interbody bone grafting, and anterior instrumentation and fusion are suitable and effective surgical treatments for short-segment thoracic tuberculosis complicated by paraplegia or incomplete paralysis. The surgical plan should be individualized according to the characteristics of the patient's thoracic vertebral lesions and specific condition.
Adequate systemic anti-tuberculosis treatment remains essential, and comprehensive measures must be taken to improve the cure rate of short-segment thoracic tuberculosis with paraplegia or incomplete paralysis. Further studies with larger numbers of cases and longer follow-up will be necessary.
We acknowledge Huijun Zhang, Zenghui Lu, Chao Ding and Lin Wei for their assistance with database collection.
Conceptualization: Zenghui Lu, Huijun Zhang.
Data curation: Zenghui Lu, Chao Ding, Lin Wei, Huijun Zhang.
Formal analysis: Chao Ding, Lin Wei, Huijun Zhang.
Investigation: Huijun Zhang.
Methodology: Zenghui Lu, Huijun Zhang.
Project administration: Huijun Zhang.
Resources: Huijun Zhang.
Writing – original draft: Zenghui Lu, Huijun Zhang.
Writing – review & editing: Zenghui Lu, Chao Ding, Lin Wei, Huijun Zhang.
|
Maternal interventions to decrease stillbirths and neonatal mortality in Tanzania: evidence from the 2017-18 cross-sectional Tanzania verbal and social autopsy study | 3b3655ef-db4a-4817-92e5-e474c958bdb9 | 10714492 | Forensic Medicine[mh] | Reducing neonatal mortality (NNM) remains the greatest challenge globally to achieving the United Nations child mortality Sustainable Development Goals by the target year 2030. The contribution of neonatal deaths to under-five mortality (U5M) increased from 40% in 1990 to 45% in 2019 due to relatively greater success in overcoming childhood infectious diseases than maternal and perinatal complications. Sub-Saharan Africa (SSA) in 2019 had the world's highest U5M and NNM rates, 75 and 27 deaths/1,000 live births, accounting for 53% and 42% of global under-five and neonatal deaths, respectively. Stillbirth also remains a severe problem in SSA, with the world's highest rate, 22/1,000 births in 2019, accounting for 44% of global stillbirths. Just over half (51%) of these deaths in SSA are estimated to occur intrapartum, the period from the onset of labor until delivery. Pregnancy and labor and delivery (L/D) complications are the most important risk factors for perinatal mortality (PNM), with care provided during L/D affording the greatest mortality reductions for neonates and prevention of stillbirths. Intrapartum care is most effectively delivered in a health facility, with higher-level facilities providing basic (BEmONC) and comprehensive emergency obstetric and newborn care (CEmONC) best positioned to contribute to maternal, perinatal and neonatal survival. Quality antenatal care (ANC) also plays an important role in reducing PNM and NNM, directly through provision of efficacious interventions, and indirectly by promoting institutional delivery and educating women on danger signs of pregnancy and where to go for a complication. In 2019 and 2018, respectively, Tanzania had the ninth and tenth highest numbers of stillbirths and neonatal deaths in the world. From 1990 to 2019 Tanzania reduced its U5M rate by more than two-thirds, from 165 to 50/1,000, but NNM decreased only by half, from 40 to 20/1,000. Tanzania has tracked NNM and PNM through periodic Demographic and Health Surveys (TDHS) since 2004. The PNM rate decreased from 42 to 36/1,000 between the 2004 and 2010 surveys, but then increased to 39/1,000 in the 2015-16 survey. The contribution of stillbirths to PNM remained stable over this period, at 44.4%, 47.8% and 46.6%, respectively. The TDHS does not distinguish antepartum from intrapartum stillbirth. While skilled attendance at birth and emergency obstetric care made the largest contribution (29%) to the reduction in NNM in Tanzania from 2000 to 2012, both the stagnant stillbirth rate and the insufficient decrease in NNM have been attributed to failures in accomplishing critical maternal and neonatal care objectives of the country's 2008-15 National Road Map Strategic Plan to Accelerate Reduction of Maternal, Newborn and Child Deaths (One Plan). In particular, inequities in health facility and cesarean (C-section) delivery (as a proxy for CEmONC) in rural areas and by socioeconomic status (SES) prevented achievement of the objectives. As documented by the country's 2016–2020 One Plan II program, by 2015 the country had achieved just 25% coverage against the targeted 70% BEmONC coverage at health centers and 73% against the targeted 100% CEmONC coverage at hospitals.
In addition, while 63% of deliveries took place in a health facility and 6% were by C-section, there was a 32% urban/rural gap and a 53% SES gap in facility deliveries, and 8% urban/rural and 14% SES gaps in C-sections. We conducted a national verbal and social autopsy (VASA) study of stillbirths and under-five deaths to estimate the causes and social determinants of the deaths, to provide evidence for the country to consider in developing its maternal, newborn and child health programming. We previously reported on the neonatal and 1-59-month-old causes of death (COD) and on preventive and curative indicators. The current analysis aims to differentiate antepartum from intrapartum stillbirth and assess the contribution of provider delay in conducting C-section to intrapartum stillbirth; to identify maternal complications associated with antepartum and intrapartum stillbirth and with leading causes of neonatal death; and to examine the impact of different aspects of ANC, and its interaction with complications, on hospital delivery; all to provide evidence needed to focus antenatal and intrapartum interventions at decreasing stillbirths and neonatal deaths.
The VASA study was conducted on the platform of the 2015-16 TDHS of 13,360 households. The TDHS included a lifetime birth history of all married women 15-49 years old to identify all live births and deaths, as well as specific questions on 'pregnancy terminations' that did not end in a live birth. The VASA study was conducted from mid-November through December 2017, with a follow-up round from January-February 2018 to locate respondents who had moved from their original location. Integrated VASA interviews were attempted for all 851 7-plus-months pregnancy terminations and neonatal (0–27 days) and 1-59-month-old deaths in the five years prior to their TDHS interview. Our prior publication provides details on the VASA questionnaire and study implementation.
Birth status, cause of neonatal death and maternal complications
The VASA interview first evaluated possible TDHS misclassification of 7-plus-months pregnancy terminations (TDHS stillbirths) and deaths of live-born children by asking about cardinal signs of life at birth not asked about by the TDHS. A child was considered stillborn if reported to have never cried, moved, or breathed. Live-born children were classified as a neonatal or 1-59-month-old death, depending on the VASA-reported age at death. Our previous VASA analysis directly estimated the neonatal COD discussed in the current paper using the expert algorithm method. Prior estimates of neonatal COD in Tanzania have utilized a multinomial logistic regression model with global VA data and national proximate covariates as inputs. An intrapartum stillbirth was defined as one in which the mother reported that the baby either did not stop moving before labor began, or last moved less than one hour before labor began or less than eight hours before delivery. We also examined how using a 12-hour cut-off and/or including report of no maceration (as an 'or' statement) might alter the antepartum and intrapartum proportions and their apparent misclassification. Because the utilized VA questionnaire does not include a question on the time before delivery the mother last felt the baby move, this was determined (for women whose babies stopped moving before labor began) by summing the time before labor began that the baby last moved and the labor duration. The intrapartum category also excludes stillbirths with severe congenital abnormalities, since it is surmised that such deaths were not due to complications of the birth process. We defined pregnancy (before labor onset) and L/D complications using algorithms of illness signs and symptoms (panel: definitions of maternal complications). For the logistic regression analyses of delivery place and mode described below, 'any complication' was defined, respectively, as having one or more pregnancy complications or one or more L/D complications that started before reaching a delivery facility, and as having one or more pregnancy or L/D complications.
Statistical analyses
The CSPro data collected on netbooks were converted to SAS 9.4 and STATA 16.0 datasets for analysis. Following determination of stillbirths and neonatal deaths, all subsequent analyses were conducted on data weighted and design-corrected based on the TDHS multi-stage sampling design. The same was true for the neonatal COD identified by the earlier paper, now utilized to examine associations between maternal complications and neonatal COD. For simplicity of presentation, the weighted and survey design-corrected fractional frequencies were rounded up to the next higher level.
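Before turning to the models, note that the antepartum/intrapartum classification rule described in the Birth status subsection above reduces to a short decision procedure. A minimal sketch, with hypothetical field names (not the questionnaire's), times in hours, and None marking an unreported item:

```python
def classify_stillbirth(stopped_moving_before_labor, hours_last_moved_before_labor,
                        labor_duration_hours, severe_congenital_abnormality):
    # Stillbirths with severe congenital abnormalities are excluded from the
    # intrapartum category (treated as antepartum here for simplicity; the
    # paper only states they were excluded from intrapartum).
    if severe_congenital_abnormality:
        return "antepartum"
    # Baby still moving when labor began -> intrapartum.
    if not stopped_moving_before_labor:
        return "intrapartum"
    if hours_last_moved_before_labor is None or labor_duration_hours is None:
        return "unclassifiable"
    # Time before delivery = time before labor onset + labor duration.
    hours_before_delivery = hours_last_moved_before_labor + labor_duration_hours
    if hours_last_moved_before_labor < 1 or hours_before_delivery < 8:
        return "intrapartum"
    return "antepartum"

print(classify_stillbirth(True, 0.5, 6, False))   # -> intrapartum (moved <1 h before labor)
print(classify_stillbirth(True, 3, 12, False))    # -> antepartum (15 h before delivery)
```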
Descriptive statistics included frequency distributions. Tests of association included odds ratios (OR) and adjusted odds ratios (aOR) with 95% confidence intervals, and Pearson or Rao-Scott chi-square tests. We examined the association of pregnancy and L/D complications with known causes of antepartum and intrapartum stillbirth, and separately for three main causes of neonatal death: preterm delivery, intrapartum-related events (birth asphyxia, birth injury), and serious neonatal infection. These were also the leading causes in our study population. We conducted descriptive analyses of ANC coverage and mothers' careseeking for pregnancy and L/D complications, as previously described. We developed logistic regression models to examine the independent associations of having had 'any complication' (yes/no) and having received ANC (yes/no), and of their interaction, with hospital delivery, separately for stillbirths and neonatal deaths. Models were developed to examine different aspects of ANC, including four or more visits (ANC4+); 'quality ANC' (Q-ANC), consisting of six recommended interventions (blood pressure measurement; urine and blood sample tests; and counseling on proper nutrition, pregnancy danger signs, and where to go for any complication) over the course of all visits; counseling on danger signs and where to go, without necessarily receiving Q-ANC (DS-ANC); and receiving only one or more of the other four interventions (O-ANC). Potential confounders included in all models were residence (urban/rural), mother's formal education level (none/some primary/some secondary or higher), and travel time to the nearest health facility in an emergency (<30 minutes/≥30 minutes). Lastly, we conducted analyses to assess whether a delay in conducting C-section might have contributed to antepartum or intrapartum stillbirth or neonatal death. For each of these outcomes we examined labor duration and, by logistic regression, the association of hours before and after reaching the birth attendant with cesarean vs. vaginal delivery, adjusted for the presence of 'any complication' (yes/no). Poisson regression was used to estimate the relative difference (RD) in labor duration between cesarean and vaginal deliveries; we used Poisson regression as an alternative to linear regression on the assumption that long delivery times are more variable than short ones.
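A minimal illustration of the two model families just described, fitted with Python's statsmodels; the data frame is synthetic and all variable names are ours, not the study's (and the sketch omits the survey weighting and design correction applied in the actual analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "complication": rng.integers(0, 2, n),   # any complication (yes/no)
    "anc4plus":     rng.integers(0, 2, n),   # four or more ANC visits
    "urban":        rng.integers(0, 2, n),   # confounder: urban residence
    "csection":     rng.integers(0, 2, n),   # delivery mode
})
# Simulate hospital delivery with an interaction effect, and labor duration
# that is longer on average for cesarean deliveries.
logit_p = -1 + 0.2*df.complication + 0.6*df.anc4plus + 1.0*df.urban \
          + 0.8*df.complication*df.anc4plus
df["hospital"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df["labor_hours"] = rng.poisson(10 + 6*df.csection)

# Logistic regression with the complication x ANC4+ interaction.
logit = smf.logit("hospital ~ complication * anc4plus + urban", data=df).fit(disp=False)
print(np.exp(logit.params).round(2))                 # adjusted odds ratios

# Poisson regression: exponentiated coefficient = relative difference (RD)
# in labor duration, cesarean vs. vaginal.
poisson = smf.poisson("labor_hours ~ csection", data=df).fit(disp=False)
print(np.exp(poisson.params["csection"]).round(2))
```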
Of 851 TDHS-identified stillbirths and under-5-years deaths, 783 (92.0%) had a VASA interview completed. Additional file: Table S1 shows the VASA and TDHS classifications of all 783 deaths. The current analysis is of the 204 stillbirths and 228 neonatal deaths identified by the VASA. Most VASA respondents for stillbirths (90.3%) and neonatal deaths (92.5%) were the deceased's mother. The recall period between the dates of death and interview varied from 1 to 7 years (median 4, IQR 3, 5) both for stillbirths and neonates.
Demographic characteristics
Pregnancy duration of stillbirths and neonatal deaths was similar (Table ). Nearly half of the neonates died within 24 h of delivery, and nine-tenths in the first week. There was a male predominance both of stillbirths and neonatal deaths. Mothers' mean age and years of schooling for stillbirths and neonatal deaths were similar, and residence for both was mainly rural.
Antepartum and intrapartum stillbirth
Sufficient information to classify fetal deaths as antepartum or intrapartum, based solely on mothers' reports of fetal movement, was available for 185/204 (90.7%) stillbirths (Table ). The distribution of these 185 by pregnancy duration was similar to that for all 204 stillbirths (full-term: 67.4%). However, more intrapartum than antepartum stillbirths were products of full-term pregnancies, and intrapartum stillbirths had significantly longer pregnancy duration. Including mothers' reports of maceration in the VA definitions of intrapartum and antepartum stillbirth resulted in apparent overdiagnosis of intrapartum stillbirth, with excessively long reported duration of no fetal movement before delivery without maceration (Additional file: Tables S2 and S3).
Maternal complications
Several pregnancy complications trended in the expected direction of being positively associated with antepartum stillbirth, but only maternal infection and preeclampsia/eclampsia approached statistical significance. While some L/D complications were positively associated with intrapartum stillbirth, the associations were weak (Additional file: Table S4). Table shows the association of maternal complications with three major causes of early NNM. Antepartum hemorrhage (APH), maternal anemia, and premature rupture of membranes (PROM) were significantly positively associated with early NNM due to preterm delivery, intrapartum-related events, and serious infection, respectively. Additional file: Tables S5 and S6 show that APH was also significantly associated with preterm delivery among all 228 neonatal deaths and among the 129 early-onset (age 0–1 day) deaths.
Maternal care: antenatal care
While 93% and 96% of women with a stillbirth and neonatal death, respectively, made at least one ANC visit, only 55% and 69% achieved ANC4+ and just 19% and 35% received Q-ANC. As seen in Table , in general, mothers who delivered in urban areas and in hospitals had relatively higher coverage. Mothers of stillbirths and neonatal deaths who achieved ANC4+ were, respectively, nearly three times (OR = 2.72, 95% CI 1.24, 6.00, p = 0.014) and twice (OR = 2.18, 95% CI 0.93, 5.06, p = 0.074) as likely to receive Q-ANC as their counterparts with fewer than four visits.
Maternal care: careseeking for complications
Somewhat more mothers of stillbirths (61/204 [29.9%]) than neonatal deaths (47/228 [20.8%]) had a pregnancy complication. However, women with a neonatal death sought health care for these complications significantly more often (43/47 [91.6%] vs.
36/61 [58.4%], χ2 = 21.34, p < 0.001), with the difference driven by careseeking for APH (19/19 [100%] vs. 18/27 [69.0%], χ2 = 26.68, p < 0.001). Nearly half the mothers of stillbirths (101/204 [49.4%]) and neonatal deaths (100/228 [43.8%]) had an L/D complication that began before reaching a delivery facility, with no differences in careseeking.
Maternal care: delivery place and mode
When adjusted by logistic regression, urban residence was strongly predictive of hospital delivery both for neonates and stillbirths, while achieving ANC4+ increased hospital delivery for stillbirths but not neonates, and having any complication did not increase hospital delivery for neonates or stillbirths (Additional file: Tables S7 and S8). Nevertheless, having any complication and achieving ANC4+ interacted to more than quadruple hospital delivery of neonates compared to women without a complication and fewer than four ANC visits (Fig. a, Additional file: Table S7). Women with any complication who received Q-ANC or DS-ANC were even more likely to deliver in hospital, while having any complication and receiving O-ANC had no effect on hospital delivery. This differed for stillbirths, for whom both having any complication and receiving any of the three ANC types, vs. having neither a complication nor any of the ANC types, did not increase hospital delivery, while having any complication without ANC4+ decreased hospital delivery by four-fifths (Fig. b, Additional file: Table S8). Not depicted in Fig. b is that women with a stillbirth who had any complication and achieved ANC4+ were far more likely to deliver in hospital than women with a complication who made fewer than four visits (Additional file: Table S8). Women with a neonatal death who had any complication similarly had increased hospital delivery if they received Q-ANC or DS-ANC, compared to women with any complication who did not receive these ANC types (Additional file: Table S7). Fifty-eight (47%) of the 123 women with a stillbirth and any complication reported a careseeking constraint, most commonly the cost of transportation or health care (20%), thinking they did not need care (17%), lack of transportation (12%), and distance (10%). Twelve (20%) of these 58 delivered at hospital, compared to 37 (57%) of the 66 women without a constraint (OR = 0.19, 95% CI 0.08, 0.44, p < 0.001). Sixty-one other women with a stillbirth had one or more symptoms, such as blurred vision and fever, that did not meet the criteria for an obstetric complication. Three (44%) of the eight such women with a careseeking constraint delivered at hospital, vs. 25 (48%) of the 53 without a constraint (OR = 0.85, 95% CI 0.12, 5.81, p = 0.864). Sixteen percent (32) of stillbirths and 10% (23) of neonatal deaths were delivered by C-section, all at hospital except one neonate. Among hospital deliveries, labor duration of neonates delivered by C-section (median 9.0 h, IQR 4.5, 19.5) and vaginally (median 9.0, IQR 3.0, 16.0) was similar (RD 1.34, 95% CI 0.77, 2.34, p = 0.298), while labor duration of intrapartum stillbirths delivered by C-section (median 16.5, IQR 5.5, 42.5) was prolonged vs. that of vaginal deliveries (median 10.0, IQR 5.0, 18.0) (RD 2.51, 95% CI 1.30, 4.86, p = 0.007). The time after reaching the birth attendant contributed half of the total labor duration of women with an intrapartum stillbirth (median 0.50, IQR 0.11, 0.84).
After adjusting for hours of labor before reaching the birth attendant and presence of any complication, women with an intrapartum stillbirth were 6.5% (aOR = 1.065, 95% CI 1.002, 1.132, p = 0.044) more likely to have a C-section for every additional hour before delivery after reaching the attendant. Adjusted for the labor duration phases, having any complication conferred no risk (aOR = 0.963, 95% CI 0.049, 18.778, p = 0.980).
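Because the reported effect is a per-hour odds ratio, it compounds multiplicatively over the length of the delay. A one-line illustration with the study's point estimate (the 8-hour horizon is ours, chosen only for illustration):

```python
# Compounding the reported per-hour adjusted odds ratio over an 8-hour delay:
aor_per_hour = 1.065
print(round(aor_per_hour ** 8, 2))  # -> 1.66, i.e., ~66% higher odds of C-section
```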
Stillbirth and neonatal death represent a major public health problem in many low- and lower-middle-income countries, with economic, social, and health implications for families and society. Continued high mortality levels in Tanzania have been attributed to insufficient coverage and inequitable provision of BEmONC and CEmONC services in rural areas and by SES. The current paper, based on analyses of the 2017-18 Tanzania VASA study, provides additional evidence that the country can apply in its effort to improve maternal and newborn health programming. For the first time in Tanzania, we directly estimated the national proportion of all stillbirths that are intrapartum. The level determined, 52.5%, closely agrees with a prior 51.1% indirect estimate for SSA. We based our determination solely on mothers' reports of fetal movement and found that including reports of maceration resulted in apparent overdiagnosis of intrapartum stillbirth, with prolonged lack of fetal movement without maceration. 'No maceration' has often been included as a VA criterion of intrapartum stillbirth. However, pathology and VA studies have found poor agreement, respectively, between health providers' assessment of maceration and time since fetal death, and between mothers' reports of maceration and fetal movement. Studies have also found high levels and variability of respondent uncertainty regarding the presence of maceration, as compared to the clarity and consistency of reports of fetal movement, as well as up to a 4.5-fold risk of (antepartum) stillbirth in women who report decreased fetal movement before labor onset. Other VA studies have similarly given preference to mothers' reports of fetal movement in distinguishing antepartum from intrapartum stillbirth. That significantly more intrapartum than antepartum stillbirths were full term strengthens the certainty of our definition, since many intrapartum stillbirths are expected to be full-term fetuses dying from intrapartum-related events. The importance of this finding is the possibility of decreasing intrapartum stillbirths, which are of longer gestation and have a greater chance of survival through early detection of fetal distress, conduct of C-section, and newborn resuscitation. Our finding for intrapartum stillbirths of a median 16.5-hour labor duration, with the period from reaching the birth attendant until delivery significantly associated with C-section, suggests that delay in conducting C-section contributed to the deaths. Inadequate availability of general anesthesia equipment has been identified as the main roadblock to timely conduct of C-section in Tanzania. The positive associations identified between APH, maternal anemia, and PROM and, respectively, preterm delivery, intrapartum-related events, and serious neonatal infection can provide guidance in strengthening Tanzania's 2016–2020 One Plan II and 2021/22-2025/26 One Plan III updates of its 2008–2015 One Plan program. Utilizing this information for evidence-based quality improvement of service delivery through clinical mentorship and supportive supervision, especially in low-performing regions, fits well with the One Plan II and III implementation strategies and guiding principles. Although maternal infection is thought to cause up to 40% of spontaneous preterm births without PROM, and subsequent neonatal morbidity and mortality, we did not find an association between maternal infection and early NNM due to preterm delivery.
Vertically transmitted maternal infection can also cause early-onset neonatal sepsis, but we did not find this association among 195 early neonatal deaths, nor among all 228 neonatal deaths or the 129 early-onset deaths (data not shown). This may be because intrauterine infection causing preterm birth and neonatal sepsis is often asymptomatic, and because other maternal genitourinary infections implicated in preterm birth, including bacterial vaginosis and asymptomatic bacteriuria, are not detected by our VA algorithms. The gaps in ANC4+ coverage among mothers of stillbirths (45%) and neonatal deaths (31%) represent an improvement over the 2010 TDHS's 57% gap for all pregnant women. This could be due to health sector reforms undertaken by Tanzania during the last decade to expand access to health services. However, continued concerns about quality and urban/rural disparities in access to delivery services temper this conclusion. The positive association between ANC4+ and Q-ANC, and the fact that only one-fifth to one-third of women received Q-ANC, highlight the country's need to further strengthen ANC quality, access and coverage. Women with a neonatal death and any complication were no more likely to deliver in hospital than women without a complication unless they had achieved ANC4+ or received Q-ANC or DS-ANC. These findings were less clear for women with a stillbirth and any complication. Like women with a neonatal death and a complication, they were more likely to deliver in hospital if they achieved ANC4+ than if not. However, without ANC4+ they were only one-fifth as likely to deliver in hospital as women with neither a complication nor ANC4+, and they were also not more likely to deliver in hospital if they received Q-ANC or DS-ANC. This differs from findings in Ghana, which suggested that Q-ANC decreased stillbirth by promoting health facility delivery. The tendency toward non-hospital delivery among women with a stillbirth and any complication might be explained by their higher reported level of careseeking constraints and lower level of careseeking in the face of a constraint. Aligned with the Tanzanian Countdown Study finding of a large urban/rural disparity in the proportion of births conducted in health facilities, the VASA study found that urban residence was the strongest predictor of hospital delivery both for neonatal deaths and stillbirths. The Countdown Study found inequity in coverage and quality of delivery services in rural areas to be a factor in Tanzania's slow decline in PNM and NNM. However, both indicators remain higher in urban (PNM: 47/1,000; NNM: 63/1,000) than rural (PNM: 37/1,000; NNM: 47/1,000) Tanzania, with only inconclusive explanations of why this is so. A prospective cohort study of pregnant women in rural Tanzania found more L/D complications among women who delivered at a facility than at home and, when controlled for complications, PNM was higher among facility births. The authors attributed their findings to the need for improved training in recognizing and managing complications and for supplying facilities with essential drugs and equipment. It is reasonable to hypothesize that a similar situation pertains in urban areas, with more women with complications, encouraged by ANC4+ and Q-ANC, delivering in facilities ill-equipped to manage their complications. In such a scenario, high levels of facility delivery in urban areas might even contribute to their higher PNM and NNM levels.
Further study of the quality of care provided in urban delivery facilities, beyond the scope of the VASA study, is needed to assess this hypothesis.
Limitations
VASA study limitations have been discussed elsewhere. Verbal autopsy diagnoses, while currently the most accurate possible at population level in low- and lower-middle-income countries, are not as accurate as medical diagnoses with direct measurement. This could result in some inaccuracy in our assessments of association between maternal complications and causes of neonatal death. Also, our interview-based measure of some Q-ANC components may overestimate true quality, since we were not able to determine whether health care workers acted on abnormal findings, for example, of blood pressure or hematocrit. There could be recall bias due to the recall period of 1–7 years. Most respondents were the deceased's mother, who may have provided socially desirable answers to sensitive questions. For example, this might have contributed to the higher reported level of careseeking constraints among women with a stillbirth and any complication who did not seek care. This concern is moderated by the finding that women with similar symptoms that did not qualify as a complication were as likely to deliver at hospital whether or not they reported careseeking problems. Our separate analyses and somewhat different findings for stillbirths and neonatal deaths rest on the ability of VA to distinguish these birth outcomes. Asking about vital signs present at birth, the method used by our VASA study, is assumed to be superior to the usual survey methods of asking a full birth history or full pregnancy history. A comparison of the full birth history and full pregnancy history methods, in which the full pregnancy history asked about signs of life only for babies reported to be born dead, found that the full pregnancy history identified more stillbirths but did not decrease misclassification between stillbirths and early neonatal deaths. However, the VASA study asked about signs of life for babies reported to be born alive as well as dead, and so might be expected to perform better in this regard. The statistical power of some of our analyses was restricted by sample size. The positive associations of maternal complications with neonatal causes of death are based on few cases, yet nevertheless yielded significant findings. Additional file: Tables S7 and S8 show the n/N of women in each ANC category who delivered at hospital to enable the reader to consider the statistical power. Inclusion of a control group would have enabled assessment of differences in ANC coverage, the level of complications, and hospital delivery between mothers of cases (stillbirths or neonatal deaths) and controls (surviving neonates). However, the lack of a comparison group is common in VASA studies and less critical, since they examine interventions with proven effectiveness against NNM that should be accessible to all pregnant women and newborns.
While our study demonstrated the ability of Q-ANC and ANC4+ to increase hospital delivery by women with complications, urban residence was the strongest predictor of hospital delivery, and the quality of delivery and neonatal care provided by facilities in all areas is clearly as important as coverage. The VASA study identified complications significantly associated with leading causes of NNM in Tanzania and demonstrated that intrapartum stillbirths were most often full term and likely contributed to by provider delay in conducting C-sections. This information can be used to help focus training of personnel and the appropriate supplying and equipping of facilities. Pregnancy complications were highly prevalent among mothers of stillbirths and neonatal deaths in the VASA study, and only a small minority of women received Q-ANC. Increased coverage of ANC4+ and Q-ANC, especially of WHO's focused ANC model adopted by Tanzania in 2002, which includes detection, management and, when necessary, referral to specialty care of women with complications, could also contribute to decreasing perinatal and neonatal mortality. Our analysis also suggests that, within the context of a VA- or survey-based evaluation, maternal assessment of fetal movement, without consideration of maceration, is the more reliable means of distinguishing intrapartum from antepartum stillbirth.
Additional file 1:
Table S1. 2015/16 TDHS and 2017/18 Tanzania VASA study classification of status at birth of 783 stillbirths, neonatal and child deaths from 08/2011 to 02/2016.
Table S2. Intrapartum and antepartum stillbirths defined with and without mothers' reports of fetal maceration in relation to fetal movement less than 8 hours before delivery or before the onset of labor, Tanzania, 08/2011 to 02/2016.
Table S3. Intrapartum and antepartum stillbirths defined with and without mothers' reports of fetal maceration in relation to fetal movement less than 12 hours before delivery or before the onset of labor, Tanzania, 08/2011 to 02/2016.
Table S4. Association of maternal complications with 185 intrapartum and antepartum stillbirths, Tanzania, 08/2011 to 02/2016.
Table S5. Association of selected maternal complications with three main causes of 228 neonatal (days 0-27) deaths, Tanzania, 08/2011 to 02/2016.
Table S6. Association of selected maternal complications with three main causes of 129 early-onset (days 0-1) neonatal deaths, Tanzania, 08/2011 to 02/2016.
Table S7. Logistic regression model of the independent effects of four or more antenatal care visits and one or more maternal complications on hospital delivery of neonates that died; and models that include the same potential confounders, showing the effect of the interaction of different aspects of antenatal care and complications on hospital delivery.
Table S8. Logistic regression model of the independent effects of four or more antenatal care visits and one or more maternal complications on hospital delivery of stillbirths; and models that include the same potential confounders, showing the effect of the interaction of different aspects of antenatal care and complications on hospital delivery.
Colonization, cadavers, and color: Considering decolonization of anatomy curricula

CONTEXT

In this paper on decolonization of anatomy curricula we set the scene, including the historical influence of colonialism on anatomy curricula, and present the challenges associated with, and the opportunities for, decolonization of the anatomy curriculum. Decolonization is not a single event or act. It is a long-term process and commitment. In that vein, this article serves as a starter to the conversation on decolonization of anatomy education as a discipline. Decolonization is messy, it is uncomfortable, it is iterative, and it will require personal, cultural, and institutional commitment, reflection, critical thinking, and action. Above all, it is an opportunity—an opportunity to create a diverse and inclusive learning environment. Our discussion is frank; not all those interested in decolonization are starting from the same place. We have witnessed, even contributed to, significant advances in the sphere of equality, diversity, and inclusion. As you will recognize, difficult conversations were had, past wrongs were acknowledged, and the opportunity for change embraced. We start this article with candor, noting where we come from in order to move forward. This paper is not a literature review for two reasons. Firstly, there is a paucity of literature due to the contemporary nature of decolonization in the remit of anatomy education, and seminal papers within this specialism are yet to be written. There is also potential for publication bias, with Western perspectives dominating (Ekmekci, ; Mulimani, ). Secondly, this is an opportunity to adopt a reflective stance and to empower those in anatomy education to start conversations, even those causing discomfort and unease.

1.1 The current landscape in anatomy and higher education

Historical events and, more recently, events in 2020, have considerably changed the educational landscape, and irrevocably at that (Finn, Quinn, et al., ). The protests and reaffirmation of Black Lives Matter (BLM) activism that followed the murder of George Floyd while in police custody brought into sharp focus pre-existing societal divisions in Western societies. Discrimination and inequalities across a range of contexts (e.g., education, health, criminal justice), especially as experienced by black people, entered the realm of mainstream discussion. As a result, there have been demands for redress and rebalancing across the board. In education, this has taken the form of renewed calls for decolonization of the curriculum. Decolonization, it could be argued, is a form of making curricula inclusive. In the sphere of anatomy education, it involves an acknowledgment of the messy, yet unchangeable, past, where bodies were acquired for dissection in ways that would be wholly unacceptable in modern Western societies. It involves recognition of injustices committed against minorities for the advancement of science, and a redressing of this balance in the form of increasing the visibility and value of these minorities previously utilized without consent or without a face. It has been described in broader contexts of higher education (Jansen, ; Jansen & Osterhammel, ), but less in anatomy (Finn, Quinn, et al., ). An inclusive curriculum is universal and intended to improve the experience, skills and attainment of all students, including those in protected characteristic groups.
It aims to ensure that the principles of inclusivity are embedded within all aspects of the academic cycle. (AdvanceHE, )

Anatomy, like many other disciplines, has a history steeped in colonialism and colonial practices that, with a retrospective lens, are unacceptable today to both the science community and the general public. History, though, cannot be undone; it must be acknowledged. For example, anatomy stems from grave robbing, vivisection, dissection of the poor, criminals and the wounded, and Nazi experiments (Finn, Quinn, et al., ). As a discipline, we must start a process of critical reflection on our past and identify actions and opportunities, through this reflective process, to make our educational space as inclusive as possible. Working toward decolonization of curricula is one way that this can be achieved.

1.2 Definitions

Anatomical variation (interchangeable with anatomical differences): an inter-individual difference between anatomical structures; variations are not abnormalities and are considered normal, as they are found consistently among different individuals and are generally asymptomatic.

Antiracism: policies or practices opposing racism and promoting racial tolerance.

Color: or skin color—the visible pigmentation of the skin, primarily used in this context as an indication of someone's race.

Color-line: social, economic or political barriers that persist between different racial groups. Popularized by Du Bois, it has been expanded to include discrimination beyond color discrimination.

Decolonization: the process of undoing practices perceived to be related to the colonial past. Within the educational context, confronting and challenging the colonizing practices that have influenced education in the past but which persist in educational practice today.

Equality, diversity, inclusion (EDI): the umbrella term under which policies and processes relating to fair treatment and opportunity for all sit, with the aim of eradicating prejudice and discrimination relating to an individual's or group of individuals' protected characteristics.

Ethnicity: differences between people mostly on the basis of language and shared culture.

Race: the historic major groupings into which people have been divided on the basis of physical characteristics or shared ancestry, with perceived qualities or characteristics associated with the particular grouping; today, also considered a mixture of behavioral, cultural, and physical attributes.

Racism: discrimination, prejudice or antagonism toward an individual or group of individuals based on the belief that different races possess characteristics, abilities, or qualities that render them inferior.

Representation: the portrayal of an individual or group of individuals in a particular way.

Social justice: justice pertaining to the unequal distribution of societal wealth, opportunities and privileges.

Woke: alertness and action in response to perceived societal injustices, associated with ideas involving identity and race promoted by progressives, for example, "white privilege" or reparations for indigenous or enslaved populations.

1.3 What is decolonization and why does it matter?

To understand decolonization, we must first acknowledge colonization. Colonization is the practice of settlers occupying another country, acquiring political control, either partially or fully, and exploiting the country economically.
To move forward, one must be aware that settler colonialism has impacted upon the organization, governance, curricula, and assessment of compulsory learning. It is these dated settler perspectives that have counted as knowledge, and through the perpetuation of such perspectives, unfair social structures are rationalized and maintained (Tran, ; Tuck & Yang, ). Moritz Julius Bonn, a German economist, first coined the word "decolonization" to describe former colonies that achieved self-governance (Bonn, ). With respect to curricula, decolonization refers to the creation of spaces and resources for a dialogue among all members of a Higher Education Institution (HEI) on how to envision all cultures and knowledge-systems in the curriculum, and to do this with respect to what is being taught and how it frames the world (Charles, ). In light of the BLM movement and the reverberation of calls for change, HEIs in many parts of the world with diverse populations have been compelled to rethink their policies and, consequently, review their teaching delivery, assessment, curricula, and physical environments. Jansen and Osterhammel ( ) considered decolonization to be "a technical and rather undramatic term for one of the most dramatic processes in modern history: the disappearance of empire as a political form, and the end of racial hierarchy as a widely accepted political ideology and structuring principle of world order" (p. 1).

Decolonising the curriculum refers to the creation of spaces and resources for a dialogue among all members of the education community on how to imagine and envision all cultures and knowledge systems in the curriculum. This is with respect to what is being taught and how it frames the world, all the time questioning from whose viewpoint the information is coming. (Keele University, )

Decolonizing curricula goes beyond inclusivity and diversity. Many believe the latter suggests merely an incorporation of "outside" perspectives, rather than the more radical interrogation of knowledge and whose interests it serves that is characteristic of the decolonization agenda. Addressing power differentials is at the heart of decolonizing education—with this lens, we re-look at and re-develop curricula to show and serve the interests of diverse learners. Many may question the applicability to anatomy education. Uncertainty may be due to an unwillingness to be drawn into the bruising political fray, which sometimes appears to be framed as a zero-sum game between the forces of "wokeness" and conservatism. Resistance may be due to the sentiment, "If it ain't broke, don't fix it," with the accompanying belief that there is a clamor for change that is neither meaningful nor relevant to teaching and learning around scientific facts. A further consideration is ensuring that "decolonization" is not dismissed as a buzzword that has the potential to lose currency (Charles, ), or perceived as merely a metaphor; Tuck and Yang ( ) argue that the increasing number of calls for decolonization within educational advocacy and scholarship has resulted in exactly this metaphorization (Tuck & Yang, ).

Decolonizing is about considering multiple perspectives and making space to think carefully about what to value.
(Ferguson et al., )

Given the starkness of societal inequalities for people from black and other minoritized ethnic communities, the paper focuses on decolonization with respect to race (or the idea that people can be categorized on the basis of certain noticeable physical characteristics or their shared ancestry). However, there is also a recognition that inequality and marginalization are intersectional issues—race serves here as an exemplar. More than that though, anatomy education may have particular difficulties embracing teaching and learning around racial differences, which is also discussed. We have striven to convey some of the considerations around decolonization by balancing the wider societal debates with anecdotal experience gleaned from working within the sector, and have aimed to punctuate the predominantly academic discussion with worked examples and cases that will hopefully provide a helpful platform for other educators starting the process of reimagining their own curricula. Whether where we get to counts as decolonization is a question in itself, but we hope that this paper serves as a helpful entry into what is going to be an ongoing process of dialogue and working out what is appropriate, essential, and recommended in the discipline.

We talk a lot about color (i.e., in this context the range of hues visible as skin tone, used commonly as an indication of someone's race), but let us be clear: diversifying cadavers is not decolonization. The latter requires an analysis of power and knowledge production and of how certain communities are underserved by the way anatomy teaching is set up. At the same time, with anatomic representation that is currently so white and black, we would argue that diversifying the range of colors is part of the wider decolonizing effort (e.g., see work by Mbaki, Todorova, & Hagan, ; Mbaki et al., ). Room has to be made for the blacks, browns, whites and everything in between if all members of the education community are really to find a place for themselves in the discipline, quite apart from the fact that greater recognition of the various manifestations of pathology has direct clinical implications (Mukwende et al., ). So, in sum, and as a useful framing for those starting out on the decolonizing journey: decolonization is not a case of black or white. It is black, brown, white, and everything in between. In essence (and within the context of anatomy), it is giving color its rightful place within the curriculum. It is acknowledging indigenous populations (Pitama et al., ), and the diversity in society, health, and healthcare.

The aforementioned health inequalities are not the only reason that decolonization of anatomy curricula matters. Anatomists are typically training future health workforces, workers who need to deal with a diverse patient population, appreciate racial morphology (where it has an impact on patient outcomes) and, more broadly, develop a sensitivity to the diversity of the pluralistic modern societies we live and work in. Having an understanding of historical "wrongs" and the attempts to redress these issues in education matters, perhaps not so much from the patient outcome perspective but even more so for educationalists and healthcare workers who are vested in the interests of society as a whole and willing to deal sensitively with all the colors and hues encountered in the business of day-to-day work-life.
ANATOMY IN THE CONTEXT OF A DECOLONIZED OR REIMAGINED CURRICULUM

When thinking about decolonization of the anatomy curriculum, there are a couple of common misconceptions relating to diversity in anatomy that are deep-seated and difficult to change. These are (a) that the human body is identical in all humans, or (b) that variation in skin color and ethnicity is a representation of other, more profound, racial variations that underlie the skin (Cunningham, ). In fact, within the social and biological sciences, there is widespread consensus that race is a social construct and not an anatomical "truth" or "attribute"; classifications of race are often based solely on the color of one's skin, rather than on the 99.9% of the genome that all humans share (Chou, ). Such misconceptions are unhelpful to a discussion on the decolonization of the anatomy curriculum. Decolonizing anatomy education curricula will entail addressing the following challenges: (a) underrepresentation of certain bodies, (b) difficulty talking about difference, and (c) the hidden curriculum in anatomy education. These will be discussed in turn below. There are undoubtedly numerous other angles and issues that could come under the heading of decolonization within this sector, but it is necessary to start the conversation from places of commonality—these three issues are those that are anecdotally encountered by anatomists across Western societies and serve as a starting point for initiating a tricky conversation that has had little coverage until very recently.

2.1 Underrepresentation of certain bodies

Within anatomy education, teaching and learning relies upon bodies (cadavers and life models), physical representations of the body (e.g., plastic models), technological software, and diagrammatic representations (e.g., textbooks and anatomy atlases). Anecdotally, diagrammatic, technological, and physical representations are frequently devoid of diversity in terms of the populations they represent (Louie & Wilkes, ; Parker et al., ). Few attempts have been made to systematically review and collate these representations. Despite anatomy being universal, variation being normal (Bergman, ; Cunningham, ), and skin being the largest and most visible organ, it is only in recent decades that anatomical texts have displayed surface anatomy images with a diverse range of skin tones. It is important to remember that this is unlikely to be a deliberate attempt to perpetuate underrepresentation. After all, cadavers can only be selected from those who donate, meaning diversity may be limited in some regions. There is often a lack of donations from some ethnicities for cultural or religious reasons. Often, there are pragmatic reasons, such as geography and associated jurisdiction, that limit the diversity of donors. With these considerations in mind, it is then unsurprising that healthcare students tend to encounter predominantly white donors within the dissection rooms across the Western hemisphere. Presumably, similar underrepresentations of other ethnicities (including white body donations) occur across other geographical regions (such as the Far-East, South-East Asia, and Africa). These are some of the pragmatic reasons to bear in mind, although historically underrepresentation of bodies may have had more sinister reasons (when viewed with a retrospective lens) (Plataforma SINC, ).
Research suggests that racial inequities are embedded in the curricular edification of both healthcare professionals and patients (Louie & Wilkes, ). A prime example of the fundamental flaws of instructional design is the lack of representation of different skin tones within imagery and models utilized in anatomy education, arguably feeding into the tacit messaging a learner may receive. In 2018, a study analyzed in excess of 4,000 images from anatomy textbooks and determined that there was a significant overrepresentation of light skin tones and an underrepresentation of dark skin tones (Louie & Wilkes, ). Furthermore, racial minorities were often absent at the topic level. These omissions may provide one route through which bias presents within healthcare. Similar findings have been demonstrated in other studies (Louie & Wilkes, ; Parker et al., ; Parker et al., ), with analysis including other protected characteristics such as gender, further supporting the perpetuation of inequity and discrimination. White males have long dominated as the archetypal representation in Western anatomy textbooks, typically presented as the "universal model" of the human form (Louie & Wilkes, ; Parker et al., ; Plataforma SINC, ). A study analyzed 16,329 images from recommended texts at universities in Europe, the United States and Canada, concluding that the white male was the dominant anatomical representation (Plataforma SINC, ). Whether or not this is a deliberate decision by publishers is not under debate here. The fact of the matter remains that, historically, female anatomy was the exception rather than being given equal representation alongside male anatomy, and the prevailing color of the male representatives used was white. This status quo has persisted, despite geopolitical and cultural shifts, suggesting more fundamental issues are at play, and that both organizational and granular level changes are required to redress this imbalance.

Textbooks are only one source of potential bias; technological resources and anatomical models are others. Major manufacturers such as SOMSO® began offering black and white skin tone models as part of their general range in the late 1970s. AdamRouilly began offering Clinical Skills simulators with black skin from the 1980s; these have been sold worldwide since then (personal communication) (Adam, Rouilly, ). Despite almost 60 years of availability of different skin tones, anatomical models available within departments still lack diversity. Anecdotally, a major challenge associated with the creation of models representing different ethnicities is the danger that models can become perceived as caricatures of racial stereotypes in the way that features are modeled. However, once again, pragmatic decisions predominate and are most likely to explain the lack of diversity in this area (although, once identified, it becomes imperative to aim to address such issues). Most universities have a limited budget with which to invest and, as such, models are often a long-term investment, infrequently replaced and typically purchased at the inception of a department. As a consequence of one-time investment, representative models are often not available.
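Audits of the kind cited above are straightforward to reproduce locally. The sketch below, with made-up counts and an assumed reference distribution (neither taken from the studies cited, and not those authors' method), shows one way to test whether the skin-tone mix of a textbook's images departs from a reference population, using a chi-square goodness-of-fit test.

```python
from scipy.stats import chisquare

# Hypothetical audit of 4,000 textbook images, coded into three tone bands
observed = [3400, 450, 150]            # light, medium, dark image counts
population_share = [0.60, 0.25, 0.15]  # reference shares (assumed, for illustration)
expected = [p * sum(observed) for p in population_share]

# Goodness-of-fit: do the image counts match the reference distribution?
stat, pval = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {pval:.2g}")
# A tiny p-value indicates the image mix departs from the reference
# population, i.e., over-representation of light skin tones.
```

In practice, the harder work is the coding step, classifying each image's depicted skin tone reliably and agreeing the reference distribution; the statistics are the easy part.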
The representation is also fairly limited and often polarized into either black or white skin coloring, with no other superficially visible racial differences represented (perhaps yet again due to concerns about caricaturizing the differences and the offense that could be caused inadvertently). Within anatomy education, models have much historical significance and form part of museum collections; consequently, white skin models dominate. Manufacturers report (personal communication) requests for a range of mannequins and models of different skin tones, body shapes and face/head profiles. Such a range of representations would not be viable for commercial production, both from a logistical viewpoint for institutions and from a fear of presenting stereotypical racial features that may cause offense. Similarly, there have been requests made to manufacturers (personal communication) for a range of skeletons, pelves, and skulls from different races; however, many institutions have their own collections of natural bone skeletons which they use for comparative anatomy, preferring them to plastic osteological models—again demonstrating a lack of commercial viability.

There has recently been a promising change in the landscape, with students and the general public alike taking it upon themselves to contribute to decolonization of the curriculum by tackling the lack of representation in imagery. Within the United Kingdom, medical student Malone Mukwende developed "Mind the Gap: A Handbook of Clinical Signs in Black and Brown Skin" to show various dermatological conditions as manifested on darker skin (Finn, Quinn, et al., ; Mukwende et al., ). Similarly, a mother who was unable to find images of a rash on the same skin tone as her son launched an Instagram account, "Brown Skin Matters," to showcase how skin conditions appear on darker hues of skin, compared to how they are commonly depicted in medical texts and websites. More recently, "Black in Anatomy" was created across a number of social media platforms as a "safe space to network, uplift, support, and amplify the Black contributions to anatomical science" (Black in Anatomy, ). This recent activity to redress the balance goes some way toward tackling inequalities. Despite these positive steps, caution must be exercised to ensure that imagery and associated content are as diverse and inclusive as possible and avoid perpetuating associated implicit biases (Finn, Ballard, et al., ). It is crucial to take a critical lens to antiracism initiatives within anatomy education (Kendi, ; Vass & Adams, ). Unfortunately, initiatives are often regarded as performative when institutions are not dedicated to a reflexive and critical examination of their role in racism and of the language and methods they use to combat it (Alwan, ; Gutierrez, ). This includes an awareness of the potential for curricula to promote further health inequalities (Finn, Ballard, et al., ).

2.2 The difficulty with talking about difference

Anatomical variation is well documented, yet texts and curricula typically focus on neurovasculature (e.g., pelvic vasculature) or sex-related differences (e.g., breast tissue and structure). Despite the existence of observable differences in individuals and populations (e.g., skin, eyelids, hair, and teeth), acknowledging differences is challenging and riddled with ambiguity for educator and learner alike.
There is almost a fear of talking explicitly even about very obvious surface anatomical differences between certain groupings of people when teaching anatomy. It is worth noting that in the area of race, we are always having to actively group people, emphasizing certain similarities and differences, and drawing neat and tidy lines across the fuzziness of nature—in any study of racial differences, it would be prudent to interrogate the criteria used to distinguish, say, black people from white people, if indeed these are given. This grouping can relate to certain classes of bodily difference and not others. For example, male skeletons typically have more bone mass than female skeletons, which perhaps can be discussed, whereas discussion of the epicanthic fold of the Asian eye may be considered taboo, at least within Western contexts. This inhibition comes up with respect to sex, but anatomical variation along racial lines is arguably the domain of greatest sensitivity. There are likely to be several important reasons for this, which we briefly explore here.

In 1903, Du Bois popularized the term "the color-line" to denote the ultimate cleavage in American society; he postulated that "the problem of the twentieth century is the problem of the color-line," around which inequality of opportunity and inequality of experience were maximally organized (Du Bois, ; Gannon, ). However significant (or not) racial differences are biologically speaking, humankind has used select visible racial criteria as the basis for slavery, colonial domination, and continuing oppression. If racism in its manifest forms is about the dehumanization of the racialized other, then the otherwise relatively insignificant signifiers of race, that is, manifest physical differences, are likely to be loaded with emotional significance, societally speaking. During the apartheid era in South Africa, for example, "failing" the "pencil test" (i.e., having curly hair that hung onto a pencil) meant that someone of uncertain racial origin would be consigned to the colored, rather than to the white racial category, with all of the societal opprobrium, family separation and material disadvantage this brought with it. Little wonder then that racial differences, with their propensity to bring out the worst in humanity, continue to be taboo.

Psychological frameworks (Dalal, ; Scott, ) suggest there is a concomitant anxiety about acknowledging racial differences because those differences do a lot of defensive work for individuals; to clarify, the worst of humanity (e.g., the supposed terroristic tendencies of radicalized Muslims or the supposed intellectual inferiority of black people) can be, psychologically speaking, located and locked away in certain groups of people, as long as we do not become too familiar with "them" and have our defenses challenged through the realization that stereotypes are likely to be false (Dalal, ; Scott, ). Not acknowledging and talking about the differences, then, might be one way of denying that racial differences in anatomy are actually relatively insignificant when compared to anatomical similarity across populations. Racism (and this putative function of locating the worst in "others") depends on stereotypes. Everyday culture is rife with the stereotyping of racialized bodies. According to some psychological theory (Dalal, ; Scott, ), looking beyond these stereotypes to the actuality of complexity across a spectrum requires overcoming deep-seated, protective prejudice, that is, continually working at it.
A simpler explanation of the difficulty in discussing anatomical variation is that those involved in education are anxious about saying the "wrong thing" and being perceived as racist or on the wrong side of history. With BLM and associated activism, we are potentially in the midst of an important societal transformation. People from marginalized groups, not only racialized, have been emboldened to talk about discrimination and demand justice. In education, some students are playing an increasingly active role in shaping curricula, with an associated emphasis on social justice (Jackson & White, ; Murray-García et al., ; Wear et al., ). Educators, however, can feel a pressure not to say anything out of keeping with this agenda, even when it contradicts their long-held knowledge and beliefs—and often even scientific evidence (Murray-García et al., ; Paton et al., ; Vass & Adams, ). Pejorative responses to calls for social justice, evident in terms such as "cancel culture" and "wokeness," speak to this anxiety about being found to be on the wrong side of history in the court of public opinion. Instead of formal knowledge and open dialogue about racial variation, there is often a reliance on "tacit knowledge" of how racial differences manifest clinically in anatomy.

Why is being able to talk about racial variation within the context of anatomy education important? Arguably, it is important, as any science depends on observation and the communication of those observations. This is such a truism that it could almost remain unstated, but anatomy is also a science of, among other things, human life and death. Observing ourselves is always complicated, and clear communication and open dialogue are perhaps even more important within this specialism than within the other sciences, primarily to mitigate anxiety-exacerbated cognitive biases and pervasive societal stereotypes. At heart, more inclusive and effective anatomy education translates into better patient care for the range of diverse ethnic communities within a population. Again, this is such a fundamental point that it is easy to overlook. In seeing and speaking about racial variation in anatomy, the practitioner may be motivated by supporting informed patient care, rather than scientific racism. It is rarely that simple. Psychology has suggested that motivations are always mixed, and sociology, in turn, has illustrated how the individual must navigate power structures that influence what they do to others and what others do to them. With an understanding of the underlying psychosocial complexities in mind, it is important to ask ourselves why we would choose to focus on any particular racial variation in anatomy, and then let those learning from us ask us the same.

Quite apart from the implications for communication and transparency, what is at issue here is the culture of anatomy education. The implication becoming apparent in this paper is that anatomy education is not only the study of the structure and parts of the body, but also the space around that body. Anatomy teaching should go beyond biology and structure into sensitivity and receptivity to the many questions learners bring, especially around the emotive topic of race and racialization. To state this clearly: it is not just what we are prepared to talk about but also the way in which we talk about such matters that defines the discipline and adds value to learning encounters (and, ultimately, to patient care and patient outcomes).
2.3 The impact of the hidden curriculum

Both of the aforementioned considerations, visual representation and anatomical variation, provide examples of where the hidden curriculum has potentially manifested within anatomy education. There are tacit, implied and hidden messages in everything we do and do not do as educators (Finn & Hafferty, ). One example is in our choices as educators to use or not use (institutional finances and logistical challenges permitting) diverse life models when teaching surface anatomy. Another example is whether we deliberately avoid discussions of race within our anatomy teaching encounters. Matthan and Finn ( ) provide a discussion of the hidden curriculum associated with the use of imaging and digital resources within clinical education (Matthan & Finn, ). What is clear is that the hidden curriculum is not a space in which we can deliver "teaching by stealth" (Aka et al., ); it is rather a space that can be deliberately exploited to deliver messages to our learners. Because it is experienced differently by everyone, we can never really see what is hidden.

The hidden curriculum refers to the tacit, implied, unwritten, unofficial, and often unintended behaviours, lessons, values, and perspectives that students learn during their education. (Finn & Hafferty, )

Despite extensive and deliberate use of diverse racial surface anatomy models in delivered teaching sessions, one of the authors (of mixed racial background) was asked for a "Black Anatomy Curriculum", and for this to be delivered alongside (what can only be presumed to have been considered) the prevalent white anatomy curriculum. This arose from the misconception that there are evident racial differences in gross anatomical content that were deliberately being left out of the teaching, and, to satisfy the current "Equality, Diversity and Inclusion" educational exercise, it was felt that a parallel "racially diverse" curriculum should also be delivered. One immediate response might have been to say there is no such thing and leave it at that. However, another more productive response could be to consider where that question came from, why it is relevant and especially when and in what context it was raised, and talk to the students about their curiosity and/or concerns. Were these students in some way trying to redress the imbalances and insensitivities characteristic of the hidden curriculum?

Despite its conventional context being the laboratory, anatomy education is not hermetically sealed off from the world. Issues playing out in society seep into the lab, just as this happens in other areas of education; the dynamics of power differentials and prejudice will characterize much of the interaction that makes up teaching and learning in this space. Thinking and talking about racial dynamics and racism can often become unhelpfully black-or-white. Issues and positions fall onto either side of a divide, and scope for nuance, complexity, and the messy business of working things out is reduced. Someone retweets a message with perceived racist sentiments. They may be deemed racist; they then fall onto one side of a divide (opposite the supposed nonracist). Their whole character is tarnished, and seemingly irredeemably so (cf., earlier discussion of how we are apt to locate all the badness in some "other" and thus escape association and related anxieties). People are supposedly "woke" or reactionary; there is nothing in between.
Anatomy in the West did not even, until recently, do black-or-white, with the overwhelming preponderance of white bodies, whether we were referring to textbook representations or the cadavers to be dissected. Now it appears that black models are increasingly included in surface anatomy textbooks and atlases, although the wider spectrum of shades of brown—that space in between—is rather curiously omitted altogether. When we refer to the hidden curriculum, however, we also have to consider the access and participation of black student and staff bodies in this space. Joseph-Salisbury ( ) recounts the story of Femi Nylander, an Oxford University alumnus, who finds himself causing a scare on a visit to an Oxford college simply by dint of his blackness (Joseph-Salisbury, ). Joseph-Salisbury uses this incident to illustrate the workings of structural white supremacy in Higher Education, drawing on Puwar's ( ) work to show how Nylander's was a Black body out of place (Puwar, ). How out of place, then, might non-white people feel within the anatomy education space? There are a number of factors to consider, including the ethnic make-up of faculty, the range of bodies represented, the range of body donations received and the ethnicity of the donors, the way in which bodies and body parts are handled and talked about, religious and spiritual beliefs about life and death, the anatomical facts as against the lived experience of those facts, unconscious bias and racism.

2.4 Reclaiming history, reclaiming the space, reclaiming identity

There is often profound mistrust of the branches of science focusing on human biology, bodies, and healthcare within certain ethnic communities (FitzPatrick et al., ). This has to be understood through acknowledging the impact of colonialism, historical abuses, and ongoing racism in society and healthcare (FitzPatrick et al., ). Current feelings about historical abuses of certain racialized bodies in science and medicine in particular (e.g., Tuskegee, Sims, and Lacks) and, indeed, current disparities in the care of and outcomes for black patients (Greenwood et al., ) may be directly implicated in the lack of African Americans, for example, participating in whole body donation (Werede & Thompson, ). When hands-on anatomical dissection became popular in medical education in the United States and the United Kingdom in the late 18th and early 19th centuries, demand for cadavers exceeded supply. The physical and documentary evidence demonstrates the consequent disproportionate use of the bodies of the poor, the minority ethnic populations, and the marginalized in society in furthering medical education and anatomical science. The resulting progress has benefitted everyone in principle, although some argue it is still the most privileged in society (i.e., in the West, associated with fairer skin tones) that continue to benefit from scientific advances gleaned from the usage of unconsented bodies across the poorer communities, under which several racial minorities must necessarily sit. This is the uncomfortable but vital history and context of anatomy education that needs inclusion in formal curricula.

Psychology tells us that that which cannot be spoken is instead enacted. Can such painfully contentious matters be discussed—and do they need to be discussed? Is the dark humor that helps so many practitioners cope with the nature of the work something that can be shared, or is it at the expense of certain groups and, so, exclusionary?
Dueñas et al. describe the subjective nature of humor within anatomy, highlighting its use within the anatomy laboratory as contentious; humor thus becomes a component of the hidden curriculum (Dueñas et al., ). The study reported the use of an "internal barometer" as a self-gauge for judgments as to whether jokes or mnemonics were appropriate. Judgments included: "Would this cause me personal offense? Is this my type of humor? Is the intent malicious?"

With this, we come back to the request put to one of the authors for a "Black Anatomy Curriculum". Were those clamoring for this moved to find a separate space because they felt excluded from the existing one? Perhaps such a request is not surprising given the widespread calls for social justice and decolonization in education. It is a shame, however, if racial differences in this manner crowd out the overwhelming human similarities that are foundational to anatomy—99.99% of our genome is shared, after all (Chou, ). Given the prevailing whiteness of the anatomy space in the Western country in which this request was made, the students' request may also have contained an attempt to reclaim their identities and an attempt to find a safer space. Their request can then be framed as a challenge to the discipline for a more inclusive and enabling culture.

2.5 What do educators and their institutions need to do? A starting point for educators

In order to contextualize the recommendations from our paper, we offer a case study (see Box ), which can be used as an example to frame your thinking about what is relevant and achievable when decolonizing/reinventing the curriculum. Before embarking on such a journey, it is worth identifying your stakeholders; these are delineated in Figure . Figure represents an accessible summary of some of the main considerations for starting to decolonize the anatomy curriculum. Box presents a reflective opportunity, prompting users to think about whether to bring race into the anatomy classroom.

BOX 1 Worked example on the anatomy of the skull and face

Let us think about the bones of the skull and the tissues of the face (Aka et al., ; Joseph-Salisbury, ; Puwar, ). It is a fact that the size and shape of the skull vary between different races and thus have long been used as a way to justify the existence of different races. Historically, and importantly now refuted, the structure of the skull was used as a means by which to:
- Position various races on the evolutionary scale.
- Exemplify immutable personality types.
- Identify criminal or more intellectual individuals.

Where differences do exist:
- Many are only observable with specialist knowledge and measurement.
- They are virtually impossible to bring into the classroom to diversify anatomy teaching.
- They are less varied and pronounced due to mass migration and blended global populations.
- There are differences in the overlying tissues too, due to the underlying osteological foundation.

These are some examples of observations relating to race that have been made in more historic times:
- Asian skulls have circular orbits.
- African skulls have wider nasal apertures and flatter conchae.
- Degree of prognathism (protrusion of the mandible) and orthognathism (the state of not having the lower parts of the face projecting).
- Caucasian skulls have smaller teeth and a narrow nasal aperture.
- There is an absent or lower crease in the Asian upper eyelid.
- Asian eyes have the inner corner covered (the epicanthic fold).
- Caucasian eyes have the inner corner always exposed and an external fold at the outer edge.

We must consider what elements of the information above are relevant when teaching healthcare professionals about racial differences manifesting in the skull and facial tissues. An illustrative example is the nasal conchae, which can differ in size and angulation between races. An awareness of this helps the clinician to undertake a procedure with sensitivity to racial differences without causing harm. A nasendoscopy device placed into the nostril of a patient of Caucasian heritage needs to be placed with different considerations in mind from one being inserted into a patient of African heritage. If this is done properly, the patient will suffer minimally.

BOX 2 Curriculum checklist for considering starting the process of decolonizing curricula
- Am I trying to achieve inclusivity, decolonization, or both?
- Who are the stakeholders and how do I engage them?
- What is relevant to the program/student outcomes?
- Does this content usefully link to, for example, clinical practice? That is to say, do I need to know this information relating to race in order to change the way I do a procedure or prescribe a medication?
- Is a discussion on observable racial differences required in this context? If so, how can I encourage a safe space in which to have this discussion? If not, how can I explore racial differences sensitively outside this context and convey the underlying message to students that race does not matter in this instance?
- How might race be manifesting in the hidden curriculum (at a sessional, program, or institutional level)?
- Have I asked my students their opinions on the content, delivery and messaging? If not, why not? If yes, how is it going to change what and how I teach?
- Who else should I involve in the process of reinventing the anatomy curriculum? Who are the stakeholders and have they all had an opportunity to voice their opinions?
- What adjustments can I reasonably make to ensure delivery of this content is inclusive?
- What resources am I using in my teaching? Do they provide a realistic representation of the society within which I teach? Is the whole spectrum of human diversity represented in my resources?
- What is my aspiration to achieve in subsequent iterations, for example, with additional time or financial investment?

Redesigning, reshaping, and reframing curricula can be overwhelming for educators, who often do not know where to start. In order to assist educators in thinking about making their curricula more inclusive, and in redressing some of the historic imprints of colonialism, we have provided our key considerations. These brief considerations, which it is hoped will serve as a starting point to making more lasting and meaningful curricular changes (and embarking on a more reflexive, inclusive anatomy education journey), are based on our practical experience and the wider literature, and are as follows (Box ):
- Consider representation in imagery, models, and life models: It is important that resources are as racially diverse as possible.
- Acknowledge intersecting identities that students, life models, or cadavers may hold/have held: Intersecting identities impact on power dynamics and experiences—it is important to be cognizant of the multiple protected characteristics someone may hold/have held.
- Advocate for the importance of race in anatomy education: Change takes time; keep being an advocate for the process of decolonization.
Provide a safe space for discussion of race, racism, and lived experience: Students, faculty, and other stakeholders should be supported in discussing their experiences, and a safe physical and emotional space provided to do so. Avoid reductionist thinking and polarizing into the black‐white dichotomy: Colonization occurred globally, it is not only about black lives. Remember that race and skin color are spectra. In particular, remember to include the large middle ground of blended backgrounds who rarely feature in the race debates. Avoid stereotyping and creating caricatures: Sometimes in our efforts to be diverse, we fall into the trap of stereotyping. The creation of clinical cases is a particular danger zone for this. Avoid archetypal representations: Anatomy has long used the white male as the archetypal representation, both within text and graphics. Care should be given to use all genders and races where possible and appropriate. Contextualize course materials for students: It is important for students to understand the history and context of the materials they use. For example, where did the illustrations come from and how were the bodies utilized when they were developed (e.g., Nazi Germany, vivisection, etc.) (Mbaki et al., ). Increase surface and living anatomy: Surface and living anatomy offer a valuable opportunity to ensure anatomy teaching is more racially diverse and representative. Think about recruiting life models from a diverse spectrum of people. Using tools such as body paint or art based approaches can bring living anatomy back to the fore (Dueñas & Finn, ). Learn! Training and reflective practice is important: We must understand the history and significance of colonization, as well as read on racism and cultural diversity. Bring to the surface any kind of tricky topic relating to visible differences: Use the safe space that you have created to ensure difficult topics can be discussed. Be aware that the hidden curriculum exists but is experienced differently by each individual: Tacit and implied by definition, the hidden curriculum is where students may pick up on role modeling, attitudes, racism, and other messaging. Awareness of the potential impact in this space is crucial (Finn, Ballard, et al., ). Embrace art and the humanities in order to develop cultural competence and insight: Where conversations may be difficult, lived experience needs to be explored, or resources for expensive models are sparse—the arts and humanities offer much added value to the curriculum. Poetry, art, drama, to name but a few can offer a space for exploration of complex issues (Brown et al., ; Finn, Brown, & Laughey, ; Laughey & Finn, ). Reflect on past practice and look for opportunities to address any lack of inclusivity: Looking back at what has been delivered and why, and then addressing inequalities and inconsistencies will help further the decolonizing process as well as develop a more inclusive anatomy curriculum. BOX 3 The Do's and Don'ts starting to decolonize your curricula
Underrepresentation of certain bodies
Within anatomy education, teaching and learning rely upon bodies (cadavers and life models), physical representations of the body (e.g., plastic models), technological software, and diagrammatic representations (e.g., textbooks and anatomy atlases). Anecdotally, diagrammatic, technological, and physical representations are frequently devoid of diversity in terms of the populations they represent (Louie & Wilkes, ; Parker et al., ). Few attempts have been made to systematically review and collate these representations. Despite anatomy being universal, variation being normal (Bergman, ; Cunningham, ), and skin being the largest and most visible organ, it is only in recent decades that anatomical texts have displayed surface anatomy images with a diverse range of skin tones. It is important to remember that this is unlikely to be a deliberate attempt to perpetuate underrepresentation. After all, cadavers can only be selected from those who donate, meaning diversity may be limited in some regions. There is often a lack of donations from some ethnicities for cultural or religious reasons. Often, there are pragmatic reasons, such as geography and associated jurisdiction, that limit the diversity of donors. With these considerations in mind, it is unsurprising that healthcare students tend to encounter predominantly white donors within dissection rooms across the Western hemisphere. Presumably, similar underrepresentations of other ethnicities (including white body donations) occur across other geographical regions (such as the Far East, South-East Asia, and Africa). These are some of the pragmatic reasons to bear in mind, although historically the underrepresentation of bodies may have had more sinister causes (when viewed with a retrospective lens) (Plataforma SINC, ). Research suggests that racial inequities are embedded in the curricular edification of both healthcare professionals and patients (Louie & Wilkes, ). A prime example of the fundamental flaws of instructional design is the lack of representation of different skin tones within imagery and models utilized in anatomy education, arguably feeding into the tacit messaging a learner may receive. In 2018, a study analyzed in excess of 4,000 images from anatomy textbooks and determined that there was a significant overrepresentation of light skin tones and an underrepresentation of dark skin tones (Louie & Wilkes, ). Furthermore, racial minorities were often absent at the topic level. These omissions may provide one route through which bias presents within healthcare. Similar findings have been demonstrated in other studies (Louie & Wilkes, ; Parker et al., ; Parker et al., ), with analyses including other protected characteristics such as gender, further supporting the perpetuation of inequity and discrimination. White males have long dominated as the archetypal representation in Western anatomy textbooks, typically presented as the "universal model" of the human form (Louie & Wilkes, ; Parker et al., ; Plataforma SINC, ). A study analyzed 16,329 images from recommended texts at universities in Europe, the United States and Canada, concluding that the white male was the dominant anatomical representation (Plataforma SINC, ). Whether or not this is a deliberate decision by publishers is not under debate here.
The fact of the matter remains that, historically, the representation of female anatomy was the exception rather than an equal counterpart to male anatomy, and the prevailing color of the male representatives used was white. This status quo has persisted despite geopolitical and cultural shifts, suggesting that more fundamental issues are at play and that both organizational and granular-level changes are required to redress the imbalance. Textbooks are only one source of potential bias; technological resources and anatomical models are others. Major manufacturers such as SOMSO® began offering black and white skin tone models as part of their general range in the late 1970s. AdamRouilly began offering Clinical Skills simulators with black skin from the 1980s; these have been sold worldwide since then (personal communication) (Adam, Rouilly, ). Despite almost 60 years of availability of different skin tones, anatomical models available within departments still lack diversity. Anecdotally, a major challenge associated with creating models representing different ethnicities is the danger that models become perceived as caricatures of racial stereotypes in the way that features are modeled. However, once again, pragmatic decisions predominate and most likely explain the lack of diversity in this area (although, once identified, it becomes imperative to address such issues). Most universities have a limited budget with which to invest and, as such, models are often a long-term investment, infrequently replaced and typically purchased at the inception of a department. As a consequence of one-time investment, representative models are often not available. The representation is also fairly limited and often polarized into either black or white skin coloring, with no clear superficially visible racial differences (perhaps yet again due to concerns about caricaturing the differences and the offense that could inadvertently be caused). Within anatomy education, models have much historical significance and form part of museum collections; consequently, white skin models dominate. Manufacturers report (personal communication) requests for a range of mannequins and models of different skin tones, body shapes and face/head profiles. Such a range of representations would not be viable for commercial production, both from a logistical viewpoint for institutions and because of fears of presenting stereotypical racial features that may cause offense. Similarly, requests have been made to manufacturers (personal communication) for a range of skeletons, pelves, and skulls from different races; however, many institutions have their own collections of natural bone skeletons which they use for comparative anatomy, preferring them to plastic osteological models, again demonstrating a lack of commercial viability. There has recently been a promising change in the landscape, with students and the general public alike taking it upon themselves to contribute to decolonization of the curriculum by tackling the lack of representation in imagery. Within the United Kingdom, medical student Malone Mukwende developed "Mind the Gap: A Handbook of Clinical Signs in Black and Brown Skin" to show various dermatological conditions as manifested on darker skin (Finn, Quinn, et al., ; Mukwende et al., ).
Similarly, a mother who was unable to find images of a rash on the same skin tone as her son launched an Instagram account, "Brown Skin Matters", to showcase how skin conditions appear on darker hues of skin, compared to how they are commonly depicted in medical texts and websites. More recently, "Black in Anatomy" was created across a number of social media platforms as a "safe space to network, uplift, support, and amplify the Black contributions to anatomical science" (Black in Anatomy, ). The recent activity to redress the balance goes some way to improving the potential for tackling inequalities. Despite these positive steps, caution must be exercised to ensure that imagery and associated content are as diverse and inclusive as possible and avoid perpetuating associated implicit biases (Finn, Ballard, et al., ). It is crucial to take a critical lens to antiracism initiatives within anatomy education (Kendi, ; Vass & Adams, ). Unfortunately, initiatives are often regarded as performative when institutions are not dedicated to a reflexive and critical examination of their role in racism and of the language and methods they use to combat it (Alwan, ; Gutierrez, ). This includes an awareness of the potential for curricula to promote further health inequalities (Finn, Ballard, et al., ).
The difficulty with talking about difference
Anatomical variation is well documented, yet the documentation typically focuses on neurovasculature (e.g., pelvic vasculature) or sex-related differences (e.g., breast tissue and structure) within texts and curricula. Despite the existence of observable differences in individuals and populations (e.g., skin, eyelids, hair, and teeth), acknowledging differences is challenging and riddled with ambiguity for educator and learner alike. There is almost a fear of talking explicitly even about very obvious surface anatomical differences between certain groupings of people when teaching anatomy. It is worth noting that in the area of race, we are always having to actively group people, emphasizing certain similarities and differences, and drawing neat and tidy lines across the fuzziness of nature—in any study of racial differences, it would be prudent to interrogate the criteria used to distinguish, say, black people from white people, if indeed these are given. This grouping can relate to certain classes of bodily difference and not others. For example, male skeletons typically have more bone mass than female skeletons, which perhaps can be discussed, whereas discussion of the epicanthic fold of the Asian eye may be considered taboo, at least within Western contexts. This inhibition comes up with respect to sex, but anatomical variation along racial lines is arguably the domain of greatest sensitivity. There are likely to be several important reasons for this, which we briefly explore here. In 1903, Du Bois coined the term "the color-line" to denote the ultimate cleavage in American society; he postulated that "the problem of the twentieth century is the problem of the color-line" around which inequality of opportunity and inequality of experience was maximally organized (Du Bois, ; Gannon, ). However significant (or not) racial differences are biologically speaking, humankind has used select visible racial criteria as the basis for slavery, colonial domination, and continuing oppression. If racism in its manifest forms is about the dehumanization of the racialized other, then the otherwise relatively insignificant signifiers of race, that is, manifest physical differences, are likely to be loaded with emotional significance, societally speaking. During the apartheid era in South Africa, for example, "failing" the "pencil test" (i.e., having curly hair that hung onto a pencil) meant that someone of uncertain racial origin would be consigned to the colored, rather than the white, racial category, with all of the societal opprobrium, family separation and material disadvantage this brought with it. Little wonder, then, that racial differences, with their propensity to bring out the worst in humanity, continue to be taboo. Psychological frameworks (Dalal, ; Scott, ) suggest there is a concomitant anxiety about acknowledging racial differences because those differences do a lot of defensive work for individuals; to clarify, the worst of humanity (e.g., the supposed terroristic tendencies of radicalized Muslims or the supposed intellectual inferiority of black people) can be, psychologically speaking, located and locked away in certain groups of people, as long as we do not become too familiar with "them" and have our defenses challenged through the realization that stereotypes are likely to be false (Dalal, ; Scott, ).
Not acknowledging and talking about the differences, then, might be one way of denying that racial differences in anatomy are actually relatively insignificant when compared to anatomical similarity across populations. Racism (and this putative function of locating the worst in “others”) depends on stereotypes. Everyday culture is rife with the stereotyping of racialized bodies. According to some psychological theory (Dalal, ; Scott, ), looking beyond these stereotypes to the actuality of complexity across a spectrum requires overcoming deep‐seated, protective prejudice, that is, continually working at it. A simpler explanation of the difficulty in discussing anatomical variation is that those involved in education are anxious about saying the “wrong thing” and being perceived as racist or on the wrong side of history. With BLM and associated activism, we are potentially in the midst of important societal transformation. People from marginalized groups, not only racialized, have been emboldened to talk about discrimination and demand justice. In education, some students are playing an increasingly active role in shaping curricula, with an associated emphasis on social justice (Jackson & White, ; Murray‐García et al., ; Wear et al., ). Educators, however, can feel a pressure not to say anything out of keeping with this agenda, even when it contradicts their long‐held knowledge and beliefs—and often even scientific evidence (Murray‐García et al., ; Paton et al., ; Vass & Adams, ). Pejorative responses to calls for social justice, evident in terms such as “cancel culture” and “wokeness,” speak to this anxiety about being found to be on the wrong side of history in the court of public opinion. Instead of formal knowledge and open dialogue about racial variation, there is often a reliance on “tacit knowledge” of how racial differences manifest clinically in anatomy. Why is being able to talk about racial variation within the context of anatomy education important? Arguably, it is important, as any science depends on observation and the communication of those observations. This is such a truism that it could almost remain unstated, but anatomy is also a science of, among other things, human life and death. Observing ourselves is always complicated, and clear communication and open dialogue is perhaps even more important within this specialism than within the other sciences, primarily to mitigate against anxiety‐exacerbated cognitive biases and pervasive societal stereotypes. At heart, more inclusive and effective anatomy education translates into better patient care for the range of diverse ethnic communities within a population. Again, this is such a fundamental point that it is easy to overlook. In seeing and speaking about racial variation in anatomy, the practitioner may be motivated by supporting informed patient care, rather than scientific racism. It is rarely that simple. Psychology has suggested that motivations are always mixed and sociology, in turn, has illustrated how the individual must navigate power structures that influence what they do to others and what others do to them. With an understanding of the underlying psychosocial complexities in mind, it is important to ask ourselves why we would choose to focus on any particular racial variation in anatomy, and then let those learning from us ask us the same. Quite apart from the implications for communication and transparency, what is at issue here is the culture of anatomy education. 
The implication becoming apparent in this paper is that anatomy education is not only the study of the structure and parts of the body, but also the space around that body. Anatomy teaching should go beyond biology and structure into sensitivity and receptivity to the many questions learners bring, especially around the emotive topic of race and racialization. To state this clearly: it is not just what we are prepared to talk about but also the way in which we talk about such matters that defines the discipline and adds value to learning encounters (and, ultimately, to patient care and patient outcomes).
The impact of the hidden curriculum
Both of the aforementioned considerations, visual representation and anatomical variation, provide examples of where the hidden curriculum has potentially manifested within anatomy education. There are tacit, implied and hidden messages in everything we do and do not do as educators (Finn & Hafferty, ). One example is our choice as educators to use or not use (institutional finances and logistical challenges permitting) diverse life models when teaching surface anatomy. Another is whether we deliberately avoid discussions of race within our anatomy teaching encounters. Matthan and Finn ( ) provide a discussion of the hidden curriculum associated with the use of imaging and digital resources within clinical education (Matthan & Finn, ). What is clear is that the hidden curriculum is not a space in which we can deliver "teaching by stealth" (Aka et al., ); it is rather a space that can be deliberately exploited to deliver messages to our learners. Because it is experienced differently by everyone, we can never really see what is hidden. The hidden curriculum refers to the tacit, implied, unwritten, unofficial, and often unintended behaviours, lessons, values, and perspectives that students learn during their education (Finn & Hafferty, ). Despite extensive and deliberate use of racially diverse surface anatomy models in delivered teaching sessions, one of the authors (of mixed racial background) was asked for a "Black Anatomy Curriculum", to be delivered alongside (what can only be presumed to have been considered) the prevalent white anatomy curriculum. This arose from the misconception that there are evident racial differences in gross anatomical content that were deliberately being left out of the teaching, and that, to satisfy the current "Equality, Diversity and Inclusion" educational exercise, a parallel "racially diverse" curriculum should also be delivered. One immediate response might have been to say there is no such thing and leave it at that. However, a more productive response could be to consider where that question came from, why it is relevant and especially when and in what context it was raised, and to talk to the students about their curiosity and/or concerns. Were these students in some way trying to redress the imbalances and insensitivities characteristic of the hidden curriculum? Despite its conventional context being the laboratory, anatomy education is not hermetically sealed off from the world. Issues playing out in society seep into the lab, just as they do in other areas of education; the dynamics of power differentials and prejudice will characterize much of the interaction that makes up teaching and learning in this space. Thinking and talking about racial dynamics and racism can often become unhelpfully black-or-white. Issues and positions fall onto either side of a divide, and scope for nuance, complexity, and the messy business of working things out is reduced. Someone retweets a message with perceived racist sentiments. They may be deemed racist; they then fall onto one side of a divide (opposite the supposed nonracist). Their whole character is tarnished, and seemingly irredeemably so (cf. the earlier discussion of how we are apt to locate all the badness in some "other" and thus escape association and related anxieties). People are supposedly "woke" or reactionary; there is nothing in between.
Anatomy in the West did not even, until recently, do black‐or‐white, with the overwhelming preponderance of white bodies, whether we were referring to textbook representations or the cadavers to be dissected. Now it appears that black models are increasingly included in surface anatomy textbooks and atlases, although the wider spectrum of shades of brown—that space in between—is rather curiously omitted altogether. When we refer to the hidden curriculum, however, we also have to consider the access and participation of black student and staff bodies in this space. Joseph‐Salisbury ( ) recounts the story of Femi Nylander, an Oxford University alumnus, who finds himself causing a scare on a visit to an Oxford college simply by dint of his blackness (Joseph‐Salisbury, ). Joseph‐Salisbury uses this incident to illustrate the workings of structural white supremacy in Higher Education, drawing on Puwar's ( ) work to show how Nylander's was a Black body out of place (Puwar, ). How out of place, then, might non‐white people feel within the anatomy education space? There are a number of factors to consider, including the ethnic make‐up of faculty, the range of bodies represented, the range of body donations received and the ethnicity of the donors, the way in which bodies and body parts are handled and talked about, religious and spiritual beliefs about life and death, the anatomical facts as against the lived experience of those facts, unconscious bias and racism.
Reclaiming history, reclaiming the space, reclaiming identity
There is often profound mistrust of the branches of science focusing on human biology, bodies, and healthcare within certain ethnic communities (FitzPatrick et al., ). This has to be understood through acknowledging the impact of colonialism, historical abuses, and ongoing racism in society and healthcare (FitzPatrick et al., ). Current feelings about historical abuses of certain racialized bodies in science and medicine in particular (e.g., Tuskegee, Sims, and Lacks) and, indeed, current disparities in the care of and outcomes for black patients (Greenwood et al., ) may be directly implicated in the lack of African Americans, for example, participating in whole body donation (Werede & Thompson, ). When hands-on anatomical dissection became popular in medical education in the United States and the United Kingdom in the late 18th and early 19th centuries, demand for cadavers exceeded supply. The physical and documentary evidence demonstrates the consequent disproportionate use of the bodies of the poor, the minority ethnic populations, and the marginalized in society in furthering medical education and anatomical science. The resulting progress has benefitted everyone in principle, although some argue it is still the most privileged in society (i.e., in the West, those associated with fairer skin tones) who continue to benefit from scientific advances gleaned from the use of unconsented bodies from the poorer communities, among which several racial minorities must necessarily sit. This is the uncomfortable but vital history and context of anatomy education that needs inclusion in formal curricula. Psychology tells us that that which cannot be spoken is instead enacted. Can such painfully contentious matters be discussed—and do they need to be discussed? Is the dark humor that helps so many practitioners cope with the nature of the work something that can be shared, or is it at the expense of certain groups and therefore exclusionary? Dueñas et al. describe the subjective nature of humor within anatomy, highlighting its use within the anatomy laboratory as subjective and contentious; humor thus becomes a component of the hidden curriculum (Dueñas et al., ). Their study reported the use of an "internal barometer" as a self-gauge for judging whether jokes or mnemonics were appropriate. Judgments included: "Would this cause me personal offense? Is this my type of humor? Is the intent malicious?" With this, we come back to the request put to one of the authors for a "Black Anatomy Curriculum". Were those clamoring for this moved to find a separate space because they felt excluded from the existing one? Perhaps such a request is not surprising given the widespread calls for social justice and decolonization in education. It is a shame, however, if racial differences in this manner crowd out the overwhelming human similarities that are foundational to anatomy—99.99% of our genome is shared, after all (Chou, ). Given the prevailing whiteness of the anatomy space in the Western country in which this request was made, the students' request may also have contained an attempt to reclaim their identities and to find a safer space. Their request can then be framed as a challenge to the discipline to build a more inclusive and enabling culture.
What do educators and their institutions need to do?
A starting point for educators
In order to contextualize the recommendations from our paper, we offer a case study (see Box ), which can be used as an example to frame your thinking about what is relevant and achievable when decolonizing/reinventing the curriculum. Before embarking on such a journey, it is worth identifying your stakeholders; these are delineated in Figure . Figure represents an accessible summary of some of the main considerations for starting to decolonize the anatomy curriculum. Box presents a reflective opportunity, prompting users to think about whether to bring race into the anatomy classroom.
BOX 1 Worked example on the anatomy of the skull and face
Let us think about the bones of the skull and the tissues of the face (Aka et al., ; Joseph-Salisbury, ; Puwar, ). The size and shape of the skull vary between different races, and this variation has long been used as a way to justify the existence of different races. Historically, and importantly now refuted, the structure of the skull was used as a means by which to:
Position various races on the evolutionary scale.
Exemplify immutable personality types.
Identify criminal or more intellectual individuals.
Where differences do exist:
Many are only observable with specialist knowledge and measurement.
They are virtually impossible to bring into the classroom to diversify anatomy teaching.
They are less varied and pronounced due to mass migration and blended global populations.
There are differences in the overlying tissues too, owing to the underlying osteological foundation.
These are some examples of observations relating to race that have been made in more historic times:
Asian skulls have circular orbits.
African skulls have wider nasal apertures and flatter conchae.
Degrees of prognathism (protrusion of the mandible) and orthognathism (the state of not having the lower parts of the face projecting) vary.
Caucasian skulls have smaller teeth and a narrow nasal aperture.
There is an absent or lower crease in the Asian upper eyelid.
Asian eyes have the inner corner covered (the epicanthic fold).
Caucasian eyes have the inner corner always exposed and an external fold at the outer edge.
We must consider what elements of the information above are relevant when teaching healthcare professionals about racial differences manifesting in the skull and facial tissues. An illustrative example is the nasal conchae, which can differ in size and angulation between races. An awareness of such differences helps the clinician undertake a procedure with sensitivity to racial variation without causing harm. A nasendoscopy device placed into the nostril of a patient of Caucasian heritage needs to be placed with different considerations in mind from one inserted into a patient of African heritage. If this is done properly, the patient will suffer minimally.
BOX 2 Curriculum checklist for considering starting the process of decolonizing curricula
Am I trying to achieve inclusivity, decolonization, or both?
Who are the stakeholders and how do I engage them?
What is relevant to the program/student outcomes? Does this content usefully link to, for example, clinical practice? That is to say, do I need to know this information relating to race in order to change the way I do a procedure or prescribe a medication?
Is a discussion on observable racial differences required in this context? If so, how can I encourage a safe space in which to have this discussion?
If not, how can I explore racial differences sensitively outside this context and convey the underlying message to students that race does not matter in this instance?
How might race be manifesting in the hidden curriculum (at a sessional, program, or institutional level)?
Have I asked my students their opinions on the content, delivery and messaging? If not, why not? If yes, how is it going to change what and how I teach?
Who else should I involve in the process of reinventing the anatomy curriculum? Who are the stakeholders and have they all had an opportunity to voice their opinions?
What adjustments can I reasonably make to ensure delivery of this content is inclusive?
What resources am I using in my teaching? Do they provide a realistic representation of the society within which I teach? Is the whole spectrum of human diversity represented in my resources?
What is my aspiration to achieve in subsequent iterations, for example, with additional time or financial investment?
Redesigning, reshaping, and reframing curricula can be overwhelming for educators, who often do not know where to start. In order to assist educators in thinking about making their curricula more inclusive, and redressing some of the historic imprints of colonialism, we have provided our key considerations. These brief considerations, which it is hoped will serve as a starting point for making more lasting and meaningful curricular changes (and embarking on a more reflexive, inclusive anatomy education journey), are based on our practical experience and the wider literature, and are as follows (Box ):
Consider representation in imagery, models, and life models: It is important that resources are as racially diverse as possible.
Acknowledge intersecting identities that students, life models, or cadavers may hold/have held: Intersecting identities impact on power dynamics and experiences—it is important to be cognizant of the multiple protected characteristics someone may hold/have held.
Advocate for the importance of race in anatomy education: Change takes time; keep being an advocate for the process of decolonization.
Provide a safe space for discussion of race, racism, and lived experience: Students, faculty, and other stakeholders should be supported in discussing their experiences, and a safe physical and emotional space provided to do so.
Avoid reductionist thinking and polarizing into the black-white dichotomy: Colonization occurred globally; it is not only about black lives. Remember that race and skin color are spectra. In particular, remember to include the large middle ground of blended backgrounds who rarely feature in the race debates.
Avoid stereotyping and creating caricatures: Sometimes, in our efforts to be diverse, we fall into the trap of stereotyping. The creation of clinical cases is a particular danger zone for this.
Avoid archetypal representations: Anatomy has long used the white male as the archetypal representation, both within text and graphics. Care should be given to use all genders and races where possible and appropriate.
Contextualize course materials for students: It is important for students to understand the history and context of the materials they use. For example, where did the illustrations come from and how were the bodies utilized when they were developed (e.g., Nazi Germany, vivisection, etc.) (Mbaki et al., ).
Increase surface and living anatomy: Surface and living anatomy offer a valuable opportunity to ensure anatomy teaching is more racially diverse and representative.
Think about recruiting life models from a diverse spectrum of people. Using tools such as body paint or art-based approaches can bring living anatomy back to the fore (Dueñas & Finn, ).
Learn! Training and reflective practice are important: We must understand the history and significance of colonization, as well as read about racism and cultural diversity.
Bring to the surface any kind of tricky topic relating to visible differences: Use the safe space that you have created to ensure difficult topics can be discussed.
Be aware that the hidden curriculum exists but is experienced differently by each individual: Tacit and implied by definition, the hidden curriculum is where students may pick up on role modeling, attitudes, racism, and other messaging. Awareness of the potential impact in this space is crucial (Finn, Ballard, et al., ).
Embrace art and the humanities in order to develop cultural competence and insight: Where conversations may be difficult, lived experience needs to be explored, or resources for expensive models are sparse, the arts and humanities offer much added value to the curriculum. Poetry, art, and drama, to name but a few, can offer a space for exploration of complex issues (Brown et al., ; Finn, Brown, & Laughey, ; Laughey & Finn, ).
Reflect on past practice and look for opportunities to address any lack of inclusivity: Looking back at what has been delivered and why, and then addressing inequalities and inconsistencies, will help further the decolonizing process as well as develop a more inclusive anatomy curriculum.
BOX 3 The Do's and Don'ts of starting to decolonize your curricula
CONCLUSIONS
Taking steps toward truly inclusive curricula, co-developed with students and other relevant stakeholders, is plausible. Educators must focus on what is relevant within teaching, yet balance this with a space for tricky conversations about race, racism and racial differences, never forgetting the primary aim of anatomy education, which is to equip healthcare professionals with accurate information to enable them to do the best they can for their patients. The hidden curriculum is, by definition, hard to see, but an awareness of its existence and the potential impact it can have on a learner is paramount when tackling systemic racism and training the future health workforce. Delving deep into established practice and exposing areas for improvement within anatomy curricula is ultimately the responsibility of educators and institutions dedicated to inclusivity. Such processes restore health and dignity to populations (Wilson & Cavender, ). Hubris and arrogance have been cited as causing educators to think that making curriculum-level changes can positively affect healthcare systems or transform trainees' experiences (Whitehead et al., ). While tweaking curricula for social justice purposes may be likened by some to "fiddling while Rome is on fire," tackling health inequalities must start somewhere, and perhaps the anatomy laboratory is no worse than any other place—after all, the future healthcare workforce are also the future policymakers. Micro-level changes accumulate to ultimately bring about large-scale transformative change, and anatomy education could do with a shake-up of sorts.
AUTHOR CONTRIBUTIONS
Gabrielle Finn: Conceptualization (equal); writing – original draft (equal); writing – review and editing (equal). Adam Danquah: Conceptualization (equal); writing – original draft (equal); writing – review and editing (equal). Joanna Matthan: Conceptualization (equal); writing – original draft (equal); writing – review and editing (equal).
Self-care education during cardiac rehabilitation programs for patients with heart failure with preserved ejection fraction: a Delphi study

Heart failure (HF) is an increasingly prevalent disease: between 2017 and 2019 it increased on average by 0.6% per year globally. The rise in HF incidence and prevalence rates is attributed to population ageing, with people over 75 years of age at the greatest risk of developing the condition. Among the different types of HF, the variant with preserved ejection fraction (HFpEF) occurs in approximately half of patients diagnosed with HF. This subtype is associated with a high prevalence of comorbidities and disability, which translates into high mean annual healthcare costs, around €25,000 per patient in the Western world (between €12,995 and €18,220 per patient in Spain). Consequently, effectively addressing HFpEF is a priority both for improving prognosis and for the efficient use of healthcare resources.

Among non-pharmacological strategies, cardiac rehabilitation (CR) is an effective and safe therapeutic tool for reducing associated symptoms and improving the quality of life of patients with HFpEF. One of the main components of CR programs is education in self-care and healthy lifestyle habits. Self-care education enables patients to manage their disease proactively in the outpatient setting, reducing all-cause readmissions by up to 25%, HF-related admissions by 40% and mortality by up to 29%. However, the self-care educational intervention is markedly strengthened when the information is communicated coherently and uniformly by all the health professionals involved in the patient's care. Standardising the message across health professionals is therefore essential to facilitate an effective transition towards self-care and to help optimise clinical outcomes in the management of HFpEF. In this context, there is a need for a decalogue of self-care for patients with HFpEF to serve as an educational guide for the health professionals who deliver education in CR programs. The aim of this study was to develop a decalogue of self-care competencies through a multidisciplinary expert consensus. As a secondary objective, we set out to determine the content validity of the self-care competencies during a CR program in patients with HFpEF.
Design
The e-Delphi method was used to facilitate the experts' participation in the panel. The ACcurate COnsensus Reporting Document (ACCORD) guidelines for reporting consensus methods in biomedicine were followed; the ACCORD checklist is provided in the supplementary material. The study falls under the ethics approval of a randomised clinical trial approved by the Provincial Ethics Committee of Málaga (2198-N-22) and conducted at the Hospital Regional Universitario de Málaga. Three stages are distinguished within the study: 1) creation of the initial questionnaire; 2) composition of the expert panel; 3) electronic survey: the e-Delphi panel.

To build the initial questionnaire, a literature search was carried out in the main databases (Medline, Scopus, Embase) and in the grey literature. The current clinical practice guidelines on CR and self-care were retrieved, and the main recommendations on therapeutic education in the HFpEF population were compiled. The recommendations were translated into learning outcomes in order to draw up a list of competencies, incorporated into the questionnaire in numbered, ordered form. The initial version of the questionnaire included 14 domains, 23 competencies and 75 questions, grouped as follows: weight control, monitoring of vital signs, requirements for medical attention, healthy eating habits, adherence to pharmacological treatment, physical exercise, control of decompensation, prophylaxis, toxic habits, management of energy, fatigue or tiredness, psycho-emotional health, social support, sleep, and sexual health. The 23 competencies were labelled according to the self-care element to which they belonged in the middle-range theory of self-care of chronic illness: maintenance of health status (M), monitoring of health status (S) and management of health status (G). The competencies, the questions adapted to the patients' context and the definition of each self-care element can be consulted in Tables 1 and 2 of the supplementary material.

Email was used to distribute the electronic survey, and the database generated through the LimeSurvey questionnaire was used for data collection. In each round, panellists had three weeks to complete the survey. Between rounds, the interim results were compiled and analysed within two weeks. Fieldwork was carried out over 15 weeks, between May and September 2023.

In round 1, the objective was to establish the content validity of each competency. Panellists rated each competency on a five-point Likert scale (0, not at all important; 1, somewhat important; 2, important; 3, quite important; 4, very important). In addition, the questionnaire included a matrix for each competency and/or question with the options modify, delete or keep, in order to refine the wording or discard competencies or questions from the list. In rounds 2 and 3, a four-point Likert scale was used (1, strongly disagree; 2, disagree; 3, agree; 4, strongly agree) to reach consensus on whether or not to include each competency in the decalogue. To encourage the confidential exchange of comments, opinions and suggestions, an open-ended question section was added at the end of each group of questions.
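To make the round-1 triage logic concrete, the sketch below expresses the rule just described as code. It is a minimal illustration with hypothetical data structures and function names; the study itself collected responses through LimeSurvey and analysed them in Jamovi, not with this code.

```python
# Minimal sketch of the round-1 triage rule (hypothetical data structures).

def content_validity_index(ratings: list[int]) -> float:
    """I-CVI: proportion of experts rating an item 3 ('quite important')
    or 4 ('very important') on the 0-4 importance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def round_one_decision(ratings: list[int], exclusion_votes: int) -> str:
    """Apply the round-1 rule: retain if CVI >= 0.80; otherwise put the
    competency to an exclusion vote if at least two experts proposed
    removing it, else advance it to round 2."""
    cvi = content_validity_index(ratings)
    if cvi >= 0.80:
        return "retain"
    if exclusion_votes >= 2:
        return "vote on exclusion in round 2"
    return "advance to round 2"

# Example: 15 panellists rate a competency; 12 score it 3 or 4 -> CVI = 0.80.
print(round_one_decision([4, 4, 3, 3, 3, 4, 3, 3, 4, 3, 3, 3, 2, 2, 1], 0))
```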
Sample and participants
Panellists were selected from among the investigators taking part in a clinical trial, the supporting staff within the study, patients with HFpEF, and professionals with extensive experience in HFpEF research at the national level. Seventeen professionals and two patients were invited by email to join the panel. To guarantee a meaningful level of consensus, the inclusion of at least five experts in the panel has been described as necessary.

Analysis
In round 1, the content validity index (CVI) was used to quantify the content validity of each competency. The CVI of each competency was calculated from the count of votes scoring 3 (quite important) and 4 (very important) on the Likert scale. The most widely used cut-off for including items in a questionnaire is 0.8. If the CVI of a competency was < 0.80, its exclusion was put to a vote provided that at least two of the experts proposed eliminating it. If the CVI was < 0.8 but the competency was not proposed for exclusion, it advanced to round 2. In rounds 2 and 3, the percentage of agreement (PA%) was used for inclusion of the competencies in the decalogue. PA% is defined as the percentage of responses rated 3 (agree) or 4 (strongly agree); consensus was established if PA% ≥ 80%. The median and interquartile range (IQR) were used to show both the central tendency and the dispersion of the data. Fleiss' kappa (k) and Krippendorff's alpha (α) were calculated to evaluate the degree of agreement in the Likert-scale ratings. For the interpretation of k, agreement is considered poor if k < 0.20; fair, if k is between 0.21 and 0.40; moderate, if k is between 0.41 and 0.60; and almost perfect, if k > 0.80. Data were analysed using the Jamovi statistical software.
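As an illustration of the consensus metrics just defined, the following sketch computes PA% together with the median and IQR for one competency. The ratings are invented for the example and the function names are hypothetical; the study's actual analysis was run in Jamovi.

```python
# Hedged sketch of the rounds 2-3 consensus metric. PA% counts ratings of
# 3 ('agree') or 4 ('strongly agree') on the 4-point scale.
import statistics

def percent_agreement(ratings: list[int]) -> float:
    return 100 * sum(1 for r in ratings if r >= 3) / len(ratings)

def summarise(ratings: list[int]) -> dict:
    q1, q2, q3 = statistics.quantiles(ratings, n=4)  # Q1, median, Q3
    return {
        "PA%": percent_agreement(ratings),
        "consensus": percent_agreement(ratings) >= 80,  # threshold from the text
        "median": q2,
        "IQR": q3 - q1,
    }

print(summarise([4, 4, 3, 3, 4, 3, 3, 4, 3, 3, 2, 4]))  # 11/12 agree -> 91.7%
```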
Results
Of the 17 experts consulted, 15 completed the first round of the questionnaire and 12 completed all rounds. The expert panel comprised five health professions (cardiology, internal medicine, nursing, physiotherapy and occupational therapy) together with patients diagnosed with HFpEF. The composition of the participating expert panel is described in the corresponding table.

In the first round, the competencies were stratified according to their CVI; the distribution of the round 1 results is detailed in the corresponding table. During round 2, the exclusion of three competencies whose CVI was < 0.8 was put to a vote, and unanimity was reached for excluding the three competencies related to control of potassium intake, knowledge about oral antidiabetic drugs, and sexual health. In addition, following the experts' suggestions, the merging of competencies in the following domains was put to a vote:
- "Monitoring of signs and symptoms": competencies 1-3.
- "Healthy eating habits": competencies 6-9.
- "Adherence to pharmacological treatment": competencies 11 and 12.
- "Requirements for medical attention": competencies 4 and 15.
- "Control of decompensation in HF": competencies 5 and 16.

After rounds 2 and 3, the k and α statistics were calculated. In round 2, the degree of agreement reached for the inclusion of the competencies was considered fair (k = 0.306; p < 0.01). During this same round, improvements to the wording of the various competencies were proposed. Once the modifications suggested by the panellists had been incorporated, a moderate degree of agreement (k = 0.606; p < 0.01) was obtained in round 3 for the inclusion of the 20 competencies in the decalogue. The process carried out across the three rounds is shown in the corresponding figure. The CVI, PA%, median and IQR, as well as the round in which consensus was reached for each competency, are detailed in the corresponding table. The 20 competencies, grouped into 12 domains, were included in the final version of the questionnaire. To establish the order of priority, the competencies were ranked from highest to lowest CVI. The proposed decalogue can be found in Table 3 of the supplementary material. Derived from the decalogue, educational material is proposed to facilitate education during CR programs; this can be consulted in the supplementary material.
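For readers who want to reproduce the kind of agreement statistic reported here, the sketch below is a textbook implementation of Fleiss' kappa for a panel rating several competencies on the 4-point scale. The rating counts are invented for the example; the published values (k = 0.306 and k = 0.606) come from the authors' own Jamovi analysis, not from this code.

```python
# Illustrative Fleiss' kappa (a textbook implementation, assumed layout:
# every item is rated by the same number of raters).

def fleiss_kappa(counts: list[list[int]]) -> float:
    """counts[i][j] = number of panellists assigning item i to Likert
    category j."""
    n_raters = sum(counts[0])
    n_items = len(counts)
    # Per-item observed agreement P_i
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 3 competencies rated by 12 panellists on the 4-point scale.
# Prints roughly 0.44, i.e. "moderate" on the thresholds used in the text.
ratings = [[0, 0, 2, 10], [0, 1, 10, 1], [8, 2, 1, 1]]
print(round(fleiss_kappa(ratings), 3))
```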
Discussion
The present study analysed the content validity of a set of self-care competencies in patients with HFpEF through a multidisciplinary panel of experts. As the main finding, a decalogue of self-care competencies agreed by the expert panel was produced, to be implemented in CR programs to facilitate the educational intervention (see Table 3 of the supplementary material).

Of the 20 competencies in the decalogue, 12 exceeded the CVI ≥ 0.8 cut-off and therefore have adequate content validity. The competency with the highest CVI was 15.G (CVI = 1.0), related to requirements for medical attention. In descending order of CVI, it was followed by competencies related to adherence to pharmacological treatment, physical exercise, toxic habits, sleep, monitoring of signs and symptoms, control of decompensation, healthy eating habits, social support and psycho-emotional health. These dimensions are aligned with the recommendations of the clinical practice guideline published in 2022. That guideline emphasises the importance of social support throughout the document, suggesting the use of questionnaires to assess social isolation. Social isolation has been linked to a 55% increase in the risk of hospital readmission, because social support is related to better self-care, including key aspects such as medication adherence, seeking medical attention, and regular exercise.

Sexual health was one of the competencies eliminated by the experts, despite having been rated "very important" by the patients. By contrast, another decalogue similar to the one presented here includes sexual activity as one of the key topics to consider in the educational intervention. Asexuality in old age is one of the most frequent stereotypes in Western societies. Health professionals tend to avoid talking openly about sexuality, which leads patients with HF not to raise questions related to the sexual sphere. Limitations in sexual activity are more pronounced in people over 63 years of age with HF, affecting frequency, performance and satisfaction; interest in this area, however, does not appear to vary with age.

Scales for measuring self-care in patients with HF have been validated previously, including the Self-Care of Heart Failure Index and the 9-item European Heart Failure Self-care Behaviour Scale. However, competencies such as sleep, social support, psycho-emotional health and toxic habits are not assessed by either scale. These dimensions require additional instruments, such as the Epworth Sleepiness Scale, the Multidimensional Perceived Social Support Scale or the Cardiac Depression Scale. As an alternative, the use of key questions, adapted to patients by trained professionals, would reduce the interview time needed to assess the acquisition of self-care competencies.

Regarding the accessibility of educational content, innovative strategies have been developed in recent years to enable remote care. These applications can be useful for overcoming logistical difficulties; however, their accessibility, acceptability and barriers to use must still be considered in the older population.
The effectiveness of these applications in patients with HF is still uncertain, so future studies should evaluate the efficacy of these new strategies to improve self-care in HF. The dose at which education is delivered is another determinant of the intervention's effectiveness. Educational programmes lasting 12 weeks proved cost-effective, reducing the number of hospital admissions and emergency department visits during six months of follow-up. Although CR programmes have an estimated duration of 12 weeks, implementing the decalogue produced here allows the information to be delivered progressively and recurrently throughout the intervention. The educational content could be presented weekly, addressing one of the 12 proposed domains and its associated questions. In the context of CR programmes, these questions could be worked through as a group, making it possible to assess, educate, empower and reinforce the week's content. Patients who have not assimilated the concepts within that week could be given educational material to reinforce their understanding of the domain in question. This weekly distribution of educational content would allow patients to develop practical strategies for their daily lives, empowering them in their self-care.
This study offers a structured response to a need in the routine care of patients with HFpEF, presenting a decalogue that standardises the educational intervention during CR programmes. As a novelty, it proposes embedding self-care education within the exercise sessions to optimise the efficacy of CR and the resources available within the health system. This facilitates the transfer of clinical practice guideline recommendations into care practice. In addition, the participation of patients in the panel is aligned with the recommendations proposed at the European level for the implementation of clinical practice guidelines. Future studies could consider including family members, caregivers and/or close relations in similar panels, drawn from care settings, to generate educational strategies tailored to the needs of different clinical and sociodemographic realities. Among the limitations, the selection of the professionals participating in the expert panel should be noted: they accumulate more years of experience in clinical practice and research but have less experience in management. The lack of management profiles may limit the feasibility of the decalogue for clinical practice. What is known about the topic: Clinical practice guidelines on CR advocate the inclusion of education in self-care and healthy lifestyle habits. Self-care education reduces all-cause hospital readmissions, HF admissions and mortality in patients with HF. What does this study add? A structured response to a need in the routine care of patients with HFpEF, based on a decalogue that standardises the educational intervention during CR programmes. Questions adapted to the clinical context to systematise education and the assessment of competencies within CR programmes for patients with HFpEF. Educational material derived from the contents proposed in the decalogue of competencies to reinforce education in patients with HFpEF.
This study was approved by the Provincial Ethics Committee of Málaga (2198-N-22).
This work was funded by the Fondo de Investigación Sanitaria (FIS; grant PI22/00315) of the Instituto de Salud Carlos III and by the Universidad de Málaga, through the predoctoral contract held by Celia García-Conejo.
Antonio I. Cuesta-Vargas and Celia García-Conejo contributed to the conceptualisation of the study. Celia García-Conejo participated in data collection, analysis and drafting of the manuscript. Estíbaliz Díaz-Balboa and Cristina Roldán-Jiménez contributed to the drafting, adaptation and revision of the document. All authors collaborated in the critical revision of the manuscript.
The authors declare that they have no conflicts of interest.
Persistent cross-species transmission systems dominate Shiga toxin-producing | ca8ce3f7-0144-4646-b2ed-1cc04547e08b | 11778926 | Biochemistry[mh] | Several areas around the globe experience exceptionally high incidence of Shiga toxin-producing Escherichia coli (STEC), including the virulent serotype E. coli O157:H7. These include Scotland, Ireland, Argentina, and the Canadian province of Alberta. All are home to large populations of agricultural ruminants, STEC's primary reservoir. However, there are many regions with similar ruminant populations where STEC incidence is unremarkable. What differentiates high-risk regions is unclear. Moreover, with systematic STEC surveillance only conducted in limited parts of the world, there may be unidentified regions with exceptionally high disease burden. STEC infections can arise from local reservoirs, transmitted through food, water, direct animal contact, or contact with contaminated environmental matrices. The most common reservoirs include domesticated ruminants such as cattle, sheep, and goats. Animal contact and consumption of contaminated meat and dairy products are significant risk factors for STEC, as are consumption of leafy greens, tomatoes, and herbs, and recreational swimming, where these have been contaminated by feces from domestic ruminants. While STEC has been isolated from a variety of other animal species and outbreaks have been linked to species such as deer and swine, it is unclear what roles they play as maintenance or intermediate hosts. STEC infections can be imported through food items traded nationally and internationally, as has been seen with E. coli O157:H7 outbreaks in romaine lettuce from the United States. Secondary transmission is believed to cause approximately 15% of cases, but transmission of the pathogen is not believed to be sustained through person-to-person transmission over the long term. The mix of STEC infection sources in a region directly influences the public health measures needed to control disease burden. Living near cattle and other domesticated ruminants has been linked to STEC incidence, particularly for E. coli O157:H7. These studies suggest an important role for local reservoirs in STEC epidemiology. A comprehensive understanding of STEC's disease ecology would enable more effective investigations into potential local transmission systems and ultimately their control. Here, we take a phylodynamic, genomic epidemiology approach to more precisely discern the role of the cattle reservoir in the dynamics of E. coli O157:H7 human infections. We focus on the high-incidence region of Alberta, Canada, to provide insight into characteristics that make the pathogen particularly prominent in such regions.
Description of isolates
Across the 1215 isolates included in the analyses, we identified 12,273 core genome SNPs. Clade G(vi) constituted 73.6% (n=894) of all isolates. Clade A, which is the most distinct of the E. coli O157:H7 clades, included non-Alberta isolates, two human isolates from Alberta, and no Alberta cattle isolates. The majority of all Alberta isolates belonged to the G(vi) clade (582 of 659; 88.3%), compared to 281 of the 1560 (18.0%) randomly sampled U.S. PulseNet isolates that were successfully assembled and QCed. Among the 62 non-randomly sampled global isolates, only 2 (3.2%) were clade G(vi).
There were 682 (76.3%) clade G(vi) isolates with the stx1a/stx2a profile and 210 (23.5%) with the stx2a-only profile, compared to 2 (0.6%) and 58 (18.1%), respectively, among the 321 isolates outside the G(vi) clade.
The majority of clinical cases evolved from local cattle lineages
In our primary sample of 121 human and 108 cattle isolates from Alberta from 2007 to 2015, SNP distances were comparable between species. Among sampled human cases, 19 (15.7%; 95% CI 9.7%, 23.4%) were within five SNPs of a sampled cattle strain. The median SNP distance between cattle sequences was 45 (IQR 36–56), compared to 54 (IQR 43–229) SNPs between human sequences from cases in Alberta during the same years. The phylogeny generated by our primary structured coalescent analysis indicated cattle were the primary reservoir, with a high probability that the hosts at nodes along the backbone of the tree were cattle. The root was estimated at 1802 (95% HPD 1731, 1861). The most recent common ancestor (MRCA) of clade G(vi) strains in Alberta was inferred to be a cattle strain, dated to 1969 (95% HPD 1959, 1979). With our assumption of a relaxed molecular clock, the mean clock rate for the core genome was estimated at 9.65×10⁻⁵ (95% HPD 8.13×10⁻⁵, 1.13×10⁻⁴) substitutions/site/year. The effective population size, Nₑ, of the human E. coli O157:H7 population was estimated as 1060 (95% HPD 698, 1477), and for cattle as 73 (95% HPD 50, 98). We estimated 108 (95% HPD 104, 112) human lineages arose from cattle lineages, and 14 (95% HPD 5, 23) arose from other human lineages. In other words, 88.5% of human lineages seen in Alberta from 2007 to 2015 arose from cattle lineages. We observed minimal influence of our choice of priors. Our sensitivity analysis of equal numbers of isolates from cattle and humans was largely consistent with our primary results, estimating that 94.3% of human lineages arose from cattle lineages.
Locally persistent lineages account for the majority of ongoing human disease
In our primary analysis, we identified 11 locally persistent lineages (LPLs). After reincorporating down-sampled isolates, LPLs included a range of 5 (G(vi)-AB LPL 9) to 26 isolates (G(vi)-AB LPL 1), with an average of 10. LPL assignment was based on the MCC tree of the combination of four independent chains. LPLs persisted for 5–9 y, with the average LPL spanning 8 y. By definition, MRCAs of each LPL were required to have a posterior probability ≥95% on the MCC tree, and in practice, all had posterior probabilities of 99.7–100%. Additionally, examining all trees sampled from the four chains supported the same major lineages. Our sensitivity analysis of equal numbers of isolates from cattle and humans identified 10 of the same 11 LPLs. G(vi)-AB LPL 9 was no longer identified as an LPL, because it fell below the five-isolate threshold after subsampling. Additionally, G(vi)-AB LPL 8 expanded to include a neighboring branch. LPLs tended to be clustered on the MCC tree.
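The SNP-distance summaries above (medians, IQRs, and the count of human isolates within five SNPs of a cattle isolate) come directly from a pairwise core-SNP distance matrix such as the one produced by PairSNP. Below is a minimal sketch with a hypothetical four-isolate matrix; only the five-SNP threshold is taken from the text, everything else is illustrative:

```python
import numpy as np
from itertools import combinations

# Hypothetical symmetric pairwise core-SNP distance matrix and host labels.
dist = np.array([
    [0, 3, 40, 55],
    [3, 0, 42, 60],
    [40, 42, 0, 48],
    [55, 60, 48, 0],
])
hosts = np.array(["human", "human", "cattle", "cattle"])

def within_group_distances(group):
    idx = np.where(hosts == group)[0]
    return [dist[i, j] for i, j in combinations(idx, 2)]

for group in ("human", "cattle"):
    d = within_group_distances(group)
    print(group, "median:", np.median(d), "IQR:", np.percentile(d, [25, 75]))

# Share of human isolates within 5 SNPs of at least one cattle isolate
h_idx = np.where(hosts == "human")[0]
c_idx = np.where(hosts == "cattle")[0]
near = sum(dist[h, c_idx].min() <= 5 for h in h_idx)
print(f"{near}/{len(h_idx)} human isolates within 5 SNPs of a cattle isolate")
```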
Of the remaining human isolates, 33 (27.3%) were associated with imported infections and 42 (34.7%) with infections from transient local strains. Of the remaining cattle isolates, 11 (10.2%) were imported and 26 (24.1%) were associated with transmission from transient strains. Of the 117 isolates in LPLs, 7 (6.0%) carried only stx2a, and the rest stx1a/stx2a. Among the 112 non-LPL isolates, 1 (0.9%) was stx1a-only, 27 (24.1%) were stx2a-only, 5 (4.5%) were stx2c-only, 68 (60.7%) were stx1a/stx2a, 6 (5.4%) were stx1a/stx2c, and 5 (4.5%) were stx2a/stx2c. To understand long-term persistence, we expanded the phylogeny with additional Alberta Health isolates from 2009 to 2019. Six of the 11 LPLs identified in our primary analysis, G(vi)-AB LPLs 1, 2, 4, 7, 10, and 11, continued to cause disease during the 2016–2019 period. With most cases reported during 2018 and 2019 sequenced, we were able to estimate the proportion of reported E. coli O157:H7 cases associated with LPLs. Of 217 sequenced cases reported during these 2 y, 162 (74.7%; 95% CI 68.3%, 80.3%) arose from Alberta LPLs. The stx profile of LPL isolates shifted compared to the primary analysis, with 83 (51.2%; 95% CI 43.3%, 59.2%) of the LPL isolates encoding only stx2a and the rest stx1a/stx2a. Among the 55 non-LPL isolates during 2018–2019, the stx2c-only profile emerged with 16 (29.1%; 95% CI 17.6%, 42.9%) isolates, stx2a-only was found in six (10.9%; 95% CI 4.1%, 22.2%) isolates, and five (9.1%; 95% CI 3.0%, 20.0%) isolates carried both stx2a and stx2c. All five large (≥10 cases) sequenced outbreaks in Alberta during the study period were within clade G(vi). G(vi)-AB LPLs 2 and 7 gave rise to three large outbreaks, accounting for 117 cases (both sequenced and unsequenced), including 83 from an extended outbreak by a single strain in 2018 and 2019, defined as isolates within five SNPs of one another. The two large outbreaks that did not arise from LPLs both occurred in 2014 and were responsible for 164 cases.
Locally persistent lineages were not imported
Of the 494 U.S. isolates analyzed, nine (1.8%; 95% CI 0.8%, 3.4%) occurred within Alberta LPLs after re-incorporating down-sampled isolates. None of the 62 global isolates were associated with Alberta LPLs. The nine U.S. isolates were part of G(vi)-AB LPLs 2 (n=3), 4 (n=4), 7 (n=1), and 11 (n=1), all of which had Alberta isolates that spanned 9–13 y and predated the U.S. isolates. There was no evidence of U.S. or global ancestors of LPLs. Based on migration events calculated from the structured tree, we estimated that 11.0% of combined human and cattle Alberta lineages were imported. Alberta sequences were separated from U.S. and global sequences by a median of 63 (IQR 45–236) and 225 (IQR 209–249) SNPs, respectively. Including U.S. and global isolates in the phylogeny did not change which LPLs we identified. The minimum number of SNPs by which LPL isolates differed was lower than in the Alberta-only analyses, because the core genome shared by all Alberta, U.S., and global isolates was smaller than that of only the Alberta isolates. Alberta sequences included in some LPLs changed slightly. G(vi)-AB LPL 4 lost three Alberta isolates from clinical cases, and G(vi)-AB LPLs 6 and 11 both lost one cattle and one human isolate from Alberta. In these LPLs, the isolates no longer included were the most outlying isolates in the LPLs defined using only Alberta isolates. Of the 217 Alberta human isolates from 2018 and 2019, 160 (73.7%) were still associated with LPLs after the addition of U.S.
and global isolates, demonstrating the stability of the extended analysis results.
Focusing on a region that experiences an especially high incidence of STEC, we conducted a deep genomic epidemiologic analysis of E. coli O157:H7's multi-host disease dynamics. Our study identified multiple locally evolving lineages transmitted between cattle and humans. These were persistently associated with E. coli O157:H7 illnesses over periods of up to 13 y, the length of our study. Of clinical importance, there was a dramatic shift in the stx profile of the strains arising from locally persistent lineages toward strains carrying only stx2a, which has been associated with increased progression to hemolytic uremic syndrome (HUS). Our study has provided quantitative estimates of cattle-to-human migration in a high-incidence region, the first such estimates of which we are aware. Our estimates are consistent with prior work that established an increased risk of STEC associated with living near cattle. We showed that 88.5% of strains infecting humans arose from cattle lineages. These transitions can be seen as a combination of the infection of humans from local cattle or cattle-related reservoirs in clade G(vi) and the historic evolution of E. coli O157:H7 from cattle in the rare clades. While our findings indicate the majority of human cases arose from cattle lineages, transmission may involve intermediate hosts or environmental sources several steps removed from the cattle reservoir. Small ruminants (e.g. sheep, goats) have been identified as important STEC reservoirs, and Alberta has experienced outbreaks linked to swine. Exchange of strains between cattle and other animals may occur if co-located, if surface water sources near farms are contaminated, and through wildlife, including deer, birds, and flies. Humans can also become infected from environmental sources, such as through swimming in contaminated water. Although transmission systems may be multi-faceted, our analysis demonstrates that local cattle remain an integral part of the transmission system for the vast majority of cases, even when they may not be the immediate source of infection. Indeed, despite our small sample of E. coli O157:H7 isolates from cattle, 15.7% of our human cases were within five SNPs of a cattle isolate, suggesting that cattle were a recent source of transmission, either through direct contact with the animals or their environments or consumption of contaminated food products. The cattle-human transitions we estimated were based on structured coalescent theory, which we used throughout our analyses. This approach is similar to other phylogeographic methods that have previously been applied to E. coli O157:H7. We inferred the full backbone of the Alberta E.
coli O157:H7 phylogeny as arising from cattle, consistent with the postulated global spread of the pathogen via ruminants. Our estimate of the origin of the serotype, at 1802 (95% HPD 1731, 1861), was somewhat earlier than previous estimates, but consistent with global (1890; 95% HPD 1845, 1925) and United Kingdom (1840; 95% HPD 1817, 1855) studies that used comparable methods. Our dating of the G(vi) clade in Alberta to 1969 (95% HPD 1959, 1979) also corresponds to proposed migrations of clade G into Canada from the U.S. in 1965–1977. Our study thus adds to the growing body of work on the larger history of E. coli O157:H7, providing an in-depth examination of the G(vi) clade. Our identification of the 11 locally persistent lineages (LPLs) is significant in demonstrating that the majority of Alberta's reported E. coli O157:H7 illnesses are of local origin. Our definition ensured that every LPL had an Alberta cattle strain and at least five isolates separated by at least 1 y, making the importation of the isolates in a lineage highly unlikely. For an LPL to be fully imported, cattle and human isolates would need to be repeatedly imported from a non-Alberta reservoir where the lineage persisted over several years. Further supporting the evolution of the LPLs within Alberta, all 11 LPLs were in clade G(vi), several were phylogenetically related with MRCAs dating to the late 1990s, and few non-Alberta isolates fell within LPLs. The nine U.S. isolates associated with Alberta LPLs may reflect Alberta cattle that were slaughtered in the U.S. or infections in travelers from the U.S. Thus, we are confident that the identified LPLs represent locally evolving lineages and potential persistent sources of disease. We also showed that the identification of these LPLs was robust to the sampling strategy, with only the smallest LPL failing to be identified after subsampling left it with <5 isolates. We estimated the proportion of E. coli O157:H7 cases that were imported into Alberta in two ways. Based on our LPL analysis, we estimated only 27% of human and 10% of cattle E. coli O157:H7 isolates were imported. This was slightly higher than the overall importation estimate of 11% for all Alberta lineages from our global structured coalescent analysis. Our global structured coalescent analysis also estimated that 3% of lineages in the U.S. and 2% of lineages outside the U.S. and Canada had been exported from Alberta, suggesting that Alberta is not a significant contributor to the global E. coli O157:H7 burden beyond its borders. These results place the E. coli O157:H7 population in Alberta within a larger context, indicating that the majority of disease can be considered local. At least one study has attempted to differentiate local vs. non-local lineages based on travel status, which may be appropriate in some locations but can miss cases imported through food products, such as produce imported from other countries. To our knowledge, our study provides the first comprehensive determination of local vs. imported status for E. coli O157:H7 cases using external reference cases. Similar studies in regions of both high and moderate incidence would provide further insight into the role of localization in E. coli O157:H7 incidence. Of the 11 lineages we identified as LPLs during the 2007–2015 period, six were also associated with cases that occurred during the 2016–2019 period. During the initial period, 38% of human cases were linked to an LPL, and 6% carried only stx2a.
The risk of HUS increases in strains of STEC carrying only stx2a, relative to stx1a/stx2a, meaning the earlier LPL population had fewer high-virulence strains. In 2018 and 2019, the six long-term LPLs were associated with both greater incidence and greater virulence, encompassing 75% of human cases with more than half of LPL isolates carrying only stx2a. The cause of this shift remains unclear, though shifts toward greater virulence in E. coli O157:H7 populations have been seen elsewhere. The growth and diversity of G(vi)-AB LPLs 2, 4, and 7 in the later period suggest these lineages were in stable reservoirs or adapted easily to new niches. Identifying these reservoirs could yield substantial insights into the disease ecology that supports LPL maintenance and opportunities for disease prevention, given the significant portion of illnesses caused by persistent strains. The high proportion of cases associated with cattle-linked local lineages is consistent with what is known about the role of cattle in STEC transmission. Among sporadic STEC infections, 26% have been attributed to animal contact and the farm environment, with a further 19% to pink or raw meat. Similarly, 24% of E. coli O157 outbreaks in the U.S. have been attributed to beef, animal contact, water, or other environmental sources. In Alberta, these are all inherently local exposures, given that 90% of beef consumed in Alberta is produced and/or processed there. Even person-to-person transmission, responsible for 15% of sporadic cases and 16% of outbreaks, includes secondary transmission from cases infected from local sources, which may explain our estimate of 11.5% of human lineages arising from other human lineages. We developed a novel measure of persistence for use in this study, specifically for the purposes of identifying lineages that pose an ongoing threat to public health in a specific region. Persistence has been variably defined in the literature, for example, as shedding of the same strain for at least 4 mo. Most recently, the U.S. CDC identified the first Recurring, Emergent, and Persistent (REP) STEC strain, REPEXH01, an E. coli O157:H7 strain detected since 2017 in over 600 cases. REPEXH01 strains are within 21 allele differences of one another (https://www.cdc.gov/ncezid/dfwed/outbreak-response/rep-strains/repexh01.html), and REP strains from similar enteric pathogens are defined based on allele differences of 13–104. Given that we used high-resolution SNP analysis rather than cgMLST, we used a difference of ≤30 SNPs to define persistent lineages. While both our study and the REPEXH01 strain identified by the CDC indicate that persistent strains of E. coli O157:H7 exist, the O157:H7 serotype was defined as sporadic in a German study using the 4 mo shedding definition. This may be due to strain differences between the two locations, but it might also indicate that persistence occurs at the host community level, rather than the individual host level. Understanding microbial drivers of persistence is an active field of research, with early findings suggesting a correlation of STEC persistence to the accessory genome and traits such as biofilm formation and nutrient metabolism. Our approach to studying persistence was specifically designed for longitudinal sampling in high-incidence regions and may be useful for others attempting to identify sources that disproportionately contribute to disease burden.
Although we used data from the reservoir species to help define the LPLs in this study, we are testing alternate approaches that rely on only routinely collected public health data. We limited our analysis to E. coli O157:H7 despite the growing importance of non-O157 STEC, as historical multi-species collections of non-O157 isolates are lacking. As serogroups differ meaningfully in exposures, our results may not be generalizable beyond the O157 serogroup. However, cattle are still believed to be a primary reservoir for non-O157 STEC, and cattle density is associated with the risk of several non-O157 serogroups. Person-to-person transmission remains a minor contributor to the STEC burden. For all of these reasons, if we were to conduct this analysis in non-O157 STEC, we expect the majority of human lineages would arise from cattle lineages. Additionally, persistence within the cattle reservoir has been observed for a range of serogroups, suggesting that LPLs also likely exist among non-O157 STEC. Our findings may have implications beyond STEC, as well. Other zoonotic enteric pathogens such as Salmonella and Campylobacter can persist, and outbreaks are regularly linked to localized animal populations and produce-growing operations contaminated by animal feces. The U.S. CDC has also defined REP strains for these pathogens. LPLs could shed light on how and where persistent strains are proliferating, and thus where they can be controlled. The identification of LPLs serves multiple purposes, because they suggest the existence of local reservoir communities that maintain specific strains for long periods. First, they further our understanding of the complex systems that allow STEC to persist. In this study, the LPLs we identified persisted for 5–13 y. The reservoir communities that enable persistence could involve other domestic and wild animals previously found to carry STEC. The feedlot environment also likely plays an important role in persistence, as water troughs and pen floors have been identified as important sources of STEC for cattle. Identifying LPLs is a first step in identifying these reservoir communities and determining what factors enable strains to persist, so as to identify them for targeted interventions. Second, the identification of these LPLs in cattle could identify the specific local reservoirs of STEC. Similar to source tracing in response to outbreaks, LPLs provide an opportunity for cattle growers to identify cattle carrying the specific strains that are associated with a large share of human disease in Alberta. While routinely vaccinating against STEC has not been shown to be efficacious or cost-effective, a ring-type vaccination strategy in response to an identified LPL isolate could overcome the limitations of previous vaccination strategies. Third, the identification of new clinical cases infected with LPL strains could help direct public health investigations toward local sources of transmission. Finally, the disease burden associated with LPLs could be compared across locations and may help explain how high-incidence regions differ from regions with lower incidence. Our analysis was limited to only cattle and humans. Had isolates from a wider range of potential reservoirs been available, we would have been able to elucidate more clearly the roles that various hosts and common sources of infection play in local transmission. Additional hosts may help explain the one predicted human-to-cattle transmission, which could be erroneous.
As with all studies utilizing public health data, sampling from only severe cases of disease is biased toward clinical isolates. In theory, this could limit the genetic variation among human isolates if virulence is associated with specific lineages. However, clinical isolates were more variable than cattle isolates, dominating the most divergent clade A, so the overrepresentation of severe cases does not appear to have appreciably biased the current study. Similarly, in initially selecting an equal number of human and cattle isolates, we sampled a larger proportion of the human-infecting E. coli O157:H7 population compared to the population that colonizes cattle. As cattle are the primary reservoir of E. coli O157:H7, the pathogen is more prevalent in cattle than in humans, who appear to play a limited role in sustained transmission. In sampling a larger proportion of the strains that infect humans, we likely sampled a wider diversity of these strains compared to those in cattle, which could have biased the analysis toward finding humans as the ancestral host. Thus, the proportion of human lineages arising from cattle lineages (88.5%) might be underestimated, which is also suggested by our sensitivity analysis of equal numbers of cattle and clinical isolates. Finally, we were not able to estimate the impact of strain migration between Alberta and the rest of Canada, because locational metadata for publicly available E. coli O157:H7 sequences from Canada was limited. E. coli O157:H7 infections are a pressing public health problem in many high-incidence regions around the world including Alberta, where a recent childcare outbreak caused >300 illnesses. In the majority of sporadic cases, and even many outbreaks, the source of infection is unknown, making it critical to understand the disease ecology of E. coli O157:H7 at a system level. Here, we have identified a high proportion of human cases arising from cattle lineages and a low proportion of imported cases. Local transmission systems, including intermediate hosts and environmental reservoirs, need to be elucidated to develop management strategies that reduce the risk of STEC infection. In Alberta, local transmission is dominated by a single clade, and over the extended study period, persistent lineages caused an increasing proportion of disease. The local lineages with long-term persistence are of particular concern because of their increasing virulence, yet they also present opportunities as larger, more stable targets for reservoir identification and control.
Study design and population
We conducted a multi-host genomic epidemiology study in Alberta, Canada. Our primary analysis focused on 2007–2015 due to the availability of isolates from intensive provincial cattle studies. These studies collected rectal fecal samples from individual animals, hide swabs, fecal pats from the floors of pens of commercial feedlot cattle, or feces from the floors of transport trailers. In studies of pens of cattle, samples were collected from the same cattle at least twice over a 4 to 6 mo period. A one-time composite sample was collected from cattle in transport trailers, which originated from feedlots or auction markets in Alberta. To select both cattle and human isolates, we block randomized by year to ensure representation across the period. We define isolates as single bacterial species obtained from culture. We sampled 123 E. coli O157 cattle isolates from 4660 available.
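The block randomization by year described above can be sketched as follows. This is a minimal illustration, not the authors' actual selection code: the metadata frame, the simulated year spread, and the near-equal allocation rule across year blocks are all assumptions.

```python
import pandas as pd

# Hypothetical isolate metadata; in the study this would list all available
# cattle or human E. coli O157 isolates with their collection year.
isolates = pd.DataFrame({
    "isolate_id": [f"iso{i:04d}" for i in range(1, 1001)],
    "year": [2007 + i % 9 for i in range(1000)],   # spans 2007-2015
})

TOTAL = 123  # isolates to draw, as in the primary cattle sample

# Block randomization by year: allocate the sample across year blocks so
# every year is represented, then draw at random within each block.
years = sorted(isolates["year"].unique())
per_block = TOTAL // len(years)
remainder = TOTAL % len(years)

selected = []
for i, y in enumerate(years):
    n = per_block + (1 if i < remainder else 0)
    block = isolates[isolates["year"] == y]
    selected.append(block.sample(n=min(n, len(block)), random_state=42))

sample = pd.concat(selected)
print(sample["year"].value_counts().sort_index())
```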
Selected cattle isolates represented 7 of 12 cattle study sites and 56 of 89 sampling occasions from the source studies. We sampled 123 of 1148 E. coli O157 isolates collected from cases reported to the provincial health authority (Alberta Health) during the corresponding time period (Appendix 1). In addition to the 246 isolates for the primary analysis, we contextualized our findings with two additional sets of E. coli O157:H7 isolates: 445 from Alberta Health, collected from 2009 to 2019 and already sequenced as part of other public health activities, and 1970 from the U.S. and elsewhere around the world between 1999 and 2019. The additional Alberta Health isolates were sequenced by the National Microbiology Laboratory (NML)-Public Health Agency of Canada (Winnipeg, Manitoba, Canada) as part of PulseNet Canada activities. Isolates sequenced by the NML for 2018 and 2019 constituted the majority of reported E. coli O157:H7 cases for those years (217 of 247; 87.9%). U.S. and global isolates from both cattle and humans were identified from previous literature (n=104) and BV-BRC (n=193). As both processed beef and live cattle are frequently imported into Alberta from the U.S., we selected additional E. coli O157:H7 sequences available through the U.S. CDC's PulseNet BioProject PRJNA218110. From 2010–2019, 6,791 O157:H7 whole genome sequences were available from the U.S. PulseNet project, 1673 (25%) of which we randomly selected for assembly and clade typing. This study was approved by the University of Calgary Conjoint Health Research Ethics Board, #REB19-0510. A waiver of consent was granted, and all case data were deidentified.
Whole genome sequencing, assembly, and initial phylogeny
The 246 isolates for the primary analysis were sequenced using Illumina NovaSeq 6000 and assembled into contigs using the Unicycler v04.9 pipeline, as described previously (BioProject PRJNA870153). Raw read FASTQ files were obtained from Alberta Health for the additional 445 isolates sequenced by the NML and from NCBI for the 152 U.S. and 54 global sequences. We used the SRA Toolkit v3.0.0 to download sequences for U.S. and global isolates using their BioSample (i.e. SAMN) numbers. The corresponding FASTQ files could not be obtained for the six U.S. and seven global isolates we had selected. PopPUNK v2.5.0 was used to cluster Alberta isolates and identify any outside the O157:H7 genomic cluster. For assembling and quality checking (QC) all sequences, we used the Bactopia v3.0.0 pipeline. This pipeline performed an initial QC step on the reads using FastQC v0.12.1, which evaluated read count, sequence coverage, and sequence depth, with failed reads excluded from subsequent assembly. None of the isolates were eliminated during this step for low read quality. We used the Shovill v1.1.0 assembler within the Bactopia pipeline to de novo assemble the Unicycler contigs for the primary analysis and raw reads from the supplementary datasets. Trimmomatic was run as part of Shovill to trim adapters and read ends with quality lower than six and discard reads after trimming with overall quality scores lower than 10. Bactopia generated a quality report on the assemblies, which we assessed based on number of contigs (<500), genome size (≥5.1 Mb), N50 (>30,000), and L50 (≤50). Low-quality assemblies were removed. This included one U.S.
sequence, for which two FASTQ files had been attached to a single BioSample identifier; the other sequence for the isolate passed all quality checks and remained in the analysis. Additionally, 16 sequences from the primary analysis dataset and four from the extended Alberta data had a total length of <5.1 Mb. These sequences corresponded exactly to those identified by the PopPUNK analysis to be outside the primary E. coli O157:H7 genomic cluster. Finally, although all isolates were believed to be of cattle or clinical origin during the initial selection, a detailed metadata review identified one isolate of environmental origin in the primary analysis dataset and eight that had been isolated from food items in the extended Alberta data. These were excluded. We used STECFinder v1.1.0 to determine the Shiga toxin gene (stx) profile and confirm the E. coli O157:H7 serotype using the wzy or wzx O157 O-antigen genes and detection of the H7 H-antigen. Bactopia's Snippy workflow, which incorporates Snippy v4.6.0, Gubbins v3.3.0, and IQTree v2.2.2.7, followed by SNP-Sites v2.5.1, was used to generate a core genome SNP alignment with recombinant blocks removed. The maximum likelihood phylogeny of the core genome SNP alignment generated by IQTree was visualized in Microreact v251. The number of core SNPs between isolates was calculated using PairSNP v0.3.1. Clade was determined based on the presence of at least one defining SNP for the clade, as published previously. Isolates were identified to the clade level, except for clade G where we separated out subclade G(vi). After processing, we had 229 isolates (121 human, 108 cattle) in our primary sample and 430 additional Alberta Health isolates. We had 178 U.S. or global isolates from previous literature (n=88; U.S. n=41, global n=47) and BV-BRC (n=90; U.S. n=75, global n=15). Of the 1673 isolates randomly sampled from the U.S. PulseNet project, 1560 were successfully assembled and passed QC. These included 309 clade G isolates, all of which we included in the analysis; we also randomly sampled and included 69 non-clade G isolates from this sample.
Phylodynamic and statistical analyses
For our primary analysis, we created a timed phylogeny, a phylogenetic tree on the scale of time, in BEAST2 v2.6.7 using the structured coalescent model in the Mascot v3.0.0 package with demes for cattle and humans. Sequences were down-sampled prior to analysis if within 0–2 SNPs and <3 mo from another sequence from the same host type, leaving 115 human and 84 cattle isolates in the primary analysis. The analysis was run using four different seeds to confirm that all converged to the same solution, and tree files were combined before generating a maximum clade credibility (MCC) tree. State transitions between cattle and human isolates over the entirety of the tree, with their 95% highest posterior density (HPD) intervals, were also calculated from the combined tree files. We determined the influence of the prior assumptions on the analysis with a run that sampled from the prior distribution (Appendix 1). We conducted a sensitivity analysis in which we randomly subsampled 84 of the human isolates so that both species had the same number of isolates in the analysis. LPLs were identified based on the following criteria: (1) a single lineage of the MCC tree with a most recent common ancestor (MRCA) with ≥95% posterior probability; (2) all isolates ≤30 core SNPs from one another; (3) contained at least 1 cattle isolate; (4) contained ≥5 isolates; and (5) the isolates were collected at sampling events (for cattle) or reported (for humans) over a period of at least 1 y.
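These five criteria translate directly into a filter over candidate clades of the MCC tree. The following is a minimal sketch under assumed inputs (per-clade posterior support, a pairwise core-SNP matrix, host labels, and collection dates); it illustrates the stated definition rather than the authors' actual tooling.

```python
from datetime import date
import numpy as np

def is_lpl(posterior, snp_matrix, hosts, dates,
           min_posterior=0.95, max_snps=30, min_isolates=5):
    """Check the five locally persistent lineage (LPL) criteria for one
    candidate lineage (a clade from the MCC tree)."""
    snp_matrix = np.asarray(snp_matrix)
    if posterior < min_posterior:                 # (1) MRCA support >=95%
        return False
    if snp_matrix.max() > max_snps:               # (2) all pairs <=30 core SNPs
        return False
    if "cattle" not in hosts:                     # (3) at least one cattle isolate
        return False
    if len(hosts) < min_isolates:                 # (4) at least five isolates
        return False
    span_days = (max(dates) - min(dates)).days    # (5) span of at least one year
    return span_days >= 365

# Illustrative candidate lineage (symmetric distance matrix, hypothetical data)
demo_snps = [
    [0, 12, 25, 18, 9],
    [12, 0, 18, 14, 11],
    [25, 18, 0, 22, 27],
    [18, 14, 22, 0, 16],
    [9, 11, 27, 16, 0],
]
print(is_lpl(
    posterior=0.999,
    snp_matrix=demo_snps,
    hosts=["cattle", "human", "human", "human", "cattle"],
    dates=[date(2008, 5, 1), date(2009, 7, 15), date(2012, 3, 2),
           date(2013, 6, 9), date(2015, 1, 20)],
))  # True: all five criteria are met
```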
We counted the number of isolates associated with LPLs, including those down-sampled prior to the phylodynamic analysis. We conducted sensitivity analyses examining different SNP thresholds for the LPL definition. From non-LPL isolates, we estimated the number of local transient isolates vs. imported isolates. For the 121 human E. coli O157:H7 isolates in the primary sample prior to down-sampling, we determined what portion belonged to locally persistent lineages and what portion was likely to be from local transient E. coli O157:H7 populations vs. imported. Human isolates within the LPLs were enumerated (n=46). The 75 human isolates outside LPLs included 56 clade G(vi) isolates and 19 non-G(vi) isolates. Based on the MCC tree from the primary analysis, none of the non-G(vi) human isolates were likely to have been closely related to an isolate from the Alberta cattle population, suggesting that all 19 were imported. As a proportion of all non-LPL human isolates, these 19 constituted 25.3%. While it may be possible that all clade G(vi) isolates were part of a local evolving lineage, it is also possible that the exchange of both cattle and food from other locations was causing the regular importation of clade G(vi) strains and infections. Thus, we used the proportion of non-LPL human isolates outside the G(vi) clade to estimate the proportion of non-LPL human isolates within the G(vi) clade that were imported; i.e., 56 × 25.3 % = 14 . We then conducted a similar exercise for cattle isolates. To contextualize our results in terms of the ongoing human disease burden, we created a timed phylogeny using a constant, unstructured coalescent model of the 199 Alberta isolates from the primary analysis and the additional Alberta Health isolates . The two sets of sequences were combined and down-sampled, leaving 272 human and 84 cattle isolates . We identified LPLs as above, and leveraged the near-complete sequencing of isolates from 2018 and 2019 to calculate the proportion of reported human cases associated with LPLs. Finally, we created a timed phylogeny of Alberta, U.S., and global from 1996 to 2019 to test whether the LPLs were linked to ancestors from locations outside Canada . Due to the size of this tree, we created both unstructured and structured versions. Clade A isolates were excluded due to their small number in Alberta and high level of divergence from other E. coli O157:H7 clades. Down-sampling was conducted separately by species and location. The phylogeny included 358 Alberta, 350 U.S., and 61 global isolates after down-sampling. All BEAST2 analyses were run for 100,000,000 Markov chain Monte Carlo iterations or until all parameters converged with effective sample sizes >200, whichever was longer. Exact binomial 95% confidence intervals (CIs) were computed for proportions. We conducted a multi-host genomic epidemiology study in Alberta, Canada. Our primary analysis focused on 2007–2015 due to the availability of isolates from intensive provincial cattle studies . These studies rectally sampled feces from individual animals, hide swabs, fecal pats from the floors of pens of commercial feedlot cattle, or feces from the floors of transport trailers. In studies of pens of cattle, samples were collected from the same cattle at least twice over a 4 to 6 mo period. A one-time composite sample was collected from cattle in transport trailers, which originated from feedlots or auction markets in Alberta. 
To select both cattle and human isolates, we block randomized by year to ensure representation across the period. We define isolates as single bacterial species obtained from culture. We sampled 123 E. coli O157 cattle isolates from 4660 available. Selected cattle isolates represented 7 of 12 cattle study sites and 56 of 89 sampling occasions from the source studies . We sampled 123 of 1148 E. coli O157 isolates collected from cases reported to the provincial health authority (Alberta Health) during the corresponding time period (Appendix 1). In addition to the 246 isolates for the primary analysis, we contextualized our findings with two additional sets of E. coli O157:H7 isolates : 445 from Alberta Health from 2009 to 2019 and already sequenced as part of other public health activities and 1970 from the U.S. and elsewhere around the world between 1999 and 2019. The additional Alberta Health isolates were sequenced by the National Microbiology Laboratory (NML)-Public Health Agency of Canada (Winnipeg, Manitoba, Canada) as part of PulseNet Canada activities. Isolates sequenced by the NML for 2018 and 2019 constituted the majority of reported E. coli O157:H7 cases for those years (217 of 247; 87.9%). U.S. and global isolates from both cattle and humans were identified from previous literature (n=104) and BV-BRC (n=193). As both processed beef and live cattle are frequently imported into Alberta from the U.S., we selected additional E. coli O157:H7 sequences available through the U.S. CDC’s PulseNet BioProject PRJNA218110. From 2010–2019, 6,791 O157:H7 whole genome sequences were available from the U.S. PulseNet project, 1673 (25%) of which we randomly selected for assembly and clade typing. This study was approved by the University of Calgary Conjoint Health Research Ethics Board, #REB19-0510. A waiver of consent was granted, and all case data were deidentified. The 246 isolates for the primary analysis were sequenced using Illumina NovaSeq 6000 and assembled into contigs using the Unicycler v04.9 pipeline, as described previously (BioProject PRJNA870153) . Raw read FASTQ files were obtained from Alberta Health for the additional 445 isolates sequenced by the NML and from NCBI for the 152 U.S. and 54 global sequences. We used the SRA Toolkit v3.0.0 to download sequences for U.S. and global isolates using their BioSample (i.e. SAMN) numbers. The corresponding FASTQ files could not be obtained for the six U.S. and seven global isolates we had selected . PopPUNK v2.5.0 was used to cluster Alberta isolates and identify any outside the O157:H7 genomic cluster . For assembling and quality checking (QC) all sequences, we used the Bactopia v3.0.0 pipeline . This pipeline performed an initial QC step on the reads using FastQC v0.12.1, which evaluated read count, sequence coverage, and sequence depth, with failed reads excluded from subsequent assembly. None of the isolates were eliminated during this step for low read quality. We used the Shovill v1.1.0 assembler within the Bactopia pipeline to de novo assemble the Unicycler contigs for the primary analysis and raw reads from the supplementary datasets. Trimmomatic was run as part of Shovill to trim adapters and read ends with quality lower than six and discard reads after trimming with overall quality scores lower than 10. Bactopia generated a quality report on the assemblies, which we assessed based on number of contigs (<500), genome size (≥5.1 Mb), N50 (>30,000), and L50 (≤50). Low-quality assemblies were removed. This included one U.S. 
sequence, for which two FASTQ files had been attached to a single BioSample identifier; the other sequence for the isolate passed all quality checks and remained in the analysis. Additionally, 16 sequences from the primary analysis dataset and four from the extended Alberta data had a total length of <5.1 Mb. These sequences corresponded exactly to those identified by the PopPUNK analysis to be outside the primary E. coli O157:H7 genomic cluster . Finally, although all isolates were believed to be of cattle or clinical origin during the initial selection, a detailed metadata review identified one isolate of environmental origin in the primary analysis dataset and eight that had been isolated from food items in the extended Alberta data. These were excluded. We used STECFinder v1.1.0 to determine the Shiga toxin gene ( stx ) profile and confirm the E. coli O157:H7 serotype using the wzy or wzx O157 O-antigen genes and detection of the H7 H-antigen. Bactopia’s Snippy workflow, which incorporates Snippy v4.6.0, Gubbins v3.3.0, and IQTree v2.2.2.7, followed by SNP-Sites v2.5.1, was used to generate a core genome SNP alignment with recombinant blocks removed. The maximum likelihood phylogeny of the core genome SNP alignment generated by IQTree was visualized in Microreact v251. The number of core SNPs between isolates was calculated using PairSNP v0.3.1. Clade was determined based on the presence of at least one defining SNP for the clade as published previously Isolates were identified to the clade level, except for clade G where we separated out subclade G(vi). After processing, we had 229 isolates (121 human, 108 cattle) in our primary sample and 430 additional Alberta Health isolates . We had 178 U.S. or global isolates from previous literature (n=88; U.S. n=41, global n=47) and BV-BRC (n=90; U.S. n=75, global n=15). Of the 1673 isolates randomly sampled from the U.S. PulseNet project, 1560 were successfully assembled and passed QC. These included 309 clade G isolates, all of which we included in the analysis; we also randomly sampled and included 69 non-clade G isolates from this sample. For our primary analysis, we created a timed phylogeny, a phylogenetic tree on the scale of time, in BEAST2 v2.6.7 using the structured coalescent model in the Mascot v3.0.0 package with demes for cattle and humans . Sequences were down-sampled prior to analysis if within 0–2 SNPs and <3 m from another sequence from the same host type, leaving 115 human and 84 cattle isolates in the primary analysis . The analysis was run using four different seeds to confirm that all converged to the same solution, and tree files were combined before generating a maximum clade credibility (MCC) tree. State transitions between cattle and human isolates over the entirety of the tree, with their 95% highest posterior density (HPD) intervals, were also calculated from the combined tree files. We determined the influence of the prior assumptions on the analysis with a run that sampled from the prior distribution (Appendix 1). We conducted a sensitivity analysis in which we randomly subsampled 84 of the human isolates so that both species had the same number of isolates in the analysis. 
LPLs were identified based on the following criteria: (1) a single lineage of the MCC tree with a most recent common ancestor (MRCA) with ≥95% posterior probability; (2) all isolates within ≤30 core SNPs of one another; (3) at least one cattle isolate; (4) at least five isolates; and (5) isolates collected at sampling events (for cattle) or reported (for humans) over a period of at least 1 year. We counted the number of isolates associated with LPLs, including those down-sampled prior to the phylodynamic analysis. We conducted sensitivity analyses examining different SNP thresholds for the LPL definition.

From the non-LPL isolates, we estimated the number of local transient isolates versus imported isolates. For the 121 human E. coli O157:H7 isolates in the primary sample prior to down-sampling, we determined what proportion belonged to locally persistent lineages and what proportion was likely to derive from local transient E. coli O157:H7 populations versus importation. Human isolates within the LPLs were enumerated (n=46). The 75 human isolates outside LPLs included 56 clade G(vi) isolates and 19 non-G(vi) isolates. Based on the MCC tree from the primary analysis, none of the non-G(vi) human isolates was likely to have been closely related to an isolate from the Alberta cattle population, suggesting that all 19 were imported. These 19 constituted 25.3% of all non-LPL human isolates. While it is possible that all clade G(vi) isolates were part of a local evolving lineage, it is also possible that the exchange of both cattle and food with other locations was causing the regular importation of clade G(vi) strains and infections. We therefore used the proportion of non-LPL human isolates outside the G(vi) clade to estimate the proportion of non-LPL human isolates within the G(vi) clade that were imported; i.e., 56 × 25.3% ≈ 14. We then conducted a similar exercise for the cattle isolates.

To contextualize our results in terms of the ongoing human disease burden, we created a timed phylogeny of the 199 Alberta isolates from the primary analysis and the additional Alberta Health isolates, using a constant, unstructured coalescent model. The two sets of sequences were combined and down-sampled, leaving 272 human and 84 cattle isolates. We identified LPLs as above, and leveraged the near-complete sequencing of isolates from 2018 and 2019 to calculate the proportion of reported human cases associated with LPLs.

Finally, we created a timed phylogeny of Alberta, U.S. and global isolates from 1996 to 2019 to test whether the LPLs were linked to ancestors from locations outside Canada. Due to the size of this tree, we created both unstructured and structured versions. Clade A isolates were excluded due to their small number in Alberta and their high level of divergence from the other E. coli O157:H7 clades. Down-sampling was conducted separately by species and location. The phylogeny included 358 Alberta, 350 U.S. and 61 global isolates after down-sampling. All BEAST2 analyses were run for 100,000,000 Markov chain Monte Carlo iterations or until all parameters converged with effective sample sizes >200, whichever was longer. Exact binomial 95% confidence intervals (CIs) were computed for proportions.
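The import-proportion arithmetic and the exact binomial intervals mentioned above can be reproduced in a few lines of Python. This is an illustrative reconstruction using the counts given in the text, with SciPy's Clopper-Pearson ("exact") interval standing in for whatever software the authors actually used.

```python
# Reconstruction of the import-proportion estimate for human isolates
# outside locally persistent lineages, with an exact (Clopper-Pearson)
# 95% CI as described in the text. Requires scipy >= 1.7.
from scipy.stats import binomtest

non_lpl_non_gvi = 19   # non-G(vi) human isolates outside LPLs, all imported
non_lpl_total = 75     # all human isolates outside LPLs
gvi_isolates = 56      # clade G(vi) human isolates outside LPLs

p_imported = non_lpl_non_gvi / non_lpl_total            # 19/75 = 25.3%
ci = binomtest(non_lpl_non_gvi, non_lpl_total).proportion_ci(
    confidence_level=0.95, method="exact"
)
est_gvi_imported = gvi_isolates * p_imported            # 56 x 0.253 ~= 14

print(f"Imported proportion: {p_imported:.1%} "
      f"(95% CI {ci.low:.1%}-{ci.high:.1%})")
print(f"Estimated imported G(vi) isolates: {est_gvi_imported:.0f}")
```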
The implementation process of the Confident Birth method in Swedish antenatal education: opportunities, obstacles and recommendations

To optimize high-quality maternity care, Sweden has a long tradition of providing antenatal education, including health promotion and birth preparatory courses ( ; ). This education is an essential component of antenatal care ( ; ), and its provision is in line with international and national guidelines ( ; ). The goals of antenatal education vary. However, according to a Cochrane review ( ), a common goal is to build women's confidence in their ability to give birth and to prepare expecting parents for childbirth and parenting. Because of the varied content and aims of antenatal education classes, it has proven difficult to evaluate and measure their usefulness ( ). Despite these difficulties, some positive emotional effects of antenatal education have been identified, such as decreased anxiety for the mother and increased involvement of the partner during birth ( ). Similarly, higher confidence in the ability to cope at home during labour and to handle the birth process has been reported ( ). Although the effects of antenatal education are largely unknown ( ), a long-term follow-up of a randomized controlled trial showed that those who had undergone a well-structured antenatal education programme reported a more positive birth experience 5 years after childbirth than those who had participated in no programme, or in a more conventional, less structured one ( ).

The content and structure of antenatal education have shifted over time, depending on trends in society and in maternal healthcare ( ; ; ). One such trend in Sweden has involved psychoprophylaxis, which focuses on teaching breathing and relaxation techniques. A randomized controlled trial ( ) found no benefits for obstetric outcomes and no improvement of the childbirth experience in women using psychoprophylaxis during birth. Yet there is some evidence suggesting that relaxation techniques may improve pain management during labour and childbirth ( ). Although there are similarities between antenatal education programmes and psychoprophylaxis courses, they have been developed in different cultural contexts and with different content.

A birth preparatory method, Confident Birth, has been developed in a Swedish context. According to its developer ( ), the Confident Birth method was developed based on coping strategies connected to respiratory and stress physiology and activation of the parasympathetic system. The purpose of the method is to strengthen the mother's inherent physical and emotional capacity through support by a companion of choice, striving to achieve an emotionally safe and confident birth. The support of a companion of choice is a central part of the method ( ). Companionship of choice during birth is defined as the continuous presence of a support person during labour and birth ( ). The Confident Birth method consists of four central components: breathing, relaxation, sound and the mind. In recent years, the method has rapidly gained a foothold in public antenatal clinics and maternity departments across Sweden. Recent research has suggested that future studies on companionship methods, such as the Confident Birth method, should consider factors that may affect the process and context of implementation ( ; ).
To our knowledge, there are no published scientific studies on the Confident Birth method or its implementation. Thus, the aim of this study is to investigate the perceptions of midwives and first line managers regarding the Confident Birth method and to identify opportunities and obstacles in its implementation.
Study design

A qualitative research design was used, which is useful when little is known about the subject ( ), such as the perception of the Confident Birth method and its implementation. Data were collected through semi-structured individual interviews with midwives participating in instructor training in this method, and with first line managers in antenatal healthcare who were involved in the method's implementation. The data were analysed using content analysis with a deductive approach inspired by Elo and Kyngas ( ). The Consolidated Framework for Implementation Research (CFIR) ( ) was used as a theoretical framework to guide the data analysis and the description of the results.

Theoretical framework

To understand the participants' perceptions of the Confident Birth method and to identify opportunities and obstacles in its implementation, the CFIR framework ( ) was applied during data analysis and the description of the results. The CFIR, a synthesis of concepts described in 19 existing implementation frameworks, models and theories, has previously been applied in research areas such as healthcare science and clinical management research ( ; ; ). The CFIR describes different aspects to consider during the implementation process ( ; ), and defines implementation as a constellation of processes intended to bring an intervention into use in an organization. The CFIR consists of five main domains and 39 sub-domains: (i) intervention characteristics (features of an intervention that might influence implementation); (ii) inner settings (features of the implementing organization that might influence implementation); (iii) outer settings (features of the external context or environment that might influence implementation); (iv) characteristics of individuals (characteristics of the individuals involved that might influence implementation); and (v) process (strategies or tactics that might influence implementation). These domains interact in complex ways to influence implementation, and the sub-domains sometimes overlap; therefore, not all sub-domains need to be used ( ). In this paper, the studied implementation process includes the four steps necessary to become an instructor in the Confident Birth method, and the aspects influencing the delivery of the education to the expecting parents. For an overview of how the CFIR was applied in this study, see .

Setting

At the time of the study, the participants were working for the largest public primary healthcare provider in western Sweden, comprising 69 antenatal clinics. Antenatal healthcare in this region is organized in a healthcare choice system with both publicly and privately owned clinics, all governmentally financed and free of charge. Midwives are the key care providers during pregnancy; they provide full antenatal care, identify high-risk pregnancies and make referrals to medical specialists when necessary. In an effort to improve antenatal education, 19 of these clinics invested in instructor training for midwives in the Confident Birth method in 2018.
The instructor training consisted of four parts:

1. A reading part, in which literature related to the method was studied;
2. A structured web-based training part, describing each part of the method in preparation for the upcoming 3-day workshop;
3. A 3-day workshop, consisting of practical exercises and instruction in how to deliver the content of the method's manuscript;
4. A practical training part, in which each midwife had to independently hold two Confident Birth courses for 6–10 expecting parents.

The midwives who underwent the instructor training in the Confident Birth method are expected by their employer to hold one course a month for expecting parents. Confident Birth classes are offered free of charge to all first-time mothers with partners, and to women with secondary fear of childbirth as well as their partners (if capacity allows). The clinics offer 32–33 Confident Birth courses per month, with seven to nine couples per occasion; in total, approximately 240 couples undergo the Confident Birth course every month.

Participants and data collection

Twenty-eight midwives working at the 19 antenatal clinics underwent instructor training in the method. They, and nine first line managers who held employer responsibility for the midwives, were invited via e-mail to take part in the study. Fifteen agreed to participate and two declined, while the remaining 20 did not reply to the request. Sixteen interviews were conducted with 15 participants: 10 instructor midwives and 5 first line managers (for details, see ). One participant was interviewed twice, the second time approximately 6 months after the first, to obtain complementary information. All participants were registered midwives and represented 13 of the 19 clinics. The first author (S.J.) conducted the first six interviews in November and December 2018, and the two first authors (S.J. and S.F.) conducted the second round of nine interviews in April 2019. Both first line managers and instructor midwives were interviewed to gain a deeper understanding of the implementation of the Confident Birth method.

The study was approved by the management of the public primary healthcare provider. Before the interview, all participants were given verbal and written information about the study and their right to confidentiality in accordance with the Helsinki Declaration ( ). They were informed that participation was voluntary, and that they could refrain from answering questions or discontinue the interview and withdraw their consent at any time without having to give a reason. Written consent was obtained from all participants before the interviews commenced ( ).

An interview guide with open-ended questions was developed to ensure that the data collected answered the aim of the study. The guide prompted respondents to discuss their experiences of undertaking the instructor training course in the Confident Birth method, followed by questions regarding opportunities and obstacles in its implementation. Questions about specific CFIR sub-domains ( ) were not asked; rather, the questions concerned the participants' perceptions of the Confident Birth method, the instructor training and the implementation of the method, followed by probing questions such as 'Do you have any more examples?' and 'Can you elaborate?'. Data collection continued until the interviewers sensed that no new information was emerging. All interviews were held in Swedish at a location chosen by the participant, were audio-recorded, and lasted around 25–50 min.
Analysis

The interviews were transcribed verbatim and analysed by S.J. and S.F. using qualitative content analysis with a deductive approach inspired by Elo and Kyngas ( ). In the first phase, the transcriptions were read multiple times to make sense of the data as a whole. In the second phase, meaning units corresponding to the aim of the study were identified; all content that answered the questions What are the instructor midwives' and first line managers' perceptions of the Confident Birth method? and What are the opportunities and obstacles in the implementation? was marked. In the third phase, the meaning units were organized according to the five main domains of the CFIR. In the fourth phase, the sorted meaning units were re-read and organized into 11 of the 39 possible CFIR sub-domains. The authors continually checked that the data used in the analysis accorded with the transcriptions, to ensure the trustworthiness of the study ( ). Finally, after several refinements of the analysis, a consensus on the reported results was reached among all authors.
The results are presented according to the CFIR's five main domains and 11 of its 39 sub-domains. When the organization is mentioned in this study, this refers to the 19 antenatal clinics that invested in Confident Birth, comprising instructor midwives and first line managers. In the results, we mostly refer to both the instructor midwives and the first line managers simply as participants, unless it is relevant to contrast differences in their perceptions based on their roles.

Intervention characteristics

Evidence strength and quality

The participants' perceptions of the Confident Birth method were similar. They described the method as simple, logical and built on physiology, and their perception was that expecting parents became more confident after completing the programme. Knowledge about the method varied depending on whether the participant was an instructor midwife or a first line manager. The training had equipped the midwives with deeper theoretical knowledge and pedagogical skills, including an increased understanding of the human body's physiology and its impact on emotions, and of the mediation of coping strategies during childbirth. The midwives stressed that using the method made them feel strengthened and proud of their work, as they felt that it enabled them to practise genuine midwifery. The first line managers' knowledge about the method came largely from a previously conducted training day for all staff in the organization:

A great course that involves women, men, partners, support persons getting knowledge based on physiology. What happens in our bodies due to different emotions. (Instructor midwife 3)

Adaptability

In accordance with the course manual, being a course instructor required that the midwives strictly follow the Confident Birth manuscript; any editing of the content was prohibited. Some midwives perceived this inflexibility as challenging. Others saw the rigidness as a guiding structure that was easy to follow, ensuring that everyone presented the same content in the same way, with limited risk of losing the core concept:

It's a fairly strict concept, which I think is good. It's a clear framework guiding the core concept that needs to be presented. (Instructor midwife 4)

The trouble with the manuscript was that we were told to memorise it verbatim. That's what was hammered in all the time, and that's what was communicated [by the founder] by e-mail, to memorise it verbatim — which I think was a cardinal mistake, because it would have been much better if we'd mastered it ourselves and used our own words. (Instructor midwife 10)

Outer settings

Patients' needs and resources

An understanding of the demands, resources and needs of the expecting parents who would be the beneficiaries of the Confident Birth method was seen as an opportunity. Key needs of the expecting parents included strategies for coping with the upcoming birth, for both the pregnant woman and her companion of choice, needs that were perceived as having been met, as reflected in the course's high attendance rate. A consequence of the high demand, however, was that the course was not accessible to all expecting parents who wanted to attend. One such group was immigrants who did not master the Swedish language:

I understand that it [Confident Birth] is valuable, that patients or women are very satisfied, and we midwives have received a very positive response in their evaluations.
(First line manager 8)

//We aim at being fair and equal, but it can never be 100%. (First line manager 13)

Peer pressure

Several of the first line managers mentioned that the Confident Birth method had been implemented as an opportunity for the organization to become a more attractive caregiver. This was seen as essential, since the public primary healthcare system was undergoing a restructuring that would make it easier in the future for expecting parents to choose which clinic to register at:

We had talked a lot in the management team about our need to profile ourselves. We had started talking about business goals and business acumen; we need to think a little more so that we can make our clinic an attractive choice for parents. (First line manager 9)

External policies and incentives

No external policies or research had been considered when the method was chosen. In general, the participants described the method as useful and expressed that they did not need scientific proof that it worked:

Regarding Confident Birth, it's not difficult to assume that this must be good. There's a lot of research that the safer you are when you give birth the better the birth is, so it's not so difficult to understand. (First line manager 11)

For me, it does not matter if it's evidence-based or not, because I know it [the confident birth method] works. After all, I have no hesitation even if a research report is not presented … I know that this works … and I don't think I'm alone having this feeling. (Instructor midwife 3)

Inner settings

Network and communications

The chain of information and communication had failed at all levels and along all lines during the implementation process. According to the participants, this failure included insufficient information from both the management team and the founder of the Confident Birth method to the midwives delivering the courses. In the end, this had affected the provision of adequate information to the expecting parents. Consequently, the instructor midwives expressed different levels of stress, and some declared that they felt exploited. It was stated that this could have been avoided if adequate information about the different steps involved in the training and its implementation had been communicated to the instructor midwives before they decided to participate:

It was more work than the midwives had expected, and also more work than we had informed them about prior to course start. And here, in retrospect, I can feel that we lost a lot of their trust. (First line manager 13)

Implementation climate

Before the implementation of the Confident Birth method, antenatal education in the organization had varied in content from clinic to clinic. Some clinics offered a wide range of birth preparatory courses, such as yoga, psychoprophylaxis and conventional birth preparatory courses, while others had cancelled all forms of courses and instead referred expecting parents to open lectures at the university hospital. This was considered unequal and unfair to the expecting parents, and the implementation of the Confident Birth method was seen as an important means of increasing quality and equality for them:

It was seen that the antenatal education programmes looked extremely different everywhere in the city, and they were of different durations, and some had psychoprophylaxis and some did not, and some had to pay for psychoprophylaxis and some didn't pay.
(First line manager 14)

Readiness for implementation

The insufficient time provided by the organization for implementing the Confident Birth method made a successful implementation challenging to achieve. A lack of time was experienced in all parts of the process, from the initial decision-making to the point at which the participants currently found themselves. The instructor midwives had spent approximately 40 to 60 hours preparing themselves before the training, while the organization compensated them for only 3 hours; the majority of the necessary reading had been done in their spare time. This had often led to high stress levels and anxiety. Some midwives expressed that becoming an instructor had interfered negatively with their private life, while others expressed trust in the organization and saw the implementation as rewarding for the mothers and themselves:

It didn't really feel like they [managers] had understood how much time the course preparation required. It was as if they didn't take it seriously, as if they assumed we would do all this in our spare time. (Instructor midwife 2)

Another obstacle the participants expressed was finding available facilities appropriate for the course. It was often difficult to find a suitable venue near the expecting parents' homes, which often resulted in couples not turning up for the course because of the long distance:

Sometimes the expecting parents have to travel around the whole city, so there's always someone not showing up; that's a big loss. (Midwife 12)

//In some places we midwives have to search for facilities ourselves. Sometimes we don't find any suitable facilities, other times they're fully booked, and sometimes we find a venue but with substandard facilities. (Instructor midwife 3)

Characteristics of individuals

Knowledge and beliefs about the intervention

Most of the participants expressed great trust in the method. Some of the midwives had long experience of teaching other birth preparatory methods, which affected how they perceived the Confident Birth method. A few midwives stated that the method was nothing new to them, and that the content was merely presented in a different form or was an improvement on other birth preparatory methods. This sometimes led to inner conflicts regarding the usefulness of the Confident Birth method, which made it difficult for them to facilitate the method. The participants who held a more positive view and expressed trust in the method tended to accept the obstacles in the implementation, such as the insufficient information, better than those who held a less positive view. The majority of the instructor midwives had an interest in antenatal education, and had been handpicked or encouraged by their first line managers to attend the instructor training:

You have to want it; otherwise, it won't work. (Instructor midwife 7)

Process

Planning

There had been no plan in place to guide the implementation of the Confident Birth method. Given the insufficient planning, time and information throughout the implementation process, neither the process of becoming an instructor nor what was expected of the midwives had been clearly described, which was experienced as challenging. When it became clear what was expected of them, some midwives experienced panic and stress. As antenatal education in general was not a priority within the organization, the first line managers also found the implementation process challenging.
There was a desire for a slower implementation process, with the planning anchored in the organization and the instructor midwives more involved. The insufficient planning meant that the participants did not have a clear picture of what would be required of them. This contributed to some instructor midwives not completing the process, and to others mentioning that they were not sure whether they wanted to continue holding the course:

It was panic, so we have to start up already in the fall, so we have something for everyone. After all, the problem was that the staff needed their time and process. Within the management team we'd only been processing for maybe a few months. (First line manager 13)

//If I continue to feel that having these courses is anxiety-inducing, then I have to start thinking that maybe I shouldn't expose myself to this forever. (Instructor midwife 2)

Reflecting and evaluating

Some improvements had been made during the implementation process: a contract had been developed for future Confident Birth instructors, including detailed information and expectations; midwives had been compensated for more of their time; discussions had been initiated concerning how to better meet the requirements of women with special needs; and courses in English and Arabic had been established. In addition, the management had scheduled regular meetings with the instructor midwives a couple of times per semester for reflection, support and evaluation:

They (instructor midwives) were very happy with the training, but were upset with the management team for the lack of time. (First line manager 13)
We identified opportunities and obstacles influencing the implementation of the Confident Birth method in western Sweden. The results showed that there was great trust in the method. It had equipped the midwives with tools for mediating coping strategies for women and their companion of choice to use during the upcoming birth. Time-consuming preparations, a lack of available venues for the courses, insufficient information at all levels, and the absence of both a strategy for keeping the core of the method intact and a plan for guiding its implementation were major obstacles to successful implementation. A strength was that the instructor midwives and first line managers trusted the method. Although there were concerns about the strictness and inflexibility of the course manuscript, the midwives had acquired deeper theoretical knowledge and pedagogical skills, and felt proud of their work. This indicates that the instructor midwives' and first line managers' perceptions of the Confident Birth method as useful, either for themselves in their professional work or for the expecting parents, affected the implementation. This mirrors another study on organizational readiness to change care routines, which found that the clinical need for and usefulness of an intervention may be critical factors for successful implementation. The fact that the Confident Birth method was regarded as strict and inflexible underlines the importance of defining the method's core elements. According to the CFIR, adaptability is the extent to which a method can be adapted to meet local needs without jeopardizing the method's core. Thus, it is critical to adapt the Confident Birth method to the local contexts and needs of the antenatal clinics without losing its essential core elements. It is promising that the Confident Birth method had been introduced as a step in harmonizing antenatal education in the region, and that this education would be offered on equal terms to all first-time mothers with partners. The fact that no scientific research or national or international guidelines had been considered before the method was chosen was not found to be an obstacle. Instead, the method was seen as an opportunity, as it provides a means for companionship of choice and promotes women's own capacity to give birth. When comparing our results with international guidelines and recent research, it can be argued that the organization is in line with these. However, neither research nor international guidelines have any national implications unless they are contextualized into national, regional or local plans, which requires long-term commitment and a clear desire from national or local governments. It is known that steps contributing to successful implementation in healthcare include committed care providers who are involved in both the development of the innovation for change and the implementation plan. In accordance with implementation science, these steps need to be anchored within the organization before any intervention starts. As found in this study, there was an insufficient chain of communication from the management team to the midwives delivering the course and no access to appropriate venues, combined with insufficient time for planning and preparing the implementation of the Confident Birth method. In support of Bertram et al., who suggest that these components are essential for a successful implementation process, it can be questioned whether the organization in this study was fully prepared for implementing the Confident Birth method. The provision of good quality antenatal care, including birth preparatory courses, requires a sufficient number of healthcare providers who are both competent and motivated. We found that the midwives in this study became stressed and dissatisfied because of the time-consuming preparations and the insufficient information surrounding the process. Without improvements, this could lead to elevated attrition rates and further diminish access and quality of care. As long as midwives are left out of the decision-making, planning and preparation that concern them, and until their realities are taken seriously, the provision of high-quality care is at risk. It is important to note that the instructor midwives were all skilled professionals, many of them with vast experience of providing antenatal education to expecting parents. Their experience of facilitating birth preparatory methods other than the Confident Birth method was both an opportunity and an obstacle. Fixsen et al. point out that a method (in this case the Confident Birth method introduced in the instructor course) can be introduced during training sessions, but that the real learning occurs when the midwife holds her first class. Furthermore, regular feedback and reflection sessions are needed to ensure fidelity and good outcomes. Reflection groups are a way to ensure that the method's core components are kept intact, but they may also provide an outlet for those instructor midwives whose opinions about the method differ from what is taught during the training. The participants' knowledge and beliefs regarding the method play an important role in the success of an implementation process. Personal beliefs are difficult for an organization to influence, but providing ongoing coaching and support, as well as listening to the concerns raised by instructor midwives, may serve to mitigate some of the challenges. During any implementation, it is important to closely monitor the process and continually review and make necessary enhancements. The lack of time between the decision to invest in the Confident Birth method and its implementation was a major obstacle. A consequence of not having an implementation plan, as found in this study, was that the participants did not know what was expected of them. Clear and abundant information about what the implementation entails may result in people who were initially interested choosing instead to abstain, which can prevent later dropouts. The organization in focus in this study has made a number of adaptations to improve the information about expectations given to midwives before they commit to the training, as well as introducing reflection groups for instructors. These are positive developments that will help to resolve the issues that currently plague the midwives, such as time constraints and lack of communication. When implementing a new method, organizational leaders need to map out and prioritize goals, and take into account the likely benefits, costs and resources, as well as potential problems and solutions. Ideally, this should be carried out before implementation, but hindsight can clearly also be useful in refining the future process.
Strengths and limitations
The key strength of this study is that it is the first of its kind addressing the Confident Birth method. Another strength was its use of the CFIR framework. However, one limitation is that the CFIR was not used to identify determinants distinguishing between high and low implementation success. On the other hand, the CFIR framework guided the analysis and thus made it possible to set aside the first author's preunderstanding of the subject. To our knowledge, there is no previous research on the implementation process of antenatal education; hence, we had no earlier studies to compare the findings with, and the discussion instead draws on implementation research in general settings. Based on the lessons learned from the implementation of the Confident Birth method, we recommend considering the following aspects when planning and implementing interventions in antenatal care settings:
The intervention fills a clinical need
The intervention has been adapted to suit the local context
The care providers are committed and involved in the process
The implementation plan is anchored on all levels within the organization
Sufficient time and resources are available
Goals are identified, stated and prioritized
The communication between managers and care providers is based on honesty and trust
The care providers receive regular feedback and reflection opportunities.
This study adds insights into the opportunities and obstacles influencing the implementation of the Confident Birth method. The findings show the importance of adequate planning, time, information and communication throughout the process for a successful implementation. Based on lessons learned from this study, we have developed recommendations for the successful implementation of interventions, such as the Confident Birth method, in antenatal care settings.
Assessment of complications and success rates of Percutaneous nephrolithotomy: single tract | 95228b0a-6bf4-4c47-af2d-ab6965f41468 | 11742455 | Laparoscopy[mh] | The treatment of staghorn or complex calyceal stones remains one of the most challenging problems in the field of urology ( ; ). These stones are usually large and branched and are frequently infected. Staghorn or complex calyceal stones occupy a significant portion of the renal collecting system, including the renal pelvis and multiple calyces. If not treated, they can cause severe renal impairment or sepsis ( ). In patients with complicated or staghorn calyceal stones, the objective of treatment is to ensure maximum clearance of the stones and to preserve maximum kidney function with minimum complications ( ). Shock wave lithotripsy (SWL), retrograde intrarenal surgery (RIRS), and percutaneous nephrolithotomy (PNL) are all treatment options for these types of stones, and advances in these treatments have significantly reduced the need for open or laparoscopic stone surgery ( ). PNL is the preferred treatment method because it has higher stone-free rates and lower morbidity than open surgery, especially for complex staghorn calculi ( ). Current EAU guidelines list PNL as the standard method of treatment for large kidney stones ( ). Achieving stone-free status after PNL becomes more challenging as stone size increases. Depending on the stone burden and the patient’s anatomy, multiple tracts may be required to achieve stone-free status in a single PNL session ( ; ). Although this approach is widely accepted, establishing multiple percutaneous routes can increase the risk of postoperative complications such as pleural damage, infection, and the need for blood transfusion ( ; ). Therefore, there are ongoing concerns about the safety of multi-tract PNL leading to many urologists having reservations about placing multiple percutaneous tracts during PNL ( ; ). This study retrospectively compared the perioperative outcomes of patients and assessed the safety and efficacy of PNL with multiple percutaneous tracts.
The medical records of consecutive patients aged 18 or over who underwent PNL for staghorn, partial staghorn, and complex kidney stones at a single center between 2014 and 2022 were retrospectively reviewed for this study. Ethical approval for this study was obtained from the University of Health Sciences Kocaeli Derince Training and Research Hospital's local ethics committee (2022-111). Patients were excluded from the study if they had impaired kidney function, a history of bleeding disorders, skeletal deformities, a solitary kidney, or anatomical kidney abnormalities such as a duplicated collecting system, horseshoe kidneys, or a ureteropelvic junction obstruction. A staghorn kidney stone was defined as the presence of stones in the majority of the renal pelvis and collecting system. A partial staghorn kidney stone was defined as the presence of stones in the renal pelvis and two or more calyces. A complex calyceal stone was defined as the presence of stones in multiple calyces. Before surgery, all patients underwent a thorough physical examination and medical history, and routine tests were performed, including blood biochemistry and urine analysis and culture. Both non-contrast and contrast-enhanced abdominal tomography were also carried out prior to PNL. Demographic variables such as age, gender, body mass index (BMI), and comorbidities including diabetes mellitus (DM) and hypertension (HT) were recorded. Stone characteristics were noted, including size, location, density in Hounsfield units (HU), and the presence of hydronephrosis. Intraoperative and postoperative parameters, including fluoroscopy and operation durations, changes in hemoglobin levels, numbers of blood transfusions, stone-free rates, perioperative complications, time to nephrostomy removal, and length of hospital stay, were also recorded for each patient. The stone burden of the patients was calculated using the Ackerman formula (volume = 0.6 × π × r²), where 'r' represents half of the largest diameter of the stone. Additionally, all patients were categorized based on preoperative contrast-enhanced abdominal tomography findings, using Guy's stone scores. Each patient was reassessed by non-contrast computed tomography, customarily conducted one month after surgery. Operation success was defined as the patient being stone-free or having only remaining stone particles of less than 4 mm. The Clavien-Dindo classification system, which has five grades, was used to classify complications. Each procedure was performed by the same surgeon, who has extensive experience in the field of endourology. In cases where the surgeon determined that the other calyces could not be reached through a single entrance, a second access point was used during the operation. Patients were divided into single-tract and multi-tract subgroups, and the subgroups were analyzed.
Surgical technique
For the PNL procedure, each patient was given general anesthesia, and an open-ended 5 F ureteral catheter (Marflow™, Marflow AG, Switzerland) was inserted with cystoscopy guidance while the patient was in the lithotomy position. After catheter placement, the patient was moved to a prone position, and radio-opaque material and C-arm fluoroscopy were used to visualize the patient's pelvicalyceal system anatomy. A 19.5-gauge percutaneous needle (Boston Scientific Corporation, MA, USA) was introduced into the appropriate calyx system.
Fluoroscopy was used to place a guidewire (Zebra™, Boston Scientific Corporation, MA, USA) in the collecting system. The tract was dilated up to 30 F with semirigid Amplatz dilators (Boston Scientific Corporation, MA, USA), and an Amplatz sheath was inserted into the collecting system. Stone fragmentation was performed using a pneumatic lithotripter (Calculith™ Lithotripter, PCK, Turkey) through a 28 F rigid nephroscope (Karl Storz™ Endoscopy-America Inc., El Segundo, CA, USA). Forceps were used to retrieve the stone particles, and the procedures were completed by inserting a 14 F re-entry nephrostomy catheter. For patients undergoing multi-tract PNL, the second entry was introduced using the same methods as the first entry, and a second 14 F nephrostomy tube was placed in the second entry.
Statistical analysis
Data were analyzed using SPSS Statistics for Windows, version 22.0 (IBM Corp, Armonk, NY, USA). Descriptive statistics were employed to summarize the data, with quantitative variables analyzed according to their distribution. The normality of continuous variables was assessed using the Kolmogorov-Smirnov test. Normally distributed numeric data are presented as mean ± standard deviation (SD), while non-normally distributed data are presented as medians and interquartile ranges (IQR). The independent samples t-test was used to compare means between two groups for normally distributed data, whereas the Mann-Whitney U test was used to compare non-normally distributed data. Categorical variables were compared using the chi-square test to evaluate associations between groups and are expressed as frequencies and percentages. A p-value of less than 0.05 was considered statistically significant.
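For readers who wish to retrace the stone-burden calculation and the test-selection logic described above, the following minimal Python sketch illustrates both; it is a sketch only, the input values are synthetic rather than study data, and the function names are our own illustrative choices rather than the authors' actual analysis code.

```python
import math

import numpy as np
from scipy import stats

def ackerman_stone_burden(largest_diameter_mm: float) -> float:
    # Ackerman formula exactly as stated in the Methods:
    # burden = 0.6 * pi * r**2, with r = half the largest stone diameter.
    r = largest_diameter_mm / 2
    return 0.6 * math.pi * r ** 2

def compare_groups(group_a, group_b, alpha=0.05):
    # Mirror the analysis plan: Kolmogorov-Smirnov normality check, then the
    # independent-samples t-test if both groups look normal, otherwise the
    # Mann-Whitney U test.
    def looks_normal(g):
        g = np.asarray(g, dtype=float)
        return stats.kstest(g, "norm", args=(g.mean(), g.std(ddof=1))).pvalue > alpha

    if looks_normal(group_a) and looks_normal(group_b):
        return "t-test", stats.ttest_ind(group_a, group_b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b).pvalue

# Synthetic largest stone diameters (mm) for two hypothetical subgroups.
single_tract = [ackerman_stone_burden(d) for d in (32, 38, 41, 35, 44, 39)]
multi_tract = [ackerman_stone_burden(d) for d in (45, 52, 48, 56, 50, 47)]
print(compare_groups(single_tract, multi_tract))
```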
A total of 208 patients who met the inclusion criteria were included in the study, with 158 in the single-tract group and 50 in the multi-tract group. The mean age of the patients was similar in the single-tract and multi-tract groups (48.6 ± 14.07 years vs. 49.2 ± 13.50 years, p = 0.798). There was no statistically significant difference between the two groups in terms of ASA scores (median [IQR]: 1 [1–2] vs. 1 [1–2], p = 0.435) or comorbidities such as DM (7.0% vs. 6.0%, p = 0.813) and HT (15.2% vs. 18.0%, p = 0.635). There were also no significant differences in the characteristics of the stones, such as their location, size, and density, between the two groups, except for Guy's stone score and the degree of hydronephrosis. The multi-tract group had a higher Guy's stone score (2.70 ± 0.789 vs. 2.46 ± 0.819, p = 0.028) and a higher proportion of patients with moderate to severe hydronephrosis (54.0% vs. 37.3%, p = 0.037) than the single-tract group. Demographic data and stone parameters of the study participants are summarized in the accompanying table. The mean total fluoroscopy time was 1.7 ± 0.85 min in the single-tract group and 5.8 ± 2.21 min in the multi-tract group, and the mean total operation time was 73.59 ± 44.08 min in the single-tract group and 89.3 ± 27.99 min in the multi-tract group. Both the mean total fluoroscopy time and the mean total operation time were significantly longer in the multi-tract group than in the single-tract group (p < 0.001). There were no statistically significant differences in preoperative and postoperative hemoglobin levels between the two groups; however, the multi-tract group experienced a greater mean and percentage drop in hemoglobin (2.0 ± 0.99 g/dl and 14.0% vs. 1.7 ± 1.16 g/dl and 12.2%, p = 0.027 and p = 0.017, respectively). Despite this, there was no significant difference in transfusion rates between the two groups (7.6% vs. 12.0%, p = 0.334). The time to nephrostomy catheter removal was comparable in both groups (p = 0.215). However, the length of hospital stay was significantly longer in the multi-tract group than in the single-tract group (4.2 ± 1.52 days vs. 3.7 ± 1.49 days, p = 0.018). There was no statistically significant difference in stone-free rates between the two groups (76.0% vs. 78.0%, p = 0.766). All intraoperative findings and postoperative outcomes are detailed in the accompanying table. A total of 20 (12.6%) patients in the single-tract group and eight (16%) patients in the multi-tract group experienced minor complications, including postoperative fever, urinary tract infection (UTI), and blood transfusion. Five (3.2%) patients in the single-tract group experienced major complications: three ureterorenoscopies for migrated stones, one double J stent insertion for a prolonged urinary leak after removal of the nephrostomy catheter, and one selective angioembolization for uncontrolled bleeding. Two (4%) patients in the multi-tract group experienced pneumothorax, a major complication, and required chest tube placement. After the symptoms of these two patients improved, the chest tubes were removed, and they were discharged. There was no statistically significant difference in complications between the two groups, and no Clavien grade 4 or grade 5 complications were observed in either group (p = 0.896).
Kidney stones are a prevalent urological issue and carry a high risk of recurrence. In addition to their potential to impair kidney function, large kidney stones can lead to recurrent infections and life-threatening complications such as sepsis. Therefore, it is crucial to ensure the complete clearance of large and staghorn stones. PNL is a preferred treatment for large kidney stones, but achieving complete stone clearance may require multiple access points. While many previous studies have shown the safety of multi-tract PNL compared to single-tract PNL in the treatment of large renal stones, the results remain debatable. In this study, we did not establish a specific criterion for the preoperative choice of multiple tracts, due to the study's retrospective design. There was no statistically significant difference between the groups in stone burden. However, the higher Guy's stone scores in the multi-tract group may be a measurable reason for our preference for multiple tracts. This finding also suggests that evaluating a patient's Guy's stone score before surgery may help the surgeon predict the preferred method in clinical practice. Achieving a stone-free status is the main objective of PNL. One study reported a success rate of 70% in achieving a stone-free status with multi-tract access after a single PNL session involving a median of three accesses in 164 renal units. A prospective randomized study compared single-tract and multi-tract access PNL in 54 patients with staghorn stones and demonstrated that multiple-access PNL achieved better stone clearance and reduced the need for additional procedures after surgery compared to single-tract PNL. Another retrospective study compared the early outcomes of single-tract versus multi-tract PNL in the treatment of staghorn stones and revealed a stone-free rate of 70.1% in the single-tract group and 81.1% in the multi-tract group. A meta-analysis found no significant difference in the stone-free rate between single-tract and multi-tract PNL. Its authors suggested that the variability in stone-free rates may arise because there is no universally accepted definition of the stone-free rate across studies, and different studies used different imaging modalities for postoperative assessments (KUB radiography, ultrasound, or CT). They also suggested that the timing of patient follow-up may have impacted study results, as the final stone-free rate is often higher than the immediate postoperative stone-free rate due to the time required for stone fragments to be naturally expelled through urine. In the present study, all patients were re-evaluated using non-contrast abdominal tomography in the first month after surgery, and the results also showed comparable stone clearance rates between the single-tract and multi-tract groups. There is a concern that creating multiple percutaneous accesses may increase the risk of bleeding and complications compared to procedures requiring a single access. However, some studies have shown that multiple tracts do not significantly increase the risk of bleeding or the need for blood transfusions. Notably, these studies found that the need for transfusion is associated with low preoperative hemoglobin levels. Similarly, another study found no significant difference in blood loss, transfusion rates, complications, or operative duration between single-tract and multi-tract PNL.
The AUA nephrolithiasis guidelines panel on staghorn calculi reported complication rates of 7–27% and a transfusion rate of up to 18%. A recent meta-analysis reported that the blood transfusion rate was higher in patients with multi-tract PNL than in patients with single-tract PNL, but the rates of other complications did not differ significantly between the groups. These differing results may be due to the experience level of the surgeon and the surgical technique used during multi-tract PNL. Morbidity can be reduced with PNL if the surgeon always punctures in full expiration, stays in the lateral half of the rib, and uses a working sheath during nephroscopy and a well-draining nephrostomy tube after the procedure. In the present study, despite the larger decrease in hemoglobin levels seen in the multi-tract group than in the single-tract group, there was no statistically significant difference in transfusion rates between the two groups. Although a drop in hemoglobin levels was much more common in the multi-tract group, selective angioembolization was performed on one patient in the single-tract group because of uncontrollable bleeding. Longer fluoroscopy and operation times were seen in the multi-tract PNL group than in the single-tract PNL group in the current study, but using ultrasound guidance in multi-tract PNL may help eliminate these differences. Ultrasound guidance has several advantages, such as the absence of ionizing radiation, a shorter procedure time, and fewer punctures, and it does not require contrast agents. Secondary accesses in PNL are mainly used to reach stones in the upper calyx that cannot otherwise be accessed. Since these approaches often require an intercostal puncture, caution should be exercised regarding pulmonary complications. Pulmonary damage due to lung transgression can occur on either side, with reported rates ranging from 14% on the left to as high as 29% on the right, even during controlled expiration. In the present study, two patients in the multi-tract group who developed pneumothorax were treated with chest tube insertion. To reduce these complications, patients who will require a second access should be identified through appropriate preoperative evaluation, and the second access should be created before securing the guide wires. This study has several limitations that should be acknowledged. First, the retrospective design may introduce inherent biases that could affect the validity of the findings. Second, the single-center nature of the study, with operations performed by a single surgeon, limits the generalizability of the results to broader populations. Third, the unequal sample sizes between the groups may have influenced the statistical power and comparability of the outcomes. Lastly, the absence of data on the composition of the stones limited the ability to assess the impact of stone characteristics on surgical outcomes.
The findings of this study suggest that multiple access tracts result in high stone-free rates with only a slight, acceptable increase in the incidence of complications. To avoid the higher surgical risks and expenses associated with multiple procedures, it is advantageous to achieve stone-free results in a single session. Therefore, urologists involved in percutaneous surgery should consider multiple accesses for hard-to-reach stones and should be capable of performing secondary accesses when necessary.
Occupational risks to pregnant obstetrics and gynaecology trainees and physicians: Is it time to think about this? | 85fd8e88-1176-40a9-a0af-f9c63992908e | 10032316 | Gynaecology[mh] | The proportion of women in the workforce has been steadily increasing worldwide. Women now constitute approximately 75% of the global health care workforce and almost 90% in nursing and midwifery professions . In India, females form 38% of all health care workers (HCW) and about 16.8% of allopathic doctors. Roughly 1 in every 3 HCW is a female . The present times have witnessed a dramatic gender shift in the speciality of obstetrics and gynaecology. Women, now comprise a significant proportion of practicing obstetrics and gynaecology specialists all over the world . In 2018, more than 80% of resident doctors and nearly 60% of physicians in the speciality were female, far exceeding any other surgical speciality . This is in stark contrast to 2012, when women comprised >50% of Fellows and Junior Fellows in the American College of Obstetricians and Gynecologists . In India, this figure is much more than 90%, as the patients in this world favour female physicians for their gynaecological issues. The male trainees often feel bias because of patients preferring female physicians . The majority of resident doctors and a significant proportion of physicians in Obstetrics and Gynaecology are in the reproductive age group. They are or will become pregnant at some point in their training program or career. Pregnant HCWs are faced with numerous challenges, as they need to balance their health and the health of their unborn child along with that of their patients. Proper performance of their duties may at times constitute a risk to their own health. Although pregnant women are not more susceptible to most diseases than their non-pregnant counterparts, the consequences of even a mild infection can be far-reaching. Rubella and chickenpox, although most cases are self-resolving, can lead to abortions and congenital abnormalities in the offspring. It is quite common for a health care worker to feel torn between duties towards her patients and co-workers and her responsibilities towards her family and her unborn foetus. A pregnant trainee or even a consultant physician in obstetrics and gynaecology faces unique occupational challenges and hazards. Apart from the physically taxing nature of work in the labour room, where each normal delivery needs continuous monitoring and vigilance for at least 6–8 hours, a number of other occupational risks are unique to the speciality, which probably at the time of pregnancy become matters of concern. To the best of my knowledge, there is no comprehensive review that identifies occupational risks to pregnant obstetricians and gynaecologists. This review focuses on all work-related exposure risks, such as risks of infectious diseases, radiation, stress, violence against doctors, and even peer support, or lack of support, that can have deleterious effects on the health of pregnant physicians and the health of their unborn foetuses. The recent literature related to pregnant health care workers and occupational risks was searched, from various governmental agencies including the World Health Organisation (WHO), Centers for Disease Control and Prevention (CDC), Scientific Advisory Group for Emergencies (SAGE), Occupational Safety and Health Administration (OSHA), and English peer-reviewed journals from databases such as PubMed, Scopus, Google Scholar, EMBASE, and others. 
The literature regarding workplace regulations for pregnant or lactating health care staff was also reviewed. The search terms used were: 'pregnant health care worker' AND 'occupational risks'; OR 'radiation exposure'; OR 'violence against doctors'; OR 'infectious diseases'; OR 'physician burnout or stress'; OR 'anaesthetic gases'. The articles referring to obstetrics and gynaecology trainees and physicians were studied in detail to work out all the occupational risks they face during pregnancy.
Radiation exposure
Radiation exposure in early pregnancy is very well known to be associated with teratogenic effects. Although the risk is usually overestimated, the consequences can be significant at cumulative doses. Threshold (deterministic) radiation effects occur above a dose threshold and result in cellular injury. Stochastic effects of radiation are incremental, appearing in a dose-response function without a threshold, and are thought to be the primary mechanism of the increased risk of cancers. Various agencies have set thresholds that should be observed once pregnancy is confirmed, to minimize the effects on the foetus, mainly during organogenesis. The International Commission on Radiological Protection recommends that after a worker declares her pregnancy, the occupational radiation dose should not exceed one mSv during the remainder of the pregnancy. The National Council on Radiation Protection and Measurements, in the United States, recommends a radiation dose limit of 0.5 mSv per month once pregnancy is confirmed, to ensure low exposure during susceptible periods of gestation. The US Environmental Protection Agency recommends a limit of 5 mSv for the entire gestational period. Although only very few procedures in obstetrics and gynaecology use ionizing radiation, hysterosalpingography (HSG) is a standard procedure used to evaluate the patency of the fallopian tubes in women presenting with subfertility. The HSG procedure requires the radiologist, or quite often a gynaecologist (who is not trained to handle ionizing radiation), to hold the cannula and inject the contrast medium into the patient's cervix while she is being irradiated. The supporting personnel also remain close to the patient. Though a lead apron is worn, frequent or multiple procedures can lead to significant exposure to ionizing radiation, which can be worrisome, especially in the first trimester, due to teratogenic effects. A few studies have demonstrated that the dose to the extremities may also be significant enough to warrant monitoring, especially when the procedure is done frequently. Sentinel lymph node mapping, used to identify the affected or sentinel node in gynaecological malignancies, also uses radioactive tracers such as Technetium-99m to identify the affected nodes. Although preliminary studies have indicated that exposure usually falls within safe limits, many factors, such as the time from injection to surgery and the distance between the patient's injection site and the surgeon's abdomen, play a significant role. Physicians and trainees predominantly dealing with gynaecological malignancy surgeries should consider the cumulative dose received. Due to the fear of discrimination by their peers or senior consultants, some trainees or resident doctors prefer not to disclose their pregnant status until late. If unaware of the risks during such procedures, they may expose themselves to ionizing radiation, causing inadvertent self-harm.
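As a rough illustration of how the cumulative thresholds cited above could be tracked, the short Python sketch below compares hypothetical monthly dosimeter readings against those limits; the readings, constants, and function name are our own illustrative choices, not a validated clinical tool.

```python
# Limits as cited above (our own encoding of them as constants).
MONTHLY_LIMIT_MSV = 0.5    # NCRP recommendation per month after declaration
REMAINDER_LIMIT_MSV = 1.0  # ICRP limit for the remainder of the pregnancy
TOTAL_LIMIT_MSV = 5.0      # US EPA limit for the entire gestational period

def check_exposure(monthly_doses_msv):
    """Flag any month or cumulative total exceeding the cited limits."""
    warnings = []
    for month, dose in enumerate(monthly_doses_msv, start=1):
        if dose > MONTHLY_LIMIT_MSV:
            warnings.append(f"Month {month}: {dose} mSv exceeds the NCRP monthly limit")
    total = sum(monthly_doses_msv)
    if total > REMAINDER_LIMIT_MSV:
        warnings.append(f"Cumulative {total:.2f} mSv exceeds the ICRP remainder-of-pregnancy limit")
    if total > TOTAL_LIMIT_MSV:
        warnings.append(f"Cumulative {total:.2f} mSv exceeds the EPA gestational limit")
    return warnings

# Hypothetical badge readings (mSv); the third month breaches the monthly limit.
print(check_exposure([0.1, 0.3, 0.6, 0.2]))
```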
There is a need for a cordial work environment in which resident doctors can declare their pregnant status, free from the fear of discrimination, and can hold posts in areas that do not involve radiation exposure.
Infectious diseases
Women during pregnancy are more susceptible to certain viral infections, predominantly due to impaired pathogen clearance and hormonal and immunological alterations. The risk for health care workers, including doctors, is increased manifold, mainly due to prolonged contact with infected patients, and that too in closed areas such as birthing suites, high dependency units, and intensive care units. If a pregnant health care worker acquires viral infections such as rubella, cytomegalovirus, herpes simplex, or varicella, particularly at the time of organogenesis, it can be devastating for the foetus. The teratogenic effects of these viruses are well known. Pregnant doctors should not be involved in the care of patients with these infectious diseases. The current pandemic of COVID-19 has made this risk of infectious diseases even more apparent. The CDC recognizes health care workers, including doctors, nurses, dentists, paramedics, emergency medical technicians, laboratory personnel collecting and handling samples from infected persons, and morgue workers performing autopsies, as the group at highest risk of acquiring the coronavirus. Among health care professionals, anaesthesiologists, otorhinolaryngologists, dentists, and ophthalmologists are at exceptionally high risk because their work demands proximity to patients' respiratory tracts. Obstetricians are also at very high risk, mainly due to prolonged exposure, especially during labour. The CDC and WHO recommend the use of N95 masks for health care workers, especially during the care of patients with diseases that involve droplet transmission, such as tuberculosis, severe acute respiratory syndrome (SARS), and COVID-19. Even after complying with proper protection and preventive measures and using personal protective equipment, health care workers have been affected by the disease. During the initial months of the pandemic, some efforts were made to limit the amount of exposure, such as delaying elective surgical procedures and undertaking only emergency ones. However, these practice guidelines cannot be extended to specialities like obstetrics. Every case presenting in labour or needing labour induction for feto-maternal indications can be treated as an emergency, as any delay can be life-threatening. Obstetrics is probably the only speciality in medicine in which the number of cases and surgeries did not decrease, despite the fear of infection. The risks extend beyond the pandemic period and apply to other infectious diseases such as influenza and tuberculosis.
The dilemma with personal protective equipment in pregnancy
Pregnancy is associated with profound changes in normal respiratory physiology. Dyspnoea is a common symptom in late pregnancy. Both mechanical factors (due to the enlarged gravid uterus) and hormonal factors play a role in this. Oxygen consumption increases from the first trimester, rising by around 30% by term due to maternal metabolic processes and foetal demands. Increased oestrogen causes hyperaemia, oedema, hypersecretion, and friability of the mucosa of the respiratory tract. There is an increase in the number and sensitivity of hypothalamic and medullary progesterone receptors in pregnancy, leading to a rise in the sensitivity of peripheral chemoreceptors to hypoxic conditions.
There is an increase in the number and sensitivity of hypothalamic and medullary progesterone receptors in pregnancy, leading to a rise in the sensitivity of peripheral chemoreceptors to hypoxic conditions . Progesterone also leads to a decreased threshold and increased respiratory centre sensitivity to carbon dioxide. These physiological changes increase the load on the respiratory system in pregnancy. Keeping in mind the airborne transmission of COVID-19 , the Scientific Advisory Group for Emergencies (SAGE) even recommends that HCW caring for patients with suspected or confirmed COVID-19 may need higher grade protective masks, such as FFP3 masks equivalent to N99, to protect them from contracting the virus through the air. Although a few studies suggest that these masks are not associated with adverse effects in pregnancy, these studies are primarily restricted by limited time of exposure, i.e., a maximum of one hour . But in today's scenario, the duration of mask-wearing by pregnant women, especially health care workers, is at least 6–8 hours at a stretch. Several side effects have been reported in health care workers using these face masks for a prolonged duration. These include headache, dryness in the eyes and nose, acne, epistaxis, skin breakdown, and even impaired cognition . Pregnant women, especially in the late second or third trimester, may not be able to maintain their required minute ventilation while breathing through N95 respirators. The workload on breathing increases significantly, leading to decreased oxygen uptake and increased carbon dioxide concentration . Hypoxia and hypercarbia, mainly due to re-breathing caused by retained carbon dioxide in the mask's dead space, occur on prolonged mask usage These changes are evident even at rest and may be exacerbated on mild to moderate exertion. Long-term exposure of the foetus to this increased carbon dioxide level has not been studied. However, some studies suggest that it affects foetal cerebral oxygenation, which may be by regulating the cerebral blood flow and shifting the oxyhaemoglobin dissociation curve . Pregnant women with respiratory ailments such as bronchial asthma or other chronic lung diseases could be at much higher risk. The use of medical and surgical masks and other external airflow resistive load devices has been found to impact some hemodynamic parameters such as diastolic blood pressure and mean blood pressure significantly, in pregnant women and non-pregnant women alike. Although the effect noted was mild, even an increase of 10 mm Hg in a patient with preeclampsia or chronic hypertension could be harmful to the mother and the unborn foetus. Sharps injuries and bloodborne infections All surgeons have a very high risk of needle-stick injury, and obstetrics and gynaecology as a speciality are no different. Resident doctors are at an exceptionally high risk as they are not trained in personal protection measures, and most of them are learning to hold and manipulate the instruments for the first time. A survey of around 700 resident doctors found that almost 99% of them had experienced a sharps injury . The probability of acquiring infection from large-bore needle-stick injury has been reported to be as high as 40% in workers not vaccinated against hepatitis B virus, 1.8% for hepatitis C virus, and 0.3% for human immunodeficiency virus (HIV) . 
Until they are properly trained in handling and manipulating surgical instruments and needles, pregnant resident doctors should not be involved in the surgery of patients with HIV, hepatitis B, or hepatitis C. Furthermore, adequate vaccination and good antibody titres against hepatitis B should be a rule for trainee doctors joining any surgical speciality. They should also be trained to handle blood and body fluid spills and be adequately informed regarding post-exposure prophylaxis in case of accidental needle-stick injury.
Physician burnout and stress
Pregnancy during residency and speciality training in medicine and surgery is challenging. The residency period, especially in clinical specialities like obstetrics and gynaecology, is marked by long duty hours, rotating night shifts, and prolonged standing. Working long hours during the first trimester of pregnancy is associated with threatened abortion and preterm birth. A recent survey of 347 general surgeons who had at least one pregnancy during residency reported unmitigated work schedules during pregnancy and a negative stigma associated with pregnancy during training. The respondents were also dissatisfied with maternity leave options and inadequate lactation and childcare support, and they desired better mentorship on work-life integration. Inadequate support from fellow doctors is to be expected, because they are themselves engrossed in their heavy duties. Several studies in the past have stressed that most residents felt inconvenienced by the presence of pregnant or lactating colleagues, as they were forced to cover their responsibilities during their absence (24). Resident doctors during pregnancy and lactation face unique challenges, such as arranging child care during their extended periods of absence, maintaining lactation during intense night duties, and taking frequent breaks to pump breast milk to ensure proper milk output. Inadequate policies related to pregnancy and parenting may adversely affect their career preferences, sometimes even prompting them to quit their careers as medical professionals.
Peer support
Fulfilling lactation and child care goals is another challenge for health care workers across all specialities. Maintaining an adequate breast milk supply requires either frequent feeding or frequent pumping, both of which need frequent short breaks in the working schedule. More than half of doctors and supporting staff opt to quit breastfeeding earlier than they wished. The nature of the work of health care professionals is such that taking even a short break without a proper replacement can cost lives. There is presently no provision for adjusting the nature of the duties of pregnant and lactating health care workers. A written policy regarding the avoidance of long duty hours and prolonged standing, and the provision of intermittent periods of rest, should be drawn up and brought into practice in health care settings. The provision of lactation rooms with facilities for pumping and storing breast milk should be mandatory, and lactating employees should be given frequent short breaks to pump or breastfeed. Although some hospitals do offer an in-house creche and child care facility, taking the baby to the hospital is again a dilemma, especially during the spread of a highly infectious pandemic.
Exposure to anaesthetic gases and surgical smoke and other chemicals
Nitrous oxide and halogenated agents constitute the predominant inhalational agents used for anaesthesia in operation theatres.
When inhalational agents are used for induction, predominantly for day care procedures or minor surgeries in gynaecology, some waste gases are inadvertently released into the operating room and inhaled by surgeons and their supporting staff. These gases have been associated with adverse pregnancy outcomes such as spontaneous abortions and congenital anomalies in the foetus when inhaled by pregnant women, especially during early gestation. Therefore, adequate scavenging systems should be a must in all operation theatres to minimize exposure. Surgical smoke refers to the waste gases emitted in operation theatres due to the burning of tissues with energy sources such as electrocautery. Surgical smoke consists mostly of water vapour but also contains chemicals such as benzene, 1,2-dichloroethane, and toluene, which are associated with miscarriages, congenital birth defects, foetal growth restriction, and preterm labour. Many studies have found very high concentrations of fine and ultrafine particulate matter in the smoke released during laparoscopic procedures. Although these particles and chemicals have not been studied in much detail, their effects on the unborn foetus could be significantly grave. Cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC) are increasingly used as a treatment modality for ovarian malignancies and peritoneal carcinomatosis. The chemotherapy agents used include mitomycin C and platinum-based compounds such as cisplatin and carboplatin. Pregnant doctors can be exposed through both inhalation and skin contact. These agents are associated with multiple harmful effects in pregnancy, including miscarriage and congenital malformations. Current recommendations are that women who are pregnant or planning to become pregnant should keep away from chemotherapy agents and from operation theatres where HIPEC is being performed.
Violence against doctors
It is a paradox that a profession as noble as health care, with the mission to care for people when their need for care is greatest, that is, when they are unwell or terminally ill, is at significant risk of workplace violence. The World Health Organisation defines workplace violence as 'incidents where staff is abused, threatened or assaulted in the circumstances related to their work, including commuting to and from work, involving an explicit or implicit challenge to their safety, well-being or health'. It has been observed that around one-fourth of all violent incidents at work occur in the health care sector, and more than half of all health care workers have experienced violence in some form at their workplace. Women, predominantly of the reproductive age group, represent nearly 80% of the health care workforce. The effects of direct physical violence on the foetus, including foetal injury and death, abruptio placentae, and premature rupture of membranes, are well known. The indirect effects of verbal, physical, and even sexual abuse include psychological stress and anxiety, which are well known to cause adverse pregnancy outcomes.
What can be done?
Pregnant health care professionals, including those specialising in obstetrics and gynaecology, are themselves often not prepared to identify risk factors that can adversely affect their health at the workplace. Occupational risk assessment models which incorporate all possible risk factors should be implemented in all hospitals.
Flexible working policies for pregnant employees, including the avoidance of night shifts and long shifts, especially during the trimesters that involve the highest risk to the foetus, have rightly been introduced by some institutions, such as Indiana University's emergency and internal medicine programs. Employment conditions that are pregnancy- and breastfeeding-friendly are the need of the hour.
Progesterone also lowers the threshold and increases the sensitivity of the respiratory centre to carbon dioxide. These physiological changes increase the load on the respiratory system in pregnancy. Keeping in mind the airborne transmission of COVID-19 , the Scientific Advisory Group for Emergencies (SAGE) even recommends that health care workers caring for patients with suspected or confirmed COVID-19 may need higher grade protective masks, such as FFP3 masks (equivalent to N99), to protect them from contracting the virus through the air. Although a few studies suggest that these masks are not associated with adverse effects in pregnancy, these studies are primarily limited by short exposure times, i.e., a maximum of one hour . In today's scenario, however, pregnant women, especially health care workers, wear masks for at least 6–8 hours at a stretch. Several side-effects have been reported in health care workers using these face masks for a prolonged duration, including headache, dryness of the eyes and nose, acne, epistaxis, skin breakdown, and even impaired cognition . Pregnant women, especially in the late second or third trimester, may not be able to maintain their required minute ventilation while breathing through N95 respirators. The work of breathing increases significantly, leading to decreased oxygen uptake and increased carbon dioxide concentration . Hypoxia and hypercarbia, mainly due to re-breathing of carbon dioxide retained in the mask's dead space, occur with prolonged mask usage. These changes are evident even at rest and may be exacerbated by mild to moderate exertion. Long-term exposure of the foetus to this increased carbon dioxide level has not been studied; however, some studies suggest that it affects foetal cerebral oxygenation, possibly by regulating cerebral blood flow and shifting the oxyhaemoglobin dissociation curve . Pregnant women with respiratory ailments such as bronchial asthma or other chronic lung diseases could be at much higher risk. The use of medical and surgical masks and other external airflow resistive load devices has been found to significantly affect some haemodynamic parameters, such as diastolic and mean blood pressure, in pregnant and non-pregnant women alike. Although the effect noted was mild, even an increase of 10 mm Hg in a patient with preeclampsia or chronic hypertension could be harmful to the mother and the unborn foetus. All surgeons carry a very high risk of needle-stick injury, and obstetrics and gynaecology as a speciality is no different. Resident doctors are at exceptionally high risk, as they are not trained in personal protection measures and most of them are learning to hold and manipulate the instruments for the first time. A survey of around 700 resident doctors found that almost 99% of them had experienced a sharps injury . The probability of acquiring infection from a large-bore needle-stick injury has been reported to be as high as 40% for hepatitis B virus in unvaccinated workers, 1.8% for hepatitis C virus, and 0.3% for human immunodeficiency virus (HIV) . Until they are properly trained in handling and manipulating surgical instruments and needles, pregnant resident doctors should not be involved in the surgery of patients with HIV, hepatitis B, or hepatitis C. Furthermore, adequate vaccination and good antibody titres against hepatitis B should be a rule for trainee doctors joining any surgical speciality.
They should also be trained to handle blood and body fluid spills and be adequately informed about post-exposure prophylaxis in case of accidental needle-stick injury. Pregnancy during residency and speciality training in medicine and surgery is challenging. The residency period, especially in clinical specialities like obstetrics and gynaecology, is marked by long duty hours, rotating night shifts, and prolonged standing. Working long hours during the first trimester of pregnancy is associated with threatened abortion and preterm birth . A recent survey of 347 general surgeons who had at least one pregnancy during residency reported unmitigated work schedules during pregnancy and a negative stigma associated with pregnancy during training. Respondents were also dissatisfied with maternity leave options and inadequate lactation and childcare support, and desired better mentorship on work-life integration . Inadequate support from fellow doctors is to be expected, because they are themselves engrossed in their heavy duties. Several studies in the past have stressed that most residents felt inconvenienced by the presence of pregnant or lactating colleagues, as they were forced to cover their responsibilities during their absence (24). Resident doctors during pregnancy and lactation face unique challenges such as arranging for child care during their extended period of absence, maintaining lactation during intense night duties, and taking frequent breaks for pumping breast milk to ensure adequate milk output. Inadequate policies related to pregnancy and parenting may adversely affect their career preferences, sometimes even prompting them to quit their careers as medical professionals . Fulfilling lactation and child care goals is another challenge for health care workers across all specialities. Maintaining an adequate breast milk supply requires either frequent feeding or frequent pumping, both of which need frequent short breaks in the working schedule. More than half of the doctors and supporting staff quit breastfeeding at an earlier stage than they wished. The nature of the work of health care professionals is such that taking even a short break without proper replacement can cost lives. There is presently no provision for adjusting the nature of the duties of pregnant and lactating health care workers. A written policy on the avoidance of long duty hours and prolonged standing and on the provision of intermittent periods of rest should be established and brought into practice in health care settings. Provision of lactation rooms with facilities for pumping and storing breast milk should be mandatory. Lactating employees should be provided with frequent short breaks to pump or breastfeed. Although some hospitals do offer an in-house creche and child care facility, taking the baby to hospital is again a dilemma, especially during the spread of a highly infectious pandemic. Nitrous oxide and halogenated agents constitute the predominant inhalational agents used for anaesthesia in operation theatres. When inhalational agents are used for induction, predominantly for day care procedures or minor surgeries in gynaecology, some waste gases are inadvertently released into the operating room and inhaled by surgeons and their supporting staff. These gases have been associated with adverse pregnancy outcomes such as spontaneous abortion and congenital anomalies in the foetus when inhaled by pregnant women, especially during early gestation .
Therefore, adequate scavenging systems should be mandatory in all operation theatres to minimize exposure . Surgical smoke refers to the waste gases emitted in operation theatres when tissues are burned with energy sources such as electrocautery. Surgical smoke includes water vapour and gases containing chemicals such as benzene, 1,2-dichloroethane, and toluene, which are associated with miscarriages, congenital birth defects, foetal growth restriction , and preterm labour . Many studies have found a very high concentration of fine and ultrafine particulate matter in the smoke released during laparoscopic procedures. Although these particles and chemicals have not been studied in much detail, their effects on the unborn foetus could be grave. Cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC) are increasingly used as a treatment modality for ovarian malignancies and peritoneal carcinomatosis. The chemotherapy agents used include mitomycin C and platinum-based compounds such as cisplatin and carboplatin. Pregnant doctors can be exposed through both inhalation and skin contact. These agents are associated with multiple harmful effects in pregnancy, including miscarriage and congenital malformations . Current recommendations are that women who are pregnant or planning to become pregnant should keep away from chemotherapy agents and from operation theatres where HIPEC is being performed . It is a paradox that a profession as noble as health care, with the mission to care for people when their need for care is at its maximum, that is, when they are unwell or terminally ill, is at significant risk of workplace violence. The World Health Organisation defines workplace violence as 'incidents where staff is abused, threatened or assaulted in the circumstances related to their work, including commuting to and from work, involving an explicit or implicit challenge to their safety, well-being or health' . It has been observed that around one-fourth of all violent incidents at work occur in the health care sector, and more than half of all health care workers have experienced violence in some form at their workplace . Women, predominantly of the reproductive age group, represent nearly 80% of the health care workforce . The direct effects of physical violence, including foetal injury and death, abruptio placentae, and premature rupture of membranes, are well known. The indirect effects of verbal, physical, and even sexual abuse include psychological stress and anxiety, which are well known to cause adverse pregnancy outcomes . Pregnant health care professionals, including those specialising in obstetrics and gynaecology, are themselves often not prepared to identify risk factors that can adversely affect their health at the workplace. Occupational risk assessment models which incorporate all possible risk factors should be implemented in all hospitals.
The major employment issues faced by pregnant health care workers include pregnancy-related discrimination, accommodations in the distribution of work or duties with the health of the mother-foetus duo in mind, job-protected leave, and wage replacement while on maternity leave. Women, predominantly of the reproductive age group, constitute a significant proportion of the health care workforce. Pregnant obstetrics and gynaecology trainees and physicians face numerous occupational risks, including those of infectious diseases, radiation exposure, stress and burnout, violence against doctors, and even lack of peer support. Employment conditions that create more optimal work environments for pregnant employees are the need of the hour.
Breast Cancer Survivorship Programme: Follow-Up, Rehabilitation, Psychosocial Oncology Care. 1st Central-Eastern European Professional Consensus Statement on Breast Cancer | 1337078c-9edb-4d19-aeb2-98445a0a5ee4 | 9200958 | Internal Medicine[mh] | The recommendations below are based on the available literature in English and the authors’ own experience, and they are in line with comprehensive national and international recommendations on the topic published in English ( – ). The document constitutes one of a series of guidelines developed by the consensus development panel method ( ). Within a complex breast cancer survivorship programme follow-up care restricted to patients considered healed and various types of supportive and palliative measures that should start already at the diagnosis of breast cancer and should be practised throughout its management if needed will be reviewed. Since all consensuses based on clinical practice and the current literature, this consensus will need to be updated as the field evolves. Panel members agree that as a future advancement, a dietitian, a self-help group leader and a GP expert will be involved in the update of this document.
Follow-up care means the regular check-up and support of breast cancer patients who are clinically tumour-free, have usually undergone breast surgery, and often need adjuvant hormone therapy ( – ). Follow-up care tasks: • Communication with the patient, facilitating adherence to adjuvant treatment, coordination of care and rehabilitation. • Health education, lifestyle advice (healthy diet, physical activity, etc.). • Detection of relapse; rapid and effective assessment if relapse is suspected. • Facilitating and supporting tolerance of adjuvant hormone therapy. • Detection, prevention and treatment of the consequences of the disease and the side-effects of surgical and adjuvant treatments (referral to mental, physical and social rehabilitation services, if needed). • Tertiary screening: prevention and early detection of metachronous cancers (this is usually the same as the screening strategy for the average-risk population; in individuals with BRCA mutations, breast screening, possibly gynaecological screening, and gynaecological assessment during tamoxifen therapy, annually or with individually determined frequency, is recommended). • Declaration of the patient’s health status or need for treatments. • Special aspects: genetic risk, pregnancy. The atmosphere of long-term care differs from that in active oncology treatment facilities: patients should be empowered to return to their normal life and restore their health, and they should be provided with help for full rehabilitation. Patients’ independence should be reinforced, but at the same time they should be given a sense of security, support, and backing for the disease they have overcome. During long-term care, patients should receive adequate information about their situation, state of health and the procedures involved, so that they can fit it into their lifestyle; in the event of a relapse, quick and effective help should be provided to resolve the situation. All these require individualized, open communication, providing a sense of care, and an atmosphere of trust ( , ). It may also be necessary to involve the patient’s family members and close relatives. Currently in Hungary, long-term care is provided by oncologists, but in many countries there is an effort to assign long-term care tasks to GPs or nurses. This requires training and protocols, as well as proper communication with the treatment team. Some of these tasks are highlighted below. Health Education, Lifestyle Advice (Diet, Physical Activity, etc.) The most important aspect is making efforts to achieve a healthy body weight, since overweight in particular, but also increased BMI, has been associated with an unfavourable prognosis. Although a relation between cancer-related outcome and body weight or diet could not be demonstrated, these factors may adversely affect overall health (including anticancer therapy-related adverse effects), secondary cancer incidence and all-cause mortality rates. Optimal body weight is based on a healthy diet (high in fruits, vegetables, and whole grains and low in processed foods or added sugars) and the right amount of exercise, which is not contraindicated even after breast surgery (see also Physical Rehabilitation). It is recommended that patients stop drinking alcohol and quit smoking ( – , , ). All these require thorough patient education or, in special cases, the help of a registered dietitian.
Detection of Relapse, Assessment of Suspected Relapse When examining a patient, it is essential to keep in mind their individual risk of local/regional recurrence or metastasis. The risk of relapse depends not only on the primary tumour status, but also on the treatment administered. If the patient does not receive adjuvant therapy despite a high risk of recurrence, the vigilance of both the treating physician and the patient is essential; the latter is achieved by providing the patient with adequate information. Breast cancer subtype should also be considered: hormone receptor-negative and rapidly proliferating tumours tend to recur within 5 years after the first treatment, while the risk of relapse for hormone receptor-positive tumours remains constant for at least 10 years. Long-term care is based on careful (purposeful) medical history and physical examination. Instrumental investigations for the assessment of systemic relapse (e.g., diagnostic imaging of the chest, abdomen or bones, tumour marker tests) are only required if there is an indicative complaint or symptom. Indeed, intensive assessment in asymptomatic cases will affect neither the time of diagnosis of metastasis nor survival, but it may compromise quality of life through anxiety and dependence on testing. By contrast, diagnostic imaging of the operated breast and regional lymph nodes requires great care: after breast-conserving surgery, both the operated and the contralateral breast should be assessed on a yearly basis, as recommended by a breast radiologist, usually via mammography and ultrasound or MRI (see the chapter on Breast Diagnostics) ( – ). For lobular carcinoma, it is particularly important that ultrasound scanning be part of a complex diagnostic imaging follow-up even after 5 years ( ). The diagnosis of an oligometastatic condition, which has been identified in recent years as a new biological entity, is of paramount importance ( ). Radical local treatment of a slowly progressing, low-mass tumour can be life-saving in some cases. Therefore, if it is suspected, it should be rapidly confirmed with sensitive testing methods, with the hope of a curative treatment and a favourable therapeutic outcome ( , ). In some cases (e.g., when patients cannot present for long-term care due to a comorbidity), they may be managed by a GP following the recommended protocol. It is important to inform patients about the course of the follow-up care and the abnormalities that may occur due to the disease or the treatment. Detection, Prevention and Treatment of Consequences of the Disease and Side-effects of Surgical and Adjuvant Treatments (Support, Rehabilitation) Expected side-effects and abnormalities depend on the type of treatment administered, the dose and duration of treatment, and the patient’s age and comorbidities. Possible consequences of different treatments are shown in ( – , – ). Side-effects can lead to a temporary or long-term decline in body image, physical condition and ability, and mental status, all of which compromise quality of life ( ). Due to changes in body image, various aids (wigs, breast prostheses, etc.) and breast reconstruction may be considered as immediate or delayed solutions. Complex management of the issue is recommended (physical and mental help).
Lymphoedema should be prevented by losing weight if the patient is overweight and by protecting the arm (physical activity is allowed, but weight-bearing by the arm should be avoided, and efforts should be made to prevent erysipelas; venous access or blood pressure measurement on the operated side is not contraindicated, and prohibiting it may even cause anxiety) ( ). Monitoring of cardiotoxic consequences should be continuous during active oncology treatment; during long-term care, special cardio-oncology care is needed for patients at risk (pre-existing heart disease, prior oncological treatment with cardiotoxic drugs or cardiac/coronary artery radiation exposure), or if there are symptoms suggesting cardiac disease (breathlessness, fatigue, cardiac decompensation) ( , ). Monitoring of bone health and osteoporosis should depend on age and the treatments administered. In case of chemotherapy-induced menopause or endocrine therapy, a baseline DEXA test should be performed and then repeated depending on the treatment ( ). For joint complaints, rheumatology examination is recommended and physiotherapy may be used purposefully ( ). For musculoskeletal complaints caused by aromatase inhibitors, switching to tamoxifen or another aromatase inhibitor may be the solution, if necessary. Fatigue, mental disorders and cognitive impairment are well demonstrated as consequences of chemotherapy, but not fully clarified in the case of hormone therapies ( – ). During long-term care, it is worth gathering information on this issue and initiating the patient’s rehabilitation, if needed. The use of a lubricating cream or suppository may be tried in case of sexual complaints or vaginal dryness, and medicinal treatment or pelvic floor exercises may be recommended for urinary incontinence ( ). Managing Endocrine Therapy Adjuvant hormone therapy is usually recommended for a period of 5–10 years, but partly because of its long duration and the patient’s successful return to a normal life, and partly because of possible side-effects, medication adherence is poor in a significant proportion (up to half, according to certain estimates) of patients. Therefore, one of the most important goals of long-term care is to promote good therapy adherence. Keeping patients informed, performing the appropriate follow-up tests and managing side-effects will improve results. shows the recommended follow-up assessments for various treatments. Either due to chemotherapy-induced amenorrhoea or due to GnRH analogues, menopausal symptoms may develop in the form of hot flushes, mental instability and sexual complaints (decreased libido, vaginal dryness), which are aggravated by aromatase inhibitors ( , ). Aromatase inhibitors may also cause androgen-type alopecia. Tamoxifen is more likely to induce vaginal discharge and weight gain. Gabapentin, selective serotonin reuptake inhibitors (SSRIs) and lifestyle changes may help in reducing hot flushes, while topical treatment may be considered for sexual complaints, e.g. a lubricant, vaginal suppositories, or laser treatment as a novel option ( – , ). Hormone replacement therapy, even the use of oestrogen-containing vaginal creams, is contraindicated. Rheumatological treatments can be administered for joint or muscle pain (especially common with aromatase inhibitors).
Special Aspects: Genetic Risk, Pregnancy When a hereditary predisposition to breast cancer is suspected, great caution and tact are required, and sufficient time should be allowed for processing the information ( – ). In cases of a family history suggesting an inherited risk of cancer, cancers at a young age, or specific tumour types, testing for BRCA or other hereditary gene mutations is essential and recommended by numerous international guidelines. If justified, and the patient is ready to accept it, the patient may be referred to a genetic counselling centre; ideally this is done at the time of the initial care. If a pathological gene mutation carrier status is confirmed, this has a number of consequences for the follow-up care: preventive breast surgery or adnexectomy depending on future family plans (the risk-reducing effect of Fallopian tube removal with preservation of the ovaries is being evaluated in a clinical trial), the development of a specific breast screening strategy if needed, or other actions may be considered based on the advice of a geneticist; naturally, the issue of informing and screening family members also arises. The question of undertaking pregnancy depends on the risk of relapse, how this changes over time, and the nature and timing of the administered treatments. During the discussion, it is worth exploring whether the patient sees her illness in a realistic way and, if necessary, providing objective information about the situation. There is no evidence that pregnancy per se would be detrimental in terms of recovery or recurrence. Chemotherapy may lead to infertility for a shorter or longer period of time, partly because hormone production is impaired, although this risk can be reduced by using a GnRH analogue during chemotherapy. The ability to regenerate after chemotherapy and the chance of recovery of fertility decrease with age ( ). For infertility, the patient should be referred to a specialist. Due to the genotoxic effects of chemotherapy, a waiting period of at least 3 years is required after chemotherapy. For a successful pregnancy, hormone therapies should be terminated; if the patient received tamoxifen, a latency of 3 months before pregnancy is required due to the slow clearance of the drug.
Note the general and official WHO definition of rehabilitation (1980): “Rehabilitation is an organized assistance needed by people with a long-term or permanent damage to their health, physical and/or mental integrity in order to reintegrate into society and their communities. A coordinated, individualized set of medical, pedagogical, social and occupational measures aimed at making the rehabilitated individual a happy and, if possible, a full-fledged citizen of the society. Rehabilitation is a social task.” The original meaning of the word rehabilitation is good news, the restoration of lost honour and satisfaction; within this conceptual framework, the physician or the caring community should assist in restoring the patient’s self-esteem and reduce the losses associated with illness ( , ). The rehabilitation of a breast cancer patient begins at the time of diagnosis, whether it is an operable/early-stage case treated with curative intent, or advanced or metastatic breast cancer requiring continuous treatment and intensive monitoring. Rehabilitation is comprehensive (physical, mental, social) and is planned conceptually, not as an ad hoc process. Naturally, rehabilitation is tailored to the prognosis of the disease, which can be estimated based on prognostic factors. Altered physical condition and the presence of mental problems are well-known issues, and when these appear and are recognized, it is the oncologist’s responsibility to refer the patient to a specialist in the appropriate field (physiotherapy, reconstructive surgery, psychosocial oncology care, social worker, etc.). During the follow-up period, the task of the oncologist is to prevent and recognize the symptoms and to refer the patient to an appropriate specialist. For rehabilitation purposes, it is essential to avoid the stigma of the disease and, while underlining the importance of the investigations, treatments and follow-up, to ensure that the disease does not become a central issue of the patient’s life or a determinant of all goals and activities. Comprehensive life counselling is the task of the oncologist, helping the patient’s reintegration into the community of the healthy. For effective rehabilitation, it is important to set realistic goals and to take into account the patient’s individual physical and mental condition and psychointegrative harmony. A prerequisite for effective rehabilitation is that specialists in the physical, mental and social spheres, working as a team, are available when necessary and provide assistance in all aspects of rehabilitation. Within a comprehensive breast cancer survivorship programme, the various forms of rehabilitation are usually provided at the initiative of the staff who provide care, treatment or follow-up for the patient ( , ). The important role of patient advocacy and primary care in the holistic approach should also be emphasized. In fact, breast cancer was the first example of initiating patient advocacy, and Europa Donna was the first breast cancer advocacy group to establish a Europe-wide coalition ( ). In most countries there are various self-established patient groups that not only provide direct support to patients and their families, but raise social attention and public awareness, reduce stigmatisation, and may have an impact on politics too.
General practitioners may take over many breast cancer-specific tasks depending on the need or the actual situation, such as providing certain tests, delivering certain medications, diagnosing or controlling comorbidities (sometimes related to the cancer therapy itself), or guiding lifestyle changes. In both fields, the most important need is the maintenance of ongoing communication, contact and mutual confidence between the members of the patient advocate group or the primary care physician and the representatives of the cancer multidisciplinary expert team.
Oncology Social Work Social work is a supporting activity classified as an applied social science, which promotes social development, improvement of functioning and the solving of issues at the individual, group and community levels. Hospital social work helps to solve the patients’ and their families’ social issues. Support can also be requested from the Family Support Institute of the Local Government. Social workers’ tasks may include supporting the achievement of social and financial security, mediating individual social services, helping patients back to their home, or guiding patients toward psychosocial oncology care when mood disorders and anxiety are recognized. Supporting the Social Rehabilitation of Breast Cancer Patients Social rehabilitation means the process of integration into the community, the criteria of which are the existence of social relationships, relative financial and economic autonomy and the ability to ensure the means of subsistence. Social rehabilitation begins from the moment the diagnosis is established and continues throughout the treatment period and sometimes the follow-up care period. Breast cancer is an oncological disease that primarily affects women. The traditional family model of our society has changed, with every second marriage ending in divorce. In many cases, women are breadwinners, and in 86% of single-parent families it is the mother who raises her children alone. People living in traditional families are also characterized by a “dual-earner” model, so that if the wife/mother falls ill, the family loses earnings ( ). This disease brings changes to the lives of those affected and their relatives, and family members need to adapt to this and promote adaptation in others. Limitations of mental and physical stress tolerance, social disadvantages and lack of resources must also be taken into account. Most Common Social Issues and Their Solutions In the presence of an oncological disease, patients often cannot keep their jobs due to the treatment, side-effects, and mental strain. It is essential that patients/clients themselves decide whether they feel physically and mentally capable of continuing their work ( , ). If they are unable to perform their job on a permanent basis, they may claim insurance and social benefits to compensate for the loss of earnings. We have included the forms of institutionalized social support in Hungary as an illustration in . Recognition of the psychological processes and reactions, and of the depression and anxiety symptoms associated with oncological diseases and treatments, contributes to the development of patient/client compliance and a good doctor-patient relationship. The patient’s/client’s personality and potential coping mechanisms should be taken into account. These are influenced by the patient’s values, socialization, attitudes and stress management skills, and also by social factors, the workplace and family environment, and whether the patient/client has a mental illness or addictions. If depression or anxiety disorders exist or develop, or in the event of a need for crisis intervention, the patient/client should be referred to a psychiatrist or psychologist. The patient’s/client’s mental condition should be monitored from the time of diagnosis, and the help of a specialist should be sought if any change occurs or if a period of the illness may lead to mental vulnerability.
It is important that the patient’s/client’s attitude to mental health allows the acceptance of the psychological support needed for recovery. Coping with the disease is aided by avoiding isolation and sustaining family, friend, and community relationships. Patients/clients should be guided toward self-help groups and patient organizations, in which they will have the opportunity to share their problems with peers dealing with similar illnesses, who reach out with understanding and set an example of a positive outlook. After recovery, successful rehabilitation results in the patient being employed and self-sufficient, which is enabled through employment rehabilitation. Employment rehabilitation means that a previously employed person, who currently has an altered work capacity due to illness, is employed in a job matching her current working aptitude. Useful work provides the patient/client with an opportunity to restore self-fulfilment, self-esteem and a sense of worth.
Introduction According to a WHO survey, a sedentary lifestyle is the fourth most important risk factor for the current widespread diseases worldwide, including cancer. Physical activity means exercise associated with any muscle contraction involving a change in location or position that requires a higher energy expenditure than the resting level. Isometric and isotonic, eccentric and concentric muscle work can all be part of physical activity. Established physiotherapy is an essential part of the complex management of breast cancer all along the disease continuum; since no other chapters of this series deal with physiotherapy, here we summarize the related aspects irrespective of the phase of the disease. As a result of regular exercise, the organism undergoes structural, functional, and physiological changes that help to prevent and delay many diseases, or to recover from them. This effect is also influenced by the form, intensity, duration, and timing of the exercise. To measure the magnitude of the load, we use the term “metabolic equivalent of task (MET),” which is based on measuring oxygen consumption. Knowing the MET values of physical activities, a desired weekly load can easily be established (see the illustrative sketch below) ( ). Based on the WHO proposal, American and European exercise recommendations were formulated for healthy individuals ( ). Physiological Effects of Physical Exercise • Exercise activates natural killer (NK) cells, which play a role in killing cancer cells. • It reduces the body’s susceptibility to bacterial infections. • Supports body weight control. • Prevents deterioration of cardiorespiratory endurance, which may occur as a side-effect of cardiotoxic antitumour therapies. • Helps to recover muscle mass and reduces sarcopenia due to the disease and its treatments. • Reduces the risk of thromboembolic complications, the incidence of which is 7-fold higher in cancer patients than in the average population. • Supports the correction of abnormal movement patterns and develops the ability to coordinate and maintain balance, which deteriorates as a common consequence of chemotherapy-induced polyneuropathy. • Reduces fatigue. • Reduces the symptoms of musculoskeletal syndrome causing bone, muscle, and joint pain and stiffness. • Increases bone mineral content, which is important for bone loss due to hormone and chemotherapy, and thus reduces the risk of bone fractures. • Improves self-esteem, reduces the effects of distress, anxiety, fear, and pain, and initiates positive self-healing processes. • Reduces the decline of cognitive functions and slows down the ageing process. • Reduces the risk of developing lymphoedema. Workout Forms Aerobic or cardio training is a continuous or intermittent intense workout of the large skeletal muscle groups for 20–50 min. This type of exercise primarily improves endurance and increases the capacity of the cardiorespiratory system. It includes walking, Nordic walking, running, swimming, cycling, stair climbing, ball sports, etc. Anaerobic or resistance training is a short-term, high-level effort that helps to prevent muscle atrophy and osteoporosis. Typical forms of resistance training are weightlifting and sprinting. Other exercise types, such as breathing gymnastics, proprioceptive training, stretching, etc., can be incorporated into both training types. Different exercise types are not interchangeable; it is the task of a physiotherapist to set up an individualized training programme.
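To illustrate the MET-minute bookkeeping described above, here is a minimal sketch; the activities and MET values are approximate compendium-style figures chosen for the example, and the plan itself is hypothetical rather than a recommendation of this consensus.

```python
# Illustrative weekly training load in MET-minutes.
# MET values are approximate compendium figures used for demonstration.

weekly_plan = [
    # (activity, approximate MET, minutes per session, sessions per week)
    ("brisk walking", 4.3, 30, 5),
    ("cycling, leisure", 5.8, 40, 2),
]

# Each activity contributes MET x minutes x sessions to the weekly total.
total = sum(met * minutes * sessions for _, met, minutes, sessions in weekly_plan)
print(f"Weekly load: {total:.0f} MET-minutes")  # 645 + 464 = 1109
```

The physiotherapist can compare such a tally against the weekly target chosen for the individual patient and adjust frequency, intensity or duration accordingly.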
The physiotherapist can assess the patient’s usual physical activity or fitness via a specific questionnaire, such as the IPAQ (International Physical Activity Questionnaire), and can create an individual training plan for the patient based on the FITTA criteria: Frequency, Intensity, Time, Type of exercise and perseverance (Approach), and the 5R criteria: Repetitions, Rate, Range, Resistance, and Rest ( ). The Place of Physiotherapy in the Perioperative Care of a Breast Cancer Patient Breast cancer therapy most often begins with surgery, so it is recommended that the physiotherapist be in touch with the oncology team, so that they are informed about the type of surgery and have the opportunity to meet the patient. It is important that the physiotherapist has a BSc or MSc degree, experience in the field of oncology, and a close professional relationship with the surgeon and oncologist ( ). Both the period of preparing the patient for surgery and the early postoperative period impose tasks on the physiotherapist, and at the same time affect the patient’s later quality of life and the outcome of the disease ( ). Early mobilization and physiotherapy will significantly reduce the functional impairment caused by the disease and the interventions. Complex functional impairment of the upper extremities associated with breast surgery may develop, including the following: • Pain, hyperaesthesia, paraesthesia, • Stiffness, • Secondary lymphoedema, • Seroma, • Scarring (axillary web syndrome, AWS), • Decreased muscle strength and restricted motion, limited range of motion (ROM), • Weakening of the grip strength of the hand, • Complex functional impairments, • Decrease in daily activity, • Sensory disturbances/losses in the chest area, • Posture/body image disorder, • Neck/shoulder girdle dysfunction (involvement of the upper part of the trapezius muscle) ( ). Early and late functional complications of breast cancer treatment, along with patient quality of life, have long been studied, and a variety of methods are available to manage these in routine patient care ( ). The possibilities for prevention and treatment will be discussed after a presentation of the methodology. Assessing both the range of motion of the shoulder and the muscle strength of the upper limb is important. A decrease in the grip strength of the hand and a limited range of motion pose serious problems to the patient. Both functional tests and other measuring tools can be used to assess functional restriction, which is also a prognostic indicator ( ). Measurement of the upper limb volume can be performed using several methods, and this significantly helps in the early detection of lymphoedema. Circumference differences measured at six anatomical points are highly correlated with the results of water displacement volume measurement ( ); a worked sketch of this calculation follows below. AWS caused by scarring is a typical group of signs and symptoms following oncological breast surgery. In most cases, a scarred cord-like lesion is palpable in the armpit; in a milder form it is only perceived by the patient, and therefore recording subjective symptoms is essential. Predisposing factors, incidence, pathological aspects, and therapeutic options for AWS are being actively researched. The lesion usually develops in the armpit, but it may extend down along the elbow pit to the base of the thumb. The syndrome is caused by the occlusion, inflammation and, later on, fibrosis of the superficial lymphatic vessels as a consequence of surgery ( ).
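Circumference-based limb volume estimation of the kind referred to above is commonly implemented with the truncated-cone (frustum) model, in which each segment between two adjacent measurement points contributes V = h(C1² + C1·C2 + C2²)/(12π). The sketch below is a minimal illustration with hypothetical measurements, not the validated protocol of the cited studies.

```python
import math

def frustum_volume(c1_cm: float, c2_cm: float, h_cm: float) -> float:
    """Volume (ml) of one limb segment modelled as a truncated cone,
    from the circumferences at its two ends and the segment length."""
    return h_cm * (c1_cm**2 + c1_cm * c2_cm + c2_cm**2) / (12 * math.pi)

# Hypothetical circumferences (cm) at six measurement points, 10 cm apart,
# running from the wrist toward the axilla.
circumferences = [16.0, 18.5, 24.0, 27.0, 29.5, 33.0]
segment_length = 10.0

# Six points define five segments; summing them estimates the arm volume.
volume_ml = sum(
    frustum_volume(c1, c2, segment_length)
    for c1, c2 in zip(circumferences, circumferences[1:])
)
print(f"Estimated limb volume: {volume_ml:.0f} ml")
```

Measuring both arms at the same anatomical points makes inter-limb volume differences, an early sign of lymphoedema, easy to follow over time.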
The current trend is the global analysis of upper extremity function: in addition to the measurement of the range of shoulder motion and the anatomical parameters of the upper limb, the complex upper limb functions needed to perform everyday tasks, as well as circulatory conditions and physical stress tolerance, are assessed ( – ). Questionnaires completed by the patient are also included ( ). Preparing the Patient for Surgery • Assessment of structural and functional condition using the aforementioned tests. • Evaluation of comorbidities. • Teaching early mobilization exercises. • Thrombosis prophylaxis, teaching patients venous exercise and how to use compression bandages. • Information on the symptoms and prevention of possible lymphoedema. • Assessing the need for and use of an aid (optimal prosthesis, bandage, etc.). • Explaining the role of exercise and physical activity in the healing and rehabilitation process. Early Postoperative Tasks • Positioning depending on the type of surgery. • Early mobilization; the goal is to reach a vertical position as soon as possible (sitting, standing, walking). • Early breathing exercise and chest mobilization to help prevent respiratory complications. • Vascular physiotherapy or an elastic bandage or anti-thrombosis stocking applied before mobilization reduces the risk of thrombosis. • Passive, assisted and then active movement of the upper limb on the affected side, teaching facilitation possibilities. • Prevention of contractures. • Core stabilization and mobilization. • Restoring the abnormal muscle balance caused by an altered body image. • Preparing for a complex exercise programme, enrolling the patient in a small group class as soon as possible. • After reconstructive surgery (TRAM, LD, DIEP), lifting the arm above 90° has to be avoided for 3–5 weeks. • Recovery of self-sufficiency functions (measurement of independence based on physical and cognitive capacity according to the Functional Independence Measure, FIM, scale). This period lasts for a couple of days, but in the case of breast reconstruction surgery it may take longer. Prior to hospital discharge, patients should, when possible, be enrolled in a rehabilitation support group, in which they participate in a regular exercise programme under the guidance of a specialist, preferably a physiotherapist. If this is not available, an exercise programme should be created that patients can perform independently at home, and sports and other leisure time activities may also be suggested. Since oncology treatments after surgery (radiation and/or chemotherapy, hormone therapy, etc.) are also very demanding on the body, regular physical activity and exercise are essential. Lymphoedema Although over the last decade the widespread adoption of sentinel lymph node biopsy and patient training have significantly reduced the development of upper limb lymphoedema, it is essential that all lymph node-positive breast cancer patients who have undergone surgery, chemotherapy, or radiation therapy be considered potential lymphoedema patients. Therefore, all interventions and physiotherapy procedures causing significant hyperaemia of the affected upper limb should be avoided. (Harmful effects of blood pressure measurement, blood sampling or possibly intravenous treatment have not been confirmed, and are rather an assumption; regrettably, unjustified fear may cause anxiety in the patient.)
Patient information, regular movement therapy, and manual lymph drainage (MLD), if needed, all support the functioning of the lymphatic system possibly damaged by the various oncological interventions, and reduce the probability of the progression of the lymphoedema. Because MLD stimulates lymphatic system activity, treatment should only be initiated with the recommendation of the oncology team since it may even pose a risk to the patient. Lymphatic drainage can be performed by a physiotherapist with specialist knowledge of lymphatic drainage in the field of oncology ( ). Complex lymphatic therapy also includes compression treatment, which may use bandages, stockings, and mechanical compression. It is important to know that use of these measures is not optional. Compression Elastic Bandage • Short-elongation, high working pressure elastic bandages are used. • Applied in multiple layers with pressure decreasing evenly from distally to proximal direction (100%–70%). • After manual treatment, it should be applied and maintained while the patient is performing active muscle activity. • This is repeated daily until the reversible mobile part of lymphoedema is removed. Compression Stockings • Can be used at 1 to 3 compression gradients. • Its purpose is to maintain an oedema-free state. • In some cases it can also be used for preventive purposes. • The type, size and gradient of stockings should be determined together with the attending physician. • The stage of lymphoedema and the general condition of the patient and possible comorbidities should also be taken into account. Machine Compression • A complementary procedure, it must not be used alone without other anti-oedema therapies. Early mobilization and active exercise programmes (30–50 min three times per week), complemented with MLD therapy, may significantly reduce the development and progression of the lymphoedema. Complete decongestive therapy (CDT), which includes both MLD and compression therapy, significantly reduces pain and feeling heaviness in the arm ( ). Conclusion With their multiple beneficial effects, regular physical activity, sports and leisure activities improve quality of life and life prospects after complex breast cancer treatment. Due to the effects of complex treatment, age-specific characteristics and comorbidities, many of the patients do not know what type of exercise they may or should perform; the help of a physiotherapist is essential. Physiotherapists participate in the complex breast cancer survivorship programme in cooperation with the other specialists, their specific task and responsibility is building, teaching and supervising short-term and long-term exercise programmes. Physiotherapists may be involved in supporting breast cancer patients at the clinic, specialist care, primary care, home care service and in patient organizations all along the disease course according to the actual situation and need. Physiotherapy exercises and other forms of physiotherapy are now a part of integrative oncology and modern comprehensive breast cancer therapy.
According to a WHO survey, sedentary lifestyle is the fourth most important risk factor for current endemic diseases worldwide, including cancer. Physical activity means exercise associated with any muscle contraction involving a change in location or position that requires a higher energy expenditure than at resting level. Isometric and isotonic, eccentric and concentric muscle work can be part of physical activity. Established physiotherapy is an essential part of the complex management of breast cancer all along the disease continuum; since no other chapters of this series deal with physiotherapy, here we summarize the related aspects irrespective of the phase of the disease. As a result of regular exercise, the organism undergoes structural, functional, and physiological changes that help to prevent and delay many diseases, or recover from them. This effect is also influenced by the form, intensity, duration, and timing of the exercise. To measure the magnitude of the load, we use the term “metabolic equivalent of task (MET),” which is based on measuring oxygen consumption. Knowing the MET value of physical activities, a desired weekly load can be easily established ( ). Based on the WHO proposal, American and European exercise recommendations were formulated for healthy individuals ( ).
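To illustrate how MET values translate into a weekly load, a minimal sketch follows; the MET values and the weekly schedule below are illustrative assumptions taken from typical compendium figures, not prescriptions.

```python
# Minimal sketch: computing a weekly physical activity load in MET-minutes.
# The MET values and the schedule are assumptions for illustration only;
# actual values should come from published MET compendia.

ACTIVITY_MET = {
    "walking_brisk": 4.0,       # approximate MET value (assumed)
    "cycling_leisure": 6.0,     # approximate MET value (assumed)
    "swimming_moderate": 6.0,   # approximate MET value (assumed)
}

def weekly_met_minutes(schedule):
    """schedule: list of (activity, minutes_per_session, sessions_per_week)."""
    return sum(ACTIVITY_MET[activity] * minutes * sessions
               for activity, minutes, sessions in schedule)

plan = [("walking_brisk", 30, 5), ("swimming_moderate", 40, 2)]
print(weekly_met_minutes(plan))  # 4*30*5 + 6*40*2 = 1080 MET-minutes/week
```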
Regular exercise has multiple beneficial effects in cancer patients:
• Activates natural killer (NK) cells, which play a role in killing cancer cells.
• Reduces the body's susceptibility to bacterial infections.
• Supports body weight control.
• Prevents deterioration of cardiorespiratory endurance, which may occur as a side-effect of cardiotoxic antitumour therapies.
• Helps to recover muscle mass and reduces sarcopenia due to the disease and its treatments.
• Reduces the risk of thromboembolic complications, the incidence of which is 7-fold higher in cancer patients than in the general population.
• Supports the correction of abnormal movement patterns and develops coordination and balance, which commonly deteriorate as a consequence of chemotherapy-induced polyneuropathy.
• Reduces fatigue.
• Reduces the symptoms of musculoskeletal syndrome, which causes bone, muscle and joint pain and stiffness.
• Increases bone mineral content, which is important in bone loss due to hormone therapy and chemotherapy, and thus reduces the risk of bone fractures.
• Improves self-esteem, reduces the effects of distress, anxiety, fear and pain, and initiates positive self-healing processes.
• Reduces the decline of cognitive functions and slows down the ageing process.
• Reduces the risk of developing lymphoedema.
Aerobic or cardio training is a continuous or intermittent intense workout of the large skeletal muscle groups for 20–50 min. This type of exercise primarily improves endurance and increases the capacity of the cardiorespiratory system. It includes walking, Nordic walking, running, swimming, cycling, stair climbing, ball sports, etc. Anaerobic or resistance training is a short-term, high-intensity effort that helps to prevent muscle atrophy and osteoporosis. Typical forms of resistance training are weightlifting or sprinting. Other exercise types, such as breathing gymnastics, proprioceptive training, stretching, etc., can be incorporated into both training types. Different exercise types are not interchangeable; it is the task of the physiotherapist to set up an individualized training programme. The physiotherapist can assess the patient's usual physical activity or fitness with a specific questionnaire, such as the IPAQ (International Physical Activity Questionnaire), and can create an individual training plan for the patient based on the FITTA criteria (frequency, intensity, time, type of the exercise and perseverance/approach) and the 5R criteria (repetitions, rate, range, resistance and rest) ( ).
The Place of Physiotherapy in the Perioperative Care of a Breast Cancer Patient

Breast cancer therapy most often begins with surgery, so it is recommended that the physiotherapist be in touch with the oncology team so that they are informed about the type of surgery and have the opportunity to meet the patient. It is important that the physiotherapist has a BSc or MSc degree, experience in the field of oncology and a close professional relationship with the surgeon and oncologist ( ). Both the period of preparing the patient for surgery and the early postoperative period impose tasks on the physiotherapist and, at the same time, affect the patient's later quality of life and the outcome of the disease ( ). Early mobilization and physiotherapy significantly reduce the functional impairment caused by the disease and the interventions. Complex functional impairment of the upper extremities associated with breast surgery may develop, including the following: • Pain, hyperaesthesia, paraesthesia, • Stiffness, • Secondary lymphoedema, • Seroma, • Scarring (axillary web syndrome, AWS), • Decreased muscle strength and restricted motion, limited range of motion (ROM), • Weakening of grip strength of the hand, • Complex functional impairments, • Decrease in daily activity, • Sensory disturbances/losses in the chest area, • Posture/body image disorder, • Neck/shoulder girdle dysfunction (involvement of the upper part of the trapezius muscle) ( ). Early and late functional complications of breast cancer treatment, along with patient quality of life, have long been studied, and a variety of methods are available to manage these in routine patient care ( ). The possibilities for prevention and treatment are discussed after a presentation of the assessment methodology. Assessing both the range of motion of the shoulder and the muscle strength of the upper limb is important. A decrease in the grip strength of the hand and a limited range of motion pose serious problems to the patient. Both functional tests and other measuring tools can be used to assess functional restriction, which is also a prognostic indicator ( ). Measurement of the upper limb volume can be performed using several methods, which significantly helps in the early detection of lymphoedema. Circumference differences measured at six anatomical points correlate highly with the results of water displacement volume measurement ( ); a computational sketch of this approach is given at the end of this subsection. AWS caused by scarring is a typical group of signs and symptoms following oncological breast surgery. In most cases, a scarred cord-like lesion is palpable in the armpit; in a milder form it is only perceived by the patient, and therefore recording subjective symptoms is essential. Predisposing factors, incidence, pathological aspects and therapeutic options for AWS are being actively researched. The lesion usually develops in the armpit, but it may extend down along the elbow pit to the base of the thumb. The syndrome is caused by the occlusion, inflammation and, later on, fibrosis of the superficial lymphatic vessels as a consequence of surgery ( ). The current trend is the global analysis of upper extremity function: in addition to the measurement of the range of shoulder motion and the anatomical parameters of the upper limb, the complex upper limb functions needed to perform everyday tasks, as well as circulatory conditions and physical stress tolerance, are assessed ( – ). Questionnaires completed by the patient are also included ( ).
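As a worked illustration of the circumference-based volume estimation mentioned above, the following minimal sketch applies the truncated-cone (frustum) model often used in lymphoedema research; the measurement sites, segment length and circumference values are invented for illustration.

```python
# Minimal sketch (assumption: the truncated-cone / frustum limb model) for
# estimating limb volume from circumferences taken at fixed intervals.
import math

def frustum_volume(c1, c2, h):
    """Volume (ml) of a limb segment of length h (cm) between circumferences c1, c2 (cm)."""
    return h * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)

def limb_volume(circumferences_cm, segment_length_cm=10):
    """Sum segment volumes along the limb (len(circumferences) - 1 segments)."""
    return sum(frustum_volume(a, b, segment_length_cm)
               for a, b in zip(circumferences_cm, circumferences_cm[1:]))

# Six measurement points from wrist towards axilla (illustrative values, cm):
affected   = [17.5, 21.0, 24.5, 27.0, 30.5, 33.0]
unaffected = [16.5, 20.0, 23.0, 25.5, 28.5, 31.0]
diff = limb_volume(affected) - limb_volume(unaffected)
print(f"Inter-limb volume difference: {diff:.0f} ml")
```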
Preparing the Patient for Surgery
• Assessment of structural and functional condition using the aforementioned tests.
• Evaluation of comorbidities.
• Teaching early mobilization exercises.
• Thrombosis prophylaxis: teaching patients venous exercises and how to use compression bandages.
• Information on the symptoms and prevention of possible lymphoedema.
• Assessing the need for and use of an aid (optimal prosthesis, bandage, etc.).
• Explaining the role of exercise and physical activity in the healing and rehabilitation process.

Early Postoperative Tasks
• Positioning depending on the type of surgery.
• Early mobilization; the goal is to reach a vertical position (sitting, standing, walking) as soon as possible.
• Early breathing exercises and chest mobilization to help prevent respiratory complications.
• Vascular physiotherapy, or an elastic bandage or anti-thrombosis stocking applied before mobilization, reduces the risk of thrombosis.
• Passive, assisted and then active movement of the upper limb on the affected side, teaching facilitation possibilities.
• Prevention of contractures.
• Core stabilization and mobilization.
• Restoring the abnormal muscle balance caused by an altered body image.
• Preparing for a complex exercise programme and enrolling the patient in a small group class as soon as possible.
• After reconstructive surgery (TRAM, LD, DIEP), lifting the arm above 90° has to be avoided for 3–5 weeks.
• Recovery of self-sufficiency functions (measurement of independence based on physical and cognitive capacity according to the Functional Independence Measure, FIM, scale).
This period lasts for a couple of days, but in the case of breast reconstruction surgery it may take longer. Prior to hospital discharge, patients should be enrolled in a rehabilitation support group when possible, in which they participate in a regular exercise programme under the guidance of a specialist, preferably a physiotherapist. If this is not available, an exercise programme should be created that patients can perform independently at home, and sports and other leisure-time activities may also be suggested. Since oncology treatments after surgery (radiation and/or chemotherapy, hormone therapy, etc.) are also very demanding on the body, regular physical activity and exercise are essential.
Lymphoedema

Although over the last decade the widespread adoption of sentinel lymph node biopsy and patient training have significantly reduced the development of upper limb lymphoedema, it is essential that all lymph node-positive breast cancer patients who have undergone surgery, chemotherapy or radiation therapy be considered potential lymphoedema patients. Therefore, all interventions and physiotherapy procedures causing significant hyperaemia of the affected upper limb should be avoided. (Harmful effects of blood pressure measurement, blood sampling or intravenous treatment have not been confirmed and are rather an assumption; regrettably, unjustified fear may cause anxiety in the patient.) Patient information, regular movement therapy and manual lymph drainage (MLD), if needed, all support the functioning of the lymphatic system possibly damaged by the various oncological interventions, and reduce the probability of progression of the lymphoedema. Because MLD stimulates lymphatic system activity, treatment should only be initiated on the recommendation of the oncology team, since it may even pose a risk to the patient. Lymphatic drainage should be performed by a physiotherapist with specialist knowledge of lymphatic drainage in the field of oncology ( ). Complex lymphatic therapy also includes compression treatment, which may use bandages, stockings and mechanical compression. It is important to know that the use of these measures is not optional.

Compression Elastic Bandage
• Short-stretch, high-working-pressure elastic bandages are used.
• They are applied in multiple layers, with pressure decreasing evenly from the distal to the proximal direction (100%–70%).
• After manual treatment, the bandage should be applied and maintained while the patient performs active muscle activity.
• This is repeated daily until the reversible, mobile part of the lymphoedema is removed.

Compression Stockings
• Can be used at compression gradients 1 to 3.
• Their purpose is to maintain an oedema-free state.
• In some cases they can also be used for preventive purposes.
• The type, size and gradient of the stockings should be determined together with the attending physician.
• The stage of the lymphoedema, the general condition of the patient and possible comorbidities should also be taken into account.

Machine Compression
• A complementary procedure; it must not be used alone, without other anti-oedema therapies.

Early mobilization and active exercise programmes (30–50 min three times per week), complemented with MLD therapy, may significantly reduce the development and progression of lymphoedema. Complete decongestive therapy (CDT), which includes both MLD and compression therapy, significantly reduces pain and the feeling of heaviness in the arm ( ).
Conclusion

With their multiple beneficial effects, regular physical activity, sports and leisure activities improve quality of life and life prospects after complex breast cancer treatment. Owing to the effects of complex treatment, age-specific characteristics and comorbidities, many patients do not know what type of exercise they may or should perform; the help of a physiotherapist is essential. Physiotherapists participate in the complex breast cancer survivorship programme in cooperation with the other specialists; their specific task and responsibility is building, teaching and supervising short-term and long-term exercise programmes. Physiotherapists may be involved in supporting breast cancer patients in the clinic, in specialist and primary care, in home care services and in patient organizations all along the disease course, according to the actual situation and need. Physiotherapy exercises and other forms of physiotherapy are now part of integrative oncology and of modern comprehensive breast cancer therapy.
General Guidelines for Psychosocial Oncology Care

It is now accepted worldwide that psychosocial care and psychosocial rehabilitation of patients diagnosed with breast cancer should be provided as an integral part of complex oncology care ( ). This should begin when the diagnosis is communicated to the patient and be practised within a complex cancer survivorship programme later on. The relevant recommendations are summarized below; they explain specific features of care based on general guidelines in psychosocial oncology care ( ) and a recent protocol published by the Hungarian Ministry of Health ( ). The summary is intended for all psychologists, clinical psychologists, psychotherapists, psychiatrists, social workers and mental health professionals who work at an oncology centre providing active medical treatment, at an oncology department/outpatient clinic, at a crisis centre for cancer patients and their relatives, or in private practice. Interventions should be adapted to the oncology treatments being given and to the patient's current condition; therefore, close collaboration is required between the attending physician and the professional providing psychosocial care, who ideally is a member of the multidisciplinary team ( , , , , – ). A person diagnosed with breast cancer may need psychosocial support and treatment throughout the entire course of the disease ( ).

The Main Crisis Points May Include
• The period of assessment for the suspected disease.
• Establishment of the diagnosis.
• Preparing for surgery, starting oncological therapy.
• Initiation of oncological therapy, facing the burdens and side-effects of treatment.
• Follow-up/relapse-free period, "recovery to life."
• Relapse, appearance/diagnosis of metastases.
• Terminal stage.

Important Psychosocial Changes Following the Diagnosis of Cancer
• Emergence of fear of death, dealing with the issue of financial difficulties.
• Changes in body scheme that cause identity confusion (in terms of femininity, motherhood).
• Partnership and sexual problems.
• Difficulties of lifestyle change.
• Financial problems.
• Unbalanced family homeostasis, reversal of roles.
• Uncertainty about the future.
• Fear of recurrence of the disease.

Interventions That Can Be Used Effectively in Mental Care
• Psychoeducation.
• Crisis intervention.
• Psychological counselling.
• Supportive-expressive psychotherapy.
• MBCR (mindfulness-based cancer recovery) programme.
• Relaxation, autogenic training, "imaginative" therapies.
• Other individual and/or group therapeutic techniques, depending on the qualification and skills of the professional providing the care.

For all of these, it is essential:
• To assess and be aware of the patient's physical/mental condition (tumour stage, histological type, age, presence of risk factors, level of social support, living conditions, premorbid personality, comorbidities, previous life events, etc.).
• To match psychosocial care carefully and flexibly with oncology treatments.
Recognizing the importance of emotional problems in cancer patients, in 2017 the Hungarian Cancer Society adopted the International Standard of Quality Cancer Care developed by the International Psycho-Oncology Society (IPOS) ( ) ( https://ipos-society.org/endorsements/organizations ):
○ Psychosocial cancer care should be recognized as a universal human right.
○ Quality cancer care must integrate the psychosocial domain into routine care.
Distress should be measured as the 6th vital sign, in addition to temperature, blood pressure, pulse, respiratory rate and pain.

Psycho-Oncological Assessment and Screening Tools
• Quick screening: the Distress Thermometer (measures the degree of distress reported by the patient on a scale of 0 to 10; above 4, the patient requires support) and Mitchell's Emotional Thermometers ( , , ).
• Mood assessment and recording: BDI, Zung, HADS ( , ).
• Evaluation of anxiety: STAI, HADS ( , , ).
• Problem List: helps to plan individually tailored support by exploring current psychosocial and spiritual difficulties.
• Other psychological measuring instruments, depending on the qualification and competence of the psychosocial or mental health professional.
• The basic principle of screening is that screened patients should be provided with psychological care, and their psychological assessment should be adjusted to their current physical and mental condition.
• All newly presenting patients should be included in oncopsychological screening, regardless of whether they had any premorbid psychiatric illness. It is recommended that quick-screening tests be repeated at different stages of the disease (at any treatment event, e.g., relapse, or at interim periods, e.g., every six months), preferably in conjunction with oncology follow-up ( ).

Possibilities for Psychosocial Oncology Care Intervention in Different Phases of the Disease
• Communication of diagnosis: crisis intervention, counselling, supportive therapy, psychodiagnostics, psychosocial screening.
• Initiation of treatment: psychoeducation, reduction of distress, supportive therapies, cognitive and behavioural therapies, couple therapy, life management counselling, "imaginative" therapies.
• Completion of treatment, recovery: verbal and non-verbal psychotherapies.
• Completion of treatment, deteriorating condition: preventive pastoral care, crisis intervention, support for family members, counselling, supportive psychotherapies.
• Death, dying: dignity therapy, crisis intervention, grieving process embedded in psychotherapy, bereavement support groups, self-help bereavement groups.
• An early preventive approach in interventions is important, anticipating the possibility of recurrence and the effectiveness of second- and third-line treatments, supported by statistical data if necessary.
○ Together with proper communication, this will improve compliance. It allows the creation of a long-term therapeutic collaboration plan, the message of which for the patient is that the treating team trusts in their long-term survival and wants to involve the patient in the treatment process.
○ Starting from the communication of the diagnosis, during the step-by-step process of information-treatment-preparation, it is recommended that issues relevant in the longer term, such as the possibilities of breast reconstruction or the issue of having children after breast cancer treatment, be addressed gradually.
Professional Conditions for Psychological Support of Cancer Patients Hungarian National Cancer Control Programme (2006): • Specialists in the psychosocial treatment of cancer patients (clinical or health psychologist, psychiatrist and/or psychotherapist), working together as members of the oncology team with the oncologist, physiotherapist, dietitian and social worker, should be made available in oncology centres, departments and caregiving services. • Continuous consultation and documentation between different professions is essential for monitoring changes in the patient’s condition. • The primary goal is to maintain the best possible quality of life and physical well-being while preserving emotional, social and spiritual well-being. • Appropriate physical environment and work organization, availability of oncopsychological training/further training. This is part 2 of a series of 6 publications on the 1st Central-Eastern European Professional Consensus Statements on Breast Cancer covering imaging diagnosis and screening ( ), pathological diagnosis ( ), surgical treatment ( ), systemic treatment ( ), radiotherapy ( ) of the disease and related follow-up, rehabilitation and psychosocial oncology care issues (present paper).
Artificial intelligence in ophthalmology and healthcare: An updated review of the techniques in use | ff381493-ba66-43d0-9bf1-11970128a86a | 7926114 | Ophthalmology[mh] | A search for literature was made using keywords “Artificial Intelligence, techniques, tools, healthcare, ophthalmology, algorithms” in PubMed, Web of Science Core Collection and Google Scholar. The relevant articles which discussed different techniques in use in relation to ophthalmology were shortlisted. Thereafter, the main techniques in use were tabulated. The papers which discussed the same or overlapping research were removed. From a total of 72 articles only 17 were found to be of consequence to be included. Then the grey literature was manually searched for additions. If a firm recommendation of the use of the AI technique existed in peer-reviewed literature, only then it was added to the discussion. The results were subsequently checked against the facts from other industry reports. In case of discordance of reports about the use of the technique, the medical literature was to gain precedence over literature from engineering and other branches as per decided protocol; the discordance would have been highlighted in the discussion. However, the need for reporting such an event did not arise.
The main areas where AI is being applied in healthcare are:
• Mass screening
• Diagnostic imaging
• Laboratory data
• Electro-diagnosis
• Genetic diagnosis
• Clinical data
• Operation notes
• Electronic health records
• Records from wearable devices
AI devices are broadly of two main types:
• Machine Learning (ML) techniques, which analyze structured data such as imaging, genetic and electrophysiological (EP) data, and
• Natural Language Processing (NLP) methods, which extract information from unstructured data such as clinical notes, medical journals and other unstructured medical data.
The Machine Learning algorithms can be broadly divided into unsupervised and supervised learning. Unsupervised learning helps with feature extraction, while supervised learning is used for predictive analytics after reducing the principal components for analysis. A semi-supervised mode, which bridges the two, has also been proposed in recent times. Increased computing power, larger amounts of data, real-time online availability of databases and the wide availability of fast internet allow predictive algorithms to be developed. Today, driverless cars are a distinct business opportunity, and these developments have opened other vistas. In ophthalmology, interpretation of complex images has been achieved. In 2009, the Retinopathy Online Challenge used competition fundus photographic sets from 17,877 patient visits of 17,877 people with diabetes who had not previously been diagnosed with DR, consisting of two fundus images from each eye. These were compared, using a single rater, with the output of EyeCheck, a large computer-aided early DR detection project. The fundus photograph set of every visit was analyzed by a single retinal expert; 792 of these 17,877 sets had more than minimal DR, which was the threshold for patient referral. Two algorithmic lesion detectors were run on the datasets separately and compared by standard statistical measures, with the area under the ROC curve as the main performance indicator. The two computerized lesion detectors demonstrated high agreement. At 90% sensitivity, the specificity of the EyeCheck algorithm was 47.7% and that of the ROC-2009 winner algorithm was 43.6%. Comparing this with the interobserver variability of the employed experts, it was concluded that DR detection algorithms had matured and that their detection performance was not far from the prevailing best clinical practices, having reached the limit of human intrareader variability. A combination of blood vessel parameters, microaneurysm detection, exudates, texture and the distance between the exudates and the fovea were accepted as the most important features for detecting the different stages of diabetic retinopathy. In 2008, Nayak et al. used the area of the exudates, blood vessels and texture parameters analyzed through a neural network to classify fundus images into normal, non-proliferative DR (NPDR) and proliferative DR (PDR), reporting a detection accuracy of 93% with a sensitivity of 90% and a specificity of 100%. A support vector machine (SVM) classifier classified fundus images into normal, mild, moderate, severe and proliferative DR classes with a detection accuracy of 82%, a sensitivity of 82% and a specificity of 88%. Different software packages grading the severity of hemorrhages and microaneurysms, hard exudates and cotton-wool spots of DR have been developed and evaluated, and were able to identify these lesions; adjudication by experts has further improved the algorithms. Deep neural networks trained and validated using Gulshan et al.'s methods yielded algorithms that grade retinal fundus photographs according to the International Clinical Diabetic Retinopathy (ICDR) severity scale. In a prospective study conducted with data from two tertiary eye care centers in South India, Aravind Eye Hospital and Sankara Nethralaya, the investigators trained the model to make a multiway classification over the 5-point ICDR grades.
However, only two outputs of the model, referable DR and referable DME, were used to demonstrate that the automated DR system's findings generalized to this population of Indian patients in a prospective setting. This further proved the feasibility of adding automated DR grading and referral systems to screening programmes in developing countries. Cardiology has already developed automated electrocardiographic analysis, and ophthalmology has used wavefront analysis, implementing expert systems that deliver results at par with, or beyond, the capability of human experts with years of clinical experience.
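As a minimal sketch of the performance measures quoted throughout this section (sensitivity, specificity and AUROC), the following Python snippet computes them for a hypothetical referable-DR classifier; the labels and scores are invented for illustration and do not come from any of the studies above.

```python
# Minimal sketch: sensitivity, specificity and AUROC for a hypothetical
# referable-DR classifier (all values invented for illustration).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0])                  # 1 = referable DR
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.3, 0.6, 0.2, 0.5])

print("AUROC:", roc_auc_score(y_true, y_score))

# Operating point at a fixed decision threshold:
y_pred = (y_score >= 0.5).astype(int)
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))
print("Sensitivity:", tp / (tp + fn))   # true positive rate
print("Specificity:", tn / (tn + fp))   # true negative rate
```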
AI and AI-enabled machines fall into seven main categories under two different classification schemes. The first scheme describes how closely the machines simulate the human mind's thinking:
• Reactive Machine Systems, e.g., the Deep Blue chess-playing system, which defeated the world champion Kasparov in 1997.
• Limited Memory Machine Systems, which improve with experience, e.g., chatbots like Tay.
• Theory of Mind Systems, which recognize that other agents have their own beliefs, intentions and emotions.
• Self-Aware AI, which can actually plan for self-preservation.
The second scheme classifies AI by the breadth of its capability:
• Artificial Narrow Intelligence (ANI), which is focused on a narrow range of abilities and processes tasks related to one single narrow task. All AI tools right now belong to this weak AI or ANI category, e.g., Cortana, Siri or Google Assistant.
• Artificial General Intelligence (AGI), which can transfer knowledge from one domain into another on its own. Also called strong AI or full AI, it can take "general intelligent action" and may also experience consciousness. We are a long distance away from something like this.
• Artificial Super Intelligence (ASI), the future of machine learning, which would surpass humans in all domains and all types of pursuits. Theoretically, it would be able to demonstrate creativity, show emotions, engage in relationships, practise different art forms and take bounded-rationality decisions with limited sets of information. Some glimpses of these can be seen in narrow domains even today; however, the integration and transfer of domain expertise is not yet there. For example, chess- or Go-playing machines can scarcely do other things, although that has begun changing. We are still a long distance from anything as powerful as Artificial Super Intelligence.
Neural networks are not the only tools used for healthcare AI. The main tools being used in the healthcare industry are briefly discussed below. This is not an exhaustive list, as only the most common ones are discussed here.

Linear regression

This models a linear relationship between a dependent variable (scalar response) and one or more explanatory (independent) variables. In simple linear regression, the relationship between the dependent variable and one explanatory variable is studied; in multiple linear regression, it is the relationship with more than one explanatory variable. In multivariate linear regression, multiple correlated dependent variables are predicted using different explanatory variables. This relationship can be used for predictive modeling in a very narrow sense in statistics, and it is one of the simplest tools for developing functions or equations that explain the result (the dependent variable) based on the independent variables. The result can be viewed as:

Dependent Variable = Constant + (Slope × Independent Variable) + Error

Any number of independent variables can be studied, and the effort is to reduce the error.

Logistic regression

When the data have a binomial distribution, meaning that they can be separated into two mutually exclusive groups such as yes/no, pass/fail, alive/dead or healthy/sick, the logistic (logit) model is used. It assigns a probability between 0 and 1 to each outcome, with the probabilities summing to one. The logistic regression statistical model uses a logistic function to model a binary dependent variable, but other, more complex analyses and permutations are possible. The logarithm of the odds (log-odds) for the dependent variable is a linear combination of one or more independent variables or "predictors". These independent variables can be continuous (any real value) or binary (yes/no) variables. The corresponding probability of the outcome lies between 0 and 1, i.e., 0–100%. The function that converts log-odds to probability is the logistic function, and the unit of measurement for the log-odds scale is called the logit. A similar model built on a different sigmoid function is called the probit model; it is of use where categorical variables are involved.

Naïve Bayes

Naive Bayes is used for constructing classifiers, i.e., models that assign class labels (such as referable and non-referable) to problem instances. These instances are represented as vectors of feature values, with the class labels drawn from a finite set. Naive Bayes requires only a small amount of training data to estimate the parameters for classification. Naïve Bayes is a family of algorithms built on the relationship between the likelihood, the prior probability ("probability before") and the posterior probability ("probability after"). The common principle for all naive Bayes classifiers is the assumption of independence of features: each feature contributes to the classification regardless of any possible correlations between the features. In Bayesian probability terminology, the underlying equation can be written as

posterior = (prior × likelihood) / evidence

that is, for a class C and features x1, …, xn,

P(C | x1, …, xn) = P(C) × P(x1, …, xn | C) / P(x1, …, xn)

Practically, only the numerator of this fraction is important, because the denominator is effectively constant (it does not depend on C, and the values of the individual features xi are given). Under the independence assumption, the numerator is equivalent to the joint probability model

P(C) × P(x1 | C) × P(x2 | C) × … × P(xn | C)

Naive Bayes is a probabilistic machine learning algorithm with wide application in heterogeneous classification tasks, from labeling images as referable to filtering e-mail spam. It is called "naive" because it assumes that the features that go into the model are independent of each other: changing the value of one feature does not directly influence or change the value of any of the other features used in the algorithm. Rev. Thomas Bayes (1702–61) gave us the elements of this approach and, therefore, it is named after him. It is popular because it can be coded easily and runs almost in real time; it is scalable and responds to users' requests almost instantaneously, as the calculations are relatively straightforward.
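A minimal sketch follows, fitting the two classifiers just described to a toy binary "referable / non-referable" task; the feature values and labels are invented for illustration and do not come from any real dataset.

```python
# Minimal sketch: logistic regression and naive Bayes on invented toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Two toy features per eye, e.g., microaneurysm count and exudate area (assumed):
X = np.array([[1, 0.2], [2, 0.1], [8, 1.5], [9, 2.0],
              [1, 0.3], [7, 1.8], [0, 0.1], [6, 1.2]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = referable

logit = LogisticRegression().fit(X, y)   # models log-odds as a linear combination
nb = GaussianNB().fit(X, y)              # assumes feature independence given class

new_eye = np.array([[5, 1.0]])
print("Logistic P(referable):", logit.predict_proba(new_eye)[0, 1])
print("Naive Bayes P(referable):", nb.predict_proba(new_eye)[0, 1])
```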
Decision tree analysis

This is a schematic representation of several decisions, each having two or more outcomes, together with the probability of the occurrence of each of them. It gives a tree-shaped graphical representation of decisions and of the nodes or chance points that help to investigate the possible outcomes ( ). There are broadly six steps in a decision tree analysis:
• Definition of the problem in structured terms, listing all the factors relevant to the solution. The probability distributions of the conditional future behavior of those factors are then estimated.
• Modeling of the decision process: a model listing all the alternatives in the problem is constructed, and the entire decision process is presented schematically in an organized, step-by-step fashion.
• Application of appropriate probability values to all the branches and sub-branches of the decision tree.
• "Solution" of the decision tree by finding the particular branch of the tree that has the largest expected value, or that maximizes the solution (or vice versa, depending on how the problem is defined).
• Sensitivity analysis, which can be performed to see how the solution reacts to changes in the inputs; this shows how the model behaves when run in real-world situations.
• Listing of the underlying assumptions, which should ideally be found to be possible and plausible.

Nearest Neighbor analysis

Nearest neighbor analysis evaluates the distance between each given point and the point closest to it; the analysis is done for every point. The algorithm then compares these values to the values expected for a random sample of points from a complete spatial randomness (CSR) pattern. CSR rests on two assumptions:
• All points have the same likelihood of receiving or not receiving a positive event (or, as a corollary, are equally likely to have a negative case or negative event).
• All positive events or cases are located independently of one another.
The null hypothesis of complete spatial randomness is tested using the standard normal variate (Z statistic). In such a situation, a negative Z score demonstrates clustering, while a positive score correlates with dispersion or evenness.
The mean nearest neighbor distance is

d̄ = (1/N) Σ d_i

where N is the number of points and d_i is the nearest neighbor distance for point i.
The expected value of the nearest neighbor distance in a random pattern, with a correction factor (Donnelly's correction) to counteract the boundary effect, is

E(d) = 0.5 √(A/N) + (0.0514 + 0.041/√N) × B/N

where A is the area and B is the length of the perimeter of the study area.
The variance is

Var(d̄) = 0.070 × A/N² + 0.037 × B × √(A/N⁵)

and the Z statistic is

Z = (d̄ − E(d)) / √Var(d̄)

The output file in nearest neighbor analysis gives: the input data points, the total number of points, the minimum and maximum of the X and Y coordinates, the size of the study area, the observed mean nearest neighbor distance, the variance, and the Z statistic (standard normal variate). In this method, the study area is a regular rectangle or square; the method cannot be used for irregularly shaped study areas.
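The following minimal sketch implements the CSR test just described, using the boundary-corrected expectation and variance as reconstructed above (Donnelly's correction is an assumption about which edge correction the source intended); the point coordinates are randomly generated for illustration, and the study area is assumed rectangular, as the method requires.

```python
# Minimal sketch of the nearest-neighbor CSR test (Donnelly-corrected form).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))   # 50 points in a 100 x 100 square
N = len(pts)
A, B = 100 * 100, 4 * 100                 # area and perimeter of the study area

# Observed mean nearest-neighbor distance (k=2: the first neighbor is the point itself).
dists, _ = cKDTree(pts).query(pts, k=2)
d_obs = dists[:, 1].mean()

expected = 0.5 * np.sqrt(A / N) + (0.0514 + 0.041 / np.sqrt(N)) * B / N
variance = 0.070 * A / N**2 + 0.037 * B * np.sqrt(A / N**5)
z = (d_obs - expected) / np.sqrt(variance)
print(f"observed={d_obs:.2f}, expected={expected:.2f}, Z={z:.2f}")
# Z < 0 suggests clustering; Z > 0 suggests dispersion/evenness.
```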
Random Forest Decision Trees

Decision trees are the building blocks of the random forest model, in which they act together as an ensemble. Each individual tree in the random forest gives a class prediction, and the class with the most votes becomes the model's prediction. It is a very powerful tool. The random forest model outperforms many more sophisticated tools in making predictions because random effects and the central limit theorem operate together; this is also called the wisdom of crowds. A large number of relatively uncorrelated models (trees) operating as a committee will outperform any of the individual constituent models ( ). The low correlation between models is the key: the trees protect each other from errors unless all of them err in the same direction. By probability, some trees are wrong but other trees are right, so the group's prediction moves in the correct direction. An essential precondition, however, is the absence of multicollinearity, i.e., of correlation between the trees such that their predictions err in the same direction together.

Discriminant analysis

Discriminant analysis is a statistical tool used to assess the adequacy of a classification, given the group memberships, or to assign objects to one group among a number of groups. It is called discriminant function analysis (DFA) when used to separate two groups, and canonical variates analysis (CVA) when more than two groups are involved. Discriminant analysis can be used to determine the predictor variables related to the dependent variable, and to predict the value of the dependent variable when values of the predictor variables are available. It is often used in combination with cluster analysis, and it allows a subject to be assigned to cases or controls when the risk factors or predicting factors are known.

Support vector machine (SVM)

An SVM classifies subjects into two or more groups, and the outcome is used as a classifier. It works on mutually exclusive groups of subjects separable into two or more groups through decision boundaries defined by the traits. The goal of training is to assign an optimal weight w to each factor, so that the sum of the weights comes to 1 and the weights, acting with the traits, explain the outcomes; this can be done by minimizing a quadratic loss function, as in ordinary least squares (OLS). The main tuning parameters used are the kernel, regularization (C), gamma and the margin. Learning the hyperplane in a linear SVM involves transforming the problem into a linear equation. For a linear kernel, the prediction for a new input is calculated using the dot product between the input (x) and each support vector (xi):

f(x) = B0 + sum[ai × (x · xi)]

A larger C value makes the optimizer build a hyperplane that attempts to classify all training points correctly, even if that line or plane has to curve repeatedly. A small C value makes the optimizer define a larger-margin separating hyperplane, at the cost of misclassifying more points. The margin is the separation between the decision boundary and the closest class points; a good margin has a large separation for both classes. The gamma parameter determines the influence of a single training example: a low gamma means that points far from the separation line are considered in the calculation, while a high value means that only points close to the separation line are considered.
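A minimal sketch of the C and gamma behavior described above follows, using an RBF-kernel SVM on toy two-class data; all values are invented for illustration.

```python
# Minimal sketch: effect of C and gamma on an RBF-kernel SVM (toy data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

for C, gamma in [(0.1, 0.1), (10, 0.1), (10, 5.0)]:
    clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
    # Fewer support vectors usually indicates a simpler, larger-margin boundary.
    print(f"C={C}, gamma={gamma}: support vectors = {clf.n_support_.sum()}, "
          f"training accuracy = {clf.score(X, y):.2f}")
```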
Neural network

Neural networks refer to a set of algorithms designed to recognize patterns. A neural network can be compared to a network or circuit of neurons; an artificial neural network is a network of artificial neurons or nodes. There are hidden layers between the input and the output, as shown in the figure. These hidden layers have weights attached to the different inputs and can have complex mathematical functions modeled on them. The connections of a biological neuron can likewise be modeled as weights: a positive weight represents an excitatory connection and a negative weight signifies an inhibitory one. All inputs are acted upon by the weights attached to the hidden layers and summed to produce an output; this is called a linear combination. Predictive modeling, adaptive control, and training on a dataset can all be done with neural networks. Experiential self-learning using neural networks helps draw conclusions from complex and seemingly unrelated sets of information. Neural networks can pick up information from images, sound, text, or time series; these are converted into vectors from which numerical signals about the real-world data are extracted. Neural networks are used as a clustering and classification layer on top of the stored data: they help to group unlabeled data according to similarities among the example inputs, and they classify data when a labeled dataset is available to train on. Neural networks can also extract features that are fed to other algorithms for clustering and classification, working as components of larger machine-learning applications for reinforcement learning, classification, and regression. Examples of publicly available deep neural networks, such as convolutional neural networks, are GoogleNet, AlexNet, and VGGNet; software such as Caffe and TensorFlow can also be used.
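The weighted-sum computation described above can be made concrete with a minimal sketch of a single hidden layer; all weights and inputs here are invented for illustration, with positive entries playing the excitatory role and negative entries the inhibitory role.

```python
# A minimal sketch of a one-hidden-layer forward pass: weighted linear
# combinations followed by non-linearities. All numbers are illustrative.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.8, 0.2, 0.5])          # input vector (e.g. image features)

W1 = np.array([[ 0.4, -0.7,  0.1],     # hidden-layer weights: sign encodes
               [-0.3,  0.9,  0.6]])    # excitatory (+) vs inhibitory (-)
b1 = np.array([0.1, -0.2])

W2 = np.array([[0.8, -0.5]])           # output-layer weights
b2 = np.array([0.0])

h = relu(W1 @ x + b1)                  # linear combination, then non-linearity
y = sigmoid(W2 @ h + b2)               # output squashed to a probability
print(y)
```

Training amounts to adjusting W1, W2, b1, and b2 so that the output matches the labels; deep networks simply stack many such layers.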
Hidden Markov

These statistical models help to recover hidden information from observed sequential attributes or symbols. Hidden Markov models (HMMs) derive their name from the Russian mathematician Andrey Andreyevich Markov. They have been used in speech recognition and in the analysis of biological nucleotide sequences: to predict exons and introns in DNA, to identify functional motifs (domains) in proteins (profile HMMs), and to align two sequences (pair HMMs). A good HMM simulates the real-world source by converting the real world's observed data into symbols. Machine-learning techniques based on HMMs have solved problems including speech recognition, optical character recognition, and bioinformatic needs such as genetic analysis and computational biology. In an HMM, a discrete stochastic process progresses through a series of states 'hidden' from the observer to generate the output, which is the solution to the problem. Each hidden state generates a symbol representing an elementary unit of the modeled data. This is a powerful technique used when a probability for a sequence of observable events can be computed but some of the events of interest are hidden and not observed directly. An HMM allows us to talk about both observed events and hidden events; the hidden states are akin to the hidden layers in neural networks. A transition probability matrix is first constructed, representing the probability of moving from the first state to the second state. The variables of interest include a sequence of observations drawn from a vocabulary and a sequence of observation likelihoods called emission probabilities; each emission probability expresses the probability of an observation being generated from a given state, starting from the initial probability distributions over the states. A first-order HMM assumes that the probability of a particular state depends only on the previous state and is not affected by any other state; other techniques can be modeled for more complex scenarios.
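As a minimal sketch of these ideas, the following code builds a two-state first-order HMM with a transition matrix and emission probabilities and scores an observed sequence with the forward algorithm; the states, symbols, and probabilities are invented for illustration.

```python
# A minimal two-state HMM: initial distribution, transition matrix, emission
# probabilities, and the forward algorithm. All probabilities are illustrative.
import numpy as np

states = ["exon", "intron"]            # hidden states
symbols = {"A": 0, "C": 1, "G": 2, "T": 3}

pi = np.array([0.6, 0.4])              # initial state distribution
T = np.array([[0.9, 0.1],              # transition probability matrix:
              [0.2, 0.8]])             # row = from-state, column = to-state
E = np.array([[0.3, 0.2, 0.2, 0.3],    # emission probabilities per state
              [0.1, 0.4, 0.4, 0.1]])

obs = [symbols[c] for c in "ACGGT"]

# Forward algorithm: alpha[s] = P(observations so far, hidden state s now).
alpha = pi * E[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ T) * E[:, o]
print("P(sequence) =", alpha.sum())
```

Summing the final forward variables gives the likelihood of the observed sequence; the Viterbi algorithm replaces the sum with a max to recover the most probable hidden path.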
IDx-DR, an artificial intelligence algorithm that analyzes retinal images from a Topcon NW400 camera uploaded to the cloud, became, in April 2018, the first medical device approved by the United States Food and Drug Administration to use artificial intelligence to detect more-than-mild diabetic retinopathy in adults with diabetes. The intraocular lens (IOL) 'super formula' was introduced as a 3-D framework that exploits similarities among IOL formulas to build an IOL 'super surface' amalgamating the modern formulae: Hoffer Q, Holladay I, Holladay I with the Koch adjustment, and Haigis. This super formula calculates IOL power in all types of eyes. Ectatic corneal conditions and glaucoma are also seeing a large number of algorithms being developed. Lietman et al., who used artificial neural networks on 106 glaucoma patients and 249 controls to diagnose glaucoma from visual fields, reported that the algorithm outperformed global indices at high specificities (90%–95%). Li et al. used a deep learning algorithm on 4012 pattern deviation images for functional glaucoma diagnosis, with a reported accuracy of 87.60% (sensitivity = 93.20%, specificity = 82.60%). Yousefi et al., in a cross-sectional study of 677 patients and 1146 controls, used unsupervised learning methods on visual fields for prognostication (sensitivity = 87%, specificity = 96%); unsupervised machine learning consistently detected the progression of glaucoma much earlier than conventional methods. Prediction of progression from Humphrey visual fields, even with the 24-2 algorithm, can be made with deep learning up to five and a half years before conventional methods. Mardin et al. combined confocal laser scanning ophthalmoscope images with visual fields using a machine learning classifier to obtain an area under the curve (AUROC) of 0.977 (sensitivity = 95%, specificity = 91%). The advantage of AI is that it can use data of great variety and variability to model outcomes and predict them; even genetic data can be used for risk stratification once the mapping is complete. The basic purpose of this article was to focus on the different techniques being used in healthcare, and more so in ophthalmology, rather than to provide an exhaustive list of reported and ongoing studies.
AI applications in healthcare have tremendous potential and usefulness. However, the success of healthcare AI depends on the availability of clean, high-quality healthcare data, which can come only with careful execution and liberal funding. It is critical to consider data capture, storage, preparation, and mining. Standardization of clinical vocabulary and the sharing of data across platforms are imperative for future growth. It is also important to ensure that bioethical standards are maintained in the collection and use of the data; there is a need to develop strong foundations for computational bioethics. The authors hope that this paper helps stakeholders realize this potential and makes a contribution to artificial intelligence in healthcare, in the literature as well as in practice.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Effects of corticosteroids on severe community-acquired pneumonia: a closer look at the evidence

We read with interest the article published by Wu et al., who reported that adjunctive corticosteroids can provide survival benefits and improve clinical outcomes without increasing adverse events in patients with severe community-acquired pneumonia (sCAP). We commend the authors for conducting this comprehensive systematic review and meta-analysis on this crucial topic, as previous randomized controlled trials have yielded conflicting results. However, we have several concerns regarding the methodologies and results presented in this paper.

First, this meta-analysis did not include three pivotal trials: the Santeon-CAP Trial, the CAPISCE Trial, and the Bellvitge Trial. While these trials recruited a mixed population of patients with severe and non-severe CAP, all of them reported subgroup analyses of patients with sCAP, which could be utilized for data extraction in a study-level meta-analysis. In their respective subgroup analyses, the Santeon-CAP Trial, the CAPISCE Trial, and the Bellvitge Trial did not find mortality benefits associated with dexamethasone, prednisolone, or methylprednisolone, respectively, among patients hospitalized for sCAP. We extracted the data from these three trials and conducted a meta-analysis by pooling the results from all 10 studies (7 studies from the meta-analysis by Wu et al. and 3 studies identified through our literature search). We found that hydrocortisone was associated with a reduction in all-cause mortality (HR 0.48 [95% CI: 0.32–0.72]), but this observation was not seen for non-hydrocortisone corticosteroids (HR 0.79 [95% CI: 0.58–1.06]) (Fig. ). Based on these results, it appears that only hydrocortisone, and not other corticosteroids, is associated with a reduced mortality risk among patients hospitalized for sCAP.

Second, the authors reported that patients who received corticosteroids, particularly hydrocortisone, for a duration of ≤8 days without tapering experienced significantly lower mortality risks (as demonstrated in Table 3 of the manuscript). However, the CAPE COD trial conducted by Dequin and colleagues, which reported mortality benefits among patients receiving hydrocortisone, used an initial hydrocortisone dose of 200 mg with a gradual taper over 8 or 14 days. This study appears to have been inaccurately categorized under the subgroup analysis of corticosteroids administered for ≤8 days without tapering. We performed a subgroup analysis that re-classified the CAPE COD trial under the subgroup of duration >8 days with tapering and included the three newly identified studies. We found that corticosteroid courses of over 8 days or with a taper showed a reduction in all-cause mortality similar to that of courses of less than 8 days or without a taper (HR 0.69 [95% CI: 0.51–0.93] vs HR 0.55 [95% CI: 0.33–0.92]). Thus, in contrast to the authors' findings, the duration or tapering of corticosteroids did not appear to affect the mortality benefit.

Third, the authors did not report on the risk of hyperglycemia, a significant adverse event associated with corticosteroid use.
We conducted an updated meta-analysis that encompassed all studies reporting on hyperglycemia and found an approximately 50% increased risk of hyperglycemia associated with corticosteroid use compared with placebo (HR 1.50 [95% CI: 1.04–2.17]). These findings further substantiate that corticosteroids can elevate the risk of hyperglycemia, necessitating caution, particularly in patients with diabetes hospitalized for sCAP.

Finally, the authors omitted the assessment of certainty or confidence in the body of evidence for each evaluated outcome. This is a crucial element of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist, which is recommended for meta-analyses of randomized controlled trials. Such an evaluation contextualizes the results of a meta-analysis and facilitates their application to clinical practice. Once again, we commend the authors for undertaking this significant work and hope that our comments contribute additional insights to the presented data.
Network Pharmacology and Metabolomics Reveal Anti-Ferroptotic Effects of Curcumin in Acute Kidney Injury

AKI is a serious condition in hospitalized patients linked to high morbidity and mortality rates. It is marked by a rapid deterioration in glomerular function and a buildup of metabolic waste products such as urea nitrogen and creatinine. The importance of AKI in the development of disease is often overlooked: along with other complications, such as heart failure, kidney injury, and sepsis, one or more episodes of AKI favor the development of chronic kidney disease, thereby increasing mortality. Studies have shown that the pathophysiology of AKI is linked to immunological imbalance, inflammatory responses, oxidative stress, and apoptosis. Pathologically, AKI severely damages renal tubular cells, causing apoptosis and necrosis with renal dysfunction. Although AKI is reversible, there is currently no effective clinical treatment: the available therapeutic strategies are mainly symptomatic, and effective interventions based on the underlying pathological mechanisms remain to be discovered. In recent years, ferroptosis, a form of cell death characterized by iron-dependent accumulation of lipid peroxides, has been shown to play a key role in the development of AKI. The tryptophan metabolite 5-HT has been found to exert anti-ferroptotic effects in a manner quite different from that of cystine, whereas monoamine oxidase A (MAOA) enhances cellular sensitivity to ferroptosis by degrading 5-HT. Many small molecules targeted at inhibiting ferroptosis, such as baicalein and silymarin, have clear AKI-ameliorating effects, suggesting that targeting ferroptosis may provide new insights into clinical therapeutic strategies for AKI. Herbs and their monomers, characterized by high potency, low toxicity, and multi-targeting, have received much attention for the prevention and treatment of ferroptosis-related diseases, especially AKI. Curcumin (Cur), a natural polyphenol and the main bioactive compound of Curcuma longa, exerts immunoregulatory, anti-inflammatory, antioxidant, and anti-apoptotic effects. Extensive research has demonstrated remarkable therapeutic effects of Cur on various kidney diseases, and Cur has been reported to reverse the decline in renal function in patients with AKI by modulating the immune response, reducing inflammation and apoptosis, and inhibiting oxidative stress. Several clinical trials have demonstrated the therapeutic potential of Cur in cardiovascular disease, diabetes, and cancer. In addition, Cur has been found to ameliorate ferroptosis-related diseases by inhibiting LOX, modulating mitochondrial oxidative stress, and regulating ferritinophagy. However, the contribution of Cur's anti-ferroptotic effects to AKI treatment is still unknown. Considering the multiple beneficial effects of Cur in AKI treatment, it is important to identify its molecular targets; this information is essential for expanding the practical application of Cur in AKI treatment. Metabolomics is a sub-discipline of systems biology that uses high-throughput methods for the qualitative or quantitative analysis of metabolites. This approach is used to monitor dynamic changes in metabolites that reflect the state of the body.
The metabolome reflects the terminal variation of a pathological or physiological state, genetically controlled intrinsic metabolism, and environmental influences (e.g., diet and medicine). Metabolite profiling therefore provides information about disorder-related changes in specific biochemical pathways. Recent evidence indicates that significant metabolic dysregulation caused by hypoxia and mitochondrial dysfunction is closely related to AKI. However, metabolomics falls short in elucidating the endogenous mechanisms that drive the alterations in metabolites during AKI, such as the upstream pathways, protein interactions, and metabolite biosynthesis involved. Network pharmacology is a systematic approach based on network analysis of a biological system that selects specific nodes to assist in designing multi-target drug molecules. It is designed to investigate the relationships among drugs, targets, pathways, and diseases by constructing a multi-level network model. In recent years, network pharmacology has gained significant popularity as a tool for identifying active ingredients in drugs, predicting the mechanisms underlying drug actions, analysing the targets of major active ingredients, and developing combination drugs. Nonetheless, network pharmacology only suggests potential targets and pathways of active compounds; it does not confirm the binding or the effects that Cur exerts on its targets. Consequently, the complementary use of metabolomics and systematic network pharmacology prediction could provide a new strategy to clarify the potential mechanisms responsible for the action of Cur in the treatment of AKI. In this study, we applied such a combined approach to uncover the targets and potential molecular mechanisms of Cur in treating AKI. We identified related targets and metabolic responses showing that inhibition of ferroptosis plays an important role in the Cur-treated, FA-induced mouse model. This research provides novel insights into the renoprotective effects of Cur in the management of AKI.
Chemicals and Reagents

Curcumin (Cur), folic acid (FA), RSL-3, and erastin were acquired from MedChemExpress (Monmouth Junction, NJ, USA). Deuterium oxide (D2O) was obtained from Qingdao Tenglong Weibo Technology Co., Ltd. (Qingdao, China). 3-(Trimethylsilyl)propionate-2,2,3,3-d4 (TSP) was acquired from Sigma (St Louis, MO, USA). Methanol and chloroform were obtained from Sinopharm (Shanghai, China). The Cell Counting Kit-8 was acquired from Beyotime (Shanghai, China). The monoamine oxidase (MAO) activity assay kit was acquired from Solarbio Technology Co., Ltd. (Beijing, China). The malondialdehyde (MDA) and tissue iron assay kits were acquired from Nanjing Jiancheng Bioengineering Institute (Nanjing, China).

Animal Experiment

SPF (specific pathogen free) C57BL/6 mice (male, 8 weeks old) were acquired from Shanghai SLAC Animal Company (Shanghai, China). All mice were kept under SPF conditions in a controlled environment with a room temperature of 23±3°C and a relative humidity of 70±5%. They were subjected to a 12-hour dark-light cycle and provided with ad libitum access to food and water to promote their well-being and reduce stress. The study protocol was approved by the Ethics Review Committee of Xiamen University (XMULAC20220200), and all animal experiments were conducted in accordance with the animal care and use guidelines of Xiamen University (Xiamen, China). After 1 week of acclimatization, the mice were randomly assigned to three groups (N=10 per group): control, FA-treatment (model), and Cur-treatment. The AKI model was established by administering a single intraperitoneal injection of FA (200 mg/kg), as previously described; the control group received the same volume of saline. Cur (100 mg/kg) was administered immediately upon modelling and three further times within 24 h (i.e., at 8-h intervals). All mice were euthanized at 24 h, and both kidneys and blood samples were obtained for subsequent analysis.

Detection of Blood Urea Nitrogen (BUN) and Creatinine (CRE)

Serum BUN and CRE (N=10) were measured to monitor renal function, as previously reported. Blood samples were left at room temperature in centrifuge tubes for 30–60 minutes and then centrifuged at 14,000 g for 15 minutes; the separated serum was transferred to a clean centrifuge tube and centrifuged again (14,000 g for 3 minutes) to remove any remaining cells. BUN and CRE levels were determined with commercial kits from Solarbio Technology Co., Ltd. (Beijing, China) (Urea Nitrogen/Urea Content Assay Kit, Cat: BC1535; Creatinine Content Assay Kit, Cat: BC4915), following the manufacturer's instructions.

Histopathological Assessment

The kidneys were fixed in 4% phosphate-buffered formaldehyde to preserve their structure, dehydrated, and embedded in paraffin. Serial sections (thickness: 5 μm) were stained with haematoxylin and eosin. Images were acquired using an inverted microscope (AE31E; Motic, Xiamen, China), with 5 photographs acquired per sample.

Sample Preparation for NMR-Based Metabolomics

Kidney samples (~100 mg; N=10 per group) were added to 1.5 mL of extraction solution (water:chloroform:methanol = 2.85:4:4) and homogenized at 65 Hz for 60 seconds to extract the aqueous metabolites. After vortexing for 5 minutes, the samples were centrifuged at 12,000 g (4°C, 15 minutes), and the methanol was removed by nitrogen bubbling. The aqueous phase was lyophilized and redissolved in 600 μL of 50 mM phosphate buffer containing 0.1 mM TSP (pH 7.4, 100% D2O). The redissolved sample was centrifuged at 12,000 g (4°C, 15 minutes) to obtain the supernatant, which was carefully transferred to NMR tubes and centrifuged at 4°C (1500 g, 5 minutes) prior to NMR detection.
NMR Detection and Data Processing for NMR-Based Metabolomics

One-dimensional 1H-NMR spectra were obtained on an 850 MHz NMR spectrometer (Bruker AVANCE III HD, Bruker BioSpin, Ettlingen, Germany) using a NOESYGPPR1D pulse sequence at 25°C. The acquisition parameters were as follows: spectral width, 20 ppm; relaxation delay, 4 s; 32 scans. MestReNova 9.0 (Mestrelab Research S.L., Santiago de Compostela, Spain) was used to process the NMR data, including phase correction and baseline correction. Chemical shifts were referenced to TSP (δ 0.00). The data matrix was obtained in MATLAB R2014b (MathWorks, Natick, MA, USA) by binning the δ 9.5–0.75 spectral region at 0.001 ppm and normalising all peak integrals to the TSP peak integral, followed by removal of the δ 4.85–4.75 (residual water) region. Chenomx NMR Suite version 8.3 (Chenomx Inc., Edmonton, Canada), the Human Metabolome Database (accessed at http://www.hmdb.ca/ on 1 January 2023), and the related literature were combined to assign the metabolite resonances. The assignments were validated with a two-dimensional 1H-13C heteronuclear single quantum correlation (HSQC) spectrum (Supplementary Figure 1).

Multivariate Statistical Analysis and Pathway Analysis for NMR-Based Metabolomics

Multivariate statistical analysis of the data matrix was conducted with the SIMCA 14.1 software package (MKS Umetrics, Malmö, Sweden). First, unsupervised principal component analysis (PCA) was conducted to illustrate the clustering trends underlying group separation. Supervised partial least squares-discriminant analysis (PLS-DA) was then used to maximally discriminate the metabolic fingerprints of the kidneys. Rigorous permutation tests with 200 cycles were subsequently performed to obtain the interpretive (R2) and predictive (Q2) abilities for evaluating the reliability of the PLS-DA model. Significantly altered metabolites (p<0.05) were identified with IBM SPSS Statistics 22.0 (IBM, Armonk, NY, USA). The MetaboAnalyst 5.0 web server (accessed at http://www.MetaboAnalyst.ca on 1 January 2023) was used to visualise pathway enrichment; critical metabolic pathways were screened using two criteria (pathway impact value [PIV] >0.1 and p<0.05).
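The study performed the PLS-DA and permutation tests in SIMCA; purely as a conceptual sketch, the same Q2-with-permutations idea can be expressed in Python with scikit-learn on simulated spectra (the bin count, component number, and group sizes below are illustrative assumptions, not the study's settings).

```python
# A minimal sketch of a PLS-DA permutation test: a real class labelling should
# yield a Q2 far above the Q2 obtained with shuffled labels.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 500))          # 20 samples x 500 spectral bins
y = np.repeat([0.0, 1.0], 10)           # two groups (e.g. control vs model)
X[y == 1, :25] += 1.0                   # inject a genuine group difference

def q2(X, y):
    # Cross-validated predictive ability of a 2-component PLS model.
    pred = cross_val_predict(PLSRegression(n_components=2), X, y, cv=5).ravel()
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

q2_real = q2(X, y)
q2_perm = [q2(X, rng.permutation(y)) for _ in range(200)]   # 200 permutations
p = (np.sum(np.array(q2_perm) >= q2_real) + 1) / (200 + 1)
print(f"Q2={q2_real:.2f}, permutation p={p:.3f}")
```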
Network Pharmacology Construction

Cytoscape version 3.8.2 (Cytoscape Consortium, San Diego, CA, USA) was used to build the metabolite-protein-pathway network and reveal the core metabolites and associated proteins. Disease-associated candidate targets were gathered from GeneCards (https://www.genecards.org/), OMIM (https://omim.org/), TCMSP (http://tcmspw.com/tcmsp.php), and the Therapeutic Target Database (TTD; http://db.idrblab.net/ttd/) by searching the keyword 'acute kidney injury'. The potential targets of Cur were screened by searching the keyword 'Curcumin' in SwissTargetPrediction (http://www.swisstargetprediction.ch/), BATMAN-TCM (http://bionet.ncpsb.org/batman-tcm/), STITCH 5.0 (http://stitch.embl.de/), and ChEMBL (https://www.ebi.ac.uk/chembl/). The predicted targets of Cur against AKI were taken as the overlap of the drug targets and the disease targets. KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and GO (Gene Ontology) enrichment analyses of the potential targets were conducted in Cytoscape with the ClueGO plugin. The differential metabolites identified by metabolomics and the predicted targets of Cur against AKI were imported into Cytoscape, and MetScape was used to build an interaction network visualizing the relationships among genes, enzymes, pathways, and metabolites.

Molecular Docking

Molecular docking analysis was conducted with the Schrödinger software. The molecular structure of Cur (PubChem CID 969516) was obtained from PubChem Compound (https://www.ncbi.nlm.nih.gov/pccompound) and converted from its native format into pdbqt format using Maestro-LigPrep. The crystal structures of the target proteins, monoamine oxidase A (MAOA; Protein Data Bank identifier [PDB ID]: 2Z5Y), glutaminase 1 (GLS1; PDB ID: 3VOY), and glutaminase 2 (GLS2; PDB ID: 4BQM), were obtained from the PDB (Research Collaboratory for Structural Bioinformatics, https://www.rcsb.org/). The protein crystals were optimized by protein preprocessing, regeneration of the states of the native ligands, H-bond assignment optimization, protein energy minimization, and deletion of water. The SiteMap module in Schrödinger was used to predict the best binding sites, and the Receptor Grid Generation module was used to set the most suitable enclosing box around the predicted binding site, yielding the active site. Molecular docking was then performed at the active site, and molecular mechanics with generalised Born and surface area solvation (MM-GBSA) analysis was used to assess the stability of ligand-protein binding. The extra precision (XP) Gscore and MM-GBSA dG Bind were used to judge binding stability. Finally, PyMOL was used to visualize the ligand-protein complexes with the best scores.

Enzyme Activity Assay

Kidney samples were processed according to the instructions of the commercial assay kit (Monoamine Oxidase Activity Assay Kit, Beijing Solarbio Technology Co., Ltd., Cat: BC0015). Extraction solution (sample weight [g]:extraction solution [mL] = 1:1.5) was added to prepare the samples (N=6 per group). The reagents were then added in sequence according to the instructions, and absorbance was read with a microplate reader (BioTek, Winooski, VT, USA) at 360 nm at 10 s and at 2 h. The enzyme activities were normalized to tissue weight.

qPCR

TriZol (Takara, Kyoto, Japan) was used to extract total RNA following the manufacturer's procedure. Total RNA (1 μg; N=3 per group) was used for cDNA synthesis with the ReverTra Ace qPCR RT Master Mix (LabLead, Beijing, China). The cDNA was used to amplify the target genes with the SYBR Green Real-time PCR Master Mix (TOYOBO). RT-PCR was conducted as follows: 95°C for 10 min, followed by 39 cycles of 95°C for 10 s, 60°C for 30 s, and 95°C for 10 s. Data were measured and exported with the CFX96 Real-Time System (Bio-Rad). The delta cycle threshold (2^-ΔΔCt) approach was used to estimate relative gene expression levels, normalized to the internal control β-actin. The qPCR primers were obtained from Sangon Biotech Co., Ltd. (Shanghai, China): MAOA, 5'-GACCTTGACTGCCAAGATT-3' and 5'-GATCACAAGGCTTTATTCTA-3'; β-actin, 5'-CTTCCAGCCTTCCTTCCTGG-3' and 5'-CTGTGTTGGCGTACAGGTCT-3'.
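The 2^-ΔΔCt calculation is simple enough to sketch directly; the Ct values below are invented for illustration, with MAOA as the target and β-actin as the reference gene.

```python
# A minimal sketch of the 2^-ddCt relative-expression calculation described
# above; all Ct values are illustrative, not measured data.
import numpy as np

ct = {
    "control": {"target": [24.1, 24.3, 24.0], "ref": [17.0, 17.2, 16.9]},
    "treated": {"target": [22.5, 22.8, 22.6], "ref": [17.1, 17.0, 17.2]},
}

def ddct_fold_change(group, calibrator="control"):
    # dCt normalizes the target to the reference gene within each group;
    # ddCt then compares the group of interest to the calibrator group.
    dct = np.mean(ct[group]["target"]) - np.mean(ct[group]["ref"])
    dct_cal = np.mean(ct[calibrator]["target"]) - np.mean(ct[calibrator]["ref"])
    return 2.0 ** -(dct - dct_cal)

print("control:", ddct_fold_change("control"))   # 1.0 by construction
print("treated:", ddct_fold_change("treated"))   # fold change vs control
```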
Cell Culture

Human kidney tubular epithelial cells (HK2) were obtained from ATCC (Manassas, VA, USA). The cells were cultured in high-glucose DMEM (HyClone) containing 10% foetal bovine serum (Biological Industries), streptomycin (100 μg/mL), and penicillin (100 IU/mL) (Thermo Fisher Scientific, Waltham, MA, USA) in an incubator at 37°C with 5% CO2.

Cell Counting Kit-8 Experiment

HK2 cells were seeded in 96-well plates at a density of 5×10^3 cells/well in 200 μL of medium and cultured for 12 hours. After reaching 80–90% confluency, the cultures were treated with 2 μM RSL-3 or 2 μM erastin to induce ferroptosis. In addition, HK2 cells (N=5 per group) were treated with Cur at different final concentrations (50, 25, 10, 5, 1, and 0.1 μM) for 24 h. Each well was then given 10% (v/v) Cell Counting Kit-8 reagent, and after incubation at 37°C for 2–4 h, the absorbance at 450 nm was measured with a multi-mode microplate reader (POLARstar Omega, BMG LABTECH GmbH, Germany).

Lipid Peroxidation Assessment

Tissue samples (N=5 per group) were assayed according to the instructions of the MDA assay kit (Beijing Solarbio Technology Co., Ltd). MDA levels, which reflect the degree of lipid peroxidation in ferroptosis, were measured with a microplate reader at 530 nm.

Analysis of Tissue Iron Content

Tissue samples (N=5 per group) were analyzed for iron content with an iron assay kit (Nanjing Jiancheng Bioengineering Institute) according to the provided protocol. Iron levels were determined at 520 nm with a multi-mode microplate reader.

Statistical Analysis

Data are expressed as the mean ± standard deviation (SD), and p-values <0.05 were considered statistically significant. Differences among groups were assessed by one-way ANOVA in IBM SPSS Statistics 22.0 (IBM), followed by post hoc Tukey's multiple comparison tests for homoscedastic data and Games-Howell tests for heteroscedastic data. Statistical significance is indicated as follows: *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001. GraphPad Prism 8.0.2 (GraphPad Software Inc., San Diego, CA, USA) was used to draw the graphs.
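The study ran these tests in SPSS; purely as a conceptual sketch, the one-way ANOVA followed by Tukey's post hoc test can be reproduced in Python on simulated group values (the numbers below are not study data).

```python
# A minimal sketch of one-way ANOVA followed by Tukey's multiple comparison
# test on three simulated groups.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(10, 1, 10)   # e.g. a renal-function readout per group
model = rng.normal(14, 1, 10)
cur = rng.normal(11, 1, 10)

f, p = stats.f_oneway(control, model, cur)   # omnibus test across groups
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

values = np.concatenate([control, model, cur])
groups = ["control"] * 10 + ["model"] * 10 + ["Cur"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise comparisons
```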
Curcumin (Cur), folic acid (FA), RSL-3, and Erastin were acquired from MedChemExpress (Monmouth Junction, NJ, USA). Deuterium oxide (D2O) was obtained from Qingdao Tenglong Weibo Technology Co., Ltd. (Qingdao, China). 3-(trimethylsilyl) propionate-2, 2, 3, 3-d4 (TSP) was acquired from Sigma (St Louis, MO, USA). Methanol and chloroform were obtained from Sinopharm (Shanghai, China). Cell Counting Kit-8 kit was acquired from Beyotime (Shanghai, China). Monoamine oxidase (MAO) Activity Assay Kit was acquired from Solarbio Technology Co., Ltd. (Beijing, China). The malondialdehyde (MDA) and tissue iron assay kit were acquired from Nanjing Jiancheng Bioengineering Institute (Nanjing, China).
The SPF (Specific Pathogen Free) C57BL/6 mice (Male, 8 weeks-old) were acquired from Shanghai SLAC Animal Company (Shanghai, China). All mice were kept in a controlled environment with a room temperature of 23±3°C and relative humidity of 70±5% under SPF conditions. They were subjected to a 12-hour dark-light cycle and provided with ad libitum access to food and water to promote their well-being and reduce stress. The study protocol received approval from the Ethics Review Committee of Xiamen University (XMULAC20220200). All animal experiments were conducted in accordance with the animal care and use guidelines of Xiamen University (Xiamen, China). After 1 week of acclimatization, all mice were completely and randomly assigned to three groups (N=10 per group): control, FA-treatment (model), and Cur-treatment groups. The AKI model was established by administering a single intraperitoneal injection of FA (200 mg/kg) as previously described. The control group received the same volume of saline. Cur (dose: 100 mg/kg) was administered immediately along with the modelling and three consecutive times within 24 h (ie, with 8-h intervals). All mice were euthanized at 24 h, and both kidneys and blood samples were obtained for subsequent analysis.
Serum BUN and CRE (N=10) were detected to monitor renal function, as previously reports. , The blood sample was collected and placed at room temperature in a centrifuge tube for 30–60 minutes. Subsequently, the blood sample was centrifuged at 14,000 g for 15 minutes, the separated serum was placed into a clean centrifuge tube, and centrifugation was repeated (14,000 g for 3 minutes) to remove any remaining cells. The obtained serum was used to detect BUN and CRE levels by a commercial reagent kit from Solarbio Technology Co., Ltd. (Beijing, China) (Urea Nitrogen/Urea Content Assay Kit, Cat: BC1535. Creatinine Content Assay Kit, Cat: BC4915). The testing was conducted in accordance with the instructions provided by the manufacturer.
The kidneys were fixed in 4% phosphate-buffered formaldehyde to maintain their structure, dehydrated, and then embedded in paraffin. Following serial sectioning (thickness: 5 μm), paraffin-embedded sections were subjected to staining by haematoxylin and eosin. Images were acquired using an inverted microscope (AE31E; Motic, Xiamen, China), with 5 photographs acquired for each sample.
Kidney samples (~100 mg) (N=10 per group) were added to 1.5 mL of extract solution (water: chloroform: methanol =2.85:4:4) and subjected to homogenization at a frequency of 65 hz for a duration of 60 seconds to extract aqueous metabolites. After vortexing for 5 minutes, the sample was centrifuged at speed of 12,000 g (4°C, 15 minutes), and the methanol was removed by nitrogen bubbling. The aqueous phase was lyophilized, and redissolved in 600 μL of 50 mm phosphate buffer contained 0.1 mm TSP (pH 7.4, 100% D 2 O,). The redissolved sample was centrifuged at speed of 12,000 g (4°C, 15 minutes) to obtain the supernatant, which was carefully transferred to NMR tubes and centrifuged at 4°C (1500 g, 5 minutes) prior to NMR detection.
The one-dimensional 1 H-NMR spectra were obtained from an 850 MHz NMR spectrometer (Bruker AVANCE III HD, Bruker BioSpin, Ettlingen, Germany) using a NOESYGPPR1D pulse sequence at 25°C. The experimental parameters used for NMR detection were as follows: spectral width: 20 ppm, relaxation delay: 4 s and 32 scans. MestReNova 9.0 software obtained from Mestrelab Research S.L. (Santiago de Compostela, Spain) was utilized to process NMR data, including phase correction, baseline correction. The chemical shifts of the spectrum were referenced to TSP (δ 0.00). The data matrix was obtained using MATLAB R2014b (MathWorks, Natick, MA, USA) by binning the δ 9.5–0.75 spectral region at 0.001 ppm and normalising all the peak integrals according to the peak integrals of the TSP. This was followed by removal of the δ 4.85–4.75 region. The version 8.3 of Chenomx NMR Suite (Chenomx Inc., Edmonton, Canada), Human Metabolome Database (accessed at http://www.hmdb.ca/ on 1 January 2023), and related reported sources were combined to perform resonance assignments of metabolites. The verification of resonance assignments was validated by two-dimensional 1 H- 13 C heteronuclear single quantum correlations (HSQC) spectrum ( Supplementary Figure 1 ).
Multivariate statistical analysis of the data matrix was conducted on the SIMCA14.1 software package (MKS Umetrics, Malmö, Sweden). Firstly, unsupervised principal component analysis (PCA) was conducted to illustrate the clustering trends for the group separation. Furthermore, supervised partial least squares-discriminant analysis (PLS-DA) was utilized to maximally discriminate the metabolic fingerprinting of kidneys. Rigorous permutation tests of 200 cycles were subsequently proceed to acquire the interpretive (R 2 ) and predictive abilities (Q 2 ) for evaluating the reliability of the PLS-DA model. Significantly altered metabolites (p<0.05) were identified by the IBM SPSS Statistics 22.0 software (IBM, Armonk, NY, USA). The MetaboAnalyst 5.0 web server (accessed at http://www.MetaboAnalyst.ca on 1 January 2023) was used to visualise pathway enrichment, and critical metabolic pathway screening criteria were based on two criteria (ie, pathway impact value [PIV] >1 and p<0.05).
The version 3.8.2 of Cytoscape software (Cytoscape Consortium, San Diego, CA, USA) was utilized to obtain the metabolite-protein-pathway network and reveal the core metabolites and associated proteins. The disease-associated candidate targets were gained from the GeneCards ( https://www.genecards.org/ ), OMIM ( https://omim . org/), TCMSP ( http://tcmspw.com/tcmsp.php ), and therapeutic target database (TTD; http://db.idrblab.net/ttd/ ) through a search using the keyword ‘acute kidney injury’. The potential targets of Cur were screened through a search using the keyword “Curcumin” in the SwissTargetPrediction ( http://www.swisstargetprediction.ch/ ), BATMAN-TCM ( http://bionet.ncpsb.org/batman-tcm/ ), STITCH 5.0 ( http://stitch.embl.de/ ) and ChEMBL ( https://www.ebi.ac.uk/chembl/ ). The predicted target of Cur against AKI was considered to be in the overlap of the drug targets and disease targets. The KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and GO (Gene Ontology) enrichment analyses of potential targets were conducted by Cytoscape using the ClueGO plugin unit. Import the differential metabolites identified from metabolomics and predicted target of Cur against AKI into Cytoscape, and use Metscape to form an interaction network for visualize the interactions among the genes, enzymes, pathways, and metabolites.
The molecular docking analysis was conducted utilizing the Schrödinger software. The molecular structure of Cur (PubChem CID 969516) was gained from PubChem Compound ( https://www.ncbi.nlm.nih.gov/pccompound ), and was transformed from the native format into pdbqt format using Maestro–LigPrep. The crystal structures of target proteins, including monoamine oxidase A (MAOA; Protein Data Bank identifier [PDB ID]: 2Z5Y), glutaminase 1 (GLS1; PDB ID: 3VOY), and glutaminase 2 (GLS2; PDB ID: 4BQM), were gained from PDB database (the Research Collaboratory for Structural Bioinformatics, https://www.rcsb.org/ ). The obtained protein crystals were optimized by protein preprocessing, regenerate states of native light, H-bond assignment optimization, protein energy minimization, and deletion of water. We used the SiteMap module in Schrödinger to predict the best binding sites, and then used the Receptor Grid Generation module in Schrödinger to set the most suitable Enclosing box to wrap the predicted binding site, and on the basis of which we obtained the active side. Subsequently, molecular docking was performed at the active site and molecular mechanics with generalised Born and surface area solvation (MM-GBSA) computational analysis to assess the stability of ligand-protein binding. Extra precision (XP) Gscore and MM-GBSA dG Bind were used to determine the stability of ligand-protein binding. Finally, PyMOL was utilized to visualize the ligand protein binding with the optimal scores.
The kidney samples were processed according to the instructions provided in the commercial assay kits (Monoamine Oxidase Activity Assay Kit, Beijing Solarbio Technology Co., Ltd., CAT: BC0015). Extraction solution (sample weight [g]: extraction solution [mL] = 1:1.5) was added to prepare the samples (N=6 per group) for testing. Next, the reagents were added in sequence according to the instructions and read using a microplate reader (BioTek, Winooski, VT, USA) at 360 nm wavelength at 10s and 2h. Ultimately, the results on enzyme activity were normalized depending on the tissue weight.
TriZol (Takara, Kyoto, Japan) was utilized to extract total RNA on the basis of the procedures provided by the manufacturer. Total RNA (1 μg) (N=3 per group) was utilized to cDNA synthesis using the ReverTra Ace qPCR RT Master Mix (LabLead, Beijing, China). The cDNA was utilized to amplify specific target genes using the SYBR Green Real-time PCR Master Mix (TOYOBO). RT-PCR was conducted as follows: 95°C for 10 min, followed by 95°C for 10s, 60°C for 30s, and 95°C for 10s for 39 cycles. The data were measured and exported using CFX96TM Real-Time System (BIO-RAD). The delta cycle threshold (Ct) (2 −ΔΔCt ) approach was utilized to estimate the relative gene expression levels. The data were normalized to those obtained for the internal control β-actin. The primers used for qPCR were acquired from Sangon Biotech Co., Ltd. (Shanghai, China), including MAOA: 5’-GACCTTGACTGCCAAGATT-3’ and 5’-GATCACAAGGCTTTATTCTA-3’ and β-actin: 5’-CTTCCAGCCTTCCTTCCTGG-3’ and 5’-CTGTGTTGGCGTACAGGTCT-3’.
Human kidney tubular epithelial cells (HK2) were gained from ATCC (Manassas, VA, USA). The cells were cultured in DMEM (high glucose; HyClone) containing 10% foetal bovine serum (Biological Industries), streptomycin (100 μg/mL), and penicillin (100 IU/mL) (Thermo Fisher Scientific, Waltham, MA, USA) biochemical incubator at 37°C and supplemented with 5% CO 2 .
HK2 cells were seeded in 96-well plates at a density of 5×10 3 /well and cultured for 12 hours with 200 μL of medium. After reaching an appropriate 80–90% confluency, the culture media were supplemented with 2 μM RSL-3 or 2 μM Erastin to induce ferroptosis. Moreover, HK2 cells (N=5 per group) were supplemented with different final concentrations of Cur (50, 25, 10, 5, 1, and 0.1 μM) for 24 h. Subsequently, each well was added with 10% (volume/volume) of Cell Counting Kit-8 reagent. After incubated at 37°C for 2–4 h, Multi-mode microplate readers (POLARstar Omega, BMG LABTECH GmbH, Germany) was used to detect the absorbance at 450 nm wavelength.
Tissue samples (N=5 per group) were assayed according to the instructions of the MDA Assay Kit (Beijing Solarbio Technology Co., Ltd). A microplate reader was used to measure MDA levels at a wavelength of 530 nm; MDA levels reflect the degree of lipid peroxidation in ferroptosis.
Tissue samples (N=5 per group) were analyzed for iron content using an iron assay kit (Nanjing Jiancheng Bioengineering Institute) according to the provided protocol. Iron levels were determined at a wavelength of 520 nm using a multi-mode microplate reader.
Data are expressed as the mean ± standard deviation (SD), and p-values <0.05 were considered statistically significant. Differences among groups were assessed by one-way ANOVA using IBM SPSS Statistics 22.0 software (IBM), followed by post hoc analysis with Tukey's multiple comparison test for homoscedastic data or the Games-Howell test for heteroscedastic data. Statistical significance is indicated as follows: *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001. GraphPad Prism 8.0.2 software (GraphPad Software Inc., San Diego, CA, USA) was used to draw graphs.
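The study ran this workflow in SPSS; an equivalent sketch in open-source Python tools (scipy, statsmodels, pingouin) uses Levene's test to decide between the two post hoc procedures. Data are illustrative:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import pingouin as pg

df = pd.DataFrame({
    "group": ["control"] * 5 + ["FA"] * 5 + ["FA+Cur"] * 5,
    "value": [1.0, 1.1, 0.9, 1.0, 1.2,
              3.1, 2.8, 3.4, 3.0, 3.3,
              1.6, 1.8, 1.5, 1.7, 1.9],
})
samples = [g["value"].to_numpy() for _, g in df.groupby("group")]

print("one-way ANOVA:", stats.f_oneway(*samples))

if stats.levene(*samples).pvalue > 0.05:          # homoscedastic -> Tukey
    print(pairwise_tukeyhsd(df["value"], df["group"]))
else:                                             # heteroscedastic -> Games-Howell
    print(pg.pairwise_gameshowell(data=df, dv="value", between="group"))
```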
Cur Treatment Alleviated Renal Injury in Mice with FA-Induced AKI
FA-induced AKI has been described in humans, and the animal model recapitulates most of the human AKI pathologies observed in the clinic, including oxidative stress, inflammation, and renal cell death and regeneration. To assess the renoprotective effect of Cur, we established an AKI model induced by a high dose of FA and injected Cur for treatment. The schematic diagram shown in illustrates the experimental procedure of FA-induced AKI and drug administration. Cur significantly reduced the levels of CRE and BUN, indicators of renal function that were elevated in the FA-induced AKI model ( and ). Haematoxylin-and-eosin staining of kidney sections indicated that Cur significantly reduced histological damage to the kidneys. Taken together, these data suggest that Cur protects against both functional acute renal impairment and structural organ injury.
Cur Remodelled the Metabolite Profiles in the Kidneys of AKI Mice
The non-targeted metabolomics analysis identified a total of 41 metabolites from the 1H NMR spectra recorded on kidney aqueous extracts from the control, FA-treatment, and Cur-treatment groups ( Table S1 ). A typical one-dimensional 1H spectrum is shown in . A two-dimensional 1H-13C HSQC spectrum was recorded to confirm the resonance assignments of the metabolites ( Figure S1 ). First, we established an unsupervised PCA model to visualize and comprehensively assess the metabolic patterns of the three groups of kidneys. In the PCA model, all points were situated within the circle representing the 95% confidence interval. We found that samples from the same group showed a clustering trend, whereas those from the three different groups exhibited obvious separation. The metabolic profiles were well distinguished between FA-treatment and control mice and between Cur- and FA-treatment mice, respectively. These results indicate severe metabolic dysfunction in mice with FA-induced AKI and a metabolic regulatory effect of Cur treatment. In addition, PLS-DA models were produced to magnify the separation of the metabolic patterns and to identify important metabolites. As expected, there was obvious separation between the FA-treatment and control groups, as well as between the Cur- and FA-treatment groups. The parameters R2X, R2Y, and Q2 of the PLS-DA model comparing the FA-treatment and control groups were 0.935, 0.988, and 0.902, respectively; for the Cur- versus FA-treatment comparison, they were 0.914, 0.933, and 0.873, respectively. In addition, 200 permutation tests were executed to evaluate the robustness of the built models ( and ). A higher R2 indicates better explanatory capacity of the PLS-DA model, while a higher Q2 represents better predictive performance. The results suggested that the PLS-DA models had good explanatory and predictive performance and are therefore a valid approach for identifying the important metabolites contributing to these metabolic distinctions. Based on the PLS-DA models, 11 and 10 important metabolites were screened from the FA-treatment versus control and Cur- versus FA-treatment comparisons, respectively.
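The multivariate models above were built in dedicated metabolomics software; the same idea can be sketched with scikit-learn, treating PLS-DA as PLS regression on a dummy-coded class label (a common stand-in). X below is random placeholder data in the shape of the real inputs (samples × 41 metabolite integrals), so the printed R²Y/Q² values are meaningless; only the workflow is illustrated:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 41))        # placeholder: 20 samples x 41 metabolites
y = np.repeat([0.0, 1.0], 10)        # two groups, e.g. control vs FA-treatment

Xs = StandardScaler().fit_transform(X)

pca_scores = PCA(n_components=2).fit_transform(Xs)   # coordinates for the 2D scores plot

pls = PLSRegression(n_components=2).fit(Xs, y)
r2y = pls.score(Xs, y)                               # R2Y: variance of y explained
y_cv = cross_val_predict(pls, Xs, y, cv=7).ravel()   # cross-validated predictions
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)   # Q2: predictive ability
print(f"R2Y = {r2y:.3f}, Q2 = {q2:.3f}")
```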
Screening Differential Metabolites and Important Metabolic Pathways
The relative contents of the 41 assigned metabolites were calculated from their NMR peak integrals relative to TSP. Thereafter, a total of 27 and 20 significantly altered metabolites were selected from the pairwise comparisons of FA-treatment versus control and Cur- versus FA-treatment groups, respectively. This selection was conducted using one-way ANOVA, followed by Tukey's multiple comparison test with a criterion of p-value <0.05 ( Table S2 ). To visualize the metabolite signatures among the control, FA-treatment, and Cur-treatment groups, a heatmap was plotted based on the relative metabolite levels. Furthermore, metabolites satisfying VIP >1 or p-value <0.05 were considered characteristic metabolites associated with the therapeutic effects of Cur on AKI. In total, 30 and 24 characteristic metabolites were screened from the pairwise comparisons of FA-treatment versus control and Cur- versus FA-treatment groups, respectively. Following the intersection of the characteristic metabolites mentioned above, a total of 19 differential metabolites were screened. We conducted pathway analyses using MetaboAnalyst 5.0 to identify the metabolic pathways according to the relative contents of the metabolites. By setting two criteria (ie, pathway impact value >0.1 and p-value <0.05), nine and 11 metabolic pathways were identified from the pairwise comparisons of FA-treatment versus control and Cur- versus FA-treatment groups, respectively ( and ; Table S3 ). Metabolic pathways shared by the two comparisons were considered the most relevant to AKI and to the protective mechanisms of Cur. These included: phenylalanine, tyrosine and tryptophan biosynthesis; nicotinate and nicotinamide metabolism; phenylalanine metabolism; taurine and hypotaurine metabolism; alanine, aspartate and glutamate metabolism; arginine and proline metabolism; tryptophan metabolism; tyrosine metabolism; and inositol phosphate metabolism.
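The two-step screen above is, in effect, set logic: keep metabolites with VIP > 1 or p < 0.05 in each comparison, then intersect the two hit lists. A minimal sketch with placeholder statistics:

```python
import pandas as pd

def characteristic(stats_df: pd.DataFrame) -> set:
    """Metabolites with VIP > 1 or p < 0.05 in one pairwise comparison."""
    hits = stats_df[(stats_df["vip"] > 1.0) | (stats_df["pvalue"] < 0.05)]
    return set(hits["metabolite"])

fa_vs_ctrl = pd.DataFrame({"metabolite": ["taurine", "choline", "alanine"],
                           "vip": [1.6, 0.7, 1.2], "pvalue": [0.01, 0.20, 0.03]})
cur_vs_fa = pd.DataFrame({"metabolite": ["taurine", "choline", "alanine"],
                          "vip": [1.3, 1.1, 0.8], "pvalue": [0.04, 0.03, 0.30]})

differential = characteristic(fa_vs_ctrl) & characteristic(cur_vs_fa)
print(sorted(differential))   # metabolites shared by both comparisons
```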
Network Pharmacology
We performed network pharmacology analysis to further explore the mechanisms involved in the effects of Cur against AKI. Preliminary analyses were performed using the BATMAN-TCM database. The results revealed that six pathways were also enriched in the metabolomic analysis: phenylalanine metabolism; glycine, serine and threonine metabolism; tryptophan metabolism; arginine and proline metabolism; tyrosine metabolism; and glyoxylate and dicarboxylate metabolism. These results indicated the reliability of the metabolic pathway analysis. Subsequently, we obtained 162 targets of Cur from the SwissTargetPrediction, STITCH, BATMAN-TCM and ChEMBL databases, and gathered 1788 targets of AKI from the TCMSP, TTD, OMIM and GeneCards databases. A total of 71 prospective targets were found, and the overlap between the drug targets and disease targets for Cur in the treatment of AKI was visualized using a Venn diagram. In addition, a compound-target network was established using the Cytoscape software. To explore the potential mechanism underlying the renoprotective effects of Cur, we performed GO and KEGG enrichment analyses using ClueGO based on the overlapping targets of Cur and AKI ( and ). GO analysis showed that the top five terms were: (1) positive regulation of lymphocyte migration; (2) positive regulation of lymphocyte chemotaxis; (3) regulation of monooxygenase activity; (4) positive regulation of mononuclear cell migration; and (5) regulation of glucose import. The KEGG analysis revealed that the significantly affected pathways included D-amino acid metabolism, central carbon metabolism, nitrogen metabolism, phenylalanine metabolism, renal cell carcinoma, cocaine addiction, and the hypoxia-inducible factor-1alpha (HIF-1α) signaling pathway.
Analysis of Metabolomics Combined with Network Pharmacology
We sought to gain a comprehensive and systematic view of the mechanism by which Cur ameliorates AKI. Therefore, we constructed drug-reaction-enzyme-gene networks by integrating the results of metabolomics and network pharmacology. These networks provide insight into the overall relationships among upstream targets, pathways, and terminal metabolites. Combining the potential targets determined from network pharmacology with the differential metabolites identified from metabolomics, we selected four key upstream targets, namely MAOA, GLS1, GLS2, and acetylcholinesterase (ACHE). The associated pathways were alanine, aspartate and glutamate metabolism, tryptophan metabolism, and choline metabolism, and the associated core terminal metabolites included L-alanine, L-glutamine, L-glutamate, 5-hydroxy-L-tryptophan, L-tryptophan, and choline. Unlike choline metabolism, alanine, aspartate and glutamate metabolism and tryptophan metabolism were both identified in the metabolomics pathway analyses. Therefore, after excluding ACHE, we considered MAOA, GLS1, and GLS2 to be the targets most likely essential for the renoprotective effect of Cur on AKI.
Molecular Docking Between Cur and Core Targets
We performed molecular docking studies using the Schrödinger software to further explore the potential interactions between Cur and the core targets MAOA, GLS1, and GLS2. The results of the molecular docking analysis are shown in . An XP GScore <−6 indicates good binding affinity, and MM-GBSA dG Bind energies <−30 kcal/mol indicate stable ligand-protein binding. The molecular docking analysis of MAOA indicated that Cur formed hydrogen-bonding interactions with GLU43 and ARG51 at the active site. Moreover, Cur formed a π-cation interaction with ARG45 and hydrophobic interactions with TYR402, PRO274, ALA448, MET445, and TYR444 ( and ). The binding energy of Cur on MAOA was −55.92 kcal/mol and the docking score was −8.973, indicating stable ligand-protein binding and excellent binding affinity, respectively. In the molecular docking analysis of GLS2, Cur formed hydrogen-bonding interactions with SER219 and LEU441, and hydrophobic interactions with ALA181, ALA180, VAL417, ALA416, and TYR399 ( and ). The binding energy of Cur on GLS2 was −56.47 kcal/mol and the docking score was −6.023, suggesting stable ligand-protein binding and good binding affinity, respectively. For the Cur-GLS1 complex, Cur formed hydrogen-bonding interactions with TYR414, ASN388, LYS289, and LYS245, as well as hydrophobic interactions with VAL246, ALA247, TRY249 and VAL484 ( and ). The binding energy of Cur on GLS1 was −38.01 kcal/mol and the docking score was −5.197, suggesting stable ligand-protein binding and acceptable binding affinity, respectively. These results demonstrated a high affinity between Cur and the core targets, especially MAOA.
The Renoprotective Effect of Cur Was Closely Linked to MAOA
Previous studies have revealed a close association between AKI and the MAOA-catalysed degradation of 5-hydroxytryptamine (5-HT). We sought to further investigate the possibility of MAOA as a target of Cur in the treatment of AKI. Thus, we measured the mRNA levels and enzymatic activity of MAOA in kidney tissue. As expected, the results showed that MAOA mRNA levels were remarkably increased in the FA-treatment group compared with the control group, whereas they were reduced after treatment with Cur. Similar changes in the enzymatic activity of MAOA were observed in the pairwise comparisons between the FA-treatment and control groups and between the Cur- and FA-treatment groups. Furthermore, we observed a reduction in the levels of 5-HT in the FA-treatment group compared with the control group, which was reversed in the Cur-treatment group. These results indicate that Cur may regulate the expression and enzymatic activity of MAOA to affect 5-HT metabolism and thereby protect against AKI.
Anti-Ferroptotic Effect of Curcumin
Ferroptosis, a form of cell death characterized by lipid peroxidation induced by ferrous iron overload, plays a core role in the progression of AKI. Extensive research has shown that inhibition of ferroptosis has a renoprotective effect in various AKI models. Interestingly, a recent study suggested that the tryptophan metabolite 5-HT exerts an anti-ferroptotic effect, whereas MAOA significantly abolishes this protective effect by degrading 5-HT. Therefore, we speculated that Cur might possess anti-ferroptotic activity. To verify this hypothesis, we established a ferroptotic model in HK2 cells using RSL-3 or Erastin and treated the cells with Cur. We found that ferroptosis induced by RSL-3 or Erastin was reversed by treatment with 1, 5, 10, 25, and 50 μM Cur; almost complete reversal was achieved with 5 μM Cur ( and ). We also measured MDA levels in the kidneys, which reflect the extent of lipid peroxidation. Iron levels in the kidneys were also measured to assess the anti-ferroptotic effect of Cur. These results suggest that Cur exerts an anti-ferroptotic effect in mice with AKI.
AKI is a condition characterized by a decrease in renal function caused by a variety of physiological and pathological factors. In recent years, Cur has been extensively studied as a potential intervention for kidney disease. , , Nevertheless, the targets and corresponding effects (eg, activation, inhibition, or ineffective binding) of Cur remain unknown. In this research, the molecular mechanism of Cur against FA-induced AKI was investigated through the integration of metabolomics and network pharmacology. First, we screened 12 differential metabolites of Cur against AKI in kidneys, together with their related metabolic pathways. Second, by integrating metabolomics with network pharmacology, we identified three core targets (MAOA, GLS1, GLS2), two related pathways (tryptophan metabolism and alanine, aspartate, and glutamate metabolism), and five key metabolites (L-alanine, L-glutamine, L-glutamate, 5-HT, and L-tryptophan). In addition, molecular docking revealed that Cur exhibited high binding affinity to MAOA. Furthermore, our experiments verified that Cur regulated 5-HT metabolism via MAOA, thereby potentially exerting an anti-ferroptotic effect. As a natural phenolic compound, Cur exhibits good pharmacological activity, including anti-cancer properties, hepatobiliary protective effects, and renoprotective properties, while remaining safe at high doses in humans. Previous studies have proposed possible mechanisms by which Cur acts in the treatment of AKI. It was found that Cur activates the kelch-like ECH associated protein 1/Nrf2 (Keap1/Nrf2) pathway in rats with glycerol-induced AKI. This activation increases the expression of antioxidant enzymes, such as NAD(P)H quinone oxidoreductase 1 (NQO1), haem oxygenase-1 (HO-1), and superoxide dismutase (SOD), thereby exerting antioxidant effects. In a cisplatin-induced model of AKI, elevated levels of nitric oxide (NO) increased the severity of AKI; treatment with Cur inhibited the expression and activity of nitric oxide synthase (NOS), thereby decreasing NO production. In addition, cisplatin-induced AKI was alleviated by treatment with Cur through inhibition of the nuclear factor-kappa B (NF-κB) signalling pathway and reduction of inflammatory factor release. In the present research, we explored the metabolic variations in kidneys to identify the potential mechanisms underlying the effects of Cur in the treatment of AKI. Overall, 12 differential metabolites and 11 significant metabolic pathways were identified from the metabolomics analysis ( and ). These data indicate that the regulation of metabolism plays a key role in the activity of Cur against AKI. However, metabolomics approaches are limited to identifying potential metabolites and associated pathways, without revealing the direct relationships between these metabolites and their upstream regulators. Therefore, the strategy of combining metabolomics and network pharmacology, linking upstream targets, pathways, and terminal metabolites, may provide a comprehensive and systematic view of the mechanisms involved in Cur therapy for AKI. According to the molecular docking analysis, MAOA (identified from both metabolomics and network pharmacology) exhibited the lowest binding energy with Cur. MAOA is a mitochondrial metabolic enzyme that regulates the oxidative deamination of monoamine neurotransmitters and dietary amines, such as 5-HT. Previous studies have revealed a close association between AKI and MAOA-catalysed degradation of 5-HT. Recently, it was reported that 5-HT exerts an anti-ferroptotic effect as a potent radical-trapping antioxidant.
Our results showed that MAOA was inhibited by Cur in mice with AKI, while 5-HT levels were increased in the Cur-treatment group. These findings imply that Cur also exerts an anti-ferroptotic effect in mice with AKI. It was recently reported that regulated cell death (eg, necroptosis, apoptosis, and ferroptosis) contributes to different forms of tissue injury. Ferroptosis is a type of programmed cell death distinguished by increased iron-dependent lipid peroxidation. Direct evidence has indicated that ferroptosis plays a critical role in the occurrence and development of AKI, as inhibitors of ferroptosis ameliorated renal injury in diverse animal models of AKI. Although Cur has received widespread attention in the treatment of different types of cancer through its modulation of ferroptosis, , , few studies thus far have explored the anti-ferroptotic effect of Cur. Treatment with Cur exhibited a significant anti-ferroptotic effect in HK2 cells and reduced the elevated levels of MDA and iron in the kidneys of mice from the FA-treatment group. This evidence suggests that Cur exerts its anti-ferroptotic effect by regulating 5-HT metabolism via MAOA in mice with AKI. Our findings link ferroptosis to AKI development and identify important mediators that may serve as effective therapeutic targets in AKI. Cur is suitable for treating AKI induced by many factors in view of its safety, efficacy, and very low toxicity; however, its clinical application is limited by poor water solubility, low intestinal absorption efficiency, and low bioavailability in vivo. , Formulation techniques, including nanoparticles, liposomes, and polymeric micelles, help to improve the bioavailability and enhance the therapeutic effects of Cur, increasing its potential for clinical application. In addition, the combined use of curcumin and piperine indicates that co-administration of curcumin with other substances is also a useful strategy to improve its efficacy. Thus, developing Cur nanoformulations or co-administration strategies will be the next challenge in advancing Cur as a candidate for the clinical treatment of AKI.
In this research, we confirmed the renoprotective effect of Cur in mice with FA-induced AKI and the contribution of ferroptosis-related injury to FA-induced AKI; notably, Cur ameliorated this ferroptosis-related injury. We then identified MAOA as the key target involved in the effects of Cur against FA-induced AKI using a combined metabolomics and network pharmacology approach. Furthermore, we confirmed that Cur regulates 5-HT metabolism through MAOA, thereby inhibiting ferroptosis. These results suggest that the inhibition of ferroptosis is a potential mechanism by which Cur attenuates renal injury and provide a basis for the development of new therapeutic strategies (such as Cur) for diseases related to ferroptosis, especially AKI.
Engineering seed microenvironment with embedded bacteriophages and plant growth promoting rhizobacteria
With approximately 800 million individuals currently experiencing food insecurity and a projected global population of 9.7 billion by 2050, there is an urgent need to increase food production significantly in the coming decades. Changing climate patterns and the spread of transboundary diseases necessitate rapid crop adaptation to various stresses. Precision agriculture has been developed to respond to these challenges, utilizing advanced technologies to optimize food production. This approach aims to maximize crop yields while minimizing the use of resources such as water and agrochemicals, thus reducing the environmental impact. Consequently, agriculture is transitioning towards sustainability and technological integration. Seeds possess the highest added value among agricultural products, serving as a fundamental food source and as the cornerstone of agricultural practices. In recent years, seed enhancement technologies have emerged, aimed at improving seed performance through tailored conditioning and specialized regimes. Seed coatings have been devised to regulate seed surface properties, enrich soil with nutrients in specific locations, and modulate seed water uptake. However, the primary emphasis has been on studying payloads intended to improve seed germination based on soil properties and seed type rather than on the materials utilized for encapsulating and delivering these payloads. This approach has restricted the development of seed coatings capable of encapsulating beneficial and fragile compounds, particularly plant growth-promoting rhizobacteria (PGPR) and bacteriophages. PGPR can enhance nutrient availability and phytohormone levels during plant-root interaction, while simultaneously reducing the environmental impact of synthetic fertilizers, salinity, and pesticides. PGPR enhance soil fertility through nitrogen fixation, nutrient solubilization, and the production of growth regulators and antibiotics. When applied as inoculants, these biofertilizers multiply in soil and improve crop productivity through enhanced nutrient cycling. However, incorporating microbial inocula into artificial seed coatings can result in diminished microbial viability, potentially compromising the long-term storage capacity of coated seeds: the artificial seed coating creates a challenging microenvironment for PGPR due to osmotic and desiccation stresses. Pseudomonas lalkuanensis, a recently characterized PGPR from agricultural soil, exhibits strong antagonistic activity against plant pathogens while enhancing plant growth. The bacterium produces various antimicrobial compounds, including hydrogen cyanide (HCN), ammonia, siderophores, hydrolytic enzymes, and volatile organic compounds, making it a promising biocontrol agent owing to its natural adaptation to the crop rhizosphere and its ability to mitigate salinity stress. Moreover, protective compounds intended to benefit the seed may unintentionally compromise the survival of symbiotic bacteria due to their biological activity. At the same time, the spread of microbial diseases through seeds is a major concern in agriculture, with the potential to cause significant yield losses.
Ralstonia solanacearum is a devastating soil-borne plant pathogen that causes bacterial wilt disease in more than 200 plant species, including economically important crops such as potato, tomato, eggplant, pepper, tobacco, and banana. Its broad host range, global distribution, and high virulence make it one of the most destructive bacterial plant pathogens worldwide. In pursuing sustainable food production, bacteriophages, natural viruses that target specific bacteria, are emerging as promising biocontrol tools. To combat agricultural diseases, scientists are pioneering new ways to deliver phages, including spraying them directly onto leaves, treating irrigation water, applying treatments to seed tubers, coating seeds, and developing protective shields for leaves. Several studies have highlighted seed transmission as a major route of plant pathogen spread; despite this, only a few studies in recent years have focused on effective plant biocontrol through seed coating. Developing efficient phage coatings requires a comprehensive understanding of seed surface binding mechanisms, enrichment strategies, and phage stability; however, this aspect has surprisingly received little systematic attention. Silk protein, a structural protein traditionally used in textiles, has been repurposed as a natural technical material with applications in regenerative medicine, drug delivery, implantable optoelectronics, and food coatings. In this study, we intended to develop a biomaterial-based approach to engineer seed coatings loaded with PGPR, to enhance germination and alleviate soil salinity, and with a bacteriophage, to control bacterial pathogens. A biomaterial was formulated using silk protein extracted from Bombyx mori (B. mori) cocoons and trehalose for cellular protection. The blend was combined with the PGPR Pseudomonas lalkuanensis A101 and/or a bacteriophage (P-PSG11) targeting Ralstonia solanacearum and applied onto seed surfaces by adapting the existing dip-coating method. To the best of our knowledge, this is the first study reporting long-term stabilization of bacteria and phages in silk films applied to seeds for enhanced biocontrol effectiveness and stability in soil, supporting sustainable agriculture.
Materials fabrication and protein purification
With slight modifications, silk protein was purified as previously described. Briefly, silk fibroin was extracted from B. mori cocoons by boiling dime-sized pieces in 0.02 M sodium carbonate (Na2CO3) for ∼40–50 min. The degummed fibers were collected, rinsed three times in ultrapure water, and dried overnight in a fume hood. The dried fibers were dissolved in a 9.3 M lithium bromide (LiBr) solution (20% wt/vol) at 60 °C for 4 h. A 12 mL portion of the solution was dialyzed against 1 L of ultrapure water for three days (water changed three times daily) to remove the LiBr. The solubilized silk fibroin was then centrifuged at ∼12,700 g to remove insoluble silk particles. The concentration of silk in the resultant solution was determined, and the solution was used at ∼4% (wt/vol). This silk solution was used for further experiments or stored at 4 °C for up to 1 month (Fig. ).
Assembly of fabrication material
P-PSG11 was previously isolated and identified by our research group for the control of R. solanacearum. Phage P-PSG11 was prepared in Tris-HCl phage buffer at pH 7.5 (50 mM Tris-base, 150 mM NaCl, 10 mM MgCl2·6H2O, and 2 mM CaCl2) and amplified as previously described.
Phage titer was determined by spotting 10 μL of serially diluted lysate onto double-layer agar plates containing the host bacterium. P. lalkuanensis A101, isolated from the tomato rhizosphere in Ismailia, Egypt, was cultivated in Luria-Bertani (LB) medium. The identity of P. lalkuanensis was confirmed through 16S rDNA sequencing, with accession number CP084625.1 (Fig. ). We also investigated whether phage P-PSG11 could inhibit the growth of P. lalkuanensis when the two agents were combined, to confirm their compatibility for joint application aimed at enhanced biocontrol.
Silk film preparation
After preparing phage P-PSG11 and P. lalkuanensis A101, they were mixed with nuclease-free water (ddH2O), silk film (SF) solution, or silk trehalose film (STF) solution. Phage P-PSG11 and/or P. lalkuanensis A101 were combined with the silk solutions in a 1:1 ratio (ie, 10 μL phage/bacteria + 10 μL solution). The mixtures were then spread onto nuclease-free plastic sheets and air-dried in a biosafety cabinet for approximately 30–40 min, as illustrated in Fig. . The resulting films were carefully removed using tweezers and transferred into 1.5 mL nuclease-free tubes for further testing.
Phage and PGPR preservation
Silk film (SF) and silk trehalose film (STF) samples containing phage P-PSG11 and P. lalkuanensis A101 were stored for over 8 weeks, and their continued effectiveness against R. solanacearum was then assessed in vitro. To evaluate stability, 190 μL of nuclease-free water was added to the tubes containing SF or STF, after which the phage P-PSG11 titers and P. lalkuanensis A101 colony counts in the solutions were determined. Phage titers were determined using the double-layer agar method. Briefly, the solutions were serially diluted in Tris-HCl buffer (pH 7.5). Then, 500 μL of R. solanacearum (OD600) was mixed with 10 μL of the serially diluted solutions, vortexed at 160 rpm, and incubated at 28 °C for 15 min. This mixture was combined with 4 mL of soft agar, poured onto CTG agar plates (1% casamino acid hydrolysate, 1% tryptone, and 1.5% w/v agar), and incubated overnight at 28 °C. Phage titers were calculated by multiplying the plaque count by the dilution factor. All experiments were done in triplicate (Fig. ). To assess P. lalkuanensis A101 integrity, the serially diluted silk solutions were plated on soft agar plates containing R. solanacearum and incubated overnight at 28 °C for colony counting. The soft agar plates were prepared by pouring 500 μL of R. solanacearum in 4 mL of soft agar onto CTG plates.
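The titer arithmetic above folds in one more factor when expressed per millilitre: the 10 μL spot volume. A minimal sketch of that calculation (counts illustrative):

```python
def phage_titer_pfu_per_ml(plaques: int, dilution_factor: float,
                           volume_ml: float = 0.01) -> float:
    """Titre = plaque count x dilution factor / plated volume (10 uL = 0.01 mL)."""
    return plaques * dilution_factor / volume_ml

# e.g. 32 plaques counted on the 10^7-fold dilution spot
print(f"{phage_titer_pfu_per_ml(32, 1e7):.2e} PFU/mL")
```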
Encapsulation of potato seeds
Potato (Solanum tuberosum) seeds were sterilized with 50% bleach for 3 min, rinsed three times in H2O, and air-dried. Phage P-PSG11 was prepared for coating, and P. lalkuanensis A101 was grown overnight to an OD600 of 1 (80 mL) and centrifuged at 4200 rpm. The pellet was resuspended in 8 mL of 6% (w/v) silk fibroin-trehalose (1:3) suspension containing phage P-PSG11. Seeds were dipped in this solution for 2 min, dried, and planted 24 h later. The coating process aimed to apply approximately 10^8 CFU of P. lalkuanensis A101 and/or 10^8 PFU of phage P-PSG11 per seed. Fifty seed models were coated with a silk film incorporating P. lalkuanensis A101 and phage P-PSG11, followed by air drying. These coated seed models were subsequently utilized in a series of experiments: in vitro studies assessed germination rates, while in vivo experiments evaluated plant growth and development, pathogen suppression efficacy, and salinity stress tolerance.
Seed-coated germination
Germination rates were assessed using five replicates of coated potato seeds (Solanum tuberosum cv. Huashu2, sourced from Huazhong Agricultural University, Wuhan, China). Comparisons were made with uncoated seeds (5 seeds) and R. solanacearum-infected seeds (5 seeds). Seeds were placed on moistened filter paper in Petri dishes at 25 °C, and germinating seedlings were counted after 7, 14, and 21 days. The total germination percentage was calculated as the average of the five replicates using the following formula:

Germination (%) = (number of germinated seeds / total number of seeds) × 100

Root and shoot lengths were measured on 10 randomly selected seedlings during the germination counts. Root length was measured from the collar to the primary root tip, while shoot length was measured from the collar to the primary shoot tip; both were expressed as mean lengths in centimeters. The seedling vigor index was calculated according to the method described by Manonmani et al. (2002), using the following formula:

Seedling vigor index = germination (%) × total seedling length (cm)
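Both indices defined above are simple arithmetic; a minimal sketch (counts and lengths illustrative, not the study's data):

```python
def germination_percent(germinated: int, total: int) -> float:
    """Germination (%) = germinated seeds / total seeds x 100."""
    return 100.0 * germinated / total

def seedling_vigor_index(germination_pct: float, seedling_length_cm: float) -> float:
    """Vigor index = germination (%) x total seedling length (root + shoot, cm)."""
    return germination_pct * seedling_length_cm

g = germination_percent(germinated=23, total=25)   # e.g. 5 replicates x 5 seeds
print(g, seedling_vigor_index(g, seedling_length_cm=5.2))
```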
Scanning electron microscopy (SEM) imaging
Scanning electron microscopy (SEM) was used to examine cross-sections of silk (S) and silk-trehalose (ST) coatings applied by dip coating. Micrographs, all taken at the same magnification, revealed uniform film thicknesses of approximately 5 μm for both coating types. These cross-sectional images provided insights into the coatings' internal structure and density and into the integration of trehalose within the silk matrix. This microstructural analysis is crucial for understanding the coatings' potential effectiveness in preserving biocontrol agents and influencing seed germination.
Pot experiment
A pot experiment was conducted from June to August 2023 to evaluate the efficacy of phage P-PSG11 and P. lalkuanensis in promoting potato plant growth, managing bacterial wilt, and mitigating salinity stress. Environmental conditions ranged from 28 to 37 °C, with 58–85% relative humidity and 15–20 klux light intensity. Potato seeds were coated with phage P-PSG11 and/or P. lalkuanensis. Sterile soil, prepared by air-drying, grinding, and sieving through a 2 mm mesh, was placed in plastic pots (18 cm height, 26.5 cm diameter) at 5 kg per pot. The soil was analyzed for various properties, including pH, particle size distribution, and the concentrations of soluble cations and anions. The characteristics of the soil used in this study are provided in Table . Pots were fertilized with superphosphate (1.0 g/pot, 15.5% P2O5, equivalent to 31.0 kg P2O5/ha) prior to sowing. Split applications of ammonium sulphate (0.60 g N/pot) and potassium sulphate (0.25 g K2O/pot) were administered 20 days after sowing. The experiment was conducted using a completely randomized design comprising nine distinct treatments, each replicated five times. The treatments included: (1) a negative control of coated seeds without biocontrol agents; (2) the pathogen R. solanacearum PS-X4-1, previously isolated from the field and kept in our laboratory; (3) salinity stress induced by NaCl at 8 dS/m; (4) R. solanacearum combined with P. lalkuanensis; (5) R. solanacearum with phage P-PSG11; (6) R. solanacearum in combination with both P. lalkuanensis and phage P-PSG11; (7) NaCl stress with P. lalkuanensis; (8) NaCl stress with phage P-PSG11; and (9) NaCl stress combined with both P. lalkuanensis and phage P-PSG11. This design allowed the examination of the individual and combined effects of biotic (pathogen and beneficial microorganisms) and abiotic (salinity) stresses. Salinity treatments were prepared by mixing 12 g NaCl in 1.2 L of water with 650 g of soil. Plants were watered every third day, and plant heights and root lengths were measured at weeks 1 and 2 post-germination.
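For illustration, a completely randomized layout of the nine treatments with five replicates can be generated by shuffling the pot assignments; the labels T1–T9 correspond to the treatments numbered above, and the seed is fixed only to make the sketch reproducible:

```python
import random

treatments = [f"T{i}" for i in range(1, 10)]        # T1..T9 as numbered above
pots = [t for t in treatments for _ in range(5)]    # five replicate pots each
random.seed(42)
random.shuffle(pots)
for row in range(9):                                # print as a 9 x 5 bench layout
    print(" ".join(pots[row * 5:(row + 1) * 5]))
```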
Long-term viability of stored phages and PGPR
Representative samples of STF were stored at room temperature for ≥1 year (406 days) to assess long-term stability. The detectability and viability of phage P-PSG11 and P. lalkuanensis A101 in these stored silk trehalose film samples were evaluated. Stability was verified using phage titration for P-PSG11 and colony counting for P. lalkuanensis A101. These methods quantified the concentration of viable agents, providing insights into their preservation within the STF matrix over an extended period under room-temperature conditions.
Statistical analysis
All results are presented as mean values ± standard deviation (SD). Statistical analyses were conducted using one-way analysis of variance (ANOVA) with Student's t-test. Differences were considered statistically significant at the following levels: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001. All statistical analyses were performed using GraphPad Prism software (version 8, GraphPad Software, San Diego, CA, USA).
Stability of P. lalkuanensis A101 and P-PSG11 in silk films
Silk-based biomaterials (silk fibroin and a silk-trehalose mixture) and double-distilled water (ddH2O) were used as preservation media for the biocontrol agents, the rhizobacterium P. lalkuanensis A101 and the bacteriophage P-PSG11, stored under room-temperature conditions (Fig. A). Upon mixing the silk biomaterials with either P. lalkuanensis A101 or phage P-PSG11, the solutions were converted into films (ie, silk fibroin film (SF) and silk trehalose film (STF)) using the drop-casting and spray-drying method. For STF, a mixture ratio of 1:3 was selected due to its optimal solution viscosity and effective preservation of both P. lalkuanensis A101 and P-PSG11 (data not shown). Unless indicated otherwise, the 1:3 ST mixture ratio was used in the following experiments. As seen in Fig. B-G, the viability of rhizobacterium P. lalkuanensis A101 and bacteriophage P-PSG11 decreased gradually over time, with the STF showing the smallest drop in viability. Briefly, after 8 weeks of storage in the STF, the average weekly phage titers for P-PSG11 ranged between 10^8 and 10^9 plaque-forming units (PFU) per milliliter (Fig. B). Concurrently, colony counts for P. lalkuanensis A101 remained between 10^7 and 10^8 colony-forming units (CFU) per milliliter (Fig. E). In contrast, when using SF as a preservation medium, both agents remained stable for only 28 days. Storage in ddH2O at room temperature resulted in the shortest stability period of 7 days. These results highlight the superior efficacy of the STF in preserving the bacteriophage and P. lalkuanensis A101 for extended periods without compromising viability. The stability achieved with the STF significantly outperformed that of the SF and ddH2O, demonstrating its potential as an effective preservation medium for these biological agents.
Germination boost of encapsulated potato seeds
Using a storage period of 4 weeks, potato seeds were subjected to different test conditions to assess their germination quality under controlled laboratory conditions (Fig. ). The test conditions included infection with R. solanacearum (R.S-infected), treatment with water only (control), and coating with a silk-trehalose mixture containing either phage P-PSG11 only (P-PSG11), P. lalkuanensis A101 only (A101), or a mixture of both (P-PSG11 + A101), air-dried to form the STF (Fig. A, B). As illustrated in Fig. C, seeds coated with the combination of bacteriophage P-PSG11 and P. lalkuanensis A101 in the STF demonstrated the most favorable germination outcome. After 4 weeks of storage, these seeds achieved the highest germination rate of 93.5% (P < 0.0001). Seeds coated with P. lalkuanensis A101 showed the second-highest germination rate at 86.3% (P < 0.001). In contrast, seeds primed with phage P-PSG11 or water exhibited a lower germination percentage of 81.8%. Seeds infected with R. solanacearum showed almost no germination (3%). As shown in Fig. D, seeds treated with the P-PSG11 and P. lalkuanensis A101 combination also produced the longest roots, measuring 5.4 cm (P < 0.0001), followed by P. lalkuanensis A101-treated seeds at 4.9 cm (P < 0.001).
lalkuanensis A101-treated seeds at 4.9 cm ( P < 0.001). Seeds treated with P-PSG11 and untreated seeds had shorter root lengths of 2.85 cm, while infected seeds showed minimal root growth of 0.5 cm. Seed priming and coating treatments showed statistically significant differences in their effects on seedling vigor (Fig. ). At the same time, the seedling vigor index, a measure of overall seedling health and potential, varied considerably among treatments (Fig. E). Seeds treated with the combination of bacteriophage P-PSG11 and P. lalkuanensis A101 exhibited the highest seedling vigor index of 489.5 ( P < 0.0001). This was followed by seeds treated with P. lalkuanensis A101 alone, with a seedling vigor index of 436.8 ( P < 0.001). In contrast, untreated control seeds and those treated with P-PSG11 showed similar but lower seedling vigor indices of 314.7 and 315, respectively. Seeds infected with R. solanacearum displayed a markedly lower seedling vigor index of just 1.50, indicating severe impairment of seedling development. Scanning electron microscopy (SEM) validation SEM was used to assess whether trehalose had any effects on the films and whether P. lalkuanensis A101 was encapsulated within the film. As shown in Fig. A and D, SEM images revealed that adding trehalose had no effect on film formation, and the coating thickness was consistently in the range of 5 ± 2 μm. Furthermore, bacteria P. lalkuanensis A101 could be embedded within the film matrix (Fig. E), with a shape similar to that of unembedded bacteria (Fig. F). Pot experiments Pot experiments were conducted to evaluate plant growth, antagonistic effects, and salt tolerance of the potato plants developed from seeds dip-coated with the STF (ratio 1:3) containing P. lalkuanensis A101 and/or P-PSG11. Potato seeds under different treatments were cultivated for 35 days and compared to untreated control seeds (Fig. ). Plant growth enhancement The application of STF containing P. lalkuanensis A101 and P-PSG11 as PGPR and biocontrol agents in potato seed coatings resulted in significant improvements ( P < 0.001) in root elongation (Fig. A) and dry matter content of both roots and shoots (Fig. B). The growth promotion effect (GPE%) on root length was most pronounced with the P. lalkuanensis A101 and P-PSG11 combination (72.7%, P < 0.0001), followed by P. lalkuanensis A101 (61.0%) and P-PSG11 (22.5%), compared to untreated controls (Fig. A). Root dry weight increases were most substantial with the mixture of P-PSG11 and P. lalkuanensis A101 (129.1%), followed by P. lalkuanensis A101 (125.7%) and P-PSG11 (13.1%) compared to controls. Similarly, P-PSG11 and P. lalkuanensis A101 showed the highest increase in shoot dry weight, surpassing the control by 71.38% (Fig. B). Significant increases in plant length or height were also observed (Fig. C). Compared to the control, the GPE% was most pronounced with the P. lalkuanensis A101 and P-PSG11 combination (71.5%), followed by P. lalkuanensis A101 (65.1%) and P-PSG11 (8.2%). The P-PSG11 and P. lalkuanensis A101 combination exhibited the highest GPE% for plant fresh weight (111.3%), followed by P. lalkuanensis A101 (103.5%) and P-PSG11 (35.24%) (Fig. D). The P-PSG11 and P. lalkuanensis A101 combination consistently demonstrated the most significant GPE% across all measured parameters. This indicates its substantial impact on potato development when applied as a seed coating under the pot conditions. Bioprotection against R. solanacearum The STF containing the P-PSG11 and P.
lalkuanensis A101 mixture, P-PSG11 only, and P. lalkuanensis A101 only significantly reduced wilt incidence by 88.20%, 81.15%, and 77.45%, respectively ( P < 0.0001) (Fig. E) when the seeds were planted into soil spiked with R. solanacearum . These biocontrol agents improved plant survival from 8.45% (without antagonist) to over 88% after seed coating application under pot experiment conditions. Even when R. solanacearum was introduced 35 days after seed planting, the P-PSG11 and P. lalkuanensis A101 combination still showed some effect in reducing wilt symptoms. Salinity mitigation Potato seeds coated with the P-PSG11- and/or P. lalkuanensis A101-embedded ST film were grown in saline (8 dS/m, by adding NaCl to the topsoil) and non-saline soils for four weeks. Seeds coated with STF containing either P. lalkuanensis A101 only or a P-PSG11 and P. lalkuanensis A101 mixture showed significantly higher germination rates (∼ 80%) under saline conditions compared to uncoated seeds, which had a germination rate of ∼ 35% (Fig. F). In contrast, the seeds coated with P-PSG11 showed no difference from the uncoated seeds. Furthermore, throughout the observation period, seedlings from the seeds coated with P. lalkuanensis A101 or a mixture of P-PSG11 and P. lalkuanensis A101 exhibited greater height and more developed root systems than the control seedlings. These results demonstrate that the salt tolerance is mainly due to P. lalkuanensis A101. Determining the integrity of long-term stored phage and PGPR One STF embedded with P-PSG11 + A101 was prepared on April 1, 2023, stored at room temperature for over one year, and evaluated for activity on May 10, 2024 (406 days). As shown in Fig. , the biocontrol agents (phage 10⁶ PFU and bacteria 10⁵ CFU) preserved in the STF maintained their viability, indicating that trehalose effectively protected the phage and the bacteria. In contrast, silk films without trehalose did not provide the same level of protection. This finding demonstrates that the STF effectively preserves these agents, ensuring more than one year of activity.
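To make the growth-promotion-effect percentages reported in the pot experiments easy to audit, here is a minimal sketch assuming the conventional definition GPE% = (treated − control) / control × 100; the raw measurements in the example are hypothetical and chosen only to show the arithmetic:

```python
def gpe_percent(treated: float, control: float) -> float:
    """Growth promotion effect relative to the untreated control, in percent."""
    return (treated - control) / control * 100.0


# Hypothetical root lengths (cm), for illustration only.
control_root, combo_root = 10.0, 17.27
print(f"GPE = {gpe_percent(combo_root, control_root):.1f}%")  # -> GPE = 72.7%
```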
Phage therapy and PGPR have been instrumental in enhancing plant growth and managing various phytopathogens. Bacteriophages, being highly specific to their target pathogens and environmentally friendly, present a low risk to non-target organisms . P. lalkuanensis , as a PGPR, shows potential for both biocontrol and enhancement of plant growth. Once plant growth is optimized and diseases are controlled, these agents require careful storage or transportation at low temperatures. Typically, they must be kept at low temperatures (−20 °C for short durations or −80 °C for extended periods), particularly phages and PGPR. Failure to maintain proper storage conditions can lead to degradation, resulting in inconsistent or invalid test results . This degradation could potentially impact the effectiveness of crop treatments . Drawing inspiration from tardigrades' resilience and Bombyx mori 's silk-producing abilities, we developed a method to engineer the seed microenvironment. In the current study, we investigated the potential of using extracted silk solution to create films as an alternative means of preserving P. lalkuanensis A101 and phage P-PSG11. A method was developed to preserve phages and PGPR at room temperature (≥ 25 °C) without the need for cold-chain storage and transportation. Our bioinspired approach combines a disaccharide, known for its role in anhydrobiosis, with a structural protein that offers mechanical strength, easy fabrication, adhesion, flexibility, and controlled degradation . Both the bacteriophage P-PSG11 and P. lalkuanensis A101 utilized in this study survived encapsulation within the biomaterial coating, were preserved over time, and were successfully released into the soil. Seeds coated with STF yielded plants that grew faster and stronger in saline soil and effectively controlled R. solanacearum . In our preservation study, P. lalkuanensis A101 and P-PSG11 preserved in SF and STF exhibited prolonged stability at room temperature compared to preservation in ddH₂O. The STF demonstrated the highest stability, followed by the SF. Specifically, the STF remained stable for 8 weeks at room temperature (25–28 °C), while the SF maintained stability for 28 days. These findings suggest that STF could potentially be utilized for the long-term stabilization of bacteria and phage beyond the 8-week timeframe. Due to limited availability, we could only retain one sample to assess the STF's ability to preserve P. lalkuanensis A101 and P-PSG11 phage over one year. Despite this limitation, our initial results indicate that the technique may be promising for preserving and transporting beneficial microorganisms for agricultural and research purposes . In evaluating the germination of coated seeds, dip coating was chosen due to its cost-effectiveness, scalability, and simplicity, making it accessible to all farmers across various resource settings . Among the materials investigated, the silk-trehalose mixture with a 1:3 ratio was selected for its superior mechanical properties, solution viscosity, and preservation efficacy for both P. lalkuanensis A101 and phage P-PSG11. The coating process was tailored to apply ∼ 10⁸ CFU of P. lalkuanensis A101 bacteria per seed, aligning with the standards typically mandated by policymakers for PGPR applications .
This concentration ensures an adequate initial population of beneficial bacteria to colonize the rhizosphere and promote plant growth. The results (Fig. E) showed that the benefit of the seed coating composed of P. lalkuanensis A101, phage P-PSG11, and silk-trehalose in a 1:3 ratio was particularly evident in biocontrol against R. solanacearum and resistance to high salinity (8 dS/m). This coating significantly enhanced seed germination and produced more robust seedlings under these stress conditions. In conclusion, our approach has demonstrated significant efficacy in controlling Ralstonia solanacearum and alleviating soil salinity stress by coating potato seeds with the STF embedded with P-PSG11 and A101, compared to uncoated (control) potato seeds. This innovative seed coating method shows promise for revolutionizing agricultural practices by enhancing crop resilience and sustainability. Despite the promising results, this study was limited by sample size, its focus on a single crop (potatoes), and a specific set of stressors ( R. solanacearum and soil salinity). Future developments in microbial inoculants will prioritize the creation of precise and scalable delivery mechanisms for beneficial microbes and the development of multifunctional microbe solutions tailored to various crops. Characterizing the properties of the silk films produced, such as mechanical strength, moisture content, and degradation rates, would further help determine how the seed coating should be used in real applications. These advancements can potentially address critical challenges in water, energy, and food security (WEFS), particularly those related to climate change, soil degradation, and population growth. Below is the link to the electronic supplementary material. Supplementary Material 1 |
The Use of Artificial Intelligence in Gastroenterology: A Glimpse Into the Present | b89c7b70-a033-4b4c-ba0c-3949b8749e9f | 10589593 | Internal Medicine[mh] | Guarantors of the article: Brian C. Jacobson, MD, MPH, FACG. Specific author contributions: sole author. Financial support: None to report. Potential competing interests: None
|
Kenyan adults with type 2 diabetes mellitus (T2DM) increase diabetic knowledge and self-efficacy and decrease hemoglobin A1c levels post-educational program | 82f769e8-3d08-4252-9668-0183bb21e3d9 | 11217845 | Patient Education as Topic[mh] | Type II diabetes mellitus (T2DM) is a chronic and non-communicable disease, with about 1.4 million people 18 years or older in the USA having this diagnosis in 2019 . The probability of Kenyans between 30 and 70 years of age dying from diabetes is 21 percent . The International Diabetes Federation (IDF) in 2021 estimated that 24 million adults aged 20–79 years living in the IDF African region, including Kenya, have diabetes, with 54% undiagnosed . The prevalence of diabetes in Kenya standardized by age was 2.4%, with 44% being aware of their condition and only 7% controlling blood sugar . Although health outcomes in Kenya have improved since 2006 due to a decrease in the burden of communicable diseases, non-communicable diseases such as diabetes have increased . The need to educate Kenyan adults about diabetic risk factors and diabetes-related complications increases with the increasing prevalence of diabetes among this population. Few research studies have addressed the need for diabetic education in Kenya. A significant factor contributing to the increased morbidity from T2DM in Kenya is the diabetic knowledge gap. The risk reduction knowledge gap was identified in prediabetic patients in Kenya . Patients usually seek diabetic care late, after developing severe and irreversible complications . Several studies document the positive impact of diabetic education on positive health outcomes , . The Kenyan government has sought to eliminate the negative impact by developing the national diabetes strategy and the Kenya National Diabetes Educators Manual. However, these initiatives are yet to be evaluated . The few studies addressing diabetic education in Kenya identified the following factors: a) a lack of educational efforts ; b) the concept of managing T2DM by using HbA1c is not prevalent ; c) the perceived high cost of testing HbA1c levels in Africa ; and d) a significant gap in policy at the community level. Other studies identified low diabetic dietary and comorbidity knowledge , as knowledge gap areas. Other factors identified were the impact of cultural practices , and self-care practices . These knowledge gaps are critical areas of T2DM management, hindering the fight against T2DM in Kenya. This project utilized the Health Belief Model (HBM) for its theoretical approach. According to the model, health behavior can be explained by the influence of modifying factors on individual perceptions to produce a given action or health outcome . The generation and application of new knowledge for a given health outcome are enhanced through health education and behavior. The HBM is constructed around six primary constructs – perceived susceptibility, perceived severity, perceived benefits, perceived barriers, self-efficacy, and cues to action . In the HBM, an individual's motivation for a health behavior is categorized as individual perceptions, modifying factors, and the likelihood of action . Following the assumptions and constructs of the model, the project assumed that by introducing an educational intervention as a modifying factor, individual perceptions about diabetes would change. To make it effective, we structured the educational intervention within the cultural context and food preferences.
The predicted outcomes include improved self-efficacy, increased diabetic knowledge, and reduced HbA1c levels. The American Association of Clinical Endocrinologists (AACE) position statement supports the use of culturally appropriate education that focuses on the critical knowledge areas to alleviate the challenges of managing diabetes and to increase self-efficacy . Culturally relevant educational models allow individuals to identify practices that may influence their HbA1c levels and self-care practices . Different education curricula and structures affect the effectiveness of interventions differently. Patient-to-patient education was reported to result in higher glycemic control . Other authors used a formal educational structure with a specified timeframe for the intervention ; and a nurse-led design . Additionally, group educational models had significant advantages that would benefit the Kenyan population because of their cost-effectiveness and enhanced collaboration between stakeholders . In summary, knowledge is a significant modifying variable and can enhance self-care and perception. The structure of the educational intervention is similarly a determining factor for the outcome. Knowledgeable individuals are likely to make health-seeking decisions, including dietary changes, activity and exercise, and adherence to treatments and intervention plans. Effective health-seeking decisions are achievable when a patient has high levels of self-efficacy. This project aimed to increase diabetic knowledge and self-efficacy through a culturally appropriate educational intervention to decrease HbA1c levels in Kenyan people with T2DM.
Design and protocol A quasi-experimental design was utilized for this project. The participants were divided into a control and an experimental group by systematic assignment. The participants in the experimental group were enrolled in a three-month diabetic educational program provided by one of the authors. The following depicts the project design and protocol. Setting Eldoret is a cosmopolitan Kenyan city located in Western Kenya. The city has a diverse population but a significantly larger Kalenjin population. The community hospital (Reale Hospital), used for participant recruitment, is a well-equipped, 500-bed private hospital located within Eldoret City that serves patients from urban, suburban, and rural locations around the city. The hospital provides inpatient and outpatient services. The hospital serves an average of 750 patients a day, of whom about 100 have a diagnosis of T2DM. Because of time constraints and costs, the hospital provides limited diabetic education to this population. Sample The sample size was estimated with the G*Power software (version 3.1). One hundred and forty-three subjects were recruited by convenience sampling. The participants were screened for inclusion with the following criteria: a history of T2DM; HbA1c ≥ 6.5% for one year or more via chart review; African heritage; ability to read, write, understand, and speak English or Swahili; and age between 25 and 65 years. The educational program was provided in English and Swahili. The age range was selected to eliminate bias from third parties' influence on project outcomes, especially given elderly patients' dependence on caretakers. People with pre-existing conditions that may complicate T2DM, like cancer, mental illness, and any current illness, were excluded from participation. The participants were systematically assigned to experimental (n=71) and control groups (n=72). Participants completed a written consent after receiving a full explanation of the protocol. Incentives given to increase participation included "The Plate" guide used for portion control, the meal planning guidebook, transportation expense reimbursement (10 dollars), and subsidized HbA1c tests. The HbA1c testing cost was 80 percent funded by Reale Hospital to support the project. Ethics We obtained ethics approval from the Andrews University IRB, the University of East Africa Baraton REC, the National Commission for Science, Technology and Innovation (NACOSTI), Reale Hospital, and the Uasin Gishu County Government. Project tools and variables We measured three main variables - diabetic knowledge, self-efficacy, and HbA1c levels. We utilized the University of Michigan Diabetic Knowledge Test (DKT) to measure Diabetic Knowledge (DK) . The test has 25 questions about an individual's knowledge of diabetic disease and management, focusing on the diabetic diet, diabetic testing, recognizing complications of DM, and self-management. With permission from the authors, two questions were added to the 23 items on the Michigan DKT, and five questions were modified to fit the cultural context of the project settings. The modification and addition did not change the concepts measured by the tool. The tool was scored with participants obtaining one point for each correct answer, with 25 possible points. For this project, the cut-off point for adequate diabetic knowledge was a score of 20 out of 25. The tool has consistent reliability data with α ≥ 0.70 .
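As a minimal illustration of how the modified DKT described above could be scored — only the 25-item, one-point-per-item scheme with a cut-off of 20 comes from the protocol; the function name and the example answer key are hypothetical:

```python
def score_dkt(responses: list[str], answer_key: list[str], cutoff: int = 20) -> tuple[int, bool]:
    """Score the 25-item Diabetic Knowledge Test: one point per correct answer.

    Returns the total score and whether it meets the project's adequacy cut-off (20/25).
    """
    assert len(responses) == len(answer_key) == 25, "expects the full 25-item test"
    score = sum(r == k for r, k in zip(responses, answer_key))
    return score, score >= cutoff


# Hypothetical example: 21 of 25 items answered correctly.
key = ["a"] * 25
answers = ["a"] * 21 + ["b"] * 4
print(score_dkt(answers, key))  # -> (21, True)
```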
Self-efficacy was defined in this project as a participant's confidence in their ability to manage T2DM. We measured self-efficacy levels with the Stanford University Diabetes Questionnaire (SUDQ) . The questionnaire has eight items, scored on a scale of 1 (lowest confidence) to 10 (highest confidence). The score determines the individual's belief in self-managing T2DM. The average score of all eight items is considered the overall self-efficacy. In this project, an average score of 7 was regarded as adequate self-efficacy. The internal consistency of the SUDQ was 0.89, and the intraclass correlation coefficient was 0.90 . Reale Hospital laboratory technicians assisted with measuring the participants' HbA1c levels using the Afinion™ HbA1c assay (Abbott). Demographic data included gender, age, education, marital status, tribe, occupation, income, and household size. Educational program The educational program was structured to bring awareness about a balanced diet as beneficial in managing an individual's blood glucose levels. In this project, we defined a balanced diet as consuming food items from the four main food groups: carbohydrates, protein, fats, and vegetables. We used the plate model to instruct participants because it was easy to use and helpful in controlling portion sizes. The food groups were related to culturally appropriate foods in Kenya. We used participants' individual goals to focus on and visualize the benefits of dietary compliance. The objectives of the educational intervention were to increase diabetic knowledge and self-efficacy and reduce HbA1c. We structured the intervention into three modules taught over three weekly sessions for the experimental group. Participants could join any three consecutive sessions within the three-month intervention period. The first two-hour session focused on general diabetes knowledge education, including symptoms, complications, medication, and the significance of HbA1c testing. The participants discussed their self-care goals and identified barriers to achieving the goals. The participants learned about T2DM management in the second session, including nutrient calculations from food labels and aligning meals with the "MyPlate" method. We used the ADA guidelines to instruct participants about portion sizes by asking them to draw an imaginary line across the 9-inch plate . They were asked to fill the first half of the plate with culturally appropriate non-starchy vegetables like sukuma (collard greens), sucha (black nightshade), isaka (spider flower), and terere (amaranth). The other half of the plate was divided into two quarters - one quarter for carbohydrates and one quarter for protein. Examples of protein-rich foods in Kenya include nyama choma (roasted meat), ndengu (lentils), maharagwe (beans), milk, and eggs. Carbohydrate foods include ugali (corn meal), rice, mokimo (a mixture of mashed potatoes, beans, greens, and corn), pumpkins, yam, and cassava. Participants could vary their meal plans to include servings of fruits and milk to substitute for carbohydrates and protein. Participants could have liberal amounts of unsweetened drinks such as tea, coffee, or water. The participants revised their goals after discussing and learning the benefits of dietary education and the barriers to adherence. All participants included goals to mitigate the barriers to compliance. We focused the third session on coping with T2DM and the role of family and community members in managing diabetes.
Participants also discussed the perceived benefits of the education, barriers, and cues to action. Data analysis Descriptive statistics were used for the demographic data. Between- and within-group analyses were computed using the Statistical Package for the Social Sciences (SPSS, v.25). A paired-sample t-test was used for within-group pre- and post-test data analysis. Data comparisons between the experimental and control groups were computed with an independent t-test after ensuring homogeneity of variance by Levene's test. The normality of the dataset was assessed using the Shapiro-Wilk test. We compared data on diabetic knowledge, self-efficacy, and HbA1c pre- and post-intervention. The level of significance was set at p < 0.05.
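A minimal sketch of the analysis pipeline just described — normality check, homogeneity of variance, then paired and independent t-tests — using SciPy rather than SPSS; the arrays are hypothetical placeholders for the study's pre/post HbA1c measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical HbA1c values (%) standing in for the real study data.
exp_pre = rng.normal(9.2, 2.0, 63)
exp_post = exp_pre - rng.normal(0.6, 0.4, 63)   # simulated post-intervention drop
ctrl_post = rng.normal(9.1, 2.2, 60)

# Normality (Shapiro-Wilk) and homogeneity of variance (Levene's test).
print(stats.shapiro(exp_post))
print(stats.levene(exp_post, ctrl_post))

# Within-group change: paired-sample t-test on pre vs. post values.
print(stats.ttest_rel(exp_pre, exp_post))

# Between-group difference at post-test: independent t-test, alpha = 0.05.
print(stats.ttest_ind(exp_post, ctrl_post))
```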
One hundred and twenty-three (123) participants completed the study. There were 63 participants in the experimental and 60 in the control group. The demographic data are depicted in below. There were slightly more female participants (57%) in the control group compared to the experimental group (48%); however, the difference was not significant (p = .315). The largest tribe in the study was the Kalenjin, representing 61% of the participants. The Shapiro-Wilk test yielded a test statistic (W) of 0.987 and a corresponding p-value of 0.338. The p-value was greater than our predetermined significance level of 0.05, suggesting that the data may be normally distributed. Analysis of pre-intervention data for between-group differences showed no significant results for diabetic knowledge (t(121) = -1.180, p = .120); self-efficacy (t(121) = 0.962, p = .169); or HbA1c levels (t(121) = -0.426, p = .336). However, as depicted in , the between-group differences for post-intervention scores were significant for diabetic knowledge (t(116) = 7.218, p < .001); self-efficacy (t(96) = 5.323, p < .001); and HbA1c (t(121) = -2.87, p = .003). Diabetic knowledge was significantly improved between pre- and post-test data in both groups (control, p = .037; experimental, p < .001). The mean HbA1c level in the control group slightly increased from 9.11 ± 2.19 to 9.13 ± 2.22, but the increase was not significant. Although there was a significant difference in the experimental group's pre- and post-intervention HbA1c levels (p < .001), the control group's pre- and post-intervention data were not significantly different (p = .467). Participants in the experimental group significantly increased their self-efficacy scores by a mean of 1.57 (p < .001). All post-intervention data were significantly different from the pre-intervention data in the experimental group. In contrast, the only significant difference in scores in the control group was for diabetic knowledge (see ).
The educational program, structured to influence the individual participants' perceptions as per the HBM, showed a significant effect on diabetic knowledge, self-efficacy, and HbA1c levels. Diabetic self-management interventions have generally been shown to improve physiological outcomes in Africans . Self-awareness of treatment targets and self-monitoring of blood glucose were among the important factors associated with successful diabetic control in a study conducted in a university clinic . Patient education and training have also improved the control of chronic diseases such as T2DM ; and hypertension . Our results support the need to improve self-management through structured education and increased self-efficacy for Kenyan adults with T2DM. Although our study did not address comorbidities, we recognize the multiple challenges of managing T2DM alongside comorbidities, which will require careful adjustment to standard diabetic education and disease management training. These comorbidities include heart diseases and renal impairment . The level of formal education in Kenya continues to improve slowly compared to developed countries. The slow improvement is also evident in the health education of individuals and communities. The older Kenyan population (45 years and older), with a higher risk of T2DM, was reported to have little or limited formal education . However, most participants in this study (62%) had a college or graduate degree. The inclusion criteria of reading and writing may explain the increased number of well-educated participants in our research. The higher percentage of married participants in our study (81%) was close to the 67% reported in the national survey . The improved self-efficacy and lower HbA1c may be attributed to a supportive spouse at home helping with the dietary changes. Kenya is a multi-tribal country, and the Kalenjins have been identified as the 6th most prominent tribe in the country . Many Kalenjins live in and near Eldoret, which accounts for the many participants from this tribe. However, the diet is similar between tribes. Incorporating Kenyan food into the 'my plate' method for nutritional education was an eye-opener for the participants, as they believed removing table sugar from their diet was all they needed to manage T2DM. In this project, the participants' perceptions of dietary and lifestyle changes were influenced by diabetic education, as evidenced in the post-education increase in self-efficacy. Improving patient knowledge with patient education has beneficial effects on diabetic control and on reducing diabetic complications . Although we did not measure self-management, the participants' increased self-efficacy contributed to subsequent disease self-management, resulting in the decreased HbA1c levels observed in the experimental group. The lack of change in the control group's self-efficacy and HbA1c levels further validates the effect of the educational program. Our pre-intervention HbA1c was significantly different from the post-intervention HbA1c, contrasting with the findings from a non-blinded randomized clinical trial in Nairobi, Kenya . Diabetic knowledge has been a critical factor in glycemic control among T2DM patients. The lack of significant difference in the control group's pre- and post-test diabetic knowledge scores attests to the general community's knowledge deficit about T2DM. The limitations of the study include seasonal timing. The study took place during the festive season in Kenya (November-December).
Lower HbA1c levels might have been achieved had the study been conducted during a non-festive season, when the urge to overeat is lower. The participants' education could account for greater comprehension of, and motivation for, dietary compliance. Using a single site for this study is a limitation, as the findings may differ from other sites with a more diverse patient population. Although we recruited and educated participants over three months, the relatively small number of participants also limits the generalizability of this study. We suggest a future study with a larger sample size across multiple sites to decrease these limitations. The single site may also account for the attrition observed in the study. Because the hospital's location and size mean it serves both urban and rural communities, the long commute to the hospital may be challenging for patients who live far away, resulting in their inability to return for post-intervention measures. Another limitation was that the modified Michigan DKT questionnaire was not pilot-tested for reliability, although the authors approved the changes to the questionnaire as appropriate. This face validity could be strengthened through pilot testing. Although paired t-tests for within-group pre- and post-test data analysis are a potential source of type I error, our analysis aligns with our study's specific objectives and research questions, and we have confidence in the validity of our findings based on this approach.
The findings from this study suggest that a structured diabetic educational program improves HbA1c levels, diabetic knowledge, and self-efficacy in Kenyan people with T2DM. We recommend increasing public awareness and structured diabetic education in Kenyan hospitals and community settings to improve health outcomes for people with T2DM.
|
Divergence between confidence and knowledge of endodontists regarding non-odontogenic pain | ac55c592-8609-4b85-a051-a71b8fb843d9 | 10561960 | Dental[mh] | Orofacial pain affects a considerable portion of the population, with odontogenic pain being not only the most prevalent cause of this type of pain , but also the major reason why patients seek a dental office. Therefore, odontogenic pain is the likely diagnosis in many cases. However, odontogenic pain may present clinical features that resemble non-odontogenic pain, requiring a careful differential diagnosis and assessment. Thus, distinguishing between odontogenic and non-odontogenic pain can be challenging in certain situations, potentially complicating treatment planning and implementation. , This difficulty may be partly associated with limited information regarding the multiple presentations of orofacial pain, a limitation that persists even throughout specialty endodontic training. A recent study evidenced that most dental students and dentists claim to be poorly prepared for the diagnosis and treatment of non-odontogenic pain. It should be highlighted that, in general, there is an insufficient curriculum in orofacial pain training for undergraduate dental students , and during graduate courses. Thus, dentists, orofacial pain specialists, researchers and patients need to combine efforts to successfully address the urgent need for quality orofacial pain education. Furthermore, orofacial pain conditions have historically not been sufficiently characterized, except for temporomandibular disorders (TMD). The first edition of comprehensive and internationally accepted diagnostic and classification criteria for orofacial pain was only published recently. A recent narrative review offers an overview and a brief explanation of how this classification system could be used by general practitioners and endodontists. Few studies investigate dentists' confidence and knowledge regarding non-odontogenic pain. , Moreover, there is also a focus on TMD pain, , which represents only one facet of non-odontogenic pain within the orofacial domain. There is a shortage of evidence about endodontists' knowledge regarding non-odontogenic pain. Such knowledge is of great importance given the prevalence of orofacial pain complaints in endodontic practice. Thus, based on the premises that: (1) pain commonly manifests within the dental clinic; (2) pain is a common symptom in many endodontic clinical conditions; (3) it is necessary for the endodontist to be able to differentiate odontogenic from non-odontogenic pain to avoid invasive and iatrogenic dental procedures; and (4) thus far, no study has explored the relationship between endodontists' confidence and knowledge concerning non-odontogenic pain; the aim of the present study was to evaluate endodontists' self-reported confidence and knowledge regarding non-odontogenic orofacial pain. Our a priori hypothesis was that confidence and knowledge regarding non-odontogenic pain were not associated among endodontists.
This cross-sectional study was approved by the Human Research Ethics Committee (Protocol No. 40225020.0.0000.5417) in accordance with the Helsinki principles. The sample was composed of endodontists of both sexes who obtained their specialist degrees from courses recognized by the Brazilian Federal Council of Dentistry and who were registered in the representative association of the area in Brazil (Brazilian Society of Endodontics). All these professionals received, by e-mail or WhatsApp, a link containing the Informed Consent Form (ICF) and a questionnaire. The questionnaire encompassed a series of multiple-choice queries, designed with the intent of assessing endodontists' self-reported confidence and knowledge levels concerning non-odontogenic pain. This instrument was adapted and structured based on a pre-existing questionnaire. The first part of the questionnaire focused on variables such as demographic factors, time since graduation as a dentist, and time since specialization as an endodontist. The second part was composed of questions aiming to evaluate the professionals' self-reported confidence and knowledge regarding non-odontogenic pain. The supplemental file contains the questions that were applied. This questionnaire was integrated into the online platform Google Forms. A link was generated to facilitate access to the questionnaire, which was subsequently sent to the participants, preferably via e-mail and/or WhatsApp. Participants were granted the autonomy to decide whether to engage with the survey, after reading and agreeing to participate in the research by signing an Informed Consent Form (ICF). All data collected were stored within the Google Forms tool, protected by a password. Only authorized researchers had the prerogative to access this dataset. Two pivotal questions were used to categorize the participants: (1) "How would you define your knowledge about the different types of orofacial pain, excluding odontogenic pain?" and (2) "After you graduated as a dentist, have you been involved in continuing education courses about orofacial pain?". Based on these questions, the endodontists were categorized into four distinct groups, as outlined below: Group 1 - those who considered their knowledge as sufficient and had been involved in continuing education courses about orofacial pain; Group 2 - those who considered their knowledge as sufficient and had not been involved in continuing education courses about orofacial pain; Group 3 - those who considered their knowledge as insufficient and had been involved in continuing education courses about orofacial pain; Group 4 - those who considered their knowledge as insufficient and had not been involved in continuing education courses about orofacial pain. Statistical analysis Data were expressed as mean and standard deviation (SD) or percentage, where appropriate. To assess associations between participants who categorized their knowledge about non-odontogenic pain as either sufficient or insufficient and their engagement in continuing education courses about orofacial pain, an initial chi-square test was employed on the resulting 4x2 table. If a significant association was identified, subsequent between-group (2x2) comparisons were conducted using Fisher's exact test. All statistical analyses were performed using GraphPad Prism 8. For all analyses, the significance level adopted was 5%.
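A minimal sketch of the categorization and test sequence described above, using SciPy; the group counts in the contingency table are hypothetical placeholders, not the study's data:

```python
from scipy.stats import chi2_contingency, fisher_exact

def assign_group(knowledge_sufficient: bool, continuing_education: bool) -> int:
    """Map the two screening questions onto the four study groups."""
    if knowledge_sufficient:
        return 1 if continuing_education else 2
    return 3 if continuing_education else 4

# Hypothetical 4x2 table: rows = groups 1-4, columns = e.g. confident / not confident.
table_4x2 = [[30, 8], [20, 10], [9, 12], [7, 20]]
chi2, p, dof, _ = chi2_contingency(table_4x2)
print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# If significant, follow up with pairwise 2x2 Fisher's exact tests, e.g. group 1 vs. group 4.
if p < 0.05:
    odds, p_fisher = fisher_exact([table_4x2[0], table_4x2[3]])
    print(f"group 1 vs 4: odds ratio={odds:.2f}, p={p_fisher:.4f}")
```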
In this study, a self-screening questionnaire was sent to all endodontics specialists (n=1,088) registered in the Brazilian Society of Endodontics. In total, 146 completed questionnaires were received, a number that constituted approximately 13.4% of the target population. Among the respondents, 57.5% were female, with an average age of 38.24 years (SD: 9.47). The mean times since graduation and since specialization as an endodontist were, respectively, 15.39 years (SD: 9.69) and 11.10 years (SD: 9.37). A noteworthy 87% reported that the content pertaining to non-odontogenic pain during their undergraduate education was insufficient. However, 50% of the participants considered their personal knowledge about non-odontogenic pain to be sufficient. Fifty-two percent of the respondents reported not having been continuously involved in continuing education courses about orofacial pain. The remaining 48% sought knowledge about orofacial pain mainly by attending congresses in the area, reading articles and books, taking online courses, and attending lectures (81.50%); only 18.50% reported having taken a refresher course or a specialization in orofacial pain. A considerable proportion of participants reported encountering a significant prevalence of patients with orofacial pain complaints in their clinical practice (67.5%). contains the six questions that assessed the endodontists' self-reported confidence levels regarding non-odontogenic pain. Overall, the endodontists considered themselves sufficiently knowledgeable to diagnose and manage non-odontogenic pain. For those who perceived their understanding of orofacial pain to be sufficient, self-reported confidence levels ranged from 71.1% to 97.8% among those who were engaged in continuing education courses, and from 35.7% to 96.4% among those who were not. Conversely, self-reported confidence was lower for the endodontists who considered their knowledge about orofacial pain as insufficient, regardless of whether they had taken (18.2% - 100%) or not taken (15.7% - 78.4%) continuing education courses about orofacial pain. Statistical analyses revealed significant associations in several instances (p<0.05), underpinning the relationships between different groups ( ). Regarding question 1, it was possible to observe that endodontists who described their knowledge about different orofacial pain types as sufficient, regardless of continuing education, demonstrated higher confidence in discerning dental pain from non-dental pain ( ). In contrast, in question 2, endodontists who described their knowledge about different orofacial pain types as insufficient, and who had not taken courses on the diagnosis and treatment of orofacial pain, exhibited decreased confidence in asserting that non-odontogenic pain could lead to referred pain in the tooth region ( ). Regarding confidence in the diagnosis of non-odontogenic pain (question 3), the lowest percentages were observed among those who considered their knowledge about different orofacial pain types as insufficient and who had not taken courses on the diagnosis and treatment of orofacial pain. In the significant associations, the attributes of having sufficient knowledge or consistent engagement in continuing education courses pertaining to orofacial pain played pivotal roles. Regarding the treatment of non-odontogenic pain (question 4), a significant association was verified for endodontists who described their knowledge of orofacial pain as sufficient and who had taken courses in the orofacial pain area.
In this group, the highest confidence levels were found (71.1%), whereas the other groups exhibited diminished confidence levels (< 40%). Questions 5 and 6 assessed the confidence of endodontic specialists regarding the persistence of pain beyond the customary healing timeframe following endodontic treatment. The confidence level varied from 75.8% to 84.4% (groups 1 and 2) and from 60.8% to 72.7% (groups 3 and 4), with no significant association among the groups (p > 0.05). Despite the satisfactory self-reported confidence, the actual knowledge about non-odontogenic pain was low among endodontists (0%–42%), regardless of whether they considered their knowledge about different orofacial pain types sufficient or had taken continuing education courses in the area. The only exception was the question about the conduct they would adopt in cases of pain persisting beyond the customary healing period following endodontic treatment, for which the knowledge level was notably high (70.6%–81.9%). Considering endodontists' knowledge regarding non-odontogenic pain ( ), a significant association among groups was verified only for question 4, which referred to "knowledge about the nomenclature of pain that persists beyond the normal healing time after the endodontic procedure". This association was most pronounced among those who characterized their familiarity with diverse types of orofacial pain as sufficient, regardless of their participation in orofacial pain courses ( ).
The endodontic diagnosis has fundamental importance for determining the treatment to be performed and requires from the professional adequate knowledge of, and familiarity with, diagnostic criteria and classification. Therefore, it is important for endodontics specialists to know how to differentiate odontogenic pain from non-odontogenic pain. This study evaluated a group of Brazilian endodontists' levels of confidence and knowledge regarding non-odontogenic pain, and the main findings were: 1) self-reported confidence about non-odontogenic pain was high, especially for endodontists who considered their knowledge about different orofacial pain types sufficient, regardless of their participation in continuing education courses about orofacial pain; 2) despite the high self-reported confidence, the knowledge about non-odontogenic pain was insufficient, which indicates that the self-assessed knowledge was largely overestimated. These findings are relevant since, according to the interviewed endodontists, 67.5% of patients cite orofacial pain as the main complaint in the office. Although, among the various types of pain in the mouth and face regions, dental-origin pain is the most common diagnosis, , it is crucial that endodontists know how to diagnose and differentiate the multiple presentations of orofacial pain, such as TMD pain, trigeminal neuralgia, and post-traumatic trigeminal neuropathic pain, among others, in order to avoid iatrogenic or unnecessary therapeutic conduct. , It is not surprising that the level of confidence in the diagnosis and treatment of non-odontogenic pain was higher in individuals who considered their knowledge of different orofacial pain types sufficient and had been involved in continuing education courses on orofacial pain. In contrast, however, the actual level of knowledge about non-odontogenic pain was insufficient. Differently from our findings, a previous study reported that only 23% of general dentists affirmed they had "good" or "very good" confidence in diagnosing non-dental orofacial pain. Thus, although specialists may be more confident than general practitioners, this does not necessarily translate into a higher knowledge level. People tend to have overly favorable views of their abilities in many social and intellectual domains. The high confidence levels and insufficient knowledge observed in this study indicate that individuals have difficulty recognizing their true skill levels. Thus, a person's lack of knowledge and skills in a certain area causes them to overestimate their own competence. This phenomenon is reported in the literature as the Dunning-Kruger effect. These authors were pioneers in demonstrating such an association in a series of experiments on abilities in domains such as logical reasoning, humor, and grammar. Moreover, people have difficulty acknowledging their deficiencies for fear that it could affect them professionally, or even because they rarely receive negative criticism in their daily lives. , One of the important conclusions of these experiments was that, by developing their skills, individuals also improve their ability to recognize their own limitations and, therefore, can make more accurate self-assessments. The insufficient knowledge about non-odontogenic pain may, in part, be related to the absence of comprehensive and internationally accepted diagnostic and classification criteria for orofacial pain, since those were only released recently.
Historically, orofacial pain conditions were insufficiently characterized, with the probable exception of TMD. Thus, this lack of scientific agreement and consensus on the main characteristics of orofacial pain may have led to confusion, misconceptions, and misclassifications and, therefore, to gaps in the knowledge of diagnosis and treatments. In order to fill this knowledge gap, the first edition of the International Classification of Orofacial Pain (ICOP) was recently published, and it may be interesting to follow up the possible educational effects of this classification in future investigations. Although the diagnosis of most pain complaints within the endodontic clinic can be straightforward and pose no challenge in decision-making, in some cases misinterpretation of pain origins may lead to misdiagnosis and subsequent iatrogenic treatment. A suggestion to improve endodontists' competence in facing cases of non-odontogenic pain would be the implementation of a minimum training in orofacial pain, which would benefit students: for instance, the presentation and discussion of current criteria for the diagnosis and classification of these clinical conditions through the dissemination of the ICOP and explanation of how to use it. A recent review describes orofacial pain according to the ICOP and how this classification system can assist general practitioners and endodontists in differentiating the diagnosis of dental and non-dental pain, which in most cases would help prevent unnecessary and potentially harmful dental planning errors and procedures. Finally, a worrying result observed in the present study was that 87% of the respondents considered the content on non-odontogenic pain taught during their undergraduate education insufficient, which was also found in a previous study. Thus, there is an urgent need to implement a minimum curriculum and training for undergraduate students on orofacial pain, so that they can develop competencies and skills related to diagnosis and treatment under supervised clinical training. , , Some limitations of the present research should be noted: it involved only endodontic specialists who are members of the Brazilian Society of Endodontics, which is not a nationally representative sample of this specialty; thus, the results cannot be generalized. Moreover, the questionnaire did not cover all the knowledge necessary to reach a diagnosis of non-odontogenic pain. However, the purpose of the research was not to identify whether the endodontic specialist felt capable of performing diagnosis and treatment of such conditions, but to identify basic knowledge and, based on that, propose strategies to improve the training of endodontists on orofacial pain, especially in differentiating odontogenic from non-odontogenic pain diagnoses.
This study indicates that most of the participants consider themselves confident in the diagnosis and treatment of non-odontogenic pain. Nonetheless, such confidence does not correspond to a proportional knowledge level, as evidenced by the identified knowledge gap. Thus, the implementation of training and qualification of these professionals in the diagnosis of non-odontogenic pain is highly recommended and can ensure both safety in clinical decision-making and the avoidance of potentially iatrogenic and unnecessary dental procedures.
Knowledge about dental care in patients with head and neck cancer among senior dental school students: a cross-sectional descriptive study

The dentist's central role in treating head and neck cancer patients is to care for the patient's oral cavity before, during, and after radio/chemotherapy. This cross-sectional descriptive study aimed to determine dental students' knowledge about head and neck cancer patients' dental care. The findings of the present study showed that students' awareness of oral and dental treatment and care for patients with head and neck cancer is insufficient. It is recommended that teaching staff pay more attention to this lack of knowledge and make an effort to educate students by holding special courses and workshops.
The central role of dentists in treating head and neck cancer patients is to care for the patient's oral cavity before, during, and after radio/chemotherapy. Due to this interest, and in continuation of previous studies about oral and dental health [ – ], this research aimed to determine dental students' knowledge about head and neck cancer patients' dental care. Head and neck cancer (HNC) refers to a group of cancers that occur in the oral cavity, pharynx, larynx, paranasal sinuses, nasal cavity, salivary glands, and lymph nodes of the head and neck. HNC is the ninth most common cancer in the world . Oral cancer is one of the major health problems in the world, with 177,757 deaths out of 377,713 new cases in 2020 and a 5-year survival rate of 50% [ – ]. HNC treatment includes surgery, radiotherapy, and chemotherapy, alone or in combination . HNC and its treatment complications often cause important physical problems such as loss of the sense of taste, functional problems such as respiratory, speech, and hearing problems, and psychological problems such as depression, social isolation, and delay in returning to work, which harm all aspects of the affected patient's life . Oral and dental complications of HNC treatments include mucositis, infection, pain, salivary gland dysfunction, taste change, dysphagia, trismus, and soft and hard tissue necrosis [ – ]. Approximately 80–100% of patients who undergo HNC treatment suffer from oral mucositis. According to the criteria of the World Health Organization, oral mucositis can be grade 3 (severe) or grade 4 (life-threatening) in a large group of patients who receive high-dose radiotherapy. Moreover, it is observed especially in patients receiving combined radiotherapy and chemotherapy . Delayed side effects of HNC treatment are often irreversible. They may occur several months to several years after the completion of radiotherapy and include trismus, dysphagia, osteoradionecrosis, decreased salivation, permanent dry mouth, and dental caries [ – ]. Dental evaluation and treatment management of HNC patients before and after cancer therapy is one of the cornerstones of a comprehensive care approach [ – ]. The dentist's main role in treating head and neck cancer patients is to take care of the patient's oral cavity before, during, and after radio/chemotherapy. Since oncology patients have a treatment plan that includes different doses of radio- or chemotherapy, a careful dental evaluation is needed, and the ideal time for it is before the start of oncological treatment . Poor oral hygiene and poor dental and periodontal conditions increase the side effects of HNC treatment, such as non-healing wounds and the development of osteoradionecrosis . Dental care before oncological treatment includes oral hygiene instructions, scaling and root planing, advice to follow a non-cariogenic diet, fluoride prophylaxis, and removal of all sources of irritation and infection in the mouth . Good oral hygiene is vital for patients during radiotherapy and chemotherapy. All elective dental treatments should be postponed until the end of treatment . After oncological treatment, the patient needs multiple check-ups. The dentist can monitor the patient by clinical examination to look for possible regional recurrence and metastasis to the cervical region . Dentists should know about the prevention, diagnosis, and management of cancer treatment side effects to minimize the impact of these side effects on patients' lives .
Dentists play an important role not only in recognizing precancerous lesions and head and neck cancer but also in recognizing and managing the complications of their treatment. This research aimed to determine dental students' knowledge about head and neck cancer patients' dental care.
This research was a descriptive cross-sectional study of 104 fifth- and sixth-year students of the Faculty of Dentistry of Kerman University of Medical Sciences, who were selected through the census sampling method. The data were collected through a questionnaire consisting of two parts. The first part covered demographic characteristics (gender and year of study) and the following questions: Have you attended a course or workshop on the treatment of patients with HNC? Would you like to participate in a course or workshop on the treatment of HNC patients? Have you had a special course on the dental treatment of patients with HNC during your studies in college? How do you evaluate your information about the side effects of chemotherapy and radiation therapy for patients with HNC? The answer options were very good, good, moderate, bad, or very bad. The second part consisted of 36 multiple-choice questions about dental care in HNC patients, divided into six sections: a) knowledge of dental treatments, b) knowledge of oral hygiene administration before radiotherapy, c) knowledge about oral side effects of radiotherapy, d) knowledge of causes of dental caries after radiotherapy, e) knowledge about recommendations for patients with xerostomia, and f) knowledge about pain control in HNC patients. After obtaining the necessary permits, a trained final-year student attended the classroom and, after explaining the questionnaire and the purpose of the research, distributed the questionnaire among the students. If desired, the correct answers were provided to the students after the questionnaires were collected. The approximate time to complete the questionnaire was 10 minutes. Each correct answer was given a score of 1 and each wrong answer a score of 0; therefore, the range of scores was 0–36. The percentage of correct and incorrect answers was determined for each question, each knowledge section, and the total knowledge questions. The mean and standard deviation of the knowledge scores were also determined. A percentage of correct answers between 75 and 100 was considered good knowledge, between 50 and 74 average knowledge, and below 50 insufficient knowledge. The questionnaire was compiled based on texts and articles. The validity and reliability of the whole questionnaire were confirmed, with coefficients of 0.91 and 0.89, respectively. SPSS version 26 statistical software and t-tests and ANOVA were used for data analysis. A significance level of 0.05 was used. The proposal of this project has been registered with the ethics code IR.KMU.REC.1401.601 in the Medical Ethics Committee of Kerman University. The participants were assured that the information in the questionnaires was confidential and that participation in the project was optional.
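As a minimal illustration of the scoring scheme just described (the study itself used SPSS 26; the data structures here are hypothetical):

```python
# Minimal sketch of the questionnaire scoring scheme described above.
# `answers` and `key` are hypothetical; the study used SPSS 26 in practice.

def score_questionnaire(answers: list[str], key: list[str]) -> dict:
    """Score 36 items (1 = correct, 0 = wrong) and categorize knowledge."""
    assert len(answers) == len(key) == 36
    total = sum(1 for a, k in zip(answers, key) if a == k)
    percent_correct = 100 * total / 36
    # Thresholds as reported: >=75% good, 50-74% average, <50% insufficient.
    if percent_correct >= 75:
        category = "good"
    elif percent_correct >= 50:
        category = "average"
    else:
        category = "insufficient"
    return {"score": total, "percent": percent_correct, "category": category}
```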
In this research, 39.1% of the participants were male and 60.9% were female. Regarding the year of university admission, 48.9% were in the sixth year. Of the fifth- and sixth-year students, 63.9% had not attended a course or workshop on treating head and neck cancer patients, and 57.9% were willing to participate in a course or seminar on treating head and neck cancer patients. Of the participants, 48.1% described their information about the side effects of chemotherapy and radiation therapy on head and neck cancer patients as bad (Table ). Table shows the knowledge of the participants about dental treatments for patients with head and neck cancers. The most common correct answer (86.5%) concerned the necessity of an oral or dental evaluation of the patient before cancer treatment. The percentage of correct answers to the questions in this section was 49.83%. Table shows the oral hygiene instructions provided to patients with HNC before RT. The most common answer (59.4%) was related to the administration of artificial saliva. The percentage of correct answers in this part was 47.44%. Table shows the participants' knowledge of the oral side effects of radiotherapy, the causes of faster tooth decay after radiotherapy, and recommendations for patients with xerostomia. The most frequently correct answer on the oral side effects of radiotherapy concerned the reduction in saliva after radiotherapy. The percentage of total correct answers in this part was 40.5%. In response to the causes of faster tooth decay after radiotherapy, 70.7% of the participants answered correctly about the change in the amount of saliva secreted, and 56.4% about the change in the composition of saliva (Table ). The percentage of total correct answers in this part was 50.37%. The participants' knowledge of the recommendations for patients with xerostomia after HNC treatment is shown in Table . The percentage of correct answers in this part was 52.05%. Regarding pain control, 57.1% of the participants answered correctly about the use of local anesthetics, and 39.8% about the use of ice chips (Table ). The percentage of correct answers in this part was 51.66%. The mean and standard deviation of total knowledge was 17.59 ± 6.43. There were significant differences in knowledge according to sex (P = 0.05), year of entry to university (P = 0.001), and willingness to participate in a course or workshop on the treatment of head and neck cancer patients (P = 0.011) (Table ).
One of the important approaches to treating patients with HNC is to reduce or eliminate the risk of complications caused by treatment. Dentists should be aware of the importance of preventing, diagnosing, and managing oral complications during treatment to minimize the impact of complications on patients' lives . In the present study, the percentage of correct answers ranged between 40.03% and 52.05% across the different parts. Knowledge in this study therefore appears insufficient, which is compatible with the findings of Pedic et al. . Male students had a much greater level of knowledge in our study. In contrast, other studies have shown no substantial difference in knowledge between men and women . In the present research, sixth-year students had significantly more knowledge. This could be attributed to having taken more courses and having more experience working with patients. In this study, 57.9% wanted to participate in courses or workshops related to the oral/dental care of patients with HNC. A statistically significant difference was observed between the willingness to participate and the awareness score. According to the study of Patel et al. , 67% of radiotherapists and 72% of dentists were willing to participate in continuing education courses on oral/dental care in patients with HNC. Given the increase in the number of survivors of HNC due to progress in treatment and supportive care, it is necessary to raise awareness of oral hygiene prevention and treatment to maintain oral health. In this study, 86.5% of the population considered it necessary for patients to have their teeth evaluated before radiotherapy. In the study by Pedic et al. , 96.7% of Sarajevo students, and in the study by Alqahtani et al. in Saudi Arabia, 97% of people working in the dental profession agreed with the need for an oral or dental evaluation of patients before radiotherapy, which is almost consistent with the findings of the current study. Oral evaluation before cancer treatment is necessary to prevent and treat dental problems and thereby avoid possible complications during cancer treatment. In this study, 41.4% of the participants gave correct answers about the radiation dose that leads to osteoradionecrosis. The findings of the study by Pedic et al. showed that the percentage of correct answers at five universities was between 35.5% and 62.5%, which is similar to the findings of the present study. Moon et al. showed that the frequency of mandibular osteoradionecrosis is currently low and that modifiable risk factors, for example tooth extraction before radiotherapy and smoking, are related to mandibular osteoradionecrosis . In the present study, 55.6% and 53.4% of the students, respectively, answered correctly the questions about the platelet and leukocyte counts required for dental treatment during cancer treatment. According to Pedic et al.'s study , 80% of Sarajevo students and 76.4% of Zagreb students answered correctly, which is more than observed in the present study. Patients who are receiving chemotherapy without radiotherapy can undergo dental treatment if their blood count is stable (leukocytes at least 2,000 cells/mm³, neutrophils more than 1,000 cells/mm³, and platelets more than 50,000 cells/mm³). On the day of dental treatment, a complete blood test and differential blood count (DBC) are required .
In the present study, only 3.8% of the participants were aware of the best time to remove teeth with a poor prognosis before HNC treatment, which is less than in the studies by Pedic et al. and Alpöz et al. , in which 65.7% and 27.3% of the students, respectively, knew that teeth with a poor prognosis should be extracted at least 3 weeks before starting treatment. In the present study, 42.9% of people answered correctly about the timing of endodontic treatment for symptomatic vital teeth before cancer treatment. In Pedic et al.'s study , 59.3% of the participants answered correctly. The best time for dental treatment is at least three weeks before the start of oncological treatment. If the patient does not have an acute infection, tooth extraction should be performed after radiotherapy and during the "golden window period" . The students' knowledge about the materials and devices prescribed for maintaining oral and dental hygiene was insufficient. Only 42.9% of people were aware of the proper toothbrush prescription, and 43.6% were aware of alcohol-free antiseptics. In the study by Alqahtani et al. , 59.5% were aware of proper toothbrush use and 94% of alcohol-free antiseptics; these proportions are higher than those of the present study. The guidelines established by the Multinational Association of Supportive Care in Cancer and the International Society of Oral Oncology recommend the use of a soft toothbrush, waxed dental floss, and several mouthwashes after brushing . One of the main guidelines in the management of patients with HNC who are receiving radiotherapy or are going to undergo radiotherapy is the use of mouthwash or topical products without alcohol [ – ]. In this study, 27.8% of people answered correctly that oral complications may lead to dose reduction or temporary discontinuation of radiotherapy. In the study by Alqahtani et al. , 29% of the participants did not know about this matter, which is consistent with our study. When severe oral complications are observed during radiotherapy, it is necessary to temporarily stop radiotherapy . In this study, 51.8% of participants were aware of not prescribing spicy foods to patients with dry mouth. In the study of Alqahtani et al. in Saudi Arabia, 94% of people were aware of this. The reason for this difference is the population studied, because in the study of Samim et al. , dentists and dental specialists were surveyed. In this research, 24.1% of people answered correctly that radiation dose affects the growth and development of children's bones and teeth. Many dental and developmental complications of the maxillary bones during radiotherapy depend on age, radiation dose, and radiation location. The students' information on this matter is insufficient. In this study, only 45.1% of people gave the correct answer to the question of when to stop daily oral hygiene in patients undergoing treatment of cancer in the cervical region. Considering the changes in the quantity and quality of saliva and the change in the sense of taste of people receiving radiotherapy, which lead to dry mouth and an increased incidence of dental caries, daily oral and dental hygiene should never be stopped in patients with HNC. Forty-six percent of the people in this study gave the correct answer regarding the recommended follow-up interval for HNC patients after radiotherapy, which is less than in the study by Alqahtani et al. , where 80% of the participants gave the correct answer.
Follow-ups after radiotherapy should be performed at least once every 3–4 months to control caries, saliva flow, and periodontal diseases, as well as to provide health education and encourage the patient to avoid cariogenic foods. Limitations: since the questionnaires were completed by the participants themselves, some answers may not have been accurate, which was beyond the researchers' control.
The findings of this study showed that students' awareness of oral/dental care in patients with HNC was insufficient. There were statistically significant relationships between knowledge and years of education, sex, and willingness to participate in special oral care courses for patients with HNC. The students' knowledge about the best time for tooth extraction before radiotherapy was very low. Given this insufficient knowledge, it is recommended that a workshop or course on the dental treatment of HNC patients be held for students.
Study protocol of a breathing and relaxation intervention included in antenatal education: A randomised controlled trial (BreLax study)

Antenatal education classes were developed to inform expectant mothers about pregnancy, labour and birth, and the postpartum period, in order to improve the pregnancy and childbirth experience . Originally, the classes were based on the concepts of, among others, Lamaze and Grantly Dick-Read . Studies indicate positive emotional effects on labour and birth outcomes in women who have attended antenatal education classes. These include lower levels of maternal stress, higher levels of self-efficacy, lower caesarean birth rates, and less use of epidural anaesthesia. However, the evidence for the effectiveness of this method in improving maternal and neonatal outcomes is limited, as is knowledge of which parts of antenatal education might have an effect . In high- and middle-income countries, women and their partners are offered antenatal education classes as part of antenatal care. In most instances, the classes are offered by midwives and consist of a series of sessions. Additional courses may be offered by women's health physiotherapists and other women's health professionals, such as obstetricians. The classes are usually held as a weekly course, with four to ten evening sessions or one to three weekends. However, the content of antenatal classes, as well as their focus and duration, varies widely. The main purpose of the classes is to increase knowledge and confidence in relation to pregnancy, labour, and birth, as well as the postpartum period , principally through the provision of information on and preparation for the management of labour pain by means of non-pharmacological techniques such as breathing and relaxation techniques . In particular, a key factor in achieving this aim is the promotion of self-efficacy, so that women will feel confident that they are in control of labour and able to manage their labour pain . For instance, Howarth and Swain were able to show in their randomised controlled trial that women apply trained practical body skills such as breathing and relaxation techniques according to their individual needs, which consequently increases their ability to feel in control. Despite the important role self-efficacy has to play in women's ability to cope with labour and birth, it has received little attention in the development of antenatal education . It is therefore critical that antenatal education be systematically developed to incorporate breathing and relaxation techniques to improve self-efficacy towards birth. In maternity care, childbirth preparation practices are based on experience but have rarely been systematically developed, implemented, and evaluated . This, in part, explains the heterogeneity of the results on their effectiveness. Although several studies have shown positive effects on women's self-efficacy, including lower rates of epidural anaesthesia use and recall of labour pain , the overall evidence for a positive association between antenatal preparation and neonatal outcomes remains equivocal . At present, the best available evidence on the effects of antenatal education classes focussing on elements such as breathing and relaxation techniques comes from research with women who fear childbirth or who are experiencing mental health issues.
For these women, antenatal classes strengthened their resources, had a positive effect on pregnancy outcomes and their childbirth experience, and enabled them to be competent and proactive during childbirth . Frequent practice of breathing and relaxation techniques led women diagnosed with mental health issues such as stress and anxiety to feel better able to manage labour pain and increased their levels of self-efficacy . Given that antenatal education classes can have a positive effect on levels of self-efficacy and maternal stress, lower rates of epidural anaesthesia use, and reduce recall of labour pain in women without fear of childbirth, we propose the development of an education module focusing on a breathing and relaxation technique for inclusion in an antenatal education class. In addition, we propose that this class be assessed for impact on levels of self-efficacy towards birth and other maternal and neonatal outcomes.

Objective

The present study aims to test the effects of antenatal education inclusive of specific training in a breathing and relaxation technique (BreLax) on self-efficacy towards birth. Assessment will take place before and after the intervention, and the effects will be compared with those of a standard antenatal education class. The secondary objectives are as follows:
- To identify the impact of BreLax on additional maternal outcomes before and after birth: women's satisfaction with their experience of childbirth, self-control, pain management, birthing position, duration of labour, and skin-to-skin contact > 1 hour.
- To identify the effects of BreLax on neonatal outcomes: 5-minute Apgar score and umbilical cord pH.
- To explore women's perceptions of the applicability of the intervention during labour.
Design

The study will be conducted as a population-based randomised controlled trial (RCT) with a pre-post design and two parallel arms. The control arm will consist of a standard antenatal education class (standard care) that contains information on the value of breathing and relaxation techniques and a few relaxation exercises, but no training in a breathing and relaxation technique. The intervention arm will consist of a standard antenatal education class plus special training in a breathing and relaxation technique in class and access to an online manual for independent practice at home with audio and video files (BreLax). Eligible participants will be randomly selected to complete either the control or intervention arm. Three repeated measurements will be performed: before the intervention (T0), after the intervention (T1), and two to four weeks after birth (T3); in addition, labour and birth data will be transferred from the documentation system (T2) (see , ).

Setting

This research will be carried out in a Swiss regional hospital with an annual birth rate of around 1800. Antenatal education classes are offered as part of the hospital's facilities.

Participants

Pregnant women (age 18 or above) from 10 to 30 weeks of gestation, with a singleton low-risk pregnancy and receiving antenatal care. Exclusion criteria are 1) unwillingness to attend an antenatal education class, 2) planning a caesarean section, 3) increased levels of fear related to childbirth, measured by the Fear of Birth Scale (FOBS), a two-item visual analogue scale (cut-off above 50), and 4) inability to understand and speak German. Participants will be recruited during consultations with the hospital and clinic midwives, in surrounding gynaecological as well as midwife practices, and through common social media channels. A website was created for recruitment, which is linked to the advertised antenatal courses on the clinic website. Interested women can register directly for further information. Women who have already registered for antenatal education classes will be informed directly by email. A screening interview will be conducted with potential participants by telephone to verify eligibility, explain the study in detail, and address any concerns. Eligible participants will sign the informed consent form prior to the first antenatal education class.

Sample size calculation

The sample-size calculation is based on the randomised controlled trial of a self-efficacy enhancing educational programme (SEEEP) by Ip et al. (2009) . This study reported an effect size of 0.78 in increasing self-efficacy in labour and an attrition rate of around 30.7%. In the proposed study, we therefore assume an overall moderate effect size of 0.75. With power set at 80% and a significance level of p < .05, a sample size of 29 per arm is needed for a two-arm repeated-measures design (G*Power, version 3.1). Assuming an attrition rate of 20%, the final sample size will be 35 in each arm and 70 participants in total.

Randomisation and allocation

Participants will be recruited in their first, second, or early third trimester of pregnancy. After women have registered for an antenatal education class, their eligibility will be checked and participants are enrolled. Due to the method of organisation of the courses at the birth clinic, randomisation is conducted at the course level. Randomisation is carried out by the study midwife using envelopes containing the corresponding information about the course (intervention yes or no).

Blinding

Participants will not be informed in which class they have been randomly placed and are accordingly unaware of the intervention, see . However, complete blinding of behavioural interventions is difficult to guarantee, as participants from both groups might meet outside the courses, talk about their classes, and become aware of the differences. Blinding of the midwives conducting the antenatal classes is not possible. The collected data will be entered by a research assistant to ensure that data entry and analysis are not undertaken by the same people. The personal data of the participants will not be included in the data set .
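For illustration, the sample-size calculation above can be reproduced outside G*Power; a minimal Python sketch, simplifying the two-arm repeated-measures design to a two-sample t-test (which yields the same per-arm n):

```python
# Minimal sketch of the sample-size calculation reported above.
# The study used G*Power 3.1; here the two-arm repeated-measures design
# is simplified to a two-sample t-test, which gives the same result.
import math
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.75,  # assumed moderate effect size (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # target power
)
n_per_arm = math.ceil(n_per_arm)                # ~29 per arm
n_with_attrition = math.ceil(n_per_arm * 1.20)  # 20% attrition -> 35 per arm
print(n_per_arm, n_with_attrition, 2 * n_with_attrition)  # 29 35 70
```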
The intervention was designed in line with the Medical Research Council (MRC) framework's recommended four stages of research for the development and evaluation of complex health interventions and using behaviour change techniques .

Theoretical framework

Self-efficacy theory

The developed complex intervention is based on the theoretical concept of Bandura's self-efficacy theory , see . According to Bandura , self-efficacy is the expectation of whether an individual is capable of performing a particular behaviour in a given situation, which can be divided into outcome expectation and self-efficacy expectation. Outcome expectations refer to the individual's prediction of the possible outcomes of a set of behaviours. If someone predicts that a certain behaviour will lead to a certain outcome, that behaviour may be activated and chosen. Self-efficacy is primarily influenced by four sources of information, namely performance, vicarious experience, verbal persuasion, and physiological and emotional states . According to Bandura, a strong belief in one's own ability to exercise some control over one's physical condition can serve as a psychological prognostic indicator of the likely level of health functioning . Consistent with these assumptions, self-efficacy has been shown to have a significant influence on how labour is perceived and how it is physically managed . Therefore, we assume that a strategy to cope with labour pain and increase self-control during birth can increase self-efficacy. To increase women's self-efficacy towards birth and confidence in their ability to manage birth, the BreLax intervention is predicted to contribute to all four ways of improving self-efficacy. Women will likely have the experience of successfully performing breathing and relaxation techniques and see that other women can perform the techniques, too. They will also receive reassuring comments from class instructors and are likely to feel fewer distracting emotions and less physical arousal. Using the online brochure, women will be motivated to continue practising at home. They will be guided through the exercises by audio and video instructions and receive reminders to keep practising.

Behaviour change technique

The intervention has been developed with a focus on behaviour change techniques and using the taxonomy of behaviour change (see ), as improving the implementation of evidence-based practices depends on behaviour change . Therefore, behaviour change interventions are fundamental to effective practice. Behaviour change interventions can be defined as coordinated packages of measures aimed at changing specific patterns of behaviour. For this purpose, the COM-B framework (capability, opportunity, motivation) was adopted as a supporting framework in the present study. Capability is defined as the psychological and physical ability of the individual to perform the activity in question . In this intervention, this is achieved by practising and enabling exercises in the antenatal education class and by reminding participants to undertake independent practice at home. Opportunity is defined as all factors that lie outside the individual and enable or trigger the behaviour . The atmosphere and environment of the antenatal class enable women to implement the behaviour, and independent practice at home creates awareness of what women individually need. Motivation is defined as all brain processes that stimulate and control behaviour, not just goals and conscious decisions . Due to the imminent birth of their babies, women in antenatal education classes are motivated to engage with antenatal topics and to strengthen their own abilities (see ).

Antenatal education classes

It is recommended that women start attending an antenatal education class in the second trimester or early in the third trimester. This recommendation applies to both the intervention and the control group. Participants will be randomly assigned to one of the two groups. One group serves as the control group and will receive a standard antenatal education class without the breathing and relaxation technique, while the intervention group will receive an antenatal education class inclusive of the designed BreLax intervention and the online brochure for practising at home. Both week-long and weekend courses are offered. Each course has a duration of approximately eight hours, spread over four weeks or a weekend. There will be six to twelve women in each class.

Midwifery training

The antenatal midwives in the birth clinic will be trained before the intervention. The investigator will organise a two-hour workshop in which the course concept and the breathing and relaxation technique will be discussed and practised together. In addition, the investigator will always be available to answer questions from the midwives. Midwives will receive a handout with study information and a description of the development of the breathing and relaxation technique and will be made aware of the importance of adhering to the procedures learnt in the workshop.

Informational component

The intervention and standard care classes both aim to inform women about pregnancy, labour, birth, and the postpartum period. The main difference is that the intervention group will focus on the breathing and relaxation technique (see ). Midwives will instruct participants on how to do the exercises, how to perform the individual breathing pattern, and how to assume the respective positions and movements . Additionally, women will receive a manual with exercises to practise at home two to three times a week for about five to ten minutes each. The control group will receive standard care information about breathing and relaxation, but no joint exercise sequences in class.

Component of breathing and relaxation

Breath awareness provides physical, mental, and emotional control. Deep breathing increases blood circulation and oxygen flow and reduces stress, which is beneficial for both the mother and the baby. By learning conscious breathing and relaxation techniques, women can more effectively control their pain and relax when uterine contractions begin, increasing confidence . The core of the breathing technique is prolonged exhalation . Such techniques have been taught in antenatal education for many years and are actively supported by midwives during labour and birth. In the present study, the focus is on prolonged exhalation with an individual rhythm, which women will learn and practise during antenatal education. The main advantages of this technique are that it helps in preparation for labour and birth, it can be actively used even in stressful situations, it allows for the learning of an individual breathing pattern, and it helps women to relax comfortably. Breathing technique refers to breathing with a certain number of repetitions and amplitudes . Building on these techniques, women will be encouraged to continue practising at home using the online brochure for guidance. Women need to learn how to adapt their breathing pattern to their individual needs. Breathing and relaxation techniques are only useful and practical if women can try them out and apply them in a range of everyday situations. Breathing and relaxation techniques can be used on any day and in any stressful or uncomfortable situation . For this reason, it is important that women continue to practise at home. In the best case, breathing and relaxation techniques become routine, or so automated that the techniques are automatically remembered during birth and can be used accordingly. For a habit to form, it is necessary for women to practise over a period of time . Previous literature shows that an average of 66 days is required for similar exercises to become automatic . This time period coincides optimally with the recommended start of antenatal education classes in the second trimester.

BreLax exercises

The antenatal education class offers 30 to 45 minutes of joint practice. In addition, the women have access to audio and video instructions in the online brochure, which last between three and ten minutes. No special materials are required for the exercises. The exercises can be practised both at home and in different daily situations:
- Exercises to learn prolonged exhalation (following the 3–6 breathing technique for relaxation; see the illustrative sketch at the end of this section)
- Exercises for mental and physical relaxation (4 upright positions: standing, sitting, all-fours, and elevated lateral position, see )
- Optional: visualisation, music

Scales and outcome measures

The primary outcome of the trial is self-efficacy. Further relevant maternal and neonatal outcomes are summarised in , including the time points at which the measures will be taken. All questionnaire links will be sent to participants via email at each time point for them to fill out on their own devices.
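As an illustration of the 3–6 pacing named under "BreLax exercises", a minimal breathing-pacer sketch follows. This is hypothetical: in the study the exercises are delivered by midwives and via audio/video files, not software, and it assumes the 3–6 technique means roughly three counts in and six counts out (prolonged exhalation):

```python
# Minimal pacer sketch for the 3-6 breathing technique (prolonged
# exhalation). Hypothetical illustration only; the study delivers the
# exercises through classes and audio/video files.
import time

def paced_breathing(cycles: int = 5, inhale_s: float = 3.0, exhale_s: float = 6.0):
    """Prompt `cycles` breaths with exhalation twice as long as inhalation."""
    for i in range(1, cycles + 1):
        print(f"cycle {i}: breathe in ({inhale_s:.0f} s)")
        time.sleep(inhale_s)
        print(f"cycle {i}: breathe out slowly ({exhale_s:.0f} s)")
        time.sleep(exhale_s)

if __name__ == "__main__":
    paced_breathing()
```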
Self-efficacy theory The developed complex intervention is based on the theoretical concept of Bandura’s self-efficacy theory , see . According to Bandrua , self-efficacy is considered to be the expectation of whether an individual is capable of performing a particular behaviour in a given situation, which can be divided into outcome expectation and self-efficacy expectation. Outcome expectations refer to the individual’s prediction of the possible outcomes of a set of behaviours. If someone predicts that a certain behaviour will lead to a certain outcome, that behaviour may be activated and chosen. Self-efficacy is primarily influenced by four sources of information, namely performance, vicarious experience, verbal persuasion, and physiological and emotional states . According to Bandura a strong belief in one’s own ability to exercise some control over one’s physical condition can serve as a psychological prognostic indicator of the likely level of health functioning . Consistent with these assumptions, self-efficacy has been shown to have a significant influence on how labour is perceived and how it is physically managed . Therefore, we assume that a strategy to cope with labour pain and increase self-control during birth can increase self-efficacy. To increase women’s self-efficacy towards birth and confidence in their ability to manage birth, the BreLax intervention is predicted to contribute to all four ways of improving self-efficacy. Women will likely have the experience of successfully performing breathing and relaxation techniques and see that other women can perform the techniques, too. They will also receive reassuring comments from class instructors and are likely to feel fewer distracting emotions and physical arousal. Using the online brochure, women will be motivated to continue practising at home. They will be guided through the exercises by audio and video instructions and receive reminders to keep practising. Behaviour change technique The intervention has been developed with a focus on behaviour change techniques and using the taxonomy of behaviour change (see ), as improving the implementation of evidence-based practices depends on behaviour change . Therefore, behaviour change interventions are fundamental to effective practice. Behaviour change interventions can be defined as coordinated packages of measures aimed at changing specific patterns of behaviour. For this purpose, the COM-B framework (capability, opportunity, motivation) has been developed as a supporting factor in the present study. Capability is defined as the psychological and physical ability of the individual to perform the activity in question . In this intervention, this is achieved by practising and enabling exercises in the antenatal education class and by reminding participants to undertake independent practice at home. Opportunity is defined as all factors that lie outside the individual and enable or trigger the behaviour . With the help of the atmosphere and the environment in the antenatal class, women are allowed to implement the behaviour, and through independent practice at home, awareness is created of what women individually need. Motivation is defined as all brain processes that stimulate and control behaviour, not just goals and conscious decisions . Due to the imminent birth of their babies, women in antenatal education classes are motivated to deal with antenatal topics and motivated to strengthen their own abilities (see ). 
Antenatal education classes It is recommended that women start attending an antenatal education class in the second trimester or early in the third trimester. This recommendation applies to both the intervention and the control group. Participants will be randomly assigned to one of the two groups. One group serves as the control group and will receive a standard antenatal education class without the breathing and relaxation technique, while the intervention group will receive an antenatal education class inclusive of the designed BreLax intervention and the online brochure for practicing at home. Both week-long and weekend courses are offered. Each course has a duration of approximately eight hours spread over four weeks or a weekend. There will be six to twelve women in each class. Midwifery training The antenatal midwives in the birth clinic will be trained before the intervention. The investigator will organise a two-hour workshop in which the course concept and the breathing and relaxation technique will be discussed and practised together. In addition, the investigator will always be available to answer questions from the midwives. Midwives will receive a handout with study information and a description of the development of the breathing and relaxation technique and will be made aware of the importance of adhering to the procedures learnt in the workshop. Informational component The intervention and standard care classes both aim to inform women about pregnancy, labour, birth, and the postpartum period. The main difference is that the intervention group will focus on breathing and relaxation technique (see ). Midwives will instruct participants on how to do the exercises, how to perform the individual breathing pattern, and how to assume the respective positions and movements . Additionally, women will receive a manual with exercises to practise at home two to three times a week for about five to ten minutes each. The control group will receive standard care about breathing and relaxation, but no joint exercise sequences in classes. Component of breathing and relaxation Breath awareness provides physical, mental, and emotional control. Deep breathing increases blood circulation, oxygen flow, and reduces stress, which is beneficial for both the mother and the baby. Through the learning of conscious breathing and relaxation techniques, women can more effectively control their pain and relax when uterus contractions begin, increasing confidence . The core of the breathing technique is prolonged exhalation . Such techniques have been taught in antenatal education for many years and are actively supported by midwives during labour and birth. In the present study, the focus is on prolonged exhalation with individual rhythm, which women will learn and practise during antenatal education. The main advantage of this technique is that it helps in preparation for labour and birth, it can be actively used even in stressful situations, allows for the learning of an individual breathing pattern, and helps women to relax comfortably. Breathing technique refers to breathing with a certain number of repetitions and amplitudes . Building on these techniques, women will be encouraged to continue practising at home using the online brochure for guidance. Women need to learn how to adapt their breathing pattern to their individual needs. Breathing and relaxation techniques are only useful and practical if women can try them out and apply them in a range of everyday situations. 
Breathing and relaxation techniques can be used on any day and in any stressful or uncomfortable situation . For this reason, it is important that women continue to practise at home. In the best case, breathing and relaxation techniques become routine or become so automated that the techniques are automatically remembered during birth and can be used accordingly. For a habit to form, it is necessary for women to practise over a period of time . Previous literature shows that an average of 66 days is required to recall similar automated exercises . This time period coincides optimally with the recommended start of antenatal education classes in the second trimester. BreLax exercises The antenatal education class offers 30 to 45 minutes of joint practice. In addition, the women have access to audio and video instructions in the online brochure, which last between three and ten minutes. No special materials are required for the exercises. The exercises can be practised both at home and in different daily situations. Exercises to learn prolonged exhalation (following the 3–6 breathing technique for relaxation) Exercises for mental and physical relaxation (4 upright positions, standing, sitting, 4-foot, elevated lateral position, see ) Optional: visualisation, music
The developed complex intervention is based on Bandura's self-efficacy theory , see . According to Bandura , self-efficacy is the expectation of whether an individual is capable of performing a particular behaviour in a given situation, and it can be divided into outcome expectation and self-efficacy expectation. Outcome expectations refer to the individual's prediction of the possible outcomes of a set of behaviours. If someone predicts that a certain behaviour will lead to a certain outcome, that behaviour may be activated and chosen. Self-efficacy is primarily influenced by four sources of information, namely performance accomplishments, vicarious experience, verbal persuasion, and physiological and emotional states . According to Bandura, a strong belief in one's own ability to exercise some control over one's physical condition can serve as a psychological prognostic indicator of the likely level of health functioning . Consistent with these assumptions, self-efficacy has been shown to have a significant influence on how labour is perceived and how it is physically managed . Therefore, we assume that a strategy to cope with labour pain and increase self-control during birth can increase self-efficacy. To increase women's self-efficacy towards birth and confidence in their ability to manage birth, the BreLax intervention is predicted to contribute to all four ways of improving self-efficacy. Women will likely have the experience of successfully performing breathing and relaxation techniques and see that other women can perform the techniques, too. They will also receive reassuring comments from class instructors and are likely to feel fewer distracting emotions and less physical arousal. Using the online brochure, women will be motivated to continue practising at home. They will be guided through the exercises by audio and video instructions and receive reminders to keep practising.
The intervention has been developed with a focus on behaviour change techniques, using the taxonomy of behaviour change (see ), as improving the implementation of evidence-based practices depends on behaviour change . Behaviour change interventions are therefore fundamental to effective practice; they can be defined as coordinated packages of measures aimed at changing specific patterns of behaviour. For this purpose, the COM-B framework (capability, opportunity, motivation) is used as a supporting framework in the present study. Capability is defined as the psychological and physical ability of the individual to perform the activity in question . In this intervention, capability is addressed by practising and enabling exercises in the antenatal education class and by reminding participants to undertake independent practice at home. Opportunity is defined as all factors that lie outside the individual and enable or trigger the behaviour . The atmosphere and environment of the antenatal class enable women to implement the behaviour, and independent practice at home creates awareness of what each woman individually needs. Motivation is defined as all brain processes that stimulate and control behaviour, not just goals and conscious decisions . Due to the imminent birth of their babies, women in antenatal education classes are motivated to engage with antenatal topics and to strengthen their own abilities (see ).
The primary outcome of the trial is self-efficacy. Further relevant maternal and neonatal outcomes will be summarised in , including the time points at which the measures will be taken. All questionnaire links will be sent to participants via email at each time point for them to fill out on their own devices.
Quantitative data, including scale measures, will be collected at three timepoints as described above (see ). Data will be analysed according to the intention-to-treat (ITT) principle. If more than 5% of data are missing, the possibility of multiple imputation will be examined. We intend to minimise missing data and will, for example, design mandatory questions in the questionnaires. In addition, participants will be sent reminders to complete the follow-up questionnaires after two to four weeks. Sociodemographic and obstetric history will be presented using descriptive statistics, including frequency, percentage, mean, standard deviation, median and percentiles. For normally distributed continuous variables, mean and standard deviation will be used as measures of central tendency and dispersion; for non-normally distributed data, the median will be used. Categorical data will be presented as frequency and percentage. To examine the effect within each group, a serial trend analysis, such as a repeated-measures ANOVA, will be performed from T0 to T3 for the primary and secondary outcome variables, unless the residuals (errors) deviate significantly from the normal distribution. If the data are non-normally distributed, alternative tests such as the Friedman test will be used to analyse the interaction of groups over time. A series of linear mixed-effects models (LMMs) will be used to assess mediation effects. If the assumptions for using LMMs are not met, non-linear mixed-effects models or, in the case of non-normally distributed data, generalised linear mixed models will be used. Quantitative data will be analysed using R. The significance level is set at p < 0.05.
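As a rough illustration of the planned longitudinal modelling, a linear mixed-effects model with a random intercept per participant can be sketched as below. The trial's analysis will be written in R; this Python/statsmodels sketch, with invented data and hypothetical column names (subject, group, time, self_efficacy), only mirrors the model structure:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic long-format dataset: one row per participant per timepoint.
# Subjects 1-3 are "intervention", 4-6 are "control"; scores are invented.
df = pd.DataFrame({
    "subject": sum([[s] * 3 for s in range(1, 7)], []),
    "group": ["intervention"] * 9 + ["control"] * 9,
    "time": ["T0", "T1", "T2"] * 6,
    "self_efficacy": [54, 66, 75, 58, 64, 80, 50, 61, 72,
                      56, 59, 61, 60, 58, 63, 52, 55, 57],
})

# Fixed effects for group, time and their interaction; random intercept per
# participant to account for the repeated measures.
model = smf.mixedlm("self_efficacy ~ C(group) * C(time)",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```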
Given the timeframe and the minimal known risk to participants in the current study, a data monitoring committee was not formed. The research team will continue to assess the need for such a committee.
The investigator and the research assistant will be responsible for data collection, and the data will be entered directly into REDCap (Research Electronic Data Capture; project-redcap.org ), a secure web-based platform designed to support data capture for research studies. User access to the database is restricted and assigned by the investigator. Data will be entered into the database under a unique trial number, and no identifiable data will be stored in the database. Data with invalid trial numbers, out-of-range values, or follow-up IDs that do not match the baseline trial number will be excluded.
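The exclusion rules above translate naturally into automated checks. A minimal sketch of such validation follows; the record fields, score range and trial-number format are invented for illustration and do not come from the study's actual REDCap configuration:

```python
def is_valid_record(record: dict, valid_trial_numbers: set[str]) -> bool:
    """Apply the protocol's exclusion rules to one follow-up record.

    Assumed record layout (illustrative only):
      trial_number  - ID assigned at baseline
      followup_id   - ID reported at follow-up; must match baseline
      self_efficacy - scale score, assumed valid range 10-100
    """
    if record["trial_number"] not in valid_trial_numbers:
        return False  # invalid trial number
    if record["followup_id"] != record["trial_number"]:
        return False  # follow-up ID does not match baseline trial number
    if not (10 <= record["self_efficacy"] <= 100):
        return False  # out-of-range value
    return True

records = [
    {"trial_number": "B001", "followup_id": "B001", "self_efficacy": 72},
    {"trial_number": "B999", "followup_id": "B001", "self_efficacy": 72},
]
valid = {"B001", "B002"}
kept = [r for r in records if is_valid_record(r, valid)]
print(len(kept))  # -> 1
```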
This study has been approved by the Ethics Committee of Zurich (SNCTP000005672). Participants in both the intervention and control groups will take part in the present study voluntarily. Participants in the control group will receive the standard antenatal education class (standard care). In both groups, participants will provide written informed consent after being informed about the study. Participants in both groups will complete the online questionnaires listed in the "Measures" section of the article. Any changes to the protocol are subject to formal amendment and may not be implemented prior to approval by the Ethics Committee of Zurich.
We do not expect any serious risks to participants. Antenatal education is part of antenatal care and is financed by obligatory health insurance (OKP) in Switzerland, and every woman has the opportunity to attend an antenatal education class if she wants to. Women benefit from the information provided in the antenatal class, as well as from the exercises that support them during labour and birth. However, participants will be required to complete questionnaires during pregnancy and after birth, which could be an additional challenge and a higher workload for women. Additionally, it is possible that memories of a negative childbirth experience might resurface when completing the questionnaires. Therefore, participants are offered the opportunity to contact the supervising midwife and have a follow-up conversation with the attending birth midwife. If the need for further support is indicated, the relevant specialists in the clinic can be involved.
The purpose of this study is to determine whether the inclusion of a breathing and relaxation technique in an antenatal education class can enhance self-efficacy towards birth, in comparison with a standard antenatal education class. To our knowledge, our study will be the first RCT to assess a multi-component intervention of this kind. The planned study aims to provide evidence regarding the potential benefits of antenatal education with a focus on breathing and relaxation techniques for pregnant women in general, i.e., those without increased fear of childbirth. It will provide evidence on whether antenatal education classes with a focus on breathing and relaxation techniques effectively prepare women for labour, birth, and pain management. In addition to the results, the strengths and limitations of the study will be discussed. Strengths include the RCT design, which ensures sufficient power to demonstrate the effectiveness of the BreLax intervention. Furthermore, the study intervention was systematically developed based on the concept of self-efficacy and the evidence on behaviour change techniques. The intervention will be taught and practised in direct face-to-face contact by the midwives and will be supported by an online brochure to help women continue independent practice at home. One challenge and potential limitation is that birth is an unpredictable and complex process, and unexpected complications may occur, which might affect the outcome for the mother and child as well as influence the mother's childbirth experience. In addition, the quality of care the women receive will be influenced by multiple factors, such as respectful and effective communication and the support of qualified, empathetic staff. This study is not designed to explain how these factors affect women's ability to cope with labour pain. Furthermore, additional research will be needed to understand moderating factors, such as partner support, training frequency and personal interaction between the midwives and participants. The BreLax intervention has the potential to be a promising element of antenatal preparation, as the methodology promotes standardisation and reproducibility. It focuses on an essential component of antenatal preparation, namely breathing and relaxation techniques, whose effectiveness will be tested. Digitalisation within the healthcare sector is also considered through the use of the online brochure and reminders.
The study began recruiting participants in December 2023, with enrolment planned over 6–10 months. The end of the study is defined as the point at which the postnatal questionnaire has been received from all participants, but no later than four months after inclusion of the last participant.
S1 Checklist. SPIRIT checklist BreLax. (PDF)
S1 Table. Taxonomy of the behaviour change technique of the BreLax intervention. (PDF)
S1 Fig. Upright positions BreLax. (TIF)
S1 File. BreLax manual women (translated). (PDF)
S1 Data. (PDF)
Distance Education Course about Sexuality for Obstetrics and Gynecology Residents | 7fb3ba7e-3828-4854-bb91-b5dc09c2b66f | 10309461 | Gynaecology[mh] | Sexual health is defined by the World Health Organization (WHO) as “a state of complete physical, mental and social well-being,” and one of the goals proposed by the Pan American Health Organization (PAHO) and by the World Association for Sexology (WAS) to promote sexual health is to provide education, training and support to professionals working in sexual health-related fields. Dealing with human sexuality requires specific knowledge about the different periods of life. Pregnancy is a unique moment in the lives of men and women, a period when sexual dysfunction symptoms are very frequent and may affect the couple's marital relationship and their quality of life in terms of sexual health. Obstetrics and Gynecology (Ob/Gyn) residents and specialists frequently report that they lack specific knowledge about sexuality, and that they feel unprepared to deal with the sexual issues of their obstetrics patients. On the other hand, pregnant women report they would like to receive more information about sexuality during pregnancy from their healthcare providers during their antenatal care visits. A national survey concluded that medical residents are interested in learning more about sexuality during pregnancy to increase their confidence in managing their patients, and that they would appreciate online modules about the topic, due to their lack of time to attend other types of courses. There are few publications on programs of sexual medicine for medical undergraduates or those specifically focused on medical residents, with some on-site course models. To the best of our knowledge, there are no previous publications of online course models on sexuality during pregnancy. We developed an online course about sexuality during pregnancy and the postpartum period specifically focused on Ob/Gyn residents, to complement their professional training in this area. The main objective of this study was to describe the experience of this distance training course for Ob/Gyn residents. We hypothesized that this course would increase the knowledge of the participants about sexuality during pregnancy.
Study Design

This prospective educational intervention study was conducted at the Universidade Federal de São Paulo – São Paulo Medical School (UNIFESP-EPM) in the city of São Paulo, Brazil, from April to September 2014.

Patients

Medical doctors enrolled in officially accredited Ob/Gyn residency programs in São Paulo were eligible to participate.

Educational Intervention

Development of the Online Sexology Course Content

The course content was based on the recommendations of the Brazilian Federation of Obstetrics and Gynecology (Febrasgo, in the Portuguese acronym) about "What should be the content for a sexology course for Ob/Gyns?". The suggested content was adapted and divided into 10 classes. Each class consisted of two 50-minute modules, with a different lecturer for each module. The ten specific topics were: anatomy and physiology of the human sexual response; sexual dysfunctions, paraphilias and sexual inadequacies; the main psychotherapy techniques used in sexology; pharmacotherapy in sexology; the treatment of desire dysfunctions; the treatment of orgasm dysfunctions; the treatment of dyspareunia, psychopathology and vaginismus; the impact of male sexual dysfunctions on female sexuality; the impact of gynecological surgeries on female sexuality; and ethics in caring for sexual dysfunctions and inadequacies. We contacted the professionals working in the Sexuality Unit of the Department of Gynecology of UNIFESP-EPM, and invited them to give lectures on the specific topics of the course program. The content of the 10 lectures was divided as follows: 1) Course presentation and content - importance of human sexuality for the Ob/Gyn specialist; 2) History of sexuality/Anatomy of the sexual response cycle - anatomic changes in pregnancy and after childbirth (pregnancy and childbirth, PC); 3) Physiology of the sexual response - sexual response during pregnancy; 4) Treatment of sexual disorders - treatment of sexual disorders in PC/sexual history taking; 5) Female sexual dysfunctions (FSDs) - FSD symptoms in pregnancy; 6) Male sexual dysfunctions and the pregnant woman's sexuality; 7) Psychotherapy - psychotherapy in PC; 8) Pharmacotherapy - pharmacotherapy in PC; 9) Gynecological surgeries and female sexuality - gynecological surgeries and female sexuality in PC; 10) Ethical issues/Treatment of FSDs - sexual education groups with pregnant couples. In addition to these topics, we created three hypothetical clinical cases for discussion during the last video lecture. In all modules, the participants answered four multiple-choice questions related to the topics/clinical cases presented. These questions were created by the lecturers. At the end of the course, we expected that the participants would be able to: 1) demonstrate basic knowledge about the anatomy and physiology of the human sexual response; 2) make a diagnosis and propose a treatment for sexual dysfunctions and inadequacies; 3) identify particularities of female sexuality during pregnancy and the postpartum period; 4) understand the impact of male and female sexual dysfunctions on the couple's quality of life in terms of sexual health; 5) care for couples with sexual problems during pregnancy in an ethical and adequate manner; 6) work with a multi-professional team when caring for patients with sexual symptoms during pregnancy and the postpartum period; and 7) appreciate the usefulness of online courses as educational tools.
The course project was submitted to the Medical Residency Committees of all hospitals that participated in this study. We also asked these committees to help us disseminate information about the course to their local Ob/Gyn residents.

Development of the Course Platform, Video Lectures and Assessment Tools

We hired a professional company experienced in the creation and maintenance of interactive websites to develop one that was specific for our course. The website allowed the participants to register, give informed consent, watch video lectures, participate in chats and access four online questionnaires. These questionnaires were created by the investigators to assess: 1) the participants' sociodemographic characteristics; 2) their previous training, attitude and experience regarding sexuality in pregnancy and the postpartum period; 3) their general knowledge about the topic at baseline and after the completion of the course; and 4) their general evaluation of the course. The professional website company also directed and edited the taping of the video lectures, which took place between November 2013 and January 2014 in a conference room at our university. The videos were uploaded to a private YouTube channel. The group of lecturers was composed of teachers and professionals working at UNIFESP-EPM in the field of human sexuality, and it included Ob/Gyns, urologists, psychiatrists, physiotherapists, psychologists and social workers. Each video lasted 50 minutes, and the lecturers were coached on video communication skills, such as looking directly at the camera, avoiding excessive gesticulation, keeping good body posture and speaking to the participants as individuals. Two teachers from our Obstetrics Department, who had more than thirty years of experience in the area and were not directly involved with the course, were invited to evaluate the content, language and esthetic quality of the taped lectures using a tool developed for distance education courses by Schons. The two teachers also evaluated the relevance of the multiple-choice questions created by the lecturers, which the students would answer after watching the videos to assess the knowledge they had acquired. Both teachers considered all 10 lectures and all suggested questions adequate.

Course Dissemination and Recruitment of Participants

We sent information about the online sexology course to the coordinators of five Ob/Gyn residency programs through emails and phone calls, asking them to help us recruit interested participants from their programs. We asked them to explain and emphasize that the course was specifically for Ob/Gyn residents, and that it was free and online. Throughout March 2014, the principal investigator (TCSBV) personally visited the five residency programs and talked to the residents about the course, encouraging them to enroll. On March 1st, 2014, the website of the course became active, and online registrations were opened. The first page of the website had information about the objectives and contents of the course, the basic curriculum of each lecturer (with links to their full curricula at the Lattes database), how to register and how to give online informed consent.

The Course

After online registration, the participants received an email confirming their successful enrollment, and were asked to create an individual login and password that ensured the complete confidentiality of their names and personal information. After this step, the participants received three online questionnaires.
The first one (the sociodemographic questionnaire) collected data about their year of residency, sex and residency program. The second one was based on a questionnaire used in the Evaluate Project, which was conducted by Abdo et al, and consisted of eight multiple-choice questions about the participants' training, attitude and experience regarding sexual issues during pregnancy and the postpartum period. The third questionnaire was a pre-course test that evaluated the participants' baseline knowledge about sexuality in pregnancy and the postpartum period. This questionnaire consisted of 36 multiple-choice questions based on the questions created by each of the lecturers. The total score for this test was calculated by dividing the number of correct answers by the total number of questions (36) and multiplying the result by 10, yielding a score from 0 to 10, with higher scores indicating higher knowledge. In order to have access to the video lectures, the participants had to fill out the three questionnaires. The first lecture became available on April 7th, 2014, and each subsequent lecture was uploaded weekly, every Monday, along with an invitation to participate in a related discussion at the online forum. The principal investigator was available throughout the course to answer online questions posted by the students in the chat forum. Each participant received a weekly email reminder about the next lecture, along with an invitation to participate in the forum. The weekly forum offered a list of additional reading material that could be commented on by all. The students were encouraged to send questions, comments and suggestions about the video lectures. The principal investigator monitored the forum daily and answered all questions posted by the students. After the first lecture, in order to have access to the next one, the student had to answer the four multiple-choice questions about the topic of the lecture he/she had watched before. Even if the student did not provide the correct answers to these four questions, he/she was allowed to proceed to the next lecture. The last lecture was uploaded on June 6th, 2014. One week later, only 11 residents had completed the course. The principal investigator decided to invite these residents to become tutors and help motivate other residents from their own institutions to complete the course. These tutors helped other local residents who might be having difficulties in completing the course, and informed the principal investigator about these contacts with their peers. We maintained this strategy until the end of the course, on September 30th, 2014. At the end of the course, the students were asked to complete another two questionnaires. The "Post-course Test" evaluated their knowledge about sexuality, and consisted of the same 36 questions of the baseline questionnaire, but in a different order. The second questionnaire assessed their satisfaction with the course, and was based on the SERVQUAL tool adapted for educational services.
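The 0–10 scoring rule for the knowledge tests reduces to a single expression; a minimal sketch (the function name and worked example are ours, not from the original study):

```python
def test_score(correct: int, total: int = 36) -> float:
    """Scale the number of correct answers to a 0-10 knowledge score."""
    return 10 * correct / total

print(round(test_score(22), 1))  # e.g. 22 of 36 correct -> 6.1
```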
We modified some of the questions of this multi-item scale to assess video education, and produced a questionnaire with 20 questions divided into 5 domains: tangibles (physical facilities, equipment, personnel and communication materials); reliability (performance of the promised service in a reliable and accurate manner); responsiveness (helping students and providing prompt services); assurance (staff knowledge and courtesy, and their ability to convey trust and confidence); and empathy (caring, individualized attention to participants). We added one last question about the general quality of the course. The possible answers ranged from 1 to 5, with higher scores indicating a higher level of satisfaction. The internal consistency of the overall questionnaire and of each domain was assessed using the Cronbach alpha (α) coefficient. The overall consistency of the questionnaire was high (α = 0.9), as was the consistency of each domain (tangibles: α = 0.7; reliability: α = 0.9; responsiveness: α = 0.7; assurance: α = 0.9; and empathy: α = 0.8). At the end of the study, each student received an email thanking him/her for his/her participation, along with a certificate of completion, his/her individual scores in the pre- and post-tests, and the list of correct answers to these tests. The residency coordinators also received an email thanking them for their help, a certificate from UNIFESP-EPM and the course assessment of their own residents.

Statistical Aspects

The Student t and chi-squared tests were used to analyze the pre- and post-course test scores. Descriptive statistics were used for the participants' sociodemographic and professional characteristics. We used Cronbach α to evaluate the internal consistency of the course satisfaction questionnaire; a Cronbach α of at least 0.7 was required to indicate good internal consistency. We used the InStat 3 (Statistical Services Centre, University of Reading, Reading, UK) software for the statistical analyses; values of p < 0.05 were considered statistically significant.

Ethical Aspects

The study followed the Brazilian National Health Council resolution number 466/12 on research involving humans. The Ethics Committee of Universidade Federal de São Paulo approved the study project (process 05889712.0.0000.5505). All participating residents gave online informed consent when registering for the course.
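As a generic sketch of the internal-consistency statistic reported for the satisfaction questionnaire above (the authors used InStat; the score matrix here is invented), Cronbach's α can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 1-5 satisfaction ratings from four respondents on three items.
scores = np.array([[5, 4, 5],
                   [4, 4, 4],
                   [3, 2, 3],
                   [5, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # -> 0.91
```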
A total of 219 residents enrolled in the course, and 143 (65.3%) completed all activities. The mean age of the participants was 28 (±2.1) years. Most of them (188, 85.8%) were female, and 162 (74.0%) were in the first 3 years of residency (R1, R2 and R3). The mean age of the 143 participants who completed the course was 27.9 (±2.1) years; 125 (87.4%) of them were female, and 116 (81.1%) were in the first 3 years of their residency. The participants' sociodemographic data and their baseline knowledge about sexuality were presented in a previous publication. Briefly, most of the residents reported that they did not have any sexology classes during their medical graduation (62.5%) or medical residency (52.1%), and the majority (84%) stated that they lacked specific knowledge about sexuality to help them manage their patients. The mean sexuality knowledge scores of the 143 residents who completed the course increased significantly from 4.4 (±1.6) at baseline to 6.0 (±1.3) at the end of the course (maximum grade: 10; p < 0.0001). Most of the participants (74.1%, 106/143) reported that the course met their expectations, and 81.1% (116/143) would recommend the course to a friend ( ).
According to the findings of this study, the online course about sexuality for Ob/Gyn residents was effective in increasing the participants' specific knowledge about the topic, and the course was assessed by the residents as good. The high level of participant satisfaction could be due to the multidisciplinary team of lecturers, as this has previously been reported by residents in other educational training activities. The main reason given by the 219 Ob/Gyn residents for enrolling in the course was their perceived need to complement their medical education on sexuality, reported by two thirds of the participants. This was also reported in a previous study by our team, which involved 154 residents of different specialties (Ob/Gyn, psychiatry and internal medicine) at UNIFESP-EPM. In that study, almost all residents (97%) declared that they would like to participate in educational activities to increase their knowledge in this area. These findings suggest that Brazilian residents are acutely aware of their lack of formal training about sexuality during their medical education and residency programs. However, this is not exclusive to Brazil, and has also been reported by international studies. The significant increase in test scores indicates that this distance course contributed to increasing the participants' knowledge about sexuality. Similar results were reported by Yolsal et al in a 3-day, 20-hour on-site course involving 163 Turkish medical residents of different specialties. The authors also reported significant differences in the mean total scores of knowledge about sexuality before and after the course, and that the residents felt more prepared, motivated and confident to manage sexual issues after the course. Specific knowledge about sexuality is important for obstetricians and gynecologists to make them feel more capable and confident when handling questions on this topic with pregnant couples, thus potentially optimizing the care given to their patients during this period of their lives. According to previous studies, bringing up, asking, informing and providing counseling about sexuality during pregnancy can increase the couple's quality of life in terms of sexual health. The limited capacity of Ob/Gyn residents to deal with their patients' sexual symptoms, and their confidence in online education, are also common in other countries. American researchers conducted an online survey involving 234 third- and fourth-year medical residents to assess their knowledge and confidence regarding female sexual function and dysfunction. The majority of the respondents felt inadequately trained, and reported that they believed their confidence in caring for patients with sexual problems would increase with lectures (97.9%) and online modules (90.6%). Online distance courses offer several benefits, such as the possibility of learning at one's own rhythm and time availability, the comfort of being able to watch the video lectures as many times as one wishes, and savings in time and money, since the participant does not have to travel to another location to participate in educational activities. This study had several strong points. Firstly, it is the largest Latin American study to investigate the training, attitude and experience of Ob/Gyn residents regarding sexuality during pregnancy. It is also the first publication about an online sexuality course. However, this study had several limitations.
First, we had to create an unplanned "motivational strategy", recruiting 11 tutors to help increase the number of participants who concluded the course. It is possible that this change in our protocol influenced the results of the knowledge acquisition and course satisfaction scores, but we cannot infer the extent of this effect. A second limitation is that, due to its exclusively theoretical nature, this type of course could not address all the practical difficulties that healthcare professionals face when dealing with sexual problems reported by patients; this would demand more personalized coaching and practical face-to-face training with the students. Additionally, we did not assess post-course knowledge retention or the actual usefulness of the course in improving the participants' skills and confidence in dealing with sexuality with their patients months after the course. This successful experience can serve as a model for other investigators interested in promoting similar educational interventions on sexuality for medical residents in Brazil and elsewhere. This kind of initiative could help future obstetricians, gynecologists and other professionals improve the care provided to pregnant couples. More research is needed to confirm the findings of the present study about the effectiveness of online educational interventions in increasing the knowledge of young physicians about sexuality.
Acral melanoma is a distinct subtype of melanoma that most commonly affects the Asian population and has worse survival than other cutaneous melanomas . Acral melanoma may be particularly difficult to distinguish from acral nevus by histopathology, and ancillary methods that help establish the diagnosis may be useful. The CCND1 gene, located on chromosome 11q13, is a proto-oncogene transcribed into the protein cyclin D1; cyclin D1 forms active complexes with CDK4/CDK6, resulting in phosphorylation of the retinoblastoma protein (Rb), which drives the G1-to-S phase transition . Abnormalities of the CCND1 gene are found in some malignant melanocytic tumors, and especially in acral melanoma . In acral melanoma, most CCND1 abnormalities are characterized by an increase of the gene copy number, and CCND1 copy number changes are not found in acral melanocytic nevi . A fluorescence in situ hybridization (FISH) panel including CCND1 has proved to be an effective means of distinguishing benign and malignant melanocytic tumors, including acral melanocytic tumors [ – ]. Gene copy number increase in a cancer-promoting driver gene may result in protein overexpression, as in human epidermal growth factor receptor 2 (HER2) on chromosome 17 . In breast cancer and gastric cancer, there is good correlation between HER2 gene copy number increase and protein overexpression, which allows use of immunohistochemistry (IHC) in these tumors as a method for preliminary screening before resorting to FISH . We wished to determine whether increases in CCND1 gene copy number and cyclin D1 protein expression are correlated in acral melanoma. If so, IHC has the potential to serve as a preliminary screening method that is both technically easier and more economical than FISH. The aim of this study was to evaluate the consistency of CCND1 copy number increase with cyclin D1 protein expression in acral melanomas, and to assess the potential role of cyclin D1 IHC as a preliminary screening method for CCND1 FISH. For this purpose, we evaluated 61 acral melanomas for CCND1 copy number alteration and cyclin D1 expression.
Patients

A total of 61 successive and unselected cases of acral melanoma were collected from the Department of Pathology, School of Basic Medical Sciences, Third Hospital, Peking University Health Science Center from January 2013 to October 2018. In addition to these 61 acral melanomas, 26 benign acral melanocytic nevi were also collected and evaluated. All specimens were fixed in formalin and embedded in paraffin. Two pathologists (Jianying Liu and Jing Su) read these cases independently to confirm the diagnoses. This study was approved by the Research Ethics Committee, Peking University Health Science Centre, Beijing, China.

Fluorescence in situ hybridization and signal measurement

CCND1 FISH analysis was conducted as previously described, using the Vysis Melanoma FISH Probe Kit purchased from Abbott Molecular Inc. (Des Plaines, IL, USA) . After hybridization, FISH slides were screened at high magnification (×100 objective with oil immersion). A total of 30 non-overlapping intact tumor nuclei were counted for each slide, and the average copy number for the CCND1 gene site was calculated. When the average CCND1 copy number was ≥2.50, the tumor was considered to have an increase in CCND1 copy number: an average copy number ≥2.50 but <4.00 was classified as a low-level increase, and an average copy number ≥4.00 as a high-level increase.

Immunohistochemistry and evaluation of immunostaining

Cyclin D1 IHC was performed with a LEICA BOND-MAX system using a Cyclin D1 rabbit monoclonal antibody (Cell Marque, California, USA). The percentage of positive cells (nuclear staining) was scored by two pathologists (Jianying Liu and Jing Su) who were blinded to the FISH results. The average of the two pathologists' scores was used as the final IHC score.

Statistical analysis

The intraclass correlation coefficient of the IHC scores generated by the two pathologists (Jianying Liu and Jing Su) was calculated and was above 90%. The Bland-Altman plot (Fig. ) shows that the mean difference between the two pathologists was 0.8%, the standard deviation (SD) was 5.4%, and the limits of agreement (mean difference ± 1.96 SD) ranged from −9.9% to 11.5%, implying good agreement between the two pathologists. The correlation between CCND1 gene copy number and cyclin D1 protein expression was evaluated with Spearman correlation. The most effective cut-off score for cyclin D1 IHC (percentage of positive cells) for predicting FISH results was calculated with ROC curves. The specificity, sensitivity, positive predictive value and negative predictive value of using cyclin D1 IHC scores to predict CCND1 FISH results were calculated with binary logistic regression and the ROC curve. The relationship of CCND1 gene copy number alterations with patient gender and tumor ulceration was assessed with Pearson's chi-square (χ2) test. The relationship of CCND1 gene copy number alterations with other clinicopathologic parameters (patient age, Breslow thickness and Clark's level) was assessed with the independent t test. The relationship of cyclin D1 expression status with patient gender and tumor ulceration was assessed with nonparametric tests.
Immunohistochemistry and evaluation of immunostaining
Cyclin D1 IHC was performed with a LEICA BOND-MAX system using a Cyclin D1 rabbit monoclonal antibody (Cell Marque, California, USA). The percentage of positive cells (nuclear staining) was scored by two pathologists (Jianying Liu and Jing Su) who were blinded to the FISH results. The average score generated by these two pathologists was used as the final IHC score.
Statistical analysis
The intraclass correlation coefficient of the IHC scores generated by the two pathologists (Jianying Liu and Jing Su) was calculated and was above 90%. The Bland-Altman plot (Fig. ) shows that the mean difference between the two pathologists is 0.8%, the standard deviation (SD) is 5.4%, and the range of mean difference ± 1.96 SD is −9.9% to 11.5%. This indicates good agreement between the two pathologists. The correlation between CCND1 gene copy number and cyclin D1 protein expression was evaluated with Spearman correlation. The most effective cut-off score for cyclin D1 IHC (percentage of positive cells) for predicting FISH results was calculated with ROC curves. The specificity, sensitivity, positive predictive value and negative predictive value of using cyclin D1 IHC scores to predict CCND1 FISH results were calculated with binary logistic regression and ROC curve analysis. The relationship of CCND1 gene copy number alterations with patient gender, as well as tumor ulceration, was assessed with Pearson's chi-square (χ2) test. The relationship of CCND1 gene copy number alterations with other clinicopathologic parameters (patient age, Breslow thickness and Clark's level) was assessed with independent t-tests. The relationship of cyclin D1 expression status with patient gender, as well as tumor ulceration, was assessed with nonparametric tests. The relationship of cyclin D1 expression status with other clinicopathologic parameters (patient age, Breslow thickness and Clark's level) was assessed with Spearman correlation. All statistical analyses were performed using IBM SPSS Statistics 23 (USA). All p values were two-sided, and p values < 0.05 were considered statistically significant.
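For readers unfamiliar with the agreement analysis, the following is a minimal Python sketch of how Bland-Altman limits of agreement are computed from two raters' scores; the arrays are illustrative values, not the study data. With the reported mean difference of 0.8% and SD of 5.4%, the limits are approximately 0.8 ± 1.96 × 5.4, consistent with the stated range of −9.9% to 11.5% once rounding of the summary statistics is taken into account.

# Minimal sketch of a Bland-Altman limits-of-agreement calculation for two
# raters' IHC scores (percentage of positive cells). Arrays are illustrative.
import numpy as np

rater_a = np.array([15.0, 30.0, 60.0, 5.0, 80.0])
rater_b = np.array([12.0, 35.0, 55.0, 8.0, 78.0])

diff = rater_a - rater_b
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)  # sample standard deviation
lower = mean_diff - 1.96 * sd_diff
upper = mean_diff + 1.96 * sd_diff
print(f"mean difference = {mean_diff:.1f}%, "
      f"95% limits of agreement = [{lower:.1f}%, {upper:.1f}%]")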
Clinicopathologic characteristics
The clinical and pathologic features of the 61 acral melanoma patients evaluated in this study are summarized in Table . Thirty-two melanoma patients were male and 29 were female (male-to-female ratio 1.1:1). The median patient age was 62 years with a range of 22 to 87 years. Histologic subtypes included acral lentiginous melanoma (43/61, 70.5%) and nodular melanoma (18/61, 29.5%). The mean Breslow thickness was 4.3 mm (range 0.5 mm to 30.0 mm). Ulceration was observed in 27 cases (27/61, 44.3%). A total of 26 benign acral melanocytic nevi from 12 male and 14 female patients of ages 5 to 58 years (median age 29) were evaluated. These nevi were all of conventional type, and included 15 compound nevi and 11 intradermal nevi. The sites included palm (5/26, 19.2%) and sole (21/26, 80.8%).
CCND1 copy number alteration in acral melanomas
Thirty-two acral melanomas (52.5%, 32/61) showed no CCND1 copy number alterations (Fig. b and e). Twenty-nine acral melanomas (47.5%, 29/61) showed increased CCND1 copy number. Eight of these (8/61, 13.1%) showed low-level copy number increase (average copy number ≥ 2.5 and < 4.0, Fig. b and e) and 21 (21/61, 34.4%) showed high-level copy number increase (average copy number ≥ 4.0, Fig. b and e).
Cyclin D1 expression in acral melanomas
Nuclear cyclin D1 expression was found in all 61 acral melanomas using IHC. The median IHC score in acral melanoma was 30% (range: 1–95%). In acral melanomas with no CCND1 copy number alteration, the median IHC score was 15% (range: 1–80%) (Fig. c and f). In acral melanomas with low-level CCND1 copy number increase, the median IHC score was 25% (range: 3–90%) (Fig. c and f). In acral melanomas with high-level CCND1 copy number increase, the median IHC score was 60% (range: 1–95%) (Fig. c and f). The median IHC score for acral nevi was 10% (range: 1–30%).
Comparison of CCND1 copy number alteration and cyclin D1 protein expression in acral melanomas
The correlation of CCND1 gene copy number and cyclin D1 protein expression is shown in Fig. . Cyclin D1 protein expression level shows no correlation with CCND1 copy number in acral melanomas with no CCND1 copy number alteration or low-level copy number increase ( P = 0.108). Cyclin D1 protein expression level correlates positively with CCND1 copy number in acral melanomas with high-level CCND1 copy number increase ( P = 0.038).
Using cyclin D1 IHC score to predict CCND1 FISH result
Using ROC curves, we found that 27.5% is the most effective cyclin D1 IHC cut-off for predicting CCND1 FISH results, with a sensitivity of 72.4% and a specificity of 62.5%. The positive predictive value is 63.6% and the negative predictive value is 71.4%. The cyclin D1 IHC score therefore does not reliably predict CCND1 copy number alterations.
Correlation of FISH and IHC results with clinicopathologic characteristics
CCND1 copy number increase is associated with Breslow thickness ( P = 0.043) in invasive acral melanomas. CCND1 copy number changes were not associated with the other clinicopathologic parameters evaluated, including patient age ( P = 0.128), gender ( P = 0.509), ulceration ( P = 0.815), and Clark's level ( P = 0.887). Furthermore, there was no evidence of association of cyclin D1 expression with these clinicopathologic parameters, including patient age ( P = 0.114), gender ( P = 0.358), Breslow thickness ( P = 0.990), ulceration ( P = 0.198), and Clark's level ( P = 0.661).
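As a sanity check on the predictive values above, the reported sensitivity and specificity can be unpacked into an approximate 2×2 table: with 29 FISH-positive and 32 FISH-negative tumors, a sensitivity of 72.4% implies 21 true positives and a specificity of 62.5% implies 20 true negatives. The short Python sketch below reproduces the reported PPV and NPV from these counts; the table is our reconstruction from the published summary figures, not the raw data.

# Reconstructing the approximate 2x2 table implied by the reported figures
# (29 FISH-positive and 32 FISH-negative tumors). This is an inference from
# the published summary statistics, not the underlying raw data.
fish_pos, fish_neg = 29, 32
tp = round(0.724 * fish_pos)   # 21 IHC-positive among FISH-positive cases
tn = round(0.625 * fish_neg)   # 20 IHC-negative among FISH-negative cases
fn = fish_pos - tp             # 8 false negatives
fp = fish_neg - tn             # 12 false positives

ppv = tp / (tp + fp)           # 21 / 33 = 63.6%
npv = tn / (tn + fn)           # 20 / 28 = 71.4%
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")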
In this study, we aimed to explore the relationship between CCND1 copy number alteration and cyclin D1 protein expression in acral melanoma, and to determine whether cyclin D1 IHC may be used as a surrogate for direct evaluation of increases in CCND1 copy number. Our results show that high-level CCND1 copy number increase correlates well with cyclin D1 protein expression in acral melanoma. However, low-level copy number increases show no correlation with protein expression. The sensitivity (72.4%), specificity (62.5%) and positive predictive value (63.6%) of using the IHC score to predict FISH results are poor. Cyclin D1 IHC therefore cannot be used as a surrogate for direct evaluation of increases in CCND1 copy number. Our results are consistent with the possibility that CCND1 copy number increase induces high cyclin D1 expression and promotes progression in acral melanomas with high-level CCND1 copy number increase. For acral melanomas with low-level CCND1 copy number increase, however, the copy number increase is most likely merely a result of the genetic instability that occurs during tumor progression and does not induce an increase in protein expression . Acral melanoma is the main melanoma subtype affecting the Asian population and occurs in glabrous acral skin such as the palms, soles, and nail apparatus . The genomics of acral melanoma are distinct from melanomas at other cutaneous sites . CCND1 copy number increase is known to occur more commonly in acral melanomas than in melanomas at other cutaneous sites [ – ]. However, the sensitivity of CCND1 FISH for evaluation of acral melanocytic tumors is not high, and this relatively low sensitivity may result from the high heterogeneity of melanoma . Both whole-genome mutation landscape and targeted genomic profiling studies demonstrate diverse oncogenic processes and genetic alterations in acral melanomas . In our cohort, as many as 37.5% (12/32) of cases without CCND1 gene copy number increase showed high cyclin D1 protein expression, similar to the findings of a previous study . In the absence of a DNA copy number increase, gene overexpression may result from other mechanisms . Factors other than copy number, including transcriptional, post-transcriptional and translational regulation, may influence cyclin D1 expression in melanoma . At this time we do not know the exact mechanism of high cyclin D1 expression in the absence of an increased CCND1 copy number in acral melanoma; this will be explored in our future research. It is also noteworthy that in our high-level CCND1 copy number increase group, three cases (3/21, 19.0%) showed low protein expression. Cyclin D1 protein expression is regulated by a complex network, and the mechanism by which low protein expression occurs in the context of a high-level increase in gene copy number is unknown. In tumors with low-level CCND1 copy number increase, five cases (5/8, 62.5%) showed low protein expression, indicating that low-level CCND1 copy number increase does not lead to an increase in protein expression in most cases. The copy number increase may be caused by polyploidy; when a CCND1 copy number change is interpreted, it should therefore be expressed in relation to one or more of the other FISH probes used. In our cohort we found that CCND1 copy number increase was associated with Breslow thickness in invasive acral melanomas; that is, invasive acral melanomas with CCND1 copy number increase tend to be thicker.
This observation suggests that CCND1 alterations may be linked to acral melanoma progression and have prognostic relevance in acral melanomas. Breslow thickness is in general the most important parameter for determining prognosis in melanoma. In our cohort, some cases were consultation cases for which we failed to obtain information such as nodal status and overall survival, which are more directly correlated with prognosis; we recognize this as a limitation of this study. In summary, we found that in acral melanomas with high-level CCND1 copy number increase, IHC correlates well with FISH, while in cases with low-level CCND1 copy number increase or no CCND1 copy number alteration, no correlation was found. Using cyclin D1 IHC to predict CCND1 copy number changes detectable by FISH is not reliable. Our findings suggest that IHC is not feasible as a surrogate for direct evaluation of CCND1 gene copy number alteration.
|
Hounsfield unit change in metastatic abdominal lymph nodes treated with combined hyperthermia and radiotherapy | 89e1408a-1a23-4215-a7da-e16540256b6c | 11936282 | Pathologic Processes[mh] | Hyperthermia (HT) refers to the intentional administration of elevated temperatures to the human body, or to specific anatomical regions, to manage various medical conditions. Historical records indicate that HT has been used for centuries. In the 1960s, researchers initiated investigations into the potential use of regulated heat to treat cancer . Dr. Gordon Dewhirst, an American oncologist, conducted pioneering investigations in 1969 concerning the application of HT to potentiate the impact of radiation therapy (RT) on tumor cells . His work established the foundational framework for the integration of HT into oncological therapeutic strategies. In current medical practice, HT is often used as an adjuvant therapeutic technique in combination with treatments including radiotherapy and chemotherapy . The objective is to augment the efficacy of these interventions by heightening the susceptibility of neoplastic cells to radiation or by optimizing drug delivery. The combination of HT with RT (HTRT) is an established therapeutic approach in the management of malignancies . HT can enhance oxygen levels within the tumor microenvironment. RT is more effective in an oxygen-rich environment, as oxygen molecules play a pivotal role in the generation of the reactive oxygen species responsible for inducing DNA damage within malignant cells. Consequently, HT amplifies the potency of RT by enhancing tumor oxygenation. HTRT is commonly used when cancer has metastasized to the abdominal lymph nodes (LNs) . HT augments the susceptibility of neoplastic cells within the LNs to the effects of RT, rendering them more receptive to the injurious impact of ionizing radiation . Abdominal LNs are implicated in a variety of cancer types, including colorectal, ovarian, pancreatic, and gastric cancer [ – ]. The dissemination of cancer cells to the abdominal LNs is identified as regional LN involvement or metastasis. In such instances, the synergistic utilization of HT and RT holds the potential to enhance treatment outcomes. Necrotic changes in target tissues may occur after HT, as it affects blood flow to the treated area and causes ischemic necrosis . Heat-induced vascular changes reduce blood supply, leading to tissue damage and necrosis. In HT for cancer, inducing local necrosis of tumor tissue is often one of the therapeutic goals: HT causes necrotic changes in tumor cells, leading to cell death and tumor destruction. Combining HT with other treatment modalities, such as RT or chemotherapy, further increases the effectiveness of tumor destruction and improves treatment outcomes . Necrosis is the death of cells or tissues, and on medical imaging it is seen as areas of nonviable tissue. Hounsfield units (HU), which can be used to observe necrotic changes, are measured on computed tomography (CT) scans to quantify the radiodensity of tissue and provide useful information about the composition and properties of the imaged structures . HU values on CT scans allow the detection and evaluation of areas of necrosis and other pathological changes within the body. In studies evaluating imaging changes, HU values have been used to examine tumor necrosis response to various cancer treatments . The study reported by van der Veldt et al.
used HU values for the early prediction of clinical outcomes in patients with metastatic renal cell cancer treated with targeted therapies such as sunitinib . These criteria (known as the Choi criteria) rely on changes in tumor enhancement patterns observed on contrast-enhanced CT scans. They were developed to assess the effectiveness of antiangiogenic therapies, which are known to influence blood flow and vascularity within tumors. While the clinical benefits of HTRT have been documented, there is limited research examining the radiological changes, particularly HU alterations, in metastatic lymph nodes treated with this combined approach. Understanding these imaging biomarkers could provide valuable insights into treatment response assessment and potentially guide therapeutic decision-making. In this study, necrotic changes in abdominal LNs after HTRT were evaluated by CT before and after treatment and compared with those after conventional RT alone.
CT scans were acquired prior to and subsequent to treatment in a cohort of 40 consecutive patients undergoing either combined HTRT or RT alone for metastatic abdominal LNs from 01/01/2019 to 31/03/2022 at Seoul St. Mary's Hospital. The decision between HTRT and RT alone was made based on several factors, including previous radiation history to the treatment area, the patient's performance status, and the feasibility of delivering conventional radiation doses. HTRT was primarily selected for cases where conventional full-dose RT was challenging due to prior radiation exposure or compromised performance status. The REMISSION 1°C device (AdipoLABs, Seoul, Republic of Korea), which was employed for hyperthermia, is a medical high-frequency thermogenic instrument designed to augment cancer therapy through the generation of potent deep-seated heat, operating at a high frequency of 0.46 MHz to elevate internal body temperatures. Target delineation was generated with the MIM 7.1.9 workstation (MIM Software Inc., USA), and subsequent HU measurements were derived from the designated target regions. Contrast-enhanced CT images were acquired using 64-detector row CT scanners (Somatom Sensation 64; Siemens Healthineers or Discovery CT750 HD; GE Healthcare) or a 128-channel CT scanner (Somatom Definition AS + ; Siemens Healthineers). The CT protocols were as follows: 5-mm section thickness, 100-200 mAs with automated tube current modulation, and 100-120 kVp. For every individual in the cohort, the average HU value within the tumor region was computed from both the pre- and post-treatment CT scans, and the average HU values were determined independently for each group. The null hypothesis (H0) was that no disparity exists in average HU values between the HTRT and RT-alone groups. A two-sample Student's t-test was employed to assess whether there was a statistically significant difference in average HU values between the two groups. The correlation between HU changes and various clinical variables, including treatment modality (HTRT vs. RT alone), radiation dose, patient age, sex, and primary tumor site, was further evaluated. These variables were selected based on their potential influence on treatment response and tissue density changes. We investigated the correlation between these variables and HU values using a linear regression model. To determine whether the HU change between the two groups was statistically significant, we examined the p-value associated with the coefficient; a p-value below the significance threshold of 0.05 implied a disparity in HU values between the two groups. The study was approved by the Seoul St. Mary's Hospital institutional review board (No: KC23RISI0567). Informed consent was waived because of the retrospective nature of the study and the analysis of anonymous clinical data. Data were accessed for research purposes on 27/07/2024.
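The following is a minimal Python sketch of the comparison and regression described above, assuming per-patient mean HU values have already been extracted from the delineated target regions; the arrays and all numbers are illustrative placeholders, not study data.

# Minimal sketch of the HU-change comparison and regression described above.
# Per-patient mean HU values are assumed to be pre-extracted; all values
# below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

pre_htrt = np.array([70.0, 85.0, 60.0, 90.0])
post_htrt = np.array([55.0, 80.0, 40.0, 75.0])
pre_rt = np.array([72.0, 88.0, 65.0, 95.0])
post_rt = np.array([70.0, 90.0, 60.0, 96.0])

# Percent change in mean HU per patient
change_htrt = 100.0 * (post_htrt - pre_htrt) / pre_htrt
change_rt = 100.0 * (post_rt - pre_rt) / pre_rt

# Two-sample t-test on the HU changes between groups
t_stat, p_val = stats.ttest_ind(change_htrt, change_rt)

# Simple linear regression of HU change on treatment modality
# (HTRT coded 1, RT alone coded 0)
modality = np.concatenate([np.ones(len(change_htrt)), np.zeros(len(change_rt))])
change = np.concatenate([change_htrt, change_rt])
slope, intercept, r_value, p_reg, std_err = stats.linregress(modality, change)

print(f"t = {t_stat:.2f}, p = {p_val:.3f}; regression slope p = {p_reg:.3f}")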
The characteristics of the 20 patients who underwent combined HTRT and the 20 patients who underwent RT alone are summarized in . No statistically significant differences in clinical characteristics were observed between the two groups. However, the effective biological dose of the therapeutic radiation was notably lower in the HTRT group. The HTRT combination was applied in situations where full-dose irradiation with radiation therapy was challenging, such as areas previously treated with radiation or patients with compromised performance status. The objective of this study was to evaluate the imaging effects of combined hyperthermia and relatively lower-dose radiation compared to a conventional dose of RT alone. Notably, no statistical differences were noted in the average HU value prior to treatment or in the volume of the treated region. In this study, average HU values and changes in tumor volume were calculated for 20 patients treated with HTRT and 20 patients treated with RT alone. The median of the per-patient average HU after treatment was 58.95 HU (range: 15.03–136.57 HU) in the HTRT group and 71.42 HU (range: 37.53–144.41 HU) in the RT-only group. The average HU value of the tumor was reduced by 9.05% (range: −80.30% to 29.94%) with a median change of −8.47 HU in the HTRT group, whereas it decreased by 0.57% (range: −23.14% to 71.03%) with a median change of −0.41 HU in the RT-only group (p = 0.011). shows a graphical representation of the HU values before and after treatment for each of the 40 patients. Changes observed before and after each treatment are depicted in . The number of patients who showed reduced HU values in the HTRT group (16 patients, 80%) was greater than that in the RT-alone group (10 patients, 50%), and the degree of reduction was significantly greater in the HTRT group. depicts clinical features of good radiologic response in the combined hyperthermia-radiotherapy group. Using a linear regression model, we investigated the relationship between treatment modality (with vs. without combined hyperthermia) and changes in HU values; the change in HU values differed significantly by modality (p = 0.023). In contrast, radiation dose was not significantly associated with the change in HU value (p = 0.237). The detailed results of the linear regression between HU changes and treatment modality and demographic characteristics are summarized in .
This study aimed to evaluate radiological changes, specifically HU alterations, in metastatic abdominal lymph nodes treated with HTRT compared to RT alone. Our findings demonstrated that the HTRT group showed a significantly greater decrease in HU values (9.05% reduction) compared to the RT-alone group (0.57% reduction), suggesting more pronounced necrotic changes in the combined treatment group. When a high-frequency current is applied to the human body, heat is generated in the tissues; this is called 'deep-seated heat' . When high-frequency electrical energy is applied, each change in the direction of the current causes the molecules constituting the tissue to vibrate and rub against each other, generating bioheat through rotational, twisting, and collision motion. Unlike other types of current, high-frequency current does not stimulate sensory and motor nerves and can therefore heat specific areas of body tissue without causing muscle contraction . The temperature generally known to restore tissue function is 42°C . The European Society for Hyperthermic Oncology (ESHO) has established quality standards for hyperthermia interventions. A pivotal criterion is direct temperature measurement within the tumor, validating that the target volume is heated to the specified range of 40-43°C . When the local tissue temperature rises above 40°C, arterial and capillary dilation occurs as a direct effect, blood flow increases, the body's defense mechanisms and blood circulation are promoted, and metabolism is enhanced. Capillary blood flow with deep heat generation is 4-5 times higher than at rest . In addition, the supply of oxygen, nutrients, antibodies, and leukocytes increases, and the hydrostatic pressure of the capillaries increases due to vasodilation, promoting lymphatic circulation. The results of this study showed that, with combined HTRT, the progression of necrosis was more pronounced, as reflected by a greater reduction in HU. The combination of HTRT can have a synergistic effect on tumor tissue, potentially resulting in necrotic changes. HT can sensitize tumor cells to the effects of RT . When cells are exposed to both high heat and radiation, the combined stresses destroy cell structure and function more effectively than either treatment alone. This can lead to more extensive cellular damage and, in some cases, necrosis. HT increases blood flow to the tumor site and potentially improves oxygen delivery. Improved oxygenation during RT intensifies cell damage by enhancing the formation of the free radicals (reactive oxygen species) produced by radiation. When oxygen demand exceeds supply, ischemia occurs, contributing to necrotic changes. The combination of HTRT also stimulates the immune system's response to tumor antigens released as a result of cell damage ; this immune response targets and eliminates damaged cells, causing necrotic changes at the treatment site. Several studies have shown that necrotic tumors, as indicated by low HU values on pretreatment CT, have a poor prognosis . However, there are few studies analyzing changes in HU across different treatment methods. Our study showed that combining hyperthermia with a relatively low radiation dose resulted in a marked reduction in HU, which likely reflects more extensive necrosis.
A response is often characterized by at least a 10 HU decrease in tumor attenuation, and the Choi criteria have shown significantly better predictive value for progression-free survival and overall survival in partial-response patients . The combination of HTRT can increase cellular stress, leading to programmed cell death; when stress is severe or repair mechanisms are inadequate, cells die. HT and RT can also affect the tumor microenvironment by promoting inflammation and disrupting the balance of signaling molecules; these changes contribute to necrotic processes within the tumor. This is a mechanism that is expected ultimately to result in good tumor control . Furthermore, during clinical follow-up, patients in the HTRT group showed notable improvements in quality of life, particularly regarding abdominal pain reduction and relief of mechanical obstruction. Although these clinical benefits were not systematically evaluated in this study, the observations suggest potential symptomatic advantages of HTRT that warrant further investigation using standardized assessment tools. Pain and symptom control is an important clinical goal in metastatic cancer patients who require palliative treatment. This study has major limitations. First, the retrospective nature of the study may introduce selection bias, although we attempted to minimize this through consecutive patient sampling and matching of key clinical characteristics between groups. Second, while our sample size was relatively small (n = 40), our findings showed statistically significant differences between groups, and power calculations suggest this was adequate for our primary endpoint; nevertheless, a larger prospective study would help validate these results. This study is meaningful in that it shows actual clinical results of combined HTRT. Notably, future studies on long-term local control and survival after combined HTRT are planned. In conclusion, this study demonstrated that HTRT induces greater reductions in HU values compared to RT alone in metastatic abdominal lymph nodes, suggesting enhanced necrotic changes. These findings provide radiological evidence for the potential benefits of adding hyperthermia to radiotherapy in selected cases. Future prospective studies with larger cohorts and longer follow-up are warranted to validate these findings and correlate them with clinical outcomes.
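As a simple illustration of the attenuation-based response rule mentioned above (a decrease of at least 10 HU), the following Python sketch classifies a lesion from its pre- and post-treatment mean HU; the threshold follows the text's description only, the full Choi criteria also incorporate change in tumor size, and all values are illustrative.

# Simplified sketch of the attenuation-based response rule referenced above:
# a lesion is classed as responding when mean attenuation falls by >= 10 HU.
# This illustrates only the threshold quoted in the text; the full Choi
# criteria also take tumor size change into account.
def hu_response(pre_hu: float, post_hu: float, threshold: float = 10.0) -> str:
    """Classify attenuation response from pre/post mean HU values."""
    return "response" if (pre_hu - post_hu) >= threshold else "no response"

# Illustrative example values
print(hu_response(pre_hu=70.0, post_hu=58.0))   # response
print(hu_response(pre_hu=70.0, post_hu=66.0))   # no response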
|
Perceptions of clinicians on promoting oral health care in an alcohol and other drug use health care service: A qualitative study | 29486477-1fbb-4d24-9d2c-bca9f249b119 | 11886495 | Health Promotion[mh] | Harmful or hazardous use of psychoactive substances, including alcohol and other drugs (AOD), contributes to several diseases, loss of productive years and premature mortality . The World Drug Report 2023 stated that around 296 million people aged between 15 and 64 reported drug use in 2021, with 39.5 million experiencing drug use disorders . In 2019, the National Drug Strategy Household Survey reported that approximately 43% of Australians aged 14 and over had illicitly used a drug at some point in their lifetime, and an estimated 16.4% had used an illicit drug in the previous 12 months . Alcohol occupies a significant place in Australian culture . The Australian Bureau of Statistics reported that one in four (5 million) people aged 18 years and over exceeded the Australian Adult Alcohol Guideline in 2020–2021 . AOD use is associated with a range of psychiatric disorders, disability and comorbidities . Among the various health problems associated with drug and alcohol addiction, oral health problems are highly prevalent and thus require more attention not only from dentists but also from AOD care providers . The use of AOD has a significant negative impact on oral health, resulting in a decline in oral health-related quality of life . People using AOD often experience compromised immune systems and a preference for sugary foods and beverages, which further worsen their oral health problems . This is manifested through dental diseases, including tooth decay, dry mouth and periodontal (gum) disease . Additionally, methadone (which is used for withdrawal management), along with other opioids and amphetamines, can reduce saliva flow and enhance sugar craving and bruxism (tooth grinding), resulting in an increased risk of tooth wear and decay . Alcohol and tobacco use is also linked with increased dental problems, while lower socioeconomic status and homelessness, which are more common among individuals who use AOD, can further impact oral health . Epidemiological studies examining oral health in people who experience AOD use disorders have reported a higher prevalence of dental and oral mucosal diseases compared to individuals who do not use AOD . A meta-analysis of 28 studies conducted in 2017 reported that dental diseases were significantly more common in illicit drug users than in those without a substance use disorder . The authors also reported that individuals who use AOD had an average of 3.5 more decayed teeth but fewer restorations compared to those who did not . Inadequate oral hygiene practices, including infrequent toothbrushing and flossing, and infrequent dental visits contribute to a higher risk of poor oral health among individuals experiencing a substance use disorder [ , , , ]. Moreover, evidence suggests that individuals with substance use disorders undergoing withdrawal management experience both individual and structural barriers to accessing public oral health services, such as anxiety and fear of dentists, daily struggles to attend appointments and perceived stigma from dentists .
Dentists also face several challenges when treating individuals experiencing a substance use disorder and often perceive the management of this client group as demanding due to various factors such as dental fear, difficulties in coping with appointments, and poor compliance with preventive measures . Dentists may also sometimes overprescribe opioid medications, which are addictive and prone to misuse, and this may ultimately contribute to poor oral health outcomes . In Australia, AOD treatment services comprise two care systems: (i) the general health service system, where similar treatments are provided through general practitioners (GP), psychologists, general hospitals and welfare services; and (ii) the specialist system, which offers various services such as withdrawal management, maintenance treatment and psycho-social therapies . After accessing an AOD treatment service, clients progress through withdrawal, rehabilitation, psycho-social therapy and maintenance pharmacotherapy, as determined by their individual needs . Considering the evidence from other studies that non-dental professionals can actively contribute to promoting oral health among at-risk populations, AOD clinicians also have an opportunity to initiate brief interventions [ , , ]. These could include educating patients about oral health risks, conducting a brief oral health screening and initiating dental referrals. Results from a few studies suggest that individuals who use AOD rarely receive oral health information and education from AOD clinicians . Currently, there is a dearth of evidence globally regarding the perceptions and practices of AOD clinicians towards oral health care, particularly within the Australian context. Therefore, we aimed to explore the knowledge, attitudes and practices of AOD clinicians regarding promoting oral health among AOD clients.
METHODS
2.1 Design, setting and population
We used an exploratory qualitative design involving semi-structured interviews with AOD clinicians working across a large Drug Health Service in Greater Sydney, New South Wales, Australia. The AOD clinicians work across a range of services, including opioid treatment, outpatients, substance use in pregnancy and parenting, general practice advice and support, hospital consultation and liaison, withdrawal management and blood-borne virus treatment. This area of Sydney was chosen because its population is culturally and linguistically diverse, and a significant proportion experience a high level of socioeconomic disadvantage and homelessness . A health report prepared by the South Western Primary Health Network reported a high prevalence of illicit drug use in some parts of South Western Sydney, where the population is estimated to increase by 29% (from 110,193 people to 141,673 people) by the year 2031 . Purposive sampling was used to recruit participants. Clinicians and other health professionals working in the Opioid Treatment Programs, the Harm Reduction programs, Counselling, Specialist Medical Consultation, the Withdrawal Management Program, and Assertive Youth Services were invited to participate in the study. Study flyers were electronically distributed to all staff at each of the facilities by research coordinators working within the service. Interested participants were directed (via the flyers) to contact the recruitment champions (PP and KF), who checked eligibility for recruitment. Participants were then handed over to the appropriate study investigator (TPN), who established rapport and organised a time for the interview to take place.
2.2 Data collection
Interviews were individually conducted by one researcher [TPN (MND)] with AOD clinicians (doctors and nurses) over a videoconferencing platform (Zoom) using an interview guide (see ) that was informed by a previous review in this area and experts in the field . Interviews were 15–20 min long and were audio-recorded. Written informed consent was obtained from all participants prior to the start of the interview. Audio files were transcribed using a professional transcription service.
2.3 Data analysis
All transcripts were then uploaded to qualitative data management/analysis software (NVivo 12 Pro) . A hybrid (deductive and inductive) approach to thematic analysis was conducted to identify and analyse contextual patterns and themes within the data . Initially, the transcripts were carefully reviewed multiple times to gain familiarity with the data and to record initial ideas. Using a deductive approach, an a priori coding framework was developed, informed by the semi-structured interview guide, to identify the major themes. Data were then coded into the major themes. Two researchers [SK (MPH) and AS (MPH)], who were trained in qualitative research, then independently recoded and regrouped the data using an inductive approach informed by Braun and Clarke to identify sub-themes . The coding structure was then further refined by two other researchers [AG (PhD) and KF (PhD)]. Team meetings were organised to discuss similarities and differences in the themes and interpretations, and consensus was achieved. An inductive approach guided the consensus meeting to help the team understand the perceptions and barriers of AOD clinicians towards promoting oral health. The findings are presented with the use of pseudonyms for doctors (D) and nurses (N).
2.4 Ethical considerations
This study received ethics approval from the South Western Sydney Local Health District Research and Ethics Committee (2021/ETH12072). The audio recordings and transcripts were stored on a password-protected computer as per institutional and ethics committee requirements. Participants were deidentified throughout the transcriptions to ensure anonymity and confidentiality, and numeric pseudonyms were used in the quotes from participants.
2.5 Trustworthiness
Various methodological techniques were employed to enhance the trustworthiness of the study. Interviews were conducted by a researcher (TPN) trained in qualitative research methodologies. Debriefings were organised with another researcher (AG) to discuss the completeness of the data and identify any potential new areas to explore in subsequent interviews, and continued until analytical sufficiency was achieved . A professional transcription service was used to enhance the accuracy of the verbatim transcriptions of the audio recordings. Two members of the study team (SK and AS) independently checked the data for accuracy and performed the coding. Coding consensus was achieved with the whole team. Adequate information about the participants, study settings and data collection is provided in the results, and the findings are supported by direct quotes from the participants.
RESULTS
Sixteen participants were interviewed, of whom seven were doctors and nine were nurses. No participants dropped out prior to interview. All participants worked in drug health services in various positions, including addiction specialists and trainees (n = 3), medical officers (n = 3), psychiatrists (n = 1), registered nurses (n = 1), clinical nurse consultants (n = 7) and nursing managers (n = 1). Two of the doctors also had general practice experience. Participants had a mean (SD) age of 47.29 years (±12.75) with varied clinical experience in drug health services (range 1–35 years). Most of the participants were female (n = 9). The thematic analysis of the data resulted in three major themes and eight sub-themes, which are outlined in Table .
3.1 Perceptions of providing oral health care among clients
3.1.1 High prevalence of poor oral health and its impact: 'nine out of ten patients have oral health issues'
Most participants reported that the majority of clients presenting to their services had oral health issues and needed dental care. This was mainly attributed to the use of alcohol and drugs, poor nutrition and a general lack of awareness about oral hygiene.
'I think most of our clients need dental work … So, I think most of our clients who do have an extensive drug history, have dental issues more often than just.' (N8)
'I'd say in the process that I just described to say nine out of ten patients have oral health issues.' (D2)
'I remember this one comment, that one of the patients said that, look, I would be using this toothbrush after seven or eight years.' (D3)
Most of the doctors and nurses recognised the role of oral health for their clients, citing that it 'has a big impact on their self-esteem', and that having poor oral health was also a reason for experiencing stigma from the community. Some people may believe that a facial appearance reflecting poor oral health (such as missing teeth) is directly caused by AOD use and/or AOD service treatments, such as methadone maintenance treatment, and this brings into play all the negative feelings that come from being labelled 'an addict'.
'It [oral health] really has a big role and big impact on their self-esteem, on the belief of being accepted in the community because even though they are on the program [methadone] and they are kind of having their life back on track, having very poor oral health, it's always being picked and kind of stigmatising them anywhere they go.' (D7)
In this context, one clinician also recognised the importance of oral health in securing and retaining employment in this population. It was also mentioned that poor oral health is a risk factor for serious health complications, as it might lead to sepsis or other chronic diseases.
'So, there are just a few prominent things, not to mention the psychosocial aspects, the stigma, the shame, the difficulty getting a job if you've got a mouth like that.' (D4)
'If you don't look after your teeth, you're more likely to die from a heart attack much, much earlier. If you don't look at the teeth, you [are] very open to symptoms like becoming septic. If you don't look after your teeth, every part of your body can become infected, including your brain.' (N4)
Two doctors and two nurses mentioned that opioid medications such as methadone might worsen oral health-related quality of life.
'But I do know that people that have that are on methadone can have more problems with their teeth. I think it might be related to saliva or something like that.' (N9)
However, one clinician reported that although some clients develop the misconception that methadone is the only reason for their poor oral health, they should also be counselled on the debilitating effects of substance use.
'When they come in to [see] me and they've been on methadone for six months and say "look at my teeth", you know, "look what methadone has done to my teeth". And I say, "probably this is the first time you've looked in the mirror for the last 18 years. Now that you're not running around seeking drugs and relatively stable, you've actually noticed what's been going on for two decades there. And if you want to blame methadone or well, but don't forget to blame heroin, which was working on that for a long time." So, you see those things all are addressable in counselling where you stratify what the risks are.' (D1)
3.1.2 Clinical practices regarding oral health: 'I don't always do it'
Most of the doctors and nurses reported that they do not initiate discussions about oral health with their clients unless prompted by the client about an oral health-related issue.
'It's truthful to say I don't always do it [talk about oral health]. And it's really good doing this interview because I'm aware of the times that I don't always do it, even though, you know, if you ask me, I go, "yeah, it's really important".' (D5)
Most nurses mentioned that when oral health complaints come up, they are mostly involved in referring cases to dentists or GPs.
'They'll always complain about, you know, they've got toothaches or infections or anything like that. And then we would recommend to get a referral to the dentist … And if they wanted some assistance with that, we can sometimes give numbers to any services that we do know that assists patients with accessing oral health care or we would suggest to go to GP.' (N7)
A few doctors noted that they typically conduct a brief oral health examination during their routine assessments, which not only ensures client satisfaction but also addresses any oral health concerns.
'So, I'll do the routine review, which will entail talking about the substance use, talking about usually the social aspects around it, how it affects their health, and also the physical and mental health aspects. And then usually towards the end we'll go through an examination, examine them neurologically, and then ask if we can have a look in their mouth, just as a simple oral health screening. And it makes them happy that for us to have a look and are eager to show that they've got these issues …' (D2)
A couple of nurses reported they were proactive regarding oral health and made an effort to ask about oral health at their first meeting with a client.
'We know for a fact that oral health and the rest of your body suffers if we don't look after it … Every patients that sees me, I look at their teeth.' (N4)
'I try to have that really non-judgmental approach in terms of, you know, often phrased it in. You know, tell me a little bit about your teeth.' (N3)
A few doctors mentioned that they provide oral health education to their clients, for example by promoting toothbrushing habits and educating clients about brushing techniques.
'Make sure you've got that toothbrush up [and] you're spending 2 minutes really massaging them, then it should feel like this … instead of scratching. That seems to get across to people what the sensation of brushing gums should be like.' (D4)
3.2 Barriers to promoting oral health care in AOD settings
3.2.1 Limited oral health training and time constraints among clinicians: 'we've never been trained …'
Most clinicians cited time constraints as the main reason for being unable to broach the topic of oral health with their clients. Two doctors mentioned the importance of taking a holistic approach with drug health clients, noting that there would always be several issues to cover during a consultation.
'It [asking about oral health] can fall off my own radar because there are so many issues that demand attention … And I do work very holistically, and a lot of things out there have to be popped into that holistic space … I think there's a load of issues that need to be covered and that relates to time.' (D5)
Of all participants, three clinicians reported that their knowledge about oral health was limited, as they had never received any dental training during their medical courses. One of them also went on to say that they felt 'under-resourced' and 'under-informed' in relation to the guidelines around dental management.
'I'd say as a medical professional, I think we're limited in the extent we don't know the complexities of beyond the simple dental emergency. So, I would say, the times we are probably over treating things in terms of antibiotics and over treating things in terms of pain management, just because it is a bit of black box understanding, what's the underlying root of the problem. So, in that way, I do feel as medical professionals in general, we're under-resourced or under-informed as to the guidelines around dental management.' (D2)
'Well, I think the big issue is that and I blame medical faculties—we've never been trained in oral health and we've never had lectures on oral health. There are so many areas that we don't get trained in. We get made to do incredible learning with stuff that we'll never see or never use in our lives. But when it comes to sort of having a dental day as part of your medical training hasn't occurred and it's relevant, it's certainly relevant.' (D1)
3.2.2 Perceived lack of priority, accessibility and affordability of dental care services for clients: 'I think that's kind of low down on their priority list …'
Most clinicians mentioned that their clients would generally refrain from accessing dental treatment. They attributed this mainly to the low prioritisation of oral health, as clients would mostly have other pressing issues to deal with, such as homelessness or drug dependence.
'Maybe in their own mind, if they haven't got acute pain from a tooth problem. If my housing is not good and I don't have a roof over my head and I'm worried about my kids … or I'm really not on top of this ice dependence yet. I just think it [oral health] slips down their own internal priority list.' (D5)
'It's not that they are ignorant. No, it's just that it's not a priority for others. But keeping the kids is more of a priority.' (N4)
Another frequently cited barrier was the high cost of dental care and treatment. It was also mentioned that this issue is more pronounced for clients from low socioeconomic backgrounds, increasing their financial vulnerability.
'I think that's kind of low down on their priority list and with them a lot of dental, you know, being private and hard to access. Obviously, the cost is very difficult for so many of our more vulnerable clients.' (N8)
'And especially now, the hurdles are getting harder because, as you know, most private dentists ask astronomical fees [which is] totally unattainable by our clientele.' (D1)
'Do you know how long it takes to get your teeth fixed? Far too long. Do you know how much it costs to get your teeth fixed? Far too much.' (N4)
Even public dental services, which are free for low socioeconomic communities, were difficult for clients to access. Some clinicians were even unsure of existing public dental referral pathways and how to access them.
'There's been a couple of times that I tried to ring like it's a centralised number or something. Some public health … and it went basically no where.' (N1)
'I don't remember any referral process here [for dental]. Maybe something that we need to have as well.' (N2)
'We don't have a system to refer people specifically into public dental and oral health care.' (D4)
3.3 Recommendations for oral health integration into AOD settings
3.3.1 Oral health education and screening: 'I think it is our responsibility to be screening and identifying conditions …'
Most physicians mentioned that it would be highly appropriate for them to provide oral health education in AOD clinics, as they considered that looking at their clientele's health and wellbeing holistically would be an effective way of identifying health conditions and preventing further problems.
'I think it is our responsibility to be screening and identifying conditions, of course, not necessarily knowing the best course of treatment, but at least knowing the care pathways and how to get people to care in the best way.' (D1)
'It's very appropriate because we are trying here to have this holistic approach to the patient's well-being … we are trying to address the reason and we're trying to help them have a better lifestyle.' (P7)
A couple of doctors mentioned that oral health education should be provided after the acute issue of drug dependence has passed and once clients are stable.
'Once the acute issue of the drug issue is beginning, you know that people are more physiologically stable to include it in maybe, package of information about general health issues and preventative health … So, I just want to talk to you about immunisation and so that dental health is part of a little package of really obvious preventative health things.' (D5)
'I explained why it's [oral health] important and can affect their health. But ultimately, I do defer to them to prioritise that because often they have competing issues and it's about seeing where that fits in relation to those other issues as to what they pursue, whether it be housing issues or domestic violence issues or substantive issues or other health issues. So, I can only raise it as an issue if they find it important and they'll usually ring.' (D2)
Half of the nurses also mentioned that integrating oral health into routine assessment might be an effective way of asking about oral health issues.
'I think an [oral health] assessment would be good … I mean, if it's just kind of asking, "do you have any oral health issues and can I help you link you into a service?", I think that's completely appropriate.' (N8)
And then we do the follow up reviews.' (N2) A few doctors and nurses also suggested that oral health education and screening should be provided by all health professionals who see clients in AOD settings, as this would ensure a consistent approach to delivering oral health promotion to clients. 'It should be everyone. This is a service where we have a multidisciplinary approach and we sit and we discuss patients. So, if I'm overlooked at all, I missed it. The nurse or the caseworker will come and raise it and we'll see how we can help it.' (D7) 'I think they should all be able to do it. Everyone can have a look and ask a question … Everyone's got a responsibility.' (N4) 3.3.2 Tailored resources and referral pathways: 'We don't have a system to refer people' Most physicians suggested that resources such as brochures, patient education videos, and protocols or guidelines for dental referrals would be helpful in promoting oral health. 'I think having a brochure or a poster might be useful, but again, it's difficult because we have quite a few things here. But I think some sort of poster at least for the dental clinic information. I think it would be quite handy to have in a waiting room and just have people know that the services that they can easily access. So, I think something simple just to make it a friendly environment where people know that they can approach it, it's a starting point.' (P2) A few nurses emphasised that 'it's very much about just having the conversation' (N1). However, providing a resource that was 'clear and concise and simple …' (N1) and 'given in a sensitive way …' (N3) was also important. Noting that current referral pathways for dental treatment are associated with long waiting periods in the public dental hospitals, most physicians and nurses stressed the importance of having a seamless referral pathway between the AOD clinics and public dental clinics. 'So, I think we could have … a sheet of how to get into the dental referral stuff, because we all know that the system has a lot of delay, you know, and particularly it doesn't really fast track our clients.' (P1)
DISCUSSION This study explored the knowledge, experiences and perceptions of AOD clinicians (doctors and nurses) regarding oral health among their clientele. To our knowledge, this is the first study to explore the perceptions of doctors and nurses in an Australian AOD service setting. The AOD clinicians in this study recognised that poor oral health was a significant problem among their clients, and these findings reaffirm current knowledge in this area. Previous literature has shown that individuals who have a substance use disorder have more oral health problems, higher unmet dental treatment needs and reduced oral health-related quality of life compared with other populations. Similar to other studies, the findings also show that oral health concerns are not often discussed by AOD clinicians during the course of hospital admission or at treatment services, owing to various barriers. One of the key barriers identified was the perceived lack of accessibility and affordability of dental care services for clients. Private dental services were viewed as highly cost-prohibitive for clients, which has been echoed in other studies where low prioritisation of dental care is attributed to lifestyle and socioeconomic factors. As private dental services are unaffordable for many people who have substance use disorders, more clients undergoing rehabilitation or medication will need AOD clinicians to provide basic oral health advice and appropriately refer them to public dental services. This indicates the importance of integrating oral health protocols into general assessments and creating referral pathways through which AOD clinicians can easily refer clients to public dental services. Some AOD clinicians in our study, however, also acknowledged that they were not aware of existing referral pathways to public dental services. In New South Wales, Australia, the Oral Health Fee for Service Scheme provides free dental care to eligible clients with substance use disorders through public dental services. Due to limited capacity and long waiting lists, clients are issued vouchers to receive dental care from private practitioners registered under the scheme. However, clients often face challenges such as finding suitable providers, having their cultural needs understood, and miscommunication. Additionally, there is a lack of data on the uptake of the Oral Health Fee for Service Scheme among this population, and potential issues include choosing providers, making appointments and a lack of trauma-informed care practices. As public dental services in Australia prioritise emergency dental needs, this indicates the need for guidelines and simple, explicit protocols for advice, screening and referrals. Recently, a Parliament Inquiry into the provision of and access to dental services in Australia invited submissions from national and state-level peak bodies and organisations, which discussed the unaffordability of, and other systemic barriers to, dental treatment experienced by socioeconomically disadvantaged populations such as AOD clients. Strategies such as integrating oral health care into general health care, extending coverage under Medicare (the universal health insurance scheme for all Australians) and boosting resources for public dental services were some of the initiatives discussed. These strategies need to be progressed further and implemented to address this important barrier to oral health care for AOD clients.
It is equally important to ensure dental referral pathways provide appropriate care, as AOD clients can experience overt or subtle stigmatisation in dental settings, which may further deter them from seeking dental treatment. Another important barrier was the lack of oral health knowledge and training among AOD clinicians. One of the contributing factors is the limited oral health training provided in undergraduate medical/nursing programmes, which was clearly highlighted in the study findings and previous research. A survey of 132 general practitioner training programme directors in the UK revealed that the majority of programmes (71.2%) did not provide any structured oral health training and very few (10%) trainees were undertaking clinical placements relevant to oral health. Similarly, a global survey of the deans of medical, nursing and pharmacy schools in universities across Canada, the United States, Europe, Asia, Australia and New Zealand found that the majority (59.6%) rated their curricula in oral-systemic health as inadequate. These findings reinforce the need for greater foundation knowledge in oral health among undergraduate medical/nursing students to better prepare future AOD clinicians. Doctors in our study were quite receptive to education sessions or modules that contributed to continuing professional development points; therefore, developing continuing professional development training courses on oral health care for AOD health professionals could be an additional strategy to address this gap in knowledge and clinical practice. In addition, our results suggest that AOD clinicians would benefit from oral health promotion tools to assist them to advocate for oral health care, undertake oral health screening and provide referrals to appropriate dental services. It is also important that AOD clinicians are aware of accessible and affordable dental services that can be shared with clients. Studies conducted in other healthcare settings in Australia have shown that training non-dental professionals such as midwives in oral health promotion has been effective in changing practice and improving patient outcomes through oral health education, screening and referrals. While some participants in our study advocated for oral health among their clients, other doctors and nurses identified lack of time as the main barrier to broaching oral health with their clients. Consistent with our findings, a lack of time has been frequently reported as one of the barriers to integrating oral health care into practice. Limited time during consultations emphasises the need for oral health education and screening resources that are concise, memorable and conveyed at a low literacy level. This is also reflected in a recent scoping review, which highlights the absence of existing interventions, models of care and appropriate resources to promote oral health in the AOD setting. Such screening tools should help clinicians assess the extent and nature of a client's dental care needs, which would further expedite the referral process. Lastly, the need for interprofessional collaboration was highlighted in the study findings; such collaboration has an important role in AOD settings, wherein doctors, nurses and case managers can work closely with dental practitioners to bridge the gaps in current clinical practices.
Moreover, our study results further suggest that there is a crucial need for oral health education among all healthcare professionals who see clients in an AOD clinic, which could include the 'nurse or the caseworker'. Effective coordination or integration of oral health services with other health and social services can be highly effective in providing help and treatment services to this vulnerable population group, and could help prevent people with substance use disorder from falling through the cracks. For example, the North Richmond Community Health Service, Australia, delivered low-cost oral health care through public oral health practitioners providing assessments, preventative treatment and dental referrals at its medically supervised injecting room, resulting in high uptake of this programme. Additionally, in the USA, the FLOSS programme, which offered comprehensive oral health care (through university dental staff and students) to a sample of patients with substance use disorder and significant dental needs, reported improved treatment outcomes in relation to drug abstinence, completion of withdrawal rehabilitation treatment and employment. These examples highlight that collaboration between AOD and oral health clinicians is feasible and necessary in future models of care to improve the oral health of clients [54–57].
LIMITATIONS Although we were successful in recruiting doctors and nurses from a service in a local health district that provides care to a significant number of AOD clients in the Greater Sydney Area, our study has a few limitations. First, we were only able to recruit a small number of participants from one service. In addition, we were unable to capture the views of clinicians from other AOD services across Australia. Therefore, our findings need to be interpreted with caution, and larger studies are required in this area to confirm them. Nevertheless, our study has provided valuable insight into this important yet under-researched area of care for AOD clients.
CONCLUSION Our study has highlighted the limited emphasis placed on oral health by doctors and nurses in an AOD service in Australia, despite the high prevalence of poor oral health among clients. AOD clinicians can play a vital role in providing oral health education, screening and referral if current barriers are addressed. Oral health training in undergraduate courses and through professional development programmes is needed to build the capacity of AOD clinicians in this area, along with appropriate oral health resources that take into account their time constraints. Additionally, there is a need for appropriate dental referral pathways that are affordable and accessible for AOD clients. These strategies could be further supported by interprofessional collaboration between AOD staff, social workers and dental professionals to ensure comprehensive oral health care is provided to this priority population. This study has provided a valuable platform from which to develop tailored strategies that address the practice gaps among AOD clinicians and the unmet oral health needs of clients.
Each author certifies that their contribution to this work meets the standards of the International Committee of Medical Journal Editors. AG, PP, SH and RS conceptualised the study and the research design. KF and PP coordinated recruitment of participants. TF completed data collection. AS, SK, KF and AG analysed the data and all authors (AS, KF, TPN, SK, PP, GW, RS, SH, AG) contributed to the interpretation of the data. AS drafted the first version of the manuscript, which was further refined by AG and then reviewed by all authors for important intellectual content. The final version has been approved by all authors, and all authors have agreed to be accountable for all aspects of the work.
This study was funded by a partnership grant from Drug Health Services, South Western Sydney Local Health District, and Western Sydney University.
The authors declare no other conflict of interest.
What is the optimal first-line treatment of autoimmune hepatitis? A systematic review with meta-analysis of randomised trials and comparative cohort studies | a0262a35-84b4-484a-acef-4d6e05cf9bcd | 11956290 | Surgery[mh] | Prednisolone (pred)+/−azathioprine (aza) is effective in achieving remission in patients with autoimmune hepatitis (AIH). However, survival benefit has not been conclusively demonstrated, and uncertainty remains about (a) efficacy in several subgroups, (b) value of adding aza (vs pred alone), (c) optimal initial pred dose, and (d) efficacy and frequency of adverse effects (AEs) of budesonide (bud) vs pred, and mycophenolate (MMF) vs aza.
In an updated systematic review with meta-analysis of first-line AIH treatment, we show that (a): transplant-free survival rates are higher in pred-treated (vs untreated) patients: overall, and in patients without symptoms, without cirrhosis, with decompensated cirrhosis and with acute severe AIH. Also, in those receiving pred+aza (vs pred), (b): higher (>40 mg/day or 0.5 mg/kg/day) initial pred doses (vs lower) confer no clear benefit and cause more AEs; (c): bud (vs pred) achieves similar biochemical response (BR) rates, with fewer cosmetic AEs; and (d) MMF (vs aza) achieves similar BR rates, with fewer serious AEs.
It confirms that further placebo controlled randomised controlled trials in AIH would be unethical and suggests benefits in several patient subgroups. Also, that initial predniso(lo)ne doses exceeding 40 mg/day or 0.5 mg/kg/day are unlikely to confer additional benefits over lower doses and cause more AEs. Third, that a decision regarding budesonide use as a first-line agent should be informed by concern regarding about cosmetic AEs rather than considerations regarding maximum efficacy. Finally, it suggests a role for MMF as a potentially better-tolerated steroid-sparing agent in patients who cannot or who are taking steps not to conceive.
First-line treatment of autoimmune hepatitis (AIH) is based on randomised controlled trials (RCTs) performed in the 1960s and 70s. In a meta-analysis, prednisolone±azathioprine was more effective than placebo and more effective than azathioprine alone at achieving disease remission. Prednisolone plus azathioprine was as effective as higher-dose prednisolone monotherapy, with fewer adverse effects (AEs). However, evidence of survival benefit from steroid-based treatment was not demonstrated statistically. Also, it remains unclear whether all patients with AIH require steroids, or whether there are subgroups who do not. Acute severe (AS)-AIH comprises about 5% of presentations, and about 30% of these patients require early liver transplantation for survival. The efficacy of corticosteroids in AS-AIH is not established. In an RCT in patients without cirrhosis, budesonide showed higher efficacy than prednisolone in achieving normal serum transaminases after 6 months, with fewer AEs. Its longer-term efficacy is unclear. A meta-analysis of this trial and one observational study informed the recommendation of prednisolone and budesonide as equivalent first-line treatments in the 2020 American Association for the Study of Liver Diseases guidelines. However, more information on budesonide is now available. Recommendations regarding the initial dose of prednisolone have varied widely across guidelines and expert opinion. Questions also remain regarding steroid-sparing agents (SSAs) in AIH. Azathioprine was shown in early RCTs to enable reduction of steroid dose without loss of efficacy and with fewer AEs. However, it is unclear if SSAs improve survival. Mycophenolate is used as an alternative SSA in patients intolerant of azathioprine and, recently, as a first-line agent; its efficacy has been compared with that of azathioprine in a recent RCT. We present a systematic review with meta-analysis of first-line treatment of AIH to support the (submitted) British Society of Gastroenterology (BSG) AIH Guidelines. We aimed to address the following questions: Is use of corticosteroid (±steroid-sparing agent) associated with better transplant-free survival (compared with non-use) in patients with (a) AIH overall, (b) asymptomatic AIH, (c) without cirrhosis and (d) with decompensated cirrhosis? Are these first-line treatment options associated with better outcomes and/or fewer AEs than their comparators: budesonide (vs prednisolone), mycophenolate (vs azathioprine) and 'high' (>35–40 mg/day or 0.5 mg/kg/day) dose prednisolone (vs lower dose)?
We conducted a systematic review with meta-analysis of RCTs and comparative cohort studies including adult patients with AIH, reporting death/transplantation, biochemical response (BR) and/or AEs. We followed the PRISMA 2020 guidelines and registered the protocol in 2021 on the PROSPERO database (CRD42020182668). Information sources An EndNote AIH Library was generated by information specialists at the University of Sheffield to develop the BSG AIH Management Guidelines (in press). Search methods and study inclusion Systematic literature searches were undertaken in February 2020 by Information Specialists at the School of Medicine and Population Health, University of Sheffield, with an updated search in July 2022. We used thesaurus terms and free-text terms relating to patients with AIH. Searches were from inception and limited to human studies. The searches were conducted on Ovid MEDLINE, EMBASE via Ovid, the Cochrane Database of Systematic Reviews (CDSR) and the Cochrane Central Register of Controlled Trials (CENTRAL). Search results were imported into EndNote, and duplicates removed. This library was then searched independently by SF and DG for studies involving prednisolone, prednisone, budesonide, azathioprine and mycophenolate in initial treatment of AIH (and its historical synonyms) in adults. We used these predefined search terms and manually selected studies published in full and compatible with the PICO (Patient, Intervention, Control, Outcome) framework inclusion criteria. We updated the search by applying this strategy first, to Medline publications between 1 July 2022 and 30 June 2024 containing the term autoimmune hepatitis plus each one of these search terms, and, second, to EMBASE, the CDSR and CENTRAL from 1 January 2022 to 30 June 2024, using only the term autoimmune hepatitis (assuming that other historical terms for AIH used in constructing the Library were no longer used). These searches yielded no additional studies meeting the inclusion criteria. Finally, we also searched for similar studies in the references cited in four previous meta-analyses of initial AIH treatment: two addressing overall treatment, one comparing budesonide with prednisolone, and one comparing high vs low initial prednisolone doses. Outcomes Primary outcome Number of patients dying (any cause) or undergoing liver transplantation, as a ratio of the total. Secondary outcomes Ratio of patients dying of or undergoing transplantation for liver disease, not including gastrointestinal bleeding unless explicitly from varices. Ratio achieving BR after 6-month and after 12-month treatment (in one study 'at least' 12 months). The denominator was the total cohort number; using instead the number of informative patients (at that time point) yielded essentially identical results. BR was compared between patients receiving high-dose vs low-dose prednisolone, prednisolone vs budesonide, and mycophenolate vs azathioprine. The primary definition of BR was serum alanine (and, where available, aspartate) transaminase levels (ALT±AST) falling to within the normal range; other definitions included (a) fall of ALT, AST and serum immunoglobulin G to within normal ranges (complete biochemical remission (CBR) if achieved within 6 months), and (b) fall in serum ALT±AST to less than twice the upper limit of normal. Within each study, the definition of BR in the cohorts compared was identical. Frequency of AEs: Steroid-related.
Any of (a) cosmetic AEs: acne, Cushingoid appearance, striae, buffalo hump; (b) metabolic AEs: new-onset diabetes mellitus, hypertension and weight gain (defined as onset of obesity in one study); (c) bone disease (either osteoporosis or a fracture); and (d) psychosis. AEs were binary, without regard to time. Other steroid-related AEs, including anxiety, depression, dyspepsia and myopathy, were not recorded consistently enough for analysis. Azathioprine and mycophenolate-related AEs (any, and those causing drug discontinuation). When possible, we extracted information on the number of patients experiencing each AE. However, when aggregating cosmetic AEs, we summed the number of specific cosmetic AEs, which are thus expressed as the total number of cosmetic AEs rather than the number of patients with at least one cosmetic AE. For the variable 'all AEs', we included only studies which reported at least three of the above different categories of AEs. Data extraction Data were extracted by DG and checked by SF. We obtained additional results from a multicentre audit of AIH management, in which DG (a coauthor) analysed raw data on file, provided by the first author. This included (i) data on BR in prednisolone and budesonide-treated patients without cirrhosis; (ii) AEs in patients receiving low-dose and high-dose prednisolone; (iii) per cent of patients receiving an SSA in those receiving high and low initial prednisolone doses; (iv) assessment of prednisolone dose in patients receiving and not receiving an SSA; and (v) comparison of death/transplant rates in patients presenting with decompensated cirrhosis. We also obtained additional data on mortality and on AEs in patients without cirrhosis; this was kindly provided by the authors on request. Risk of bias assessment Risk of bias (ROB) was independently assessed by SF and DG, using the Cochrane Risk of Bias (ROB-2) tool for RCTs and the ROBINS-I tool for cohort studies. Discrepancies were resolved by discussion. ROB in the cohort studies arose largely from intergroup differences regarding confounding baseline variables and follow-up times. For the outcome death/transplantation, we considered age, percentage with cirrhosis, serum bilirubin and serum ALT as confounding baseline variables. For BR, we considered as baseline confounders age, percentage with cirrhosis and serum IgG (all variables predictive of BR). Baseline serum transaminase levels do not predict their normalisation on treatment. If intergroup differences for confounding variables did not reach a significance level of p<0.05 or were addressed by multivariate analysis, ROB due to confounding was deemed moderate; otherwise, it was deemed high. In the absence of established variables predisposing to AEs, reporting of these was deemed at moderate ROB. Other potential sources of bias considered were imbalances in receiving comedications (ROBINS-I domains 4.1–4.6; usually azathioprine), in follow-up time, and in missing data (domains 5.1–5.3). Meta-analysis (MA) We used RStudio to aggregate outcome results (expressed as risk ratio (RR)). We performed no data conversions and considered only binary outcomes. Forest plots were constructed using fixed and random effects models. A p value of <0.05 and RR values with CIs not overlapping unity were deemed significant. Heterogeneity was assessed using the I² statistic, with values of 25%–49%, 50%–74% and ≥75% representing low, moderate and high heterogeneity. With three or more studies, we calculated the prediction interval.
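To make the pooling approach described above concrete, the following minimal R sketch illustrates a meta-analysis of a binary outcome expressed as a risk ratio, with fixed- and random-effects estimates, the I² statistic and a prediction interval. It uses the 'meta' package, which is one of several possible implementations (the paper specifies only that RStudio was used), and the study labels and event counts are purely illustrative, not data from the review:

library(meta)  # install.packages("meta") if not already available

# Hypothetical per-study binary outcomes: 'event' = deaths/transplantations,
# 'n' = total cohort size, in treated (e) and comparator (c) arms.
dat <- data.frame(
  study   = c("Trial A", "Trial B", "Trial C"),
  event.e = c(5, 4, 3),   n.e = c(49, 30, 27),
  event.c = c(15, 11, 9), n.c = c(49, 33, 22)
)

m <- metabin(event.e = event.e, n.e = n.e,
             event.c = event.c, n.c = n.c,
             studlab = study, data = dat,
             sm = "RR",          # pool risk ratios
             common = TRUE,      # fixed-effect model ('comb.fixed' in older package versions)
             random = TRUE,      # random-effects model
             prediction = TRUE)  # prediction interval (calculable here, as >= 3 studies)

summary(m)  # pooled RRs with 95% CIs, I^2 heterogeneity statistic, prediction interval
forest(m)   # forest plot of study-level and pooled estimates

In this setup, a pooled RR whose 95% CI excludes unity corresponds to the significance criterion stated above, and the I² value reported by summary(m) can be read against the 25%–49%, 50%–74% and ≥75% bands.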
As no analysis involved more than 10 studies, we did not assess publication bias. Reasons for heterogeneity were explored by sensitivity analysis, usually based on risk of bias. Patient and public involvement None. This meta-analysis was done specifically to support the (submitted) BSG AIH Guidelines, the development group of which included two patients. They were aware of the current work but were not involved in it.
An EndNote AIH Library was generated by information specialists at the University of Sheffield to develop the BSG AIH Management Guidelines (in press).
Systematic literature searches were undertaken in February 2020 by Information Specialists at the School of Medicine and Population Health, University of Sheffield, with an updated search in July 2022. We used thesaurus terms and free-text terms relating to patients with AIH ( ). Searches were from inception and limited to human studies. The searches were conducted on Ovid MEDLINE, EMBASE via Ovid, the Cochrane Database of Systematic Reviews (CDSR) and the Cochrane Central Register of Controlled Trials (CENTRAL). Search results were imported into Endnote, and duplicates removed. This library was then searched independently by SF and DG for studies involving prednisolone, prednisone, budesonide, azathioprine and mycophenolate in initial treatment of AIH (and its historical synonyms) in adults: We used the search terms in , and manually selected studies published in full and compatible with the PICO (Patient, Intervention, Control, Outcome) framework inclusion criteria ( ). We updated the search by applying this strategy first, to Medline publications between 1 July 2022 and 30 June 2024 containing the term autoimmune hepatitis plus each one of the search terms in . And second, EMBASE, the CDSR and the CENTRAL from 1 January 2022 to 30 June 2024, using only the term autoimmune hepatitis (assuming that other historical terms for AIH used in constructing the Library ( ) were no longer used). These searches yielded no additional studies meeting the criteria in . Finally, we also searched for similar studies in the references cited in four previous meta-analysis of initial AIH treatment: two addressing overall treatment, one comparison of budesonide and prednisolone, and one high vs low initial prednisolone doses.
Primary outcome

Number of patients dying (any cause) or undergoing liver transplantation, as a ratio of the total.
Secondary outcomes

Ratio of patients dying of, or undergoing transplantation for, liver disease (not including gastrointestinal bleeding unless explicitly from varices).

Ratio achieving BR after 6 months and after 12 months of treatment (in one study 'at least' 12 months). The denominator was the total cohort number; using instead the number of informative patients (at that time point) yielded essentially identical results. BR was compared between patients receiving high-dose vs low-dose prednisolone, prednisolone vs budesonide, and mycophenolate vs azathioprine. The primary definition of BR was serum alanine (and, where available, aspartate) transaminase levels (ALT ± AST) falling to within the normal range; other definitions included (a) fall of ALT, AST and serum immunoglobulin G to within normal ranges (complete biochemical remission (CBR) if achieved within 6 months), and (b) fall in serum ALT ± AST to less than twice the upper limit of normal. Within each study, the definition of BR in the cohorts compared was identical.

Frequency of AEs. Steroid-related: any of (a) cosmetic AEs (acne, Cushingoid appearance, striae, buffalo hump); (b) metabolic AEs (new-onset diabetes mellitus, hypertension and weight gain, defined as onset of obesity in one study); (c) bone disease (either osteoporosis or a fracture); and (d) psychosis. AEs were binary, without regard to time. Other steroid-related AEs, including anxiety, depression, dyspepsia and myopathy, were not recorded consistently enough for analysis. Azathioprine- and mycophenolate-related AEs (any, and those causing drug discontinuation). When possible, we extracted information on the number of patients experiencing each AE. However, when aggregating cosmetic AEs, we summed the numbers of the specific cosmetic AEs, which are thus expressed as the total number of cosmetic AEs rather than the number of patients with at least one cosmetic AE. For the variable 'all AEs', we included only studies which reported at least three of the above categories of AEs.
Data extraction

Data were extracted by DG and checked by SF ( ). We obtained additional results from a multicentre audit of AIH management by DG (a coauthor) analysing raw data on file and provided by the first author. This included (i) data on BR in prednisolone- and budesonide-treated patients without cirrhosis; (ii) AEs in patients receiving low-dose and high-dose prednisolone; (iii) per cent of patients receiving a steroid-sparing agent (SSA) among those receiving high and low initial prednisolone doses; (iv) assessment of prednisolone dose in patients receiving and not receiving an SSA; and (v) comparison of death/transplant rates in patients presenting with decompensated cirrhosis. We also obtained additional data on mortality and on AEs in patients without cirrhosis ( ); these were kindly provided by the authors on request.
Risk of bias assessment

Risk of bias (ROB) was independently assessed by SF and DG, using the Cochrane Risk of Bias (ROB-2) tool for RCTs and the ROBINS-I tool for cohort studies. Discrepancies were resolved by discussion. ROB in the cohort studies arose largely from intergroup differences regarding confounding baseline variables and follow-up times. For the outcome death/transplantation, we considered age, percentage with cirrhosis, serum bilirubin and serum ALT as confounding baseline variables. For BR, we considered as baseline confounders age, percentage with cirrhosis and serum IgG (all variables which are predictive of BR, ). Baseline serum transaminase levels do not predict their normalisation on treatment. If intergroup differences for confounding variables did not reach a significance level of p<0.05 or were addressed by multivariate analysis, ROB due to confounding was deemed moderate; otherwise, it was deemed high. In the absence of established variables predisposing to AEs, reporting of these was deemed at moderate ROB. Other potential sources of bias considered were ( ) imbalances in receiving comedications (ROBINS-I domains 4.1–4.6: usually azathioprine), in follow-up time, and in missing data (domains 5.1–5.3).
Meta-analysis (MA)

We used R-Studio to aggregate outcome results (expressed as risk ratio (RR)). We performed no data conversions and considered only binary outcomes. Forest plots were constructed using fixed and random effects models. A p value of <0.05 and RR values with CIs not overlapping unity were deemed significant. Heterogeneity was assessed using the I² statistic, with values of 25%–49%, 50%–74% and ≥75% representing low, moderate and high heterogeneity. With three or more studies, we calculated the prediction interval. As no analysis involved more than 10 studies, we did not assess publication bias. Reasons for heterogeneity were explored by sensitivity analysis, usually based on risk of bias.
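The pooling described above was performed in R-Studio; the study's own code is not reproduced here. Purely as an illustration of the stated steps (log risk ratios, fixed- and random-effects pooling, I², and the prediction interval), the following Python sketch applies the standard inverse-variance and DerSimonian-Laird formulas to hypothetical event counts. The function name and example data are our own and do not reproduce the study's analysis.

```python
import numpy as np
from scipy import stats

def pool_risk_ratios(events_t, n_t, events_c, n_c):
    """Pool risk ratios: fixed effect, DerSimonian-Laird random effects,
    I-squared heterogeneity and a 95% prediction interval."""
    a, n1 = np.asarray(events_t, float), np.asarray(n_t, float)
    c, n2 = np.asarray(events_c, float), np.asarray(n_c, float)

    # Continuity correction for studies with a zero cell
    zero = (a == 0) | (c == 0)
    a, c = a + 0.5 * zero, c + 0.5 * zero
    n1, n2 = n1 + zero, n2 + zero

    log_rr = np.log((a / n1) / (c / n2))
    var = 1.0 / a - 1.0 / n1 + 1.0 / c - 1.0 / n2   # variance of log RR
    w = 1.0 / var                                   # inverse-variance weights

    fixed = np.sum(w * log_rr) / np.sum(w)          # fixed-effect estimate
    k = len(log_rr)
    q = np.sum(w * (log_rr - fixed) ** 2)           # Cochran's Q
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0

    w_re = 1.0 / (var + tau2)                       # random-effects weights
    re = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp(re + np.array([-1.0, 1.0]) * 1.96 * se)

    # 95% prediction interval, defined for k >= 3 studies
    pi = None
    if k >= 3:
        t_crit = stats.t.ppf(0.975, k - 2)
        pi = np.exp(re + np.array([-1.0, 1.0]) * t_crit * np.sqrt(tau2 + se**2))

    return {"RR_fixed": np.exp(fixed), "RR_random": np.exp(re),
            "CI95_random": ci, "I2_percent": i2, "PI95": pi}

# Purely illustrative counts (events/totals in treated vs comparator arms)
print(pool_risk_ratios([3, 5, 2], [40, 60, 30], [9, 12, 6], [38, 55, 28]))
```

With k studies, the prediction interval uses a t-distribution with k - 2 degrees of freedom, which is why it is only defined for analyses of three or more studies.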
Patient and public involvement

None. This meta-analysis was done specifically to support the (submitted) BSG AIH Guidelines, the development group of which included two patients. They were aware of the current work but were not involved in it.
Characteristics of included studies

The PRISMA diagram is shown in ( ). We found 24 studies meeting inclusion criteria (7 RCTs and 17 observational studies). We found one further observational study cited in a prior meta-analysis, making 25 included studies ( ). All cohort studies and the two most recent RCTs used the 1999 or 2008 International AIH Group diagnostic criteria ( ), although up to 10% of patients in the cohort studies did not meet these criteria. The remaining RCTs predated these criteria, and diagnosis of AIH was based on chronic liver disease (usually, abnormal liver tests for >3 months), compatible liver biopsy, serum autoantibodies and hyperglobulinaemia. In three RCTs, serum was positive for hepatitis B markers in 14 patients (4%–16%), and all were performed before the availability of hepatitis C testing. All studies focused mainly on adults, but two RCTs included some children. One study included patients initially diagnosed in childhood. In eight studies (one RCT), this information was not reported. The remaining studies explicitly excluded children. We focused on first-line drug treatment following initial diagnosis. However, three RCTs included previously treated patients in whom the episode reported represented treatment of a relapse. Since outcomes in these patients were not separately reported, they are included here. Four RCTs used prednisone and three used prednisolone. Since these are clinically equivalent, the term predniso(lo)ne is used to refer to either drug.
Risk of bias (ROB)

Only three RCTs were blinded to treatment allocation. However, for the outcome BR, all RCTs were deemed at low ROB ( ). Four of the five RCTs reporting mortality were at low ROB; the fifth was at some ROB because of a shorter follow-up time in one of the steroid-receiving cohorts (prednisolone and azathioprine combined). For AEs, blinded trials were deemed at low ROB and the others at some ROB. Six cohort studies addressing mortality were deemed at high ROB ( ) because potential baseline confounders were either unreported or favoured one treatment group and were uncorrected by multivariate analysis ( ). Other observational studies were deemed at moderate ROB for remission and mortality. We evaluated imbalances regarding coreceipt of a steroid-sparing agent, follow-up time and missing data ( ). We did not consider these ever sufficient in themselves to elevate ROB to severe. All 18 cohort studies were deemed at moderate ROB for AEs.
Survival benefit of steroid-based treatment

Unselected AIH

In meta-analysis (MA) of four RCTs and two observational studies, patients receiving corticosteroids (alone or with azathioprine) had, compared with patients receiving no treatment or azathioprine alone, lower rates of all-cause ( ) and liver-related ( ) mortality, with moderate and low heterogeneity, respectively. Results were unchanged following exclusion of patients in the Mayo Clinic RCT receiving combination therapy (shorter follow-up time, resulting in some ROB) ( ). These differences remained when the four RCTs were considered separately ( , ); significant for liver-related but not for all-cause mortality. Further subgroup comparisons ( ) also suggested lower rates in those receiving prednisolone monotherapy compared with placebo (not significant for all-cause mortality). Differences between those receiving predniso(lo)ne alone and those receiving azathioprine alone were not significant. The benefit of steroids was also seen (for all-cause and liver-related death/transplantation) in the two cohort studies ( , ); in one, Cox regression analysis confirmed that the association with steroids was independent of baseline prognostic variables. In one study, mortality was similar in those receiving azathioprine compared with placebo. However, in MA of two studies (one RCT; one cohort), patients given prednis(ol)one plus azathioprine had lower all-cause ( ) and liver-related ( ) mortality than those taking prednis(ol)one monotherapy. In the RCT, those receiving predniso(lo)ne alone received higher doses, but in the cohort study, they received lower doses, and the survival benefit of adding an SSA was independent of baseline covariates and of initial prednisolone dose.
Cirrhosis

In meta-analysis (two cohort studies) of patients both with and without cirrhosis at diagnosis, steroid-based treatment was associated with a 3–4-fold reduction in the (all-cause) death/transplant rate ( ). An almost threefold reduction was also seen in two cohort studies of decompensated cirrhosis; one was at high ROB because of uncorrected baseline confounding, but in the other, the association with treatment persisted on multivariate analysis (unpublished data on file).
Asymptomatic AIH

In MA of four cohort studies in patients without symptoms at diagnosis, steroid-based treatment was (compared with no treatment) associated with a reduced all-cause ( ) and liver-related ( ) death/transplantation rate. Removing the study at high ROB yielded identical results ( ), which were confirmed on multivariate analysis in another study.
Acute severe AIH

In MA of four cohort studies, corticosteroids were associated with a reduced death/transplantation rate (all liver-related), compared with no treatment, with low heterogeneity ( ). However, ROB was high because in three studies the baseline model for end-stage liver disease (MELD) score was lower in those receiving steroids ( ); data were unreported in the fourth. This suggests systematic bias, although in the largest study, multivariate analysis confirmed an association of steroid therapy with survival, independent of MELD score.
Initial prednis(ol)one dose

We found five studies ( ; one RCT, four cohorts) in which results regarding at least one outcome were compared between patients receiving 'high' and 'low' initial doses of prednis(ol)one. The cut-off value between high and low dose was usually 35–40 mg/day or 1 mg/kg/day. In MA, patients receiving high and low initial prednis(ol)one doses did not differ in the percentage achieving BR after 6 months ( ) or 12 months ( ); nor did rates of CBR (normal serum transaminases and IgG within 6 months) in two studies ( ). Rates of all-cause ( ) and of liver-related ( ) death/transplant in four studies were also similar in patients receiving high vs low doses of predniso(lo)ne. However, there was high heterogeneity and a wide prediction interval. Of the two largest cohort studies, one found lower mortality in patients receiving higher doses. This study was at high ROB: it included deaths only (no data on transplants), and patients receiving high-dose predniso(lo)ne had favourable baseline variables ( ) which were uncorrected for. However, excluding this study did not alter the result ( ). In the other large cohort, those receiving high-dose predniso(lo)ne had higher mortality, which persisted in multivariate analysis.

Patients receiving high-dose prednis(ol)one had higher rates of any AE ( ), but with moderate heterogeneity and a wide prediction interval (PI). They also had higher rates of cosmetic AEs ( ) and of new-onset diabetes ( ). Differences in bone disease (four studies), weight gain and psychosis (three studies each) and hypertension (two studies) were not significant ( ).
Budesonide versus prednisolone

In MA of four studies, biochemical remission rates in patients initially receiving budesonide and prednisolone were not different after 6 months ( ) or after 12 months ( ). There was high heterogeneity with wide prediction intervals. Considering only patients without cirrhosis ( ), remission rates were similar after 6 months (three studies), but in one study they were lower after 12 months in patients receiving budesonide. The single RCT showed a higher remission rate after 6 months in budesonide-treated patients, but the rate was unusually low (39%) in the prednisolone group. In one cohort study, the CBR rate (after 6 months) was lower in patients receiving budesonide than in those receiving predniso(lo)ne. In another cohort study, 5-year survival in patients receiving prednisolone and budesonide (overall and in those without cirrhosis) was not significantly different. Patients receiving budesonide had (compared with those receiving prednisolone) lower rates of any ( ) and of cosmetic ( ) AEs. Considering only patients without cirrhosis (three informative studies), these differences remained significant ( ). However, the incidence of new-onset diabetes ( ), or of hypertension, weight gain, psychosis and bone disease ( ), was not significantly different in budesonide- vs prednisolone-treated patients, either overall or in patients without cirrhosis (not shown). Apart from hypertension, heterogeneity was high.
Mycophenolate versus azathioprine

In MA of two studies (one RCT; one cohort), the 6-month BR rate was similar in patients receiving mycophenolate and those receiving azathioprine (both with prednisolone). The BR rate at 12 months (available only in the cohort study) was higher in patients receiving mycophenolate (p=0.04). The rate of any AE was similar in the two groups; however, patients receiving mycophenolate had fewer AEs requiring drug discontinuation (low heterogeneity) ( ). In one study, survival rates were not different between patients receiving mycophenolate and those receiving azathioprine.
In this updated systematic review and MA of first-line treatment of AIH, we make several novel observations. First, we provide more robust evidence for the overall mortality benefits of steroid-based treatment. Efficacy of steroids in achieving remission and a suggestive survival benefit were demonstrated by Lamers. In a network analysis of six RCTs, Lu also demonstrated superiority of prednis(ol)one over azathioprine and over placebo in achieving remission but included no data on survival. By incorporating two more recent cohort studies, we observe a significant survival benefit of steroid-based therapy, although we acknowledge the caveats of combining studies of different designs. The initial meta-analysis also suggested fewer AEs on predniso(lo)ne plus azathioprine than on prednisolone monotherapy, probably because of higher doses in the latter regime. Second, we show that this combination therapy may also have a survival benefit; however, this is based on only two studies and needs confirmation. Third, we demonstrate likely survival benefits of steroid-based therapy in several AIH subgroups, including asymptomatic patients, patients with and without cirrhosis, and patients with decompensated cirrhosis. A benefit was also seen in patients with acute severe AIH, although those results are biased by the treated groups having less severe liver dysfunction. A trial of steroids may nevertheless be justified in acute AIH of moderate severity. Thus, steroid treatment is beneficial in most patients with AIH. However, in one cohort study, no association was found (on multivariate analysis) between steroids and transplant-free survival in patients with 'mild' AIH (by several criteria). In such patients, deferring treatment might occasionally be justified. Fourth, we provide clarification on initial prednisolone dose, regarding which guideline recommendations and expert opinion have varied. Usually, predniso(lo)ne is tapered as serum transaminases improve. In assessing dose effects, ideally cumulative dose would be considered, but it is rarely reported. However, in one study, cumulative predniso(lo)ne dose was 47% higher in patients initially receiving high (vs low) dose, suggesting that initial dose is a reasonable marker of cumulative dose. Here, we show that initial predniso(lo)ne doses exceeding 35–40 mg/day or 0.5 mg/kg/day are no more effective than lower doses in achieving BR or in improving survival. (A recent study confirms the lack of association between initial prednisolone dose and biochemical remission or event-free survival; it is not included here because it used a different dose cut-off (30 mg/day) and did not report dose-group numbers.) Comparison of death/transplant rates for patients receiving high vs low initial doses showed high heterogeneity. The larger study (of two) suggesting lower mortality was at high ROB; however, excluding it did not change the results. Clearly, conclusions are tentative, but at the very least, there is no clear evidence for a survival benefit from higher doses. We focused on comparing high- vs low-dose cohorts within single studies. The meta-analysis of Zhang did not include some older or more recent studies and also compared 'average' doses across studies, inevitably with much overlap. We could not confirm their finding of higher doses associated with higher rates of BR or death/transplant. Regarding steroid-related AEs, we confirmed the qualitative associations with higher predniso(lo)ne dose suggested by others.
We had access to more studies, and our results suggest dose relationships with cosmetic and overall AEs, and with diabetes. Although we could not confirm a dose relationship with weight gain, psychosis or bone disease (osteoporosis or fracture), a dose relationship with bone disease is suggested in another cohort study. Regarding comparisons between budesonide and prednisolone, we could access more studies than the one RCT considered by Lu and the two studies considered in the quantitative MA of Vierling. We could not confirm their observation of superior BR rates with budesonide. Indeed, budesonide may be inferior (although based on one study) in achieving CBR. In one cohort study, 5-year death/transplant rates were not different in budesonide- and prednisolone-treated patients; however, more data are needed on longer-term outcome. We found that budesonide was associated with fewer overall AEs and fewer cosmetic AEs than predniso(lo)ne, although this remains largely based on the single RCT, in which AEs were monitored prospectively. However, we failed to show associations of budesonide with reduced diabetes, hypertension, weight gain, psychosis or bone disease, although this might also reflect a bias favouring use of budesonide in patients at high risk of such AEs. Our comparisons of AEs on budesonide vs prednisolone are tentative. Some cohort studies reported very low (or zero) rates of cosmetic AEs or of diabetes, which (given that these studies are retrospective) may result from inadvertent under-reporting. Finally, analysis of two studies suggests that mycophenolate achieves similar rates of biochemical remission after 6 months, and perhaps higher rates after 12 months; it is also associated with fewer AEs requiring drug discontinuation.

Our study has limitations. Despite a detailed search, we found only 25 informative studies. Only three of the seven RCTs were blinded, and in five (performed during the 1960s and 70s), about 15% of patients had hepatitis B virus and an unknown number hepatitis C virus. However, the biopsy and immunological features and the response to predniso(lo)ne suggested that most patients did have AIH. We used established methods for assessing ROB. We considered many endpoints (death/transplantation, biochemical remission and some side effects such as diabetes) to be objective, and their assessment should be relatively bias-free, even in unblinded studies. The biggest sources of bias were confounding of outcomes in the cohort studies by imbalances in prognostic baseline variables. Assessment of such confounding was usually possible and was sometimes addressed using multivariate analysis. When this was not done and imbalances were clear, we deemed such studies at high ROB; however, excluding them did not change the results. Other potential sources of bias were imbalances in comedications (usually azathioprine) and missing data (see ). We considered these insufficient in themselves to elevate the ROB to serious in any study. Nevertheless, we could not address all sources of bias. We calculated pooled relative risk (RR) using the fixed and random-effects (RE) models. We base our conclusions on the RE model, which makes no assumption that patients in individual studies are randomly selected from the same overall AIH pool. We also calculated the PI, which estimates the range of RR values expected in a hypothetical additional study, or the likelihood of a future hypothetical patient benefiting from treatment.
For many analyses, the PI range overlapped unity, suggesting that benefit (while more likely than not) is not guaranteed. Thus, the evidence for benefit of steroid treatment of AIH is suggestive rather than conclusive. Our results may have implications for practice. First, steroid treatment of AIH improves transplant-free survival, both overall and in several subgroups. Second, higher initial predniso(lo)ne doses cause more AEs but achieve no clear benefit. Third, budesonide is not more effective than predniso(lo)ne but has fewer cosmetic AEs; its first-line use should be informed by concerns regarding the latter rather than by the need to maximise efficacy. Fourth, mycophenolate is as effective as azathioprine in achieving BR and is better tolerated; it can thus be considered as a first-line steroid-sparing agent in patients who (because of its teratogenicity) cannot conceive or who are taking active steps not to conceive. Finally, our meta-analysis points to the need for further RCTs and prospective cohort studies of first-line treatments. In these, comparison of AEs will be particularly important. Lamers, in the initial MA of AIH treatment, noted that AEs were 'not adequately mentioned'. Unfortunately, this remains the case, especially for cosmetic and mental health AEs, weight gain and even diabetes. Incorporation into clinical practice of a standard proforma for prospectively recording steroid AEs is long overdue in AIH.
Predicting antimicrobial resistance in Pseudomonas aeruginosa
We integrated genomic, transcriptomic, and phenotypic data on antibiotic resistance profiles of 414 clinical Pseudomonas aeruginosa isolates and used a machine learning‐based approach to identify sets of molecular markers that allowed a reliable prediction of antibiotic resistance against four antibiotic classes. Using information on (i) the presence or absence of genes, (ii) sequence variations within genes, and (iii) gene expression profiles alone or in combinations resulted in high (0.8–0.9) or very high (> 0.9) sensitivity and predictive values. Importantly, transcriptome data significantly improved the prediction outcome as compared to using genome information alone. Identified biomarkers included known antibiotic resistance determinants (e.g., gyrA, ampC, oprD , efflux pumps) as well as markers previously not associated with antibiotic resistance.
Our findings demonstrate that the identification of molecular markers for the prediction of antibiotic resistance holds promise to change current resistance diagnostics. However, gene expression information may be required for highly sensitive and specific resistance prediction in the problematic opportunistic pathogen P. aeruginosa .
The rise of antibiotic resistance is a public health issue of the greatest importance (Cassini et al, ). Growing resistance hampers the use of conventional antibiotics and leads to increased rates of ineffective empiric antimicrobial therapy. If not adequately treated, infections cause suffering, incapacity, and death, and impose an enormous financial burden on healthcare systems and on society in general (Alanis, ; Gootz, ; Fair & Tor, ). Despite the growing medical need, FDA approvals of new antibacterial agents have substantially decreased over the last 20 years (Kinch et al, ). Alarmingly, there are only a few agents in clinical development for the treatment of infections caused by multidrug-resistant Gram-negative pathogens (Bush & Page, ). Pseudomonas aeruginosa, the causative agent of severe acute as well as chronic persistent infections, is particularly problematic. The opportunistic pathogen exhibits high intrinsic antibiotic resistance and frequently acquires resistance-conferring genes via horizontal gene transfer (Lister et al, ; Partridge et al, ). Furthermore, the accelerating development of drug resistance due to the acquisition of drug resistance-associated mutations poses a serious threat. The lack of new antibiotic options underscores the need for optimization of current diagnostics.

Diagnostic tests are a core component in modern healthcare practice. Especially in light of rising multidrug resistance, high-quality diagnostics becomes increasingly important. However, providing information as the basis for infectious disease management is a difficult task. Antimicrobial susceptibility testing (AST) has experienced little change over the years. It still relies on culture-dependent methods, and as a consequence, clinical microbiology diagnostics is labor-intensive and slow. Culture-based AST requires 48 h (or longer) for definitive results, which leaves physicians with uncertainty about the best drugs to prescribe to individual patients. This delay also contributes to the spread of drug resistance (Oliver et al, ; López-Causapé et al, ).

The introduction of molecular diagnostics could become an alternative to culture-based methods and could be critical in paving the way to fight antimicrobial resistance. Identification of genetic elements of antimicrobial resistance promises a deeper understanding of the epidemiology and mechanisms of resistance and could lead to a timelier reporting of resistance profiles than conventional culture-based testing. It has been demonstrated that, for a number of bacterial species, antimicrobial resistance can be predicted highly accurately based on information derived from the genome sequence (Gordon et al, ; Bradley et al, ; Moradigaravand et al, ). However, in the opportunistic pathogen P. aeruginosa, even full genomic sequence information is insufficient to predict antimicrobial resistance in all clinical isolates (Kos et al, ).

Pseudomonas aeruginosa exhibits a profound phenotypic plasticity mediated by environment-driven flexible changes in the transcriptional profile (Dötsch et al, ). For example, P. aeruginosa adapts to the presence of antibiotics with the overexpression of the mex genes, encoding the antibiotic extrusion machineries MexAB-OprM, MexCD-OprJ, MexEF-OprN, and MexXY-OprM. Similarly, high expression of the ampC-encoded intrinsic beta-lactamase confers antimicrobial resistance (Haenni et al, ; Juan et al, ; Goli et al, ; Martin et al, ). Those transcriptional responses are frequently fixed in clinical P. aeruginosa strains, e.g., due to mutations in negative regulators of gene expression (Frimodt-Møller et al, ; Juarez et al, ). Thus, the isolates develop an environment-independent resistance phenotype. Up-regulation of intrinsic beta-lactamases as well as overexpression of efflux pumps that contribute to the resistance phenotype makes gene-based testing a challenge, because it is difficult to predict from the genomic sequence which (combinations of) mutations would lead to an up-regulation of resistance-conferring genes (Llanes et al, ; Fernández & Hancock, ; Schniederjans et al, ).

In this study, we investigated whether we can reliably predict antimicrobial resistance in P. aeruginosa using not only genomic but also quantitative gene expression information. For this purpose, we sequenced the genomes of 414 drug-resistant clinical P. aeruginosa isolates and recorded their transcriptional profiles. We built predictive models of antimicrobial susceptibility/resistance to four commonly administered antibiotics by training machine learning classifiers. From these classifiers, we inferred candidate marker panels for a diagnostic assay by selecting resistance- and susceptibility-informative markers via feature selection. We found that the combined use of information on the presence/absence of genes, their sequence variation, and gene expression profiles can predict resistance and susceptibility in clinical P. aeruginosa isolates with high or very high sensitivity and predictive value.
Taxonomy and antimicrobial resistance distribution of 414 DNA- and mRNA-sequenced clinical Pseudomonas aeruginosa isolates

A total of 414 P. aeruginosa isolates were collected from clinical microbiology laboratories of hospitals across Germany and at sites in Spain, Hungary, and Romania (Fig A). For all isolates, the genomic DNA was sequenced and transcriptional profiles were recorded. This enabled us to use not only the full genomic information but also information on the gene expression profiles as an input to machine learning approaches. We inferred a maximum likelihood phylogenetic tree based on variant nucleotide sites (Fig B). The tree was constructed by mapping the sequencing reads of each isolate to the genome of the P. aeruginosa PA14 reference strain and then aligning the consensus sequences for each gene. The isolates exhibited a broad taxonomic distribution and separated into two major phylogenetic groups. One included PAO1, PACS2, LESB58, and a cluster of high-risk clone ST175 isolates; the other included PA14, as well as one large cluster of high-risk clone ST235 isolates. Both groups comprised several further clades with closely related isolates of the same sequence type as determined by multilocus sequence typing (MLST). Next, we recorded antibiotic resistance profiles for all isolates regarding the four common anti-pseudomonal antimicrobials tobramycin (TOB), ceftazidime (CAZ), ciprofloxacin (CIP), and meropenem (MEM) (Bassetti et al, ; Cardozo et al, ; Tümmler, ) using the agar dilution method. Most isolates of our clinical isolate collection exhibit antibiotic resistance against these four antibiotics (Fig C, ). One-third had a multidrug-resistant (MDR) phenotype, defined as non-susceptible to at least three different classes of antibiotics (Magiorakos et al, ).

Machine learning for predicting antimicrobial resistance

We used the genomic and transcriptomic data of the clinical P. aeruginosa isolates to infer resistance and susceptibility phenotypes to ceftazidime, meropenem, ciprofloxacin, and tobramycin with machine learning classifiers. For each antibiotic, we included all respective isolates categorized as either "resistant" or "susceptible". For the genomic data, we included sequence variations (single nucleotide polymorphisms, SNPs, including small indels) and gene presence or absence (GPA) as features. In total, we analyzed 255,868 SNPs, represented by 65,817 groups with identical distributions of SNPs across isolates, and 76,493 gene families with presence or absence information, corresponding to 14,700 groups of identically distributed gene families. 1,306 of these gene families had an indel in some isolate genomes, which we included as an additional feature. We evaluated SNP and GPA groups in combination with gene expression information for 6,026 genes (Fig ). For each drug, we randomly assigned isolates to a training set that comprised 80% of the resistant and susceptible isolates, respectively, and the remaining 20% to a test set. Parameters of machine learning models were optimized on the training set and their value assessed in cross-validation, while the test set was used to obtain another independent performance estimate.
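To illustrate the grouping of identically distributed features mentioned above (e.g., 255,868 SNPs collapsed to 65,817 groups), the following sketch deduplicates binary feature columns that share an identical pattern across isolates. The matrix is randomly generated for illustration only, not the study data, and the study's exact grouping procedure may differ in detail.

```python
import numpy as np

# Build a deliberately redundant binary matrix: 800 distinct patterns,
# re-sampled into 5,000 feature columns (isolates x features), mimicking
# identically distributed SNP or gene presence/absence features.
rng = np.random.default_rng(0)
base = rng.integers(0, 2, size=(414, 800))
X_bin = base[:, rng.integers(0, 800, size=5000)]

# Keep one representative column per unique presence pattern across isolates.
_, first_idx = np.unique(X_bin.T, axis=0, return_index=True)
X_grouped = X_bin[:, np.sort(first_idx)]
print(f"{X_bin.shape[1]} features collapsed to {X_grouped.shape[1]} groups")
```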
Bacterial population structure can influence machine learning outcomes; in Escherichia coli, for example, it has been shown that phylo-group-specific markers alone could be used to predict antibiotic resistance phenotypes with accuracies of 0.65–0.91, depending on the antibiotic (Moradigaravand et al, ). We therefore also assessed performance while accounting for population structure based on sequence types, through a block cross-validation approach.

We trained several machine learning classification methods on SNPs, GPA, and expression features, individually and in combination, for predicting antibiotic susceptibility or resistance of isolates and evaluated the classifier performances. We determined MIC (minimal inhibitory concentration) values of all clinical isolates with agar dilution according to CLSI guidelines (CLSI, ) to use as the gold standard for evaluation purposes. We calculated the sensitivity and predictive value of resistance (R) and susceptibility (S) assignment, as well as the macro F1-score, as an overall performance measure for a classifier trained on a specific data type combination. The sensitivity reflects how well a classifier recovers the assignments of the underlying gold standard, representing the fraction of susceptible, or resistant, samples, respectively, that are correctly assigned. The predictive value reflects how trustworthy the assignments of a particular classifier are, representing the fraction of all susceptible, or resistant, assignments, respectively, that are correct. The F1-score is the harmonic mean of the sensitivity and predictive value for a particular class, i.e., susceptible or resistant. The macro F1-score is the average over the two F1-scores.

We used the support vector machine (SVM) classifier with a linear kernel, as in Weimann et al ( ), to predict sensitivity or resistance to four different antibiotics. Parameters were optimized in nested cross-validation, and performance estimates were averaged over five repeats of this setup. The combined use of (i) GPA, (ii) SNPs, and (iii) information on gene expression resulted in high (0.8–0.9) or very high (> 0.9) sensitivity and predictive values (Fig ). Notably, the relative contribution of the different information sources to the susceptibility and resistance sensitivity strongly depended on the antibiotic.

To assess the effect of the classification technique, we compared the performance of an SVM classifier with a linear kernel to that of random forests and logistic regression, which we and others have successfully used for related phenotype prediction problems (Asgari et al, ; Her & Wu, ; Wheeler et al, ). For this purpose, we used the data type combination with the best macro F1-score in resistance prediction with the SVM. We evaluated the classification performance in nested cross-validation and on a held-out test dataset. In addition, we performed a phylogeny-aware partitioning of our dataset to assess the phylogenetic generalization ability of our technique. The performance of the SVM in random cross-validation was comparable to logistic regression (macro F1-score for the SVM: 0.83 ± 0.06 vs. logistic regression: 0.84 ± 0.06), but considerably better than the random forest classifiers (0.67 ± 0.14; , ). The performance on the held-out dataset was in a comparable range (SVM: 0.87 ± 0.07; logistic regression: 0.90 ± 0.04; random forest: 0.71 ± 0.16).
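A minimal sketch of this setup, assuming scikit-learn and random placeholder data (the study's actual feature matrices, fold counts, and C grid are not specified here, so those choices are assumptions), could look as follows: the inner grid search tunes the SVM regularization parameter C, and the outer loop provides the macro F1 performance estimate.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Placeholder data: rows are isolates, columns are concatenated SNP-group,
# gene presence/absence and expression features; y is 1 = resistant,
# 0 = susceptible (ground truth from agar dilution MICs).
rng = np.random.default_rng(0)
X = rng.random((414, 2000))
y = rng.integers(0, 2, size=414)

# Inner loop: tune C of a linear-kernel SVM; outer loop: macro F1 estimate.
inner = GridSearchCV(LinearSVC(dual=False, max_iter=10000),
                     param_grid={"C": np.logspace(-3, 2, 6)},
                     scoring="f1_macro", cv=5)
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(inner, X, y, scoring="f1_macro", cv=outer)
print(f"macro F1: {scores.mean():.2f} +/- {scores.std():.2f}")
```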
We furthermore observed similar macro F1-scores in the phylogenetically selected cross-validation (SVM: 0.87 ± 0.07; logistic regression: 0.86 ± 0.07; random forest: 0.72 ± 0.13), which suggests only a minor influence of the bacterial phylogeny on the classification performance. The performance on the phylogenetically selected held-out dataset was again comparable, although the random forest performance deteriorated in comparison with the cross-validation results (SVM: 0.86 ± 0.06; logistic regression: 0.83 ± 0.06; random forests: 0.56 ± 0.03).

Ciprofloxacin resistance and susceptibility based on SVMs could be correctly predicted with a sensitivity of 0.92 ± 0.01 and 0.87 ± 0.01, and with simultaneously high predictive values of 0.91 ± 0.01 and 0.90 ± 0.01, respectively, using solely SNP information. The sensitivity of 0.80 ± 0.04 and 0.79 ± 0.02 and predictive value of 0.73 ± 0.01 and 0.76 ± 0.02 for predicting ciprofloxacin susceptibility and resistance based exclusively on gene expression data were also high. However, there was no added value of using information on gene expression in addition to SNP information for the prediction of susceptibility/resistance toward ciprofloxacin.

For the prediction of tobramycin susceptibility and resistance, the machine learning classifiers performed almost equally well when the three input data types (SNPs, GPA, and gene expression) were used individually (values > 0.8). SNP information was predictive of tobramycin resistance; however, it did not further improve the classification performance when combined with the other data types. GPA information alone was the most important data type for classifying tobramycin resistance and susceptibility, providing sensitivity values of 0.84 ± 0.01 and 0.95 ± 0.01 and predictive values of 0.88 ± 0.01 and 0.93 ± 0.01, respectively. The performance of GPA-based prediction increased further when gene expression values were included (P-value of a one-sided t-test: 0.0069 based on the macro F1-score as determined in repeated cross-validation; sensitivity values of 0.89 ± 0.01 and 0.94 ± 0.01 for resistance and susceptibility prediction, respectively, and predictive values of 0.88 ± 0.01 and 0.95 ± 0.01).

For the correct prediction of meropenem resistance/susceptibility, gene presence/absence was most influential (sensitivity values of 0.87 ± 0.01 and 0.84 ± 0.01 for resistance and susceptibility prediction, respectively, and predictive values of 0.92 ± 0.00 and 0.74 ± 0.01). As observed for tobramycin, the combined use of genome-wide information on GPA and of information on gene expression increased the sensitivity to detect resistance as well as susceptibility to meropenem to 0.91 ± 0.02 and 0.86 ± 0.01 and the predictive values to 0.93 ± 0.01 and 0.81 ± 0.03, respectively (P-value of a one-sided t-test: 0.004).

For ceftazidime, using only information on gene presence/absence revealed a sensitivity of susceptibility/resistance prediction of 0.69 ± 0.01 and 0.66 ± 0.01, and predictive values of 0.66 ± 0.01 and 0.67 ± 0.01, respectively. Adding gene expression information considerably improved the sensitivity of susceptibility and resistance prediction to 0.83 ± 0.02 and 0.81 ± 0.02 and the predictive values to 0.81 ± 0.02 and 0.83 ± 0.01 (P-value of a one-sided t-test: 7.1 × 10⁻⁷).
In summary, for tobramycin, ceftazidime, and meropenem, combining GPA and expression information gave the most reliable classification results, whereas for ciprofloxacin we found that using SNPs alone provided the best performance (Table and ). Thus, for the remainder of the manuscript, we will focus on the results obtained with classifiers trained on those data type combinations.

A candidate drug resistance marker panel

We determined the minimal number of molecular features required to obtain the highest macro F1-score for each drug. We inferred the number of features contributing to the classification from the number of non-zero components of the SVM weight vectors, using a standard cross-validation setup. For each value of the C parameter, which controls the amount of regularization imposed on the model, the cross-validation procedure was repeated five times (Fig , ). Performance of antimicrobial resistance prediction peaked for the candidate classifiers using between 50 and 100 features. Notably, the ciprofloxacin classifier required only two SNPs until the learning curve performance was almost saturated, whereas classifiers of drugs that included expression and gene presence/absence markers required more features (> 50) to reach saturation. Next, we determined the C parameter resulting in the least complex SVM model within one standard deviation of the peak performance, i.e., with the best macro F1-score and as few features as possible for each drug (Friedman et al, ). We chose our candidate marker panel for each drug as the set of all non-zero features and designated the respective model as the most suitable diagnostic classifier. We used SNP information for ciprofloxacin resistance and susceptibility prediction and the combination of GPA and expression features for tobramycin, meropenem, and ceftazidime. We refer to each of these classifiers as the candidate classifier for susceptibility and resistance prediction for a particular drug. The ciprofloxacin candidate marker panel contained 50 SNPs. The meropenem, ceftazidime, and tobramycin marker lists consisted of 93, 37, and 59 expression and GPA features. The complete list of candidate markers for the prediction of resistance against the four antibiotics is given in . This list includes the candidate markers for the three input feature types, namely GPA, gene expression, and SNPs, alone and in combination. Table is a shortlist of the panel markers for each drug based on the data combination that had allowed us to train the most reliable classifier. To test the performance of the candidate marker panel-based classifiers on an independent set of clinical P. aeruginosa isolates, we used them to predict antibiotic resistance for the samples of the test dataset (Fig , ). On this held-out data, we obtained an F1-score for all drugs that was similarly high as before: namely, 0.95 for meropenem, 0.77 for ceftazidime, and 0.96 for tobramycin, using gene expression and gene presence/absence features, and 0.87 for ciprofloxacin using SNP information. These results indicate that the diagnostic classifiers have good generalization abilities when applied to new samples.
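One way to realize such a sparse marker panel, continuing the placeholder sketch above, is an L1-penalized linear SVM whose non-zero weights define the panel. Whether the study used an L1 penalty or a different sparsity mechanism is not stated, so the penalty choice here is an assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Continuing with X and y from the previous sketch: an L1 penalty drives most
# weights to exactly zero; the surviving features are the candidate markers.
# In the described procedure, C is chosen as the least complex model within
# one standard deviation of the peak macro F1; a single C is used here.
sparse_svm = LinearSVC(penalty="l1", dual=False, C=0.05,
                       max_iter=10000).fit(X, y)
panel = np.flatnonzero(sparse_svm.coef_.ravel())
print(f"{panel.size} non-zero features retained as the candidate panel")
```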
We trained the SVM classifiers on random subsamples of different sizes of the full dataset with 414 isolates. For each model, we recorded the macro F1‐score in five repeats of 10‐fold nested cross‐validation (Fig ). The classification performance saturates for all our classifiers well before using all available training samples, suggesting that when adding more isolates for resistance classification, the classification performance would improve only very slowly. Markers potentially remaining undiscovered in our study might have very small effect sizes, requiring much larger dataset sizes for their detection. Interestingly, the number of samples required until the performance curve plateaued depends on the drugs and data types used. For ciprofloxacin, the performance of susceptibility/resistance prediction based on SNPs saturated quickly, likely due to the large impact of the known mutations in the quinolone resistance‐determining region (QRDR), whereas the classifiers for the other three drugs, which were trained on expression and gene presence/absence information, required more samples until the F1‐score plateaued. For these classifiers, the dispersion of the macro F1‐score for subsets of the data with fewer samples is also considerably higher than for the ciprofloxacin SNP models. Performance estimation stratifying by sequence type suggests some influence of the bacterial phylogeny on the prediction In P. aeruginosa , different phylo‐groups might contain different antibiotic resistance genes or mutations alone or in combinations. Thus, if there was an association of distinct resistance‐conferring genes with certain phylo‐groups, our machine learning approach might identify markers that distinguish between different phylo‐groups rather than between susceptible and resistant clinical isolates. In Figs , , , , we show susceptibility and resistance of each isolate in the context of the phylogenetic tree as predicted by the diagnostic classifier and based on AST for each of the drug. To assess whether our predictive markers are biased by the phylogenetic structure of the clinical isolate collection, we assessed classification robustness in a block cross‐validation approach. Here, isolates of phylo‐groups with differing sequence types as determined by MLST were grouped into blocks and all isolates of a given block were only allowed to be either in the training or test folds (Figs and ). In addition, instead of using a random assignment of strains into test and training dataset, we analyzed the performance only allowing strains in a test dataset corresponding to the block cross‐validation training dataset with sequence types that were not already included in this training dataset. For all classifiers including our candidate diagnostic classifiers, we found that the block cross‐validation performance estimates were slightly lower than those obtained using a sequence type‐unaware estimation (F1‐score difference between ~ 0.03 and 0.05 for the diagnostic classifiers). This was particularly apparent for some suboptimal data type combinations, such as for predicting tobramycin resistance using SNPs or gene expression, where a substantially lower discriminative performance was achieved in block‐ compared to random cross‐validation (macro F1‐score difference > 0.1, ). Interestingly, we observed that the ranking of the performance by data type remained almost identical for all drugs. 
Overall, the performance estimates we obtained using this phylogenetically insulated test dataset were comparable to the block cross‐validation estimates, only tobramycin resistance prediction using classifiers trained fully or partly on SNPs dropped considerably in performance. In summary, this confirmed that the various P . aeruginosa phylogenetic subgroups possess similar mechanisms and molecular markers for the resistance phenotype and that the identified markers are largely distinctive for resistance/susceptibility instead of phylogenetic relationships using most data type combinations. Despite the observed independence of the presence of genetic resistance markers and bacterial phylogeny, for some antibiotics and data types we also found a non‐negligible phylo‐group‐dependent performance effect. This underlines the importance of assessing the impact of the phylogeny on the antimicrobial resistance prediction. Misclassified isolates are more frequent near the MIC breakpoints We tested whether we could detect an overrepresentation of misclassified samples among the samples with a MIC value close to the breakpoints compared to samples with higher or lower MIC values, selecting samples from equidistant intervals (in log space) around the breakpoint. We report only the strongest overrepresentation for each drug after multiple testing correction. For ciprofloxacin, significantly more samples with a MIC between 0.5 and 8 were misclassified (31 of 139 samples (22%)) than samples with a MIC smaller than 0.5 or larger than 8 (7 of 219 samples (3%)) (Fisher's exact test with an FDR‐adjusted P ‐value of 6.2 × 10 −8 ; Fig ). For ceftazidime, we found that 46 of 177 samples (26%) with a MIC between 4 and 64 were misclassified whereas only 21 of 157 (13%) of samples with a MIC smaller or higher than those values were misclassified (adjusted P ‐value: 0.014). For meropenem, we found that 26 of 207 samples (13%) with a MIC between 1 and 16 were misclassified, but only 8 of 147 (5%) of all samples with a MIC smaller or higher than those values were misclassified (adjusted P ‐value: 0.05). For tobramycin, no significant difference was found.
Pseudomonas aeruginosa isolates
A total of 414 P. aeruginosa isolates were collected from clinical microbiology laboratories of hospitals across Germany and at sites in Spain, Hungary, and Romania (Fig A). For all isolates, the genomic DNA was sequenced and transcriptional profiles were recorded. This enabled us to use not only the full genomic information but also information on the gene expression profiles as an input to machine learning approaches. We inferred a maximum likelihood phylogenetic tree based on variant nucleotide sites (Fig B). The tree was constructed by mapping the sequencing reads of each isolate to the genome of the P. aeruginosa PA14 reference strain and then aligning the consensus sequences for each gene. The isolates exhibited a broad taxonomic distribution and separated into two major phylogenetic groups. One included PAO1, PACS2, LESB58, and a cluster of high-risk clone ST175 isolates; the other included PA14, as well as one large cluster of high-risk clone ST235 isolates. Both groups comprised several further clades with closely related isolates of the same sequence type as determined by multilocus sequence typing (MLST). Next, we recorded antibiotic resistance profiles for all isolates for the four common anti-pseudomonas antimicrobials tobramycin (TOB), ceftazidime (CAZ), ciprofloxacin (CIP), and meropenem (MEM) (Bassetti et al, ; Cardozo et al, ; Tümmler, ) using the agar dilution method. Most isolates of our clinical collection exhibited antibiotic resistance against these four antibiotics (Fig C, ). One-third had a multidrug-resistant (MDR) phenotype, defined as non-susceptible to at least three different classes of antibiotics (Magiorakos et al, ).
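The MDR definition used here (non-susceptible to at least three antibiotic classes) is straightforward to operationalize. The following is a minimal sketch in Python, assuming a hypothetical per-class AST table; the column names and values are illustrative placeholders, not the study's actual data layout.

```python
import pandas as pd

# Hypothetical AST calls per antibiotic class ("R"/"I"/"S"); the column
# names and values are placeholders for illustration.
ast = pd.DataFrame(
    {"aminoglycosides":  ["R", "S", "R"],
     "fluoroquinolones": ["R", "S", "R"],
     "carbapenems":      ["R", "S", "S"],
     "cephalosporins":   ["S", "S", "R"]},
    index=["isolate_1", "isolate_2", "isolate_3"],
)

# MDR: non-susceptible (anything other than "S") to >= 3 antibiotic classes.
ast["MDR"] = (ast != "S").sum(axis=1) >= 3
print(ast["MDR"])
```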
We used the genomic and transcriptomic data of the clinical P. aeruginosa isolates to infer resistance and susceptibility phenotypes to ceftazidime, meropenem, ciprofloxacin, and tobramycin with machine learning classifiers. For each antibiotic, we included all respective isolates categorized as either "resistant" or "susceptible". For the genomic data, we included sequence variations (single nucleotide polymorphisms; SNPs, including small indels) and gene presence or absence (GPA) as features. In total, we analyzed 255,868 SNPs, represented by 65,817 groups with identical distributions of SNPs across isolates for the same group, and 76,493 gene families with presence or absence information, corresponding to 14,700 groups of identically distributed gene families. 1,306 of these gene families had an indel in some isolate genomes, which we included as an additional feature. We evaluated SNP and GPA groups in combination with gene expression information for 6,026 genes (Fig ). For each drug, we randomly assigned 80% of the resistant and susceptible isolates, respectively, to a training set and the remaining 20% to a test set. Parameters of machine learning models were optimized on the training set and their performance assessed in cross-validation, while the test set was used to obtain an additional independent performance estimate. Bacterial population structure can influence machine learning outcomes; for example, it has been shown in Escherichia coli that phylo-group-specific markers alone can predict antibiotic resistance phenotypes with accuracies of 0.65–0.91, depending on the antibiotic (Moradigaravand et al, ). We therefore also assessed performance while accounting for population structure based on sequence types through a block cross-validation approach. We trained several machine learning classification methods on SNPs, GPA, and expression features, individually and in combination, to predict antibiotic susceptibility or resistance of isolates and evaluated the classifier performances. We determined MIC (minimal inhibitory concentration) values of all clinical isolates with agar dilution according to CLSI guidelines (CLSI, ) to use as the gold standard for evaluation purposes. We calculated the sensitivity and predictive value of resistance (R) and susceptibility (S) assignments, as well as the macro F1-score as an overall performance measure, for each classifier trained on a specific data type combination. The sensitivity reflects how well a classifier recovers the assignments of the underlying gold standard, representing the fraction of susceptible or resistant samples, respectively, that is correctly assigned. The predictive value reflects how trustworthy the assignments of the classifier are, representing the fraction of correct assignments among all susceptible or resistant assignments, respectively. The F1-score is the harmonic mean of the sensitivity and predictive value for a particular class, i.e., susceptible or resistant; the macro F1-score is the average over the two F1-scores. We used the support vector machine (SVM) classifier with a linear kernel, as in Weimann et al, to predict sensitivity or resistance to four different antibiotics. Parameters were optimized in nested cross-validation, and performance estimates were averaged over five repeats of this setup. The combined use of (i) GPA, (ii) SNPs, and (iii) information on gene expression resulted in high (0.8–0.9) or very high (> 0.9) sensitivity and predictive values (Fig ).
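These performance measures map directly onto standard per-class metrics: the sensitivity of a class is its recall, the predictive value is its precision, and the macro F1-score averages the two per-class F1-scores. A small illustration with scikit-learn, on made-up labels:

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy gold-standard AST labels and classifier assignments
# (1 = resistant, 0 = susceptible); values are illustrative only.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])

# Sensitivity of a class = recall for that class; predictive value =
# precision for that class; macro F1 averages the per-class F1-scores.
sens_r = recall_score(y_true, y_pred, pos_label=1)
sens_s = recall_score(y_true, y_pred, pos_label=0)
ppv_r = precision_score(y_true, y_pred, pos_label=1)
ppv_s = precision_score(y_true, y_pred, pos_label=0)
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(sens_r, sens_s, ppv_r, ppv_s, macro_f1)
```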
Notably, the relative contribution of the different information sources to the susceptibility and resistance sensitivity strongly depended on the antibiotic. To assess the effect of the classification technique, we compared the performance of an SVM classifier with a linear kernel to that of random forests and logistic regression, which we and others have successfully used for related phenotype prediction problems (Asgari et al, ; Her & Wu, ; Wheeler et al, ). For this purpose, we used the data type combination with the best macro F1-score in resistance prediction with the SVM. We evaluated the classification performance in nested cross-validation and on a held-out test dataset. In addition, we performed a phylogeny-aware partitioning of our dataset to assess the phylogenetic generalization ability of our technique. The performance of the SVM in random cross-validation was comparable to logistic regression (macro F1-score for the SVM: 0.83 ± 0.06 vs. logistic regression: 0.84 ± 0.06), but considerably better than the random forest classifiers (0.67 ± 0.14; , ). The performance on the held-out dataset was in a comparable range (SVM: 0.87 ± 0.07; logistic regression: 0.90 ± 0.04; random forest 0.71 ± 0.16). We furthermore observed similar macro F1-scores in the phylogenetically selected cross-validation (SVM: 0.87 ± 0.07; logistic regression: 0.86 ± 0.07; random forest 0.72 ± 0.13), which suggests only a minor influence of the bacterial phylogeny on the classification performance. The performance on the phylogenetically selected held-out dataset was again comparable, though the random forest deteriorated in comparison with the cross-validation results (SVM: 0.86 ± 0.06; logistic regression 0.83 ± 0.06; random forests 0.56 ± 0.03). Ciprofloxacin resistance and susceptibility based on SVMs could be correctly predicted with a sensitivity of 0.92 ± 0.01 and 0.87 ± 0.01, and with simultaneously high predictive values of 0.91 ± 0.01 and 0.90 ± 0.01, respectively, using solely SNP information. The sensitivities of 0.80 ± 0.04 and 0.79 ± 0.02 and predictive values of 0.73 ± 0.01 and 0.76 ± 0.02 for predicting ciprofloxacin susceptibility and resistance based exclusively on gene expression data were also high. However, there was no added value of using information on gene expression in addition to SNP information for the prediction of susceptibility/resistance toward ciprofloxacin. For the prediction of tobramycin susceptibility and resistance, the machine learning classifiers performed almost equally well when the three input data types (SNPs, GPA, and gene expression) were used individually (values > 0.8). SNP information was predictive of tobramycin resistance; however, it did not further improve the classification performance when combined with the other data types. GPA information alone was the most important data type for classifying tobramycin resistance and susceptibility, providing sensitivity values of 0.84 ± 0.01 and 0.95 ± 0.01 and predictive values of 0.88 ± 0.01 and 0.93 ± 0.01, respectively. The performance of GPA-based prediction increased further when gene expression values were included (P-value of a one-sided t-test: 0.0069, based on the macro F1-score as determined in repeated cross-validation; sensitivity values of 0.89 ± 0.01 and 0.94 ± 0.01 for resistance and susceptibility prediction, respectively, and predictive values of 0.88 ± 0.01 and 0.95 ± 0.01).
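A nested cross-validation comparison of this kind can be sketched as follows; the hyperparameter grids, fold counts, and random data are placeholder assumptions rather than the exact settings used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))      # placeholder feature matrix
y = rng.integers(0, 2, size=120)    # placeholder resistance labels

# Inner loop: hyperparameter tuning; outer loop: unbiased performance estimate.
models = {
    "svm": GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10]},
                        scoring="f1_macro", cv=5),
    "logreg": GridSearchCV(LogisticRegression(max_iter=5000),
                           {"C": [0.01, 0.1, 1, 10]}, scoring="f1_macro", cv=5),
    "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 300]}, scoring="f1_macro", cv=5),
}

outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, scoring="f1_macro", cv=outer)
    print(name, scores.mean(), scores.std())
```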
For the correct prediction of meropenem resistance/susceptibility, gene presence/absence was most influential (sensitivity values of 0.87 ± 0.01 and 0.84 ± 0.01 for resistance and susceptibility prediction, respectively, and predictive values of 0.92 ± 0.00 and 0.74 ± 0.01). As observed for tobramycin, the combined use of genome-wide information on GPA and of information on gene expression increased the sensitivity to detect resistance as well as susceptibility to meropenem to 0.91 ± 0.02 and 0.86 ± 0.01 and the predictive values to 0.93 ± 0.01 and 0.81 ± 0.03, respectively (P-value of a one-sided t-test: 0.004). For ceftazidime, using only information on gene presence/absence yielded sensitivities for susceptibility/resistance prediction of 0.69 ± 0.01 and 0.66 ± 0.01, and predictive values of 0.66 ± 0.01 and 0.67 ± 0.01, respectively. Adding gene expression information considerably improved the susceptibility and resistance sensitivities to 0.83 ± 0.02 and 0.81 ± 0.02 and the predictive values to 0.81 ± 0.02 and 0.83 ± 0.01 (P-value of a one-sided t-test: 7.1 × 10−7). In summary, for tobramycin, ceftazidime, and meropenem, combining GPA and expression information gave the most reliable classification results, whereas for ciprofloxacin we found that using SNPs alone provided the best performance (Table and ). Thus, for the remainder of the manuscript, we focus on the results obtained with classifiers trained on those data type combinations.
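Combining the data types for a linear classifier amounts to concatenating the feature blocks; per the Methods below, feature sets that include expression values are standardized, while purely binary sets are left untransformed. A minimal sketch with placeholder shapes:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 300))           # gene expression (continuous)
gpa = rng.integers(0, 2, size=(100, 500))    # gene presence/absence (binary)

# Combined GPA + expression matrix; because it contains expression values,
# the whole block is standardized to zero mean and unit variance.
X_combined = StandardScaler().fit_transform(np.hstack([expr, gpa]))
print(X_combined.shape)   # (100, 800)
```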
A candidate drug resistance marker panel
We determined the minimal number of molecular features required to obtain the highest macro F1-score for each drug. We inferred the number of features contributing to the classification from the number of non-zero components of the SVM weight vectors, using a standard cross-validation setup. For each value of the C parameter, which controls the amount of regularization imposed on the model, the cross-validation procedure was repeated five times (Fig , ). The performance of antimicrobial resistance prediction peaked for the candidate classifiers using between 50 and 100 features. Notably, the ciprofloxacin classifier required only two SNPs until the learning curve performance was almost saturated, whereas classifiers of drugs that included expression and gene presence/absence markers required more features (> 50) to reach saturation. Next, we determined the C parameter resulting in the least complex SVM model within one standard deviation of the peak performance, i.e., with the best macro F1-score and as few features as possible for each drug (Friedman et al, ). We chose our candidate marker panel for each drug as the set of all non-zero features and designated the respective model as the most suitable diagnostic classifier. We used SNP information for ciprofloxacin resistance and susceptibility prediction and the combination of GPA and expression features for tobramycin, meropenem, and ceftazidime. We refer to each of these classifiers as the candidate classifier for susceptibility and resistance prediction for a particular drug. The ciprofloxacin candidate marker panel contained 50 SNPs. The meropenem, ceftazidime, and tobramycin marker lists consisted of 93, 37, and 59 expression and GPA features, respectively. The complete list of candidate markers for the prediction of resistance against the four antibiotics is given in . This list includes the candidate markers for the three input feature types, namely GPA, gene expression, and SNPs, alone and in combination. Table is a shortlist of the panel markers for each drug based on the data type combination that had allowed us to train the most reliable classifier. To test the performance of the candidate marker panel-based classifiers on an independent set of clinical P. aeruginosa isolates, we used them to predict antibiotic resistance for the samples of the test dataset (Fig , ). On this held-out data, we obtained an F1-score for all drugs that was similarly high as before: namely, 0.95 for meropenem, 0.77 for ceftazidime, and 0.96 for tobramycin, using gene expression and gene presence/absence features, and 0.87 for ciprofloxacin using SNP information. These results indicate that the diagnostic classifiers have good generalization abilities when applied to new samples. We observed more variability across drugs than in nested cross-validation, which is expected due to the smaller size of the test set.
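The trade-off between the C parameter, model sparsity, and performance can be reproduced in outline with an L1-regularized linear SVM, counting non-zero weights per C and then applying the one-standard-deviation rule; the C grid and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 200))      # placeholder feature matrix
y = rng.integers(0, 2, size=150)     # placeholder resistance labels

results = []
for C in [0.001, 0.01, 0.1, 1.0, 10.0]:
    clf = LinearSVC(penalty="l1", dual=False, C=C, max_iter=20000)
    scores = cross_val_score(clf, X, y, scoring="f1_macro", cv=5)
    # Refit on all data just to count non-zero weight components.
    n_nonzero = int(np.sum(clf.fit(X, y).coef_ != 0))
    results.append((C, scores.mean(), scores.std(), n_nonzero))

# One-standard-deviation rule: among all models whose mean score lies
# within one SD of the best mean, keep the one with the fewest features.
best = max(results, key=lambda r: r[1])
candidates = [r for r in results if r[1] >= best[1] - best[2]]
chosen = min(candidates, key=lambda r: r[3])
print(chosen)
```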
Improvement of assignment accuracy with increasing sample numbers
We next investigated how prediction performance depended on the number of samples used for classifier training. We trained the SVM classifiers on random subsamples of different sizes of the full dataset of 414 isolates. For each model, we recorded the macro F1-score in five repeats of 10-fold nested cross-validation (Fig ). The classification performance saturated for all our classifiers well before all available training samples were used, suggesting that adding more isolates would improve resistance classification only very slowly. Markers potentially remaining undiscovered in our study might have very small effect sizes, requiring much larger datasets for their detection. Interestingly, the number of samples required until the performance curve plateaued depended on the drug and the data types used. For ciprofloxacin, the performance of susceptibility/resistance prediction based on SNPs saturated quickly, likely due to the large impact of the known mutations in the quinolone resistance-determining region (QRDR), whereas the classifiers for the other three drugs, which were trained on expression and gene presence/absence information, required more samples until the F1-score plateaued. For these classifiers, the dispersion of the macro F1-score for smaller subsets of the data was also considerably higher than for the ciprofloxacin SNP models.
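Such learning curves can be generated with scikit-learn's learning_curve utility, which repeatedly retrains on growing random subsets; the sizes, folds, and data here are placeholders.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, learning_curve
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))       # placeholder feature matrix
y = rng.integers(0, 2, size=200)     # placeholder resistance labels

# Validation macro F1 as a function of the number of training samples.
sizes, train_scores, val_scores = learning_curve(
    SVC(kernel="linear", C=1.0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),
    scoring="f1_macro",
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
for n, scores in zip(sizes, val_scores):
    print(n, scores.mean(), scores.std())
```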
Performance estimation stratifying by sequence type suggests some influence of the bacterial phylogeny on the prediction
In P. aeruginosa, different phylo-groups might contain different antibiotic resistance genes or mutations, alone or in combination. Thus, if distinct resistance-conferring genes were associated with certain phylo-groups, our machine learning approach might identify markers that distinguish between phylo-groups rather than between susceptible and resistant clinical isolates. In Figs , , , , we show the susceptibility and resistance of each isolate in the context of the phylogenetic tree, as predicted by the diagnostic classifier and based on AST, for each of the drugs. To assess whether our predictive markers are biased by the phylogenetic structure of the clinical isolate collection, we assessed classification robustness in a block cross-validation approach. Here, isolates were grouped into blocks by sequence type as determined by MLST, and all isolates of a given block were restricted to either the training or the test folds (Figs and ). In addition, instead of randomly assigning strains to training and test datasets, we evaluated performance on a test dataset restricted to sequence types not already included in the corresponding block cross-validation training dataset. For all classifiers, including our candidate diagnostic classifiers, we found that the block cross-validation performance estimates were slightly lower than those obtained using a sequence type-unaware estimation (F1-score difference between ~ 0.03 and 0.05 for the diagnostic classifiers). This was particularly apparent for some suboptimal data type combinations, such as predicting tobramycin resistance using SNPs or gene expression, where a substantially lower discriminative performance was achieved in block compared to random cross-validation (macro F1-score difference > 0.1, ). Interestingly, we observed that the ranking of the performance by data type remained almost identical for all drugs. Overall, the performance estimates we obtained using this phylogenetically insulated test dataset were comparable to the block cross-validation estimates; only tobramycin resistance prediction using classifiers trained fully or partly on SNPs dropped considerably in performance. In summary, this confirmed that the various P. aeruginosa phylogenetic subgroups possess similar mechanisms and molecular markers of the resistance phenotype and that, for most data type combinations, the identified markers are largely distinctive for resistance/susceptibility rather than phylogenetic relationships. Despite the observed independence of genetic resistance markers and bacterial phylogeny, for some antibiotics and data types we also found a non-negligible phylo-group-dependent performance effect. This underlines the importance of assessing the impact of the phylogeny on antimicrobial resistance prediction.
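Block cross-validation of this kind corresponds to grouped cross-validation, where all isolates sharing a sequence type are confined to the same fold. A minimal sketch using scikit-learn's GroupKFold, with hypothetical sequence-type labels:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 30))       # placeholder feature matrix
y = rng.integers(0, 2, size=120)     # placeholder resistance labels
# Hypothetical sequence-type labels; all isolates of one sequence type end
# up in the same fold, so each test fold contains unseen sequence types only.
sequence_type = rng.integers(0, 15, size=120)

scores = cross_val_score(SVC(kernel="linear"), X, y,
                         groups=sequence_type,
                         cv=GroupKFold(n_splits=5),
                         scoring="f1_macro")
print(scores.mean(), scores.std())
```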
Misclassified isolates are more frequent near the MIC breakpoints
We tested whether misclassified samples were overrepresented among samples with a MIC value close to the breakpoints compared to samples with higher or lower MIC values, selecting samples from equidistant intervals (in log space) around the breakpoint. We report only the strongest overrepresentation for each drug after multiple testing correction. For ciprofloxacin, significantly more samples with a MIC between 0.5 and 8 were misclassified (31 of 139 samples (22%)) than samples with a MIC smaller than 0.5 or larger than 8 (7 of 219 samples (3%)) (Fisher's exact test with an FDR-adjusted P-value of 6.2 × 10−8; Fig ). For ceftazidime, we found that 46 of 177 samples (26%) with a MIC between 4 and 64 were misclassified, whereas only 21 of 157 samples (13%) with a MIC below or above this range were misclassified (adjusted P-value: 0.014). For meropenem, we found that 26 of 207 samples (13%) with a MIC between 1 and 16 were misclassified, but only 8 of 147 samples (5%) with a MIC below or above this range (adjusted P-value: 0.05). For tobramycin, no significant difference was found.
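The enrichment test behind these numbers is a one-sided Fisher's exact test on a 2x2 table of misclassified versus correctly classified samples inside and outside the breakpoint-centered MIC interval, followed by FDR correction across candidate intervals. Using the ciprofloxacin counts reported above (the additional P-values passed to the correction are placeholders):

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Ciprofloxacin counts from the text: misclassified vs. correctly classified
# samples, inside vs. outside the MIC interval around the breakpoint.
table = [[31, 139 - 31],   # MIC between 0.5 and 8
         [7, 219 - 7]]     # MIC < 0.5 or > 8
odds_ratio, p = fisher_exact(table, alternative="greater")

# With several candidate intervals per drug, raw P-values are adjusted,
# e.g., with Benjamini-Hochberg; the other two values are illustrative.
reject, p_adj, _, _ = multipletests([p, 0.02, 0.3], method="fdr_bh")
print(odds_ratio, p, p_adj)
```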
One of the most powerful weapons in the battle against drug-resistant infections is rapid resistance diagnostics. Earlier and more detailed information on a pathogen's antimicrobial resistance profile has the potential to change antimicrobial prescribing behavior and improve patient outcomes. The demand for faster results has initiated the investigation of molecular alternatives to today's culture-based clinical microbiology procedures. However, for the successful implementation of robust and reliable molecular tools, it is critical to identify the entirety of the molecular determinants of resistance. Failure to detect resistance can lead to the administration of ineffective or suboptimal antimicrobial treatment. This has direct consequences for the patient and poses significant risks, especially for the critically ill. Conversely, failing to identify susceptibility may result in avoiding a drug that would in fact be suitable to treat the pathogen, in the extreme case leading to patient death due to an apparent lack of treatment options. Overtreatment and the needless use of broad-spectrum antibiotics could also be a consequence. This drives costs in the hospital, puts patients at risk of more severe side effects, and may contribute to the development of drug resistance by applying undesired selective pressures. In this study, we show that, without any prior knowledge of the molecular mechanisms of resistance, machine learning approaches using genomic and transcriptomic features can provide high antibiotic resistance assignment capabilities for the opportunistic pathogen P. aeruginosa. The performance of drug resistance prediction was strongly dependent on the antibiotic. Ciprofloxacin resistance and susceptibility prediction mostly relied on SNP information. In particular, two SNPs in the quinolone resistance-determining region (QRDR) of gyrA and parC had the strongest impact on the classification. This is an expected finding, as quinolone antibiotics act by binding to their targets, gyrase and topoisomerase IV (Bruchmann et al, ), and target-mediated resistance caused by specific mutations in the encoding genes is the most common and clinically significant form of resistance (del Barrio-Tofiño et al, ). Although the sensitivity to predict resistance and susceptibility from gene expression data alone was also high for ciprofloxacin, there was no added value of using information on gene expression in addition to SNP information. Nevertheless, for the design of a diagnostic test system, it might be of value to also include gene expression information as a fail-safe strategy. Interestingly, among the gene expression features associated with ciprofloxacin susceptibility/resistance, we found prtN, which is involved in pyocin production. Enhanced pyocin production is, like the SOS response, induced under DNA-damaging stress conditions (Migliorini et al, ) and was recently reported to contribute to ciprofloxacin resistance (Fan et al, ). For the prediction of tobramycin susceptibility and resistance, the machine learning classifiers performed almost equally well when the three input data types (SNPs, GPA, and gene expression) were used individually (sensitivity and predictive values > 0.8). Remarkably, the combined use of the GPA and gene expression datasets improved the classification performance.
Although SNP information was also predictive of tobramycin resistance, it did not further improve the classification performance when combined with the other feature types. GPA information alone was the most important data type for classifying tobramycin resistance or susceptibility. The majority of aminoglycoside-resistant clinical isolates harbor genes encoding aminoglycoside-modifying enzymes (AMEs). The AMEs are very diverse but are usually encoded by genes located on mobile genetic elements, including integrons and transposons. Accordingly, markers indicating the presence of these mobile elements were found to be strongly associated with tobramycin resistance (e.g., qacEdelta1, sul1, or folP). However, the most influential discriminator was the presence of the emrE gene. EmrE has been described to directly impact aminoglycoside resistance by mediating the extrusion of small polyaromatic cations (Li et al, ). In addition, we identified the presence of ptsP (encoding phosphoenolpyruvate protein phosphotransferase) as an important marker for tobramycin resistance. This gene has previously been associated with tobramycin resistance in a transposon mutant library screen (Schurek et al, ). The performance of GPA-based prediction increased further when gene expression values were included. We found, e.g., amrB (mexY), which encodes a multidrug efflux pump known to confer aminoglycoside resistance (Westbrock-Wadman et al, ; Lau et al, ), among the top candidates within the marker panel. This confirms that expression of efflux pumps is an important bacterial trait that drives the resistance phenotype in P. aeruginosa. Tobramycin resistance/susceptibility was also associated with altered expression of, or SNPs within, genes involved in type 4 pili motility (pilB, pilV2, pilC, and pilH) and the type three secretion system (pcr genes). Although the connection to tobramycin resistance might not be immediately obvious, it has been proposed that surface motility can lead to extensive multidrug adaptive resistance as a result of the collective dysregulation of diverse genes (Sun et al, ). For the correct prediction of meropenem resistance/susceptibility, gene presence/absence was most influential. Interestingly, in contrast to tobramycin resistance classification, we observed a substantial accumulation of indels in specific marker genes. Among these marker genes were ftsY, involved in targeting and insertion of nascent membrane proteins into the cytoplasmic membrane, czcD, encoding a cobalt–zinc–cadmium efflux protein, and oprD. Inactivation of the porin OprD is the leading cause of carbapenem non-susceptibility in clinical isolates (Köhler et al, ). As expected, decreased oprD gene expression in the resistant group of isolates was also identified as an important discriminator. Interestingly though, the most important gene expression marker was not the down-regulated oprD, but up-regulation of gbuA, encoding a guanidinobutyrase in the arginine dehydrogenase pathway, in the meropenem-resistant group of isolates. It is known that arginine metabolism plays a critical role during host adaptation and persistence (Hogardt & Heesemann, ). Interestingly, it was also described before that GbuA is linked to virulence factor expression and the production of pyocyanin (Jagmann et al, ). Our results indicate that up-regulation of gbuA might be the result of a not fully functional OprD porin.
Since OprD has been shown to be involved in arginine uptake (Tamber & Hancock, ), one might speculate that a lack of arginine due to a non-functional OprD triggers the expression of gbuA to compensate for the fitness defect of the oprD mutant. Furthermore, components encoding the MexAB-OprM efflux pump (mexB, oprM) were identified as important features associated with resistance. This efflux pump is known to export beta-lactams, including meropenem (Li et al, ; Srikumar et al, ; Clermont et al, ). As observed for tobramycin, the correct prediction of ceftazidime resistance/susceptibility was strongly influenced by both gene expression values (here ampC, fpvA, pvdD, and algF) and gene presence/absence (including the presence of mobile genetic elements). While AmpC is a known intrinsic beta-lactamase able to hydrolyze cephalosporins (Lister et al, ), the association of ceftazidime resistance with expression variations in fpvA, pvdD, and algF, involved in the uptake of iron and the production of alginate, respectively, is less clear. Interestingly, sequence variations in regulators such as AmpR, AmpG, AmpD (including AmpD homologs), and mpl, as well as alterations in penicillin-binding proteins such as PBP4 (dacB), have been described to trigger constitutive ampC overexpression (Bagge et al, ; Juan et al, , ; Schmidtke & Hanson, ; Moya et al, ; Balasubramanian et al, ; Cabot et al, ). AmpR, however, not only controls ampC expression but has also been described as a global regulator of resistance and virulence in P. aeruginosa and as an important acute–chronic switch regulator (Balasubramanian et al, ). As such, AmpR is also involved in the regulation of alginate production as well as iron acquisition via siderophores. This might explain why expression of fpvA, pvdD, and algF was found to be associated with ceftazidime resistance. Since the machine learning approach did not identify any of the previously described sequence variations in the various regulators of ampC expression, we re-analyzed these regulators in more detail. Interestingly, we identified a small number of isolates in the resistant group (11 of 165) harboring an R504C substitution in the gene ftsI (PBP3). Mutations in PBP3 have been described to represent an AmpC-independent route of resistance evolution in vitro and to occur upon beta-lactam treatment in vivo (Cabot et al, , ; López-Causapé et al, ). In particular, the R504C substitution has been found in clinical cystic fibrosis isolates and contributes to ceftazidime resistance (López-Causapé et al, ). However, all but three of our CAZ-resistant isolates with an R504C mutation in ftsI likewise showed strong ampC overexpression, most likely explaining why ftsI was not identified as a discriminative marker in our analysis, despite clearly harboring resistance-associated mutations. Adding gene expression information considerably improved susceptibility and resistance sensitivity for ceftazidime, an effect not observed on a similar scale for any other antibiotic. Interestingly, although we observed widely overlapping resistance profiles for all antibiotics (Fig ), we did not observe a strong co-resistance bias in the identified markers. For example, among the best performing classifiers for meropenem, ceftazidime, and tobramycin, there were overlapping markers only between ceftazidime and tobramycin.
These included expression of PA14_15420 and presence of A7J11_02078/sul1/folP_2, group_282, group_3462, and group_5517, which account for 5/59 and 5/37 of the total features, or 14.7%/17.1% of the total weight, of the ceftazidime and tobramycin SVM classifiers, respectively. Group_282, group_3462, and group_5517 are hypothetical genes. Sul1, which is located on mobile elements (usually class 1 integrons), could indicate that the shared signal of the tobramycin and ceftazidime classifiers is due to resistance genes being found on the same resistance cassettes, as class 1 integrons carrying beta-lactamases as well as aminoglycoside-modifying enzymes are frequently detected (Poirel et al, ; Fonseca et al, ). In conclusion, we demonstrate that extending the genetic features (SNPs and gene presence/absence) with gene expression values is key to improving performance. The relative contribution of the different categories of biomarkers to susceptibility and resistance sensitivity thereby strongly depended on the antibiotic. This is in stark contrast to the prediction of antibiotic resistance in many Enterobacteriaceae, where knowledge of the presence of resistance-conferring genes, such as beta-lactamases, is usually sufficient to correctly predict the susceptibility profiles. However, analysis of the gene expression marker lists revealed that the resistance phenotype in the opportunistic pathogen P. aeruginosa (and possibly also in other non-fermenters) is multifactorial and that changes in gene expression can shift the resistance phenotype quite substantially. Intriguingly, we found that the performance of our classifiers improved if the isolates exhibited MIC values that were not close to the breakpoint. This was especially apparent for ciprofloxacin. It has been demonstrated that patients treated with levofloxacin for bloodstream infections caused by Gram-negative organisms for which MICs were elevated, yet still in the susceptible category, had worse outcomes than similar patients infected with organisms for which MICs were lower (Defife et al, ). A possible explanation for treatment failure could be the presence of first-step mutations in gyrA that lead to MIC values near the breakpoint. If subjected to quinolones, those isolates can rapidly acquire second-step mutations in parC, resulting in a fully resistant phenotype. An additional explanation might be that MIC measurements generally have a low level of reproducibility (Turnidge & Paterson, ; Juan et al, ; Javed et al, ). Inaccurate categorization due to testing uncertainty near the MIC breakpoint can explain failures in the assignment of drug resistance by the machine learning classifiers. Capturing the full repertoire of markers that are relevant for predicting antimicrobial resistance in P. aeruginosa will require further studies to expand the predictive power of the established marker lists. The samples misclassified on the basis of these marker lists in our study represent a valuable resource for uncovering further resistance mutations. The broad use of molecular diagnostic tests promises more detailed and timelier information on antimicrobial-resistant phenotypes. This would enable the implementation of early, more targeted, and thus more effective antimicrobial therapy for improved patient care. Importantly, a molecular assay system can easily be expanded to test for additional information, such as the clonal identity of the bacterial pathogen or the presence of critical virulence traits.
Thus, the availability of molecular diagnostic test systems can also provide prognostic markers for disease outcome and give valuable information on the clonal spread of pathogens in the hospital setting. However, to realize the full potential of the envisaged molecular diagnostics, clinical studies will be needed to demonstrate that broad application of such test systems will have an impact on clinical decision-making, provide the basis for more efficient antibiotic use, and also decrease the costs of care.
Strain collection and antibiotic resistance profiling
Our study included 414 clinical P. aeruginosa isolates provided by different clinics and research institutions: 350 isolates were collected in Germany (138 at the Charité Berlin (CH), 89 at the University Hospital in Frankfurt (F), 39 at the Hannover Medical School (MHH), and 84 at various other locations). Sixty-two isolates were provided by a Spanish strain collection located at the Son Espases University Hospital in Palma de Mallorca (ESP), and one sample each originated from Hungary and Romania. All clinical isolates were tested for their susceptibility toward the four common anti-pseudomonas antibiotics tobramycin (TOB), ciprofloxacin (CIP), meropenem (MEM), and ceftazidime (CAZ). Minimal inhibitory concentration (MIC) testing and breakpoint determination were performed by agar dilution according to Clinical & Laboratory Standards Institute (CLSI) guidelines (CLSI, ). MIC testing was performed in triplicate for all isolates. If results varied, up to five replicates were used. Only isolates with at least three matching results were included in the study. Most of the isolates were categorized as multidrug-resistant (resistant to three or more antimicrobial classes, ). As reference for differential gene expression and sequence variation analysis, the UCBPP-PA14 strain was chosen.
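The replicate-consistency rule (at least three matching MIC results out of up to five replicates) can be expressed as a small consensus filter; the function below is an illustrative sketch of that logic, not the laboratory workflow itself.

```python
from collections import Counter

def consensus_mic(replicates, min_agreement=3, max_replicates=5):
    """Return the MIC value supported by at least `min_agreement` replicates,
    or None if no consensus is reached within `max_replicates` measurements."""
    counts = Counter(replicates[:max_replicates])
    value, n = counts.most_common(1)[0]
    return value if n >= min_agreement else None

print(consensus_mic([4, 4, 4]))        # -> 4
print(consensus_mic([4, 8, 4, 8, 4]))  # -> 4
print(consensus_mic([4, 8, 2, 8, 4]))  # -> None (isolate excluded)
```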
Colony screening
To rule out possible contaminations, all isolates were repeatedly re-streaked, at least twice, from single colonies. Only isolates with reproducible outcomes in phenotypic tests that furthermore passed DNA sequencing quality control (> 85% of sequencing reads mapped to the P. aeruginosa UCBPP-PA14 reference genome, total read GC content of 64–66%) were included in the final panel.
RNA sequencing
For comparable whole-transcriptome sequencing, all clinical isolates and the UCBPP-PA14 reference strain were cultivated at 37°C in LB broth and harvested in RNAprotect (Qiagen) at OD600 = 2. Sequencing libraries were prepared using the ScriptSeq RNA-Seq Library Preparation Kit (Illumina), and short read data (single end, 50 bp) were generated on an Illumina HiSeq 2500 machine, creating on average 3 million reads per sample. The 414 samples were distributed across 24 independent sequencing pools. We assessed possible batch effects using triplicates of the PA14-wt. The majority of the genome was very stably expressed across the replicates (Pearson correlation coefficient ≥ 0.94). The reads were mapped with Stampy [v1.0.23; (Lunter & Goodson, )] to the UCBPP-PA14 reference genome (NC_008463.1), which is available for download from the Pseudomonas Genome database ( http://www.pseudomonas.com ). Mapping and calculation of reads per gene (rpg) values were performed as described previously (Khaledi et al, ). Expression counts were log-transformed (to deal with zero values, we added one to the expression counts).
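The described transformation of the expression counts is a simple shifted logarithm:

```python
import numpy as np

# Reads-per-gene matrix (isolates x genes); a pseudocount of one is added
# before taking the logarithm so that zero counts remain defined.
rpg = np.array([[0, 12, 340],
                [5, 0, 128]])
log_expr = np.log(rpg + 1)   # equivalently np.log1p(rpg)
print(log_expr)
```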
DNA sequencing
Sequencing libraries were prepared from genomic DNA using the NEBNext Ultra DNA Library Prep Kit (New England Biolabs) and sequenced in paired-end mode on Illumina HiSeq or MiSeq machines, generating either 2 × 250 or 2 × 300 bp reads. On average, 2.89 million reads were generated per isolate (ranging from 653,062 to 21,086,866 reads, with at least 30-fold total genome coverage per isolate). All reads were adapter- and quality-clipped using fastq-mcf (Andrews, ).
SNP calling
DNA sequencing reads were mapped with Stampy as described above (see ). For variant calling, SAMtools, v0.1.19 (Li et al, ), was used. We noticed that sequencing errors (particularly around indels) sometimes influenced calling accuracy (e.g., a SNP was called although the nucleotide change appeared only in a fraction of the reads). To correct these obvious errors, we implemented an additional step in which each nucleotide position was converted to the most likely base according to the most frequently occurring nucleotide at that position.
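The described correction amounts to a per-position majority vote over the mapped reads; a toy sketch, in which the pileup structure is a simplifying assumption:

```python
from collections import Counter

def majority_consensus(pileup_bases):
    """Replace each position by the most frequent base observed in the
    mapped reads at that position (ties resolved arbitrarily by Counter)."""
    return "".join(Counter(bases).most_common(1)[0][0] for bases in pileup_bases)

# Each inner list holds the read bases covering one reference position.
pileup = [list("AAAAA"), list("AAGAA"), list("TTTTC")]
print(majority_consensus(pileup))  # -> "AAT"
```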
Phylogeny
Paired-end reads (read length 150, fragment size 200) of eight reference genomes were simulated using art_illumina (v2.5.8) with the default error profile at 20-fold coverage (Huang et al, ). Together with those of our 414 clinical isolates, the sequencing reads were mapped to the coding regions of the reference genome UCBPP-PA14 with BWA-MEM (v0.7.15) (preprint: Li, ). SAMtools (v1.3.1) (Li et al, ) and BamTools (v2.3.0) (Barnett et al, ) were used for indexing and sorting the aligned reads, respectively, followed by variant calling using FreeBayes (v1.1.0) (preprint: Garrison & Marth, ). The consensus coding sequences were computed by BCFtools (v1.6) (Li, ) and then sorted into families by their corresponding reference regions. A gene family was excluded if the gene sequence of any of its members differed by more than 10% in length from the corresponding reference gene. In total, 5,936 families were retained. The sequences of each family were aligned with MAFFT (v7.310) (Katoh & Standley, ), and the alignments were concatenated. SNP sites present in only a single isolate were removed from the alignment. The final alignment comprised 558,483 columns, and the approximately maximum likelihood phylogeny was then inferred with FastTree (v2.1.10, double precision) (Price et al, ).
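Removing SNP sites private to a single isolate can be done column-wise on the concatenated alignment. The sketch below uses a toy four-isolate alignment and one plausible reading of the rule (keep a variant column only if every allele occurs in at least two isolates); it also drops invariant columns.

```python
import numpy as np

# Alignment as an (n_isolates x n_sites) array of bases.
aln = np.array([list("ACGT"),
                list("ACGA"),
                list("ATGA"),
                list("ATGT")])

keep = []
for j in range(aln.shape[1]):
    _, counts = np.unique(aln[:, j], return_counts=True)
    # variant site whose rarest allele is present in at least two isolates
    keep.append(len(counts) > 1 and counts.min() >= 2)
filtered = aln[:, np.array(keep)]
print(filtered)
```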
Pan-genome analysis and indel calling
The trimmed reads were assembled with SPAdes, v3.0.1, using the --careful parameter (Bankevich et al, ). The assembled genomes were annotated with Prokka (v1.12) (Seemann, ), using the metagenome mode of Prokka for gene calling, as we had noticed that genes on resistance cassettes were often missed by the standard isolate genome gene calling procedure. The gene sequences were clustered into gene families using Roary (Page et al, ). We observed that Roary frequently clustered together gene sequences of drastically varying lengths, due to indels or start and stop codon mutations in those gene sequences, and frequently also split orthologous genes into more than one gene family. To overcome this behavior, we modified Roary to require at least 95% alignment coverage in the BLAST step ( https://github.com/hzi-bifo/Roary ). For matching the Prokka annotation and the reference annotation of the PA14 strain, we used bedtools (Quinlan, ) to search for exact overlaps of the gene coordinates. In a second step, we identified all Roary gene families that contained a PA14 gene. To identify insertions and deletions in the Roary gene families, we extracted nucleotide sequences for each gene family and used MAFFT (Katoh & Standley, ) to infer multiple sequence alignments. We restricted this analysis to gene families present in at least 50 strains. Then, we used MSA2VCF ( https://github.com/lindenb/jvarkit/ ) for calling variants in the gene sequences and restricted the output to insertions and deletions of at least nine nucleotides.
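Restricting the variant output to indels of at least nine nucleotides is a length filter on REF/ALT pairs; a schematic version, with the tuple representation of variant records as a simplifying assumption:

```python
def long_indels(vcf_records, min_len=9):
    """Yield records whose REF/ALT length difference is at least `min_len`
    nucleotides (insertions and deletions alike)."""
    for ref, alt in vcf_records:
        if abs(len(ref) - len(alt)) >= min_len:
            yield ref, alt

records = [("A", "AGGGTTTCCCA"),      # 10 nt insertion -> kept
           ("ACGTACGTACGT", "AC"),    # 10 nt deletion  -> kept
           ("A", "AGG")]              # 2 nt insertion  -> dropped
print(list(long_indels(records)))
```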
Support vector machine classification
For cross-validation, the dataset was split into k folds (k set to 10, unless specified otherwise), once randomly and once phylogenetically informed (see below). Classifier hyperparameters were optimized on a partition comprising k − 1 folds, and the performance of the optimally parameterized method was determined on the left-out fold. This was performed for all k partitions, the assignments were summarized, and the final performance measures were obtained by averaging.
Comparison of different machine learning classifiers
We used the training set for hyperparameter tuning of the classifiers, i.e., a linear SVM, a random forest (RF), and logistic regression (LR), optimizing the macro F1-score in 10-fold cross-validation, and then evaluated the best trained classifier on the held-out set. The expression features (EXPR) and any combination of features with another data type (GPA and SNPs) were transformed to have zero mean and unit variance, whereas binary features (GPA, SNPs, and GPA+SNPs) were not transformed. The RF classifier was optimized for the macro F1-score over different hyperparameters: (i) the number of decision trees in the ensemble, (ii) the number of features for computing the best node split, (iii) the function to measure the quality of a split, and (iv) the minimum number of samples required to split a node. The logistic regression and the linear SVM were optimized for the macro F1-score over: (i) the C parameter (inverse to the regularization strength) and (ii) class weights (either balanced based on class frequencies or uniform over all classes). Subsequently, we measured the performance of the optimized classifiers over accordingly generated held-out sets of samples. In clinical practice, P. aeruginosa strains isolated from patients are likely to include sequence types that are already part of our isolate collection. To obtain a more conservative estimate of the performance of the antimicrobial susceptibility prediction, we also validated the classifiers on a held-out dataset composed of entirely new sequence types and selected the folds in cross-validation to be non-overlapping in terms of their sequence types (block cross-validation). For partitioning the isolate collection by sequence type, we used spectral clustering over the phylogenetic similarity matrix (preprint: von Luxburg, ). We obtained this matrix by applying a Gaussian kernel to the matrix of distances between isolates based on the branch lengths in the phylogenetic tree.
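The grouping step can be sketched as spectral clustering on a precomputed affinity matrix obtained by passing phylogenetic distances through a Gaussian kernel; the random distance matrix, gamma, and cluster count below are placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(4)
# Hypothetical pairwise distances between isolates (symmetric, zero
# diagonal), e.g., derived from branch lengths of the phylogenetic tree.
d = rng.uniform(0.0, 1.0, size=(20, 20))
dist = (d + d.T) / 2
np.fill_diagonal(dist, 0.0)

# Gaussian kernel turns distances into similarities; gamma is an assumption.
gamma = 1.0
similarity = np.exp(-gamma * dist ** 2)

clusters = SpectralClustering(n_clusters=5, affinity="precomputed",
                              random_state=0).fit_predict(similarity)
print(clusters)
```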
Consensus fastq files for each isolate were created with SAMtools in order to extract the sequences of the seven MLST genes relevant for P. aeruginosa ( acsA, aroE, guaA, mutL, nuoD, ppsA, and trpE ). Sequence type information was obtained from the P. aeruginosa MLST database ( https://pubmlst.org/paeruginosa/ ; Jolley & Maiden).
We encapsulated the sequencing data processing routines in a stand-alone package named seq2geno2pheno. The SVM classification was conducted with Model-T, which is built on scikit-learn (Pedregosa et al) and was already used as the prediction engine in our previous work on bacterial trait prediction (Weimann et al). seq2geno2pheno also implements a framework for using a broader set of classifiers, which we used to compare different classification algorithms for drug resistance prediction. Finally, we created a repository that includes scripts to reproduce the figures and analyses presented in this paper using the aforementioned packages.
AKo, MH, PG, DJ, and GC generated data. AKh and MS performed experiments. AW, EA, MRKM, and ACM developed the computational methodology. AW, AKh, MS, EA, T‐HK, AB, and MRKM analyzed the data. AW, AKh, MS, AO, ACM, and SH interpreted the results. SH and ACM conceived the project, designed experiments, and supervised the work. AW, AKh, T‐HK, and EA generated figures and tables. AKh, AW, SH, and ACM wrote the paper. All authors read and approved the final manuscript.
The authors declare that they have no conflict of interest.
Supplementary information: Appendix; Expanded View Figures PDF; Datasets EV1–EV6; Review Process File.
Abortion and contraception within prison health care: a qualitative study

Incarceration presents known barriers to reproductive health and justice, including separation from children, vulnerability to permanent loss of custody, disruption in fertility, delayed and denied access to services, and carceral harms such as segregation, use of restraints, and personal (strip) searches. Women are the fastest growing population in prisons in Canada, and most incarcerated women are of reproductive age. Our 2021 scoping review of sexual and reproductive health research among prisoners in Canada found most studies addressed sexually transmitted and blood-borne infections. Our international scoping review synthesizing studies of abortion and contraception among people experiencing incarceration found just two Canadian studies. In one, a survey at an Ontario provincial jail designated for women, Liauw et al. found respondents to have higher rates of unintended pregnancy, abortion and unmet contraceptive need than those found in the general population. In a qualitative follow-up study, Liauw et al. found participants commonly encountered discrimination and stigma when seeking reproductive healthcare in jail.
Women in prisons have complex health histories and needs that intersect with sexual and reproductive health. They experience high rates of chronic physical illness, sexually transmitted and blood-borne infections (STBBIs), histories of childhood abuse, post-traumatic stress disorder (PTSD), mental illness and substance use. Approximately 4% of people in prisons designated for women are pregnant on admission. Further, healthcare is the most frequent topic of complaints expressed by people in prison. Federal and provincial laws affirm the rights of people in prison to health services at professional standards, and United Nations international minimum standards for the treatment of prisoners, known as the Mandela Rules, and for women prisoners, known as the Bangkok Rules, require that attention be paid to the distinctive needs of women, including access to sexual and reproductive health care. Yet there is no systematic collection of data on sexual and reproductive health experiences and outcomes among incarcerated people in Canada, and this area of health experience is generally under-researched in Canada.

Abortion was completely decriminalized in Canada 36 years ago, and both procedural and medication abortion are publicly funded; prescription contraception is universally funded in one province. However, barriers to pregnancy prevention and termination persist across the country, such as travel and information gaps. Internationally, research demonstrates that incarcerated people face barriers to family planning care, including restrictive security practices, institutional processes, staff shortages, stigma, coercion and privacy violations from both health care providers and corrections staff. Our recent study of distance to procedural abortion found institutions of incarceration designated for women may be over 700 km from the nearest procedural abortion facility. Prison health is not routinely taught in health professional curricula, and family planning professionals may be unfamiliar with the specific needs and rights of incarcerated people.

Access to abortion for incarcerated people is critical to addressing structural, gender- and race-based reproductive health inequities in Canada. The aim of this study was to understand the family planning experiences and needs of women and gender diverse people who have experienced incarceration in Canada and to identify key issues family planning professionals must consider in their provision of care to this underserved population.
Aim

The aim of this study was to understand the experiences of seeking contraception and abortion among women and gender diverse people who have experienced incarceration in Canada. Our objective was to explore how family planning professionals can improve knowledge and delivery of sexual and reproductive health care for women and gender diverse people in prisons, and after release.

Design

This qualitative study was designed and conducted within a framework of community-based research. We partnered with six community organizations designated for women and gender diverse people involved in the criminal legal system at project outset and throughout each step of the research process. Together, we developed research questions, organized recruitment, collected data, conducted analysis, validated key themes and engaged in knowledge mobilization. Our team includes academic researchers, family planning professionals, and expert advisors with lived experience of incarceration. The experts were suggested by the community organizational partners and invited to join the team via email. They received support from other team members in research methods and were financially reimbursed for their time. All research team members identified as ciswomen. We worked collaboratively and iteratively through regular meetings and consultation, in person and online. We chose focus groups for data collection because they were logistically easier for the partner community groups; however, all participants had the option to talk to a researcher one-on-one instead.

Theoretical framework

We used a theoretical framework of reproductive justice to underpin the study design and implementation, with several key assumptions. Reproductive justice theory was conceptualized in 1994 by twelve Black women working within the human rights movement. The key tenets of reproductive justice include the right to bodily autonomy, to not have children, to choose to have children and to parent those children in safe and sustainable communities. This study was designed and implemented with the theoretical assumption that incarceration is a violation of these rights, as being in prison prevents reproduction, separates parents from their children and families, and elevates risks to the health and survival of women and gender diverse individuals.

Study setting and recruitment

We recruited focus group participants in partnership with six community organizations providing housing, legal and/or practical support to people with experience of criminalization across Canada. Contact with these organizations is not a requirement for release, and all potential participants were assured participation would have no bearing on their receipt of organizational services. Spanning four provinces, two of the six organizations were located in major urban centres, three in medium-size cities, and one in a small city. We provided organizational staff with study materials to distribute through their offices and client base. We held the focus groups on site at the organizations.

Inclusion/exclusion criteria

Eligible participants included English-speaking adult women and gender diverse people who have experienced incarceration in provincial and/or federal institutions designated for women.

Data collection

We used a semi-structured interview guide developed for this study. The interview guide was co-developed with expert advisors with lived experience of incarceration (see supplementary file: interview guide). The interview guide asked about experiences accessing abortion and contraception during incarceration, barriers to accessing abortion and contraception in the prison environment, what would make doing so safer and easier for people experiencing incarceration, and what they think health professionals should know about supporting people experiencing incarceration. Focus groups were facilitated by team members MP or CH. We stopped data collection once we had conducted focus groups with all partnering organizations. All potentially identifying information, including names and places, was removed during the transcription process. MP and CH conducted three focus groups each (six in total) between August and December 2023. Locations included Saint John, New Brunswick; Halifax, Dartmouth, and Sydney, Nova Scotia; Vancouver, British Columbia; and Toronto, Ontario. Focus group size ranged from three to nine participants, with a total of 35 participants. Given the narrowness of the study aim and the relative specificity of the target participant group, this sample size was determined to provide adequate information power. Focus groups were one to two hours long. To build trust with participants and ensure their anonymity within the study, demographic information was not collected. While we did not require participants to identify their gender, race or other factors, nor did we collect and tabulate this data, we did encourage sharing of pronouns during introductions, and we asked participants to reflect on and share what aspects of their identity might be particularly relevant to the issues. From what was shared, our research team determined that participants had varying experiences in length of time in custody and time since release, as well as diversity in age, racial and cultural backgrounds, sexual orientation and gender identity.

Data analysis

We used reflexive thematic analysis to analyze the transcripts. Reflexive thematic analysis is both methodologically and theoretically flexible, allowing for the use of different theoretical frameworks to guide the analytic process. We used reflexive thematic analysis informed by reproductive justice theory. Data analysis was conducted with the underlying theoretical assumption that the rights to bodily autonomy, to not have children, to choose to have children and to parent those children in safe and sustainable communities are essential human rights. Reflexive thematic analysis also “emphasises the importance of the researcher’s subjectivity as analytic resources”, allowing the lived experience expertise of the team to inform the development of themes. The team included two lived experience experts, an early career health professional researcher with 10 years of experience in prisoner health and reproductive justice, a senior health professional in reproductive health, and two graduate students in health research. All team members identified as white and several as queer. Reflexivity on how team member identities may have influenced data analysis was practiced through regular check-ins among team members throughout the data analysis process. During team check-ins, team members' decisions about coding were reviewed and discussed. Reflexivity practices ensured that team members' personal experiences either providing or accessing abortion or contraception did not influence the interpretation of participant experiences. For example, team members without lived experience of incarceration may have misinterpreted institutional-barrier-related codes due to the lack of first-hand experience of the institutional processes to request care, and due to personal experiences providing or accessing care in non-correctional clinical settings. Check-ins with lived experience experts allowed for a more fulsome understanding of how the institutional request process can facilitate or impede access to care. We used shared Google Sheets and Google Docs to manually code, allowing all team members access to transcripts and coding spreadsheets. First, research team members reviewed the raw data to gain familiarity with emerging themes. Two research team members double coded all focus group transcriptions and iteratively developed an initial coding scheme in consultation with the entire team, including lived experience experts. Then, the entire team reviewed initial codes to develop and synthesize shared themes and subthemes. Generated themes were then reviewed in comparison to the initial coding scheme and with lived experience experts. Following review, themes were re-named and collapsed as needed to develop final key themes. Lived experience experts participated in regular coding meetings, sharing their perceptions of meanings, and validated or disputed other team members' interpretations to come to consensus agreement on themes.

Ethical considerations

Participants reviewed a consent form and provided informed consent prior to participation. All participants received a gift card of $50 as an honorarium for their involvement. A professional transcriptionist de-identified and transcribed the audio-recordings. This study was approved by the University of New Brunswick Ethics Review Board on June 14, 2023, under file number 2023-074, in accordance with the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2 2018).

Rigor and reflexivity

Expert advisor participation in data analysis ensures reliability of the community-based approach and credibility of findings.
Five key themes emerged from our analysis (see Table: Key themes): (1) Competing health needs; (2) Institutional barriers to care; (3) Mistreatment and unethical care; (4) Health knowledge gaps; and (5) Challenges to care-seeking in community.

Theme 1: competing health needs

Most participants described how, with scarce access to health services in prison, they felt forced to prioritize their most emergent needs. As one participant described, it was difficult enough to access over-the-counter medications for a headache or menstrual cramps: “Like you can barely get a Tylenol in there let alone birth control or abortion.” -FG2. Participants concluded that given the difficulty they experienced accessing basic health care needs, they expected sexual and reproductive health needs would not be addressed. “They don’t care about anything, so even pregnancy is far off the radar there. Like you’re lucky if you get just your basic care.” -FG1. Sexual and reproductive health was perceived as less serious, given emergencies are commonplace in prison and must take priority. “There was a bunch of emergency situations people have all the time, like crazy shit and nothing happens.” -FG3. Rather than health-promoting care like family planning, participants felt care was focused on “sedation”. “[Birth control is] not something that’s brought up, if anything they’re coming in to do psych evaluations to see what medications they can put you on to slow you down, not to help your problems or whatever, they want you to, I don’t know, sedate you as much as possible.” -FG6. Focus group conversations frequently turned to unmet mental health and substance use needs. Participants repeatedly expressed the importance of these concerns, stating that in prison, people have “actual things” (FG3) and can’t get care, therefore they believed nobody would get reproductive healthcare: “Because the doctor is also the methadone doctor too so when she comes on Wednesdays, she also has to see the 25 people on methadone or suboxone.” -FG4. Further, the urgency of mental health and substance use concerns was exacerbated by the prison environment: “After a while the stress on you of just being there, being stuck in that room, imagine being in there day in and day out, […] You’d eventually fucking snap, we all would for sure, you would be losing it.” -FG3. Just as sexual and reproductive care concerns were crowded out by mental health and substance use concerns in prison, in the community they were crowded out by basic material needs. One participant explained: “So if you’re in active addiction or you’re off doing whatever, […] all you’re thinking every day is about how you’re going to get your drugs and what you’re going to do to get them, every day. They’re not thinking about the doctor, they’re not thinking about getting a prescription for birth control, they’re not thinking about taking it, so unless they have something like the Mirena already or they’ve had their tubes tied, like that’s the least of their worries.” -FG1. Participants needed to secure housing, food, and income support before they started to seek healthcare: “Once you have your basic needs met you can focus on other things.” -FG2.

Theme 2: institutional barriers to care

Requesting care

Like assumptions about the need for contraception in facilities designated for women, participants described how institutional norms and procedures presented barriers to family planning care, and to care generally.
For example, to request health services, they were first required to submit a written request to a correctional officer. “So basically the guards are assessing our medical needs, which should not happen.” -FG6. Participants explained their paper requests were frequently lost, ignored, or inappropriately triaged: “Their [correctional officer] negligence becomes our problem 100% of the time.” -FG5. Participants recounted prolonged delays between submission of a request and being seen by a health care professional: “The doctor would come weekly, but then they have like 10-minute time slots to see you and they’re like, ‘No you’re fine, you’re fine’. Like it’s like a running joke in that jail where they’re, for stuff like that, cause you could literally be bleeding to death and they’re like, ‘Put in a request in and we’ll see you in a seven days.’” -FG4. Participants described how elevated security classification further exacerbated delays. “Yeah in max you put a request in, it can take like up to four or five days before they answer you back, if they answer you back.” -FG4. Participants characterized the non-response as stressful: “And it’s stressful cause you have to write like a bunch of requests to get anything and a lot of them they don’t answer.” -FG3. Several participants recounted trying themselves to support peers while waiting extensively for staff to respond: “Do you know how many times I’ve called the guards to say, ‘She needs this, she needs that,’ and they’re like, ‘Oh well.’ She was having a seizure in the bathroom, it took them 10 minutes to get there, I’m sitting there trying to get her out of it.” -FG6. One participant described the multi-step process for seeking emergent care: “O.k. so if anything happens you pick up the house phone, the house phone calls main control, main control then sends the guards over, they assess the situation then go based off of there, so it’s like 15–20 minutes after the fact of the situation before an ambulance actually gets called.” -FG6. Expecting disbelief, delays, and non-response, participants described feeling they had to escalate or exaggerate their health complaints in order to be taken seriously. One participant advised: “So always lie. Always lie and tell them it’s an emergency.” -FG4. Some participants feared that submitting multiple requests for health care may lead to being labeled as “difficult”, or to punishment like segregation or solitary confinement. As one participant described: “You’re getting punished for trying to get help, trying to get your healthcare.” -FG2. Participants explained the challenge of communicating that a health need was serious and needed attention, without appearing frenetic or causing aggravation: “I was pretty much saying I’m thinking about killing myself, I was pretty much saying that without saying it, […] Without pissing them off enough that they put you in seg with a fucking smock right. They will.” -FG2.

Online and brief health appointments

Participants felt prison health services were inappropriately organized to support patients with complex, competing needs. The transition to virtual care as a COVID-19 protection in prisons had, in many places, not shifted back, and all appointments were very short. “Most times when you get in to see a doctor you’ve got like ten things you need to talk to her about, chances are you make it through three, if that.” -FG4.
When asked what would improve health care in prison, participants expressed the importance of time with providers: “Schedule more time for each inmate, seriously you’ve got to have, because like if you’re only seeing inmates once every three months, like I feel like there should be more cause ten minutes isn’t enough.” -FG4.

External care

During external appointments or hospital stays, participants explained the impact of institutional procedures such as shackling (ankle restraints) and observation by correctional officers. “I had to undress while they were standing outside watching me and stand there half naked getting these x-rays done while there’s another officer just behind the partition, expecting me to run. Where am I going to go?” -FG5. One participant described giving birth while in restraints and under surveillance: “But even then you get taken to hospital, you’re shackled, you’re cuffed, you’re uncomfortable, you’re irritated, you’re going through labour pains and you have these two people that are like attached to you already while you’re trying to push a watermelon out.” -FG5.

Theme 3: mistreatment and unethical care

Disrespect and neglect

Participants described experiencing mistreatment and unethical practices while seeking care in prison, such as routine breaches of confidentiality from health services to correctional officers. This felt especially uncomfortable in the context of family planning services: “The guards just chit-chat among themselves, like the whole fucking place knows your information. It’s like a big gossip factory.” -FG1. For participants, healthcare is the “biggest thing going wrong” (FG2) inside prisons, when it should be a site of respect. “It’s such a disrespectful and imbalanced power dynamic inside of prison anyway, that if there’s one area where we should be treated with respect is indeed our healthcare, if they just chose one area.” -FG5. For some, the disrespect amounted to a tool of punishment: “There’s a lot of disrespect for inmates in the system anyways and so when you throw in healthcare it’s just another way to disrespect you.” -FG5. Participants recounted how stigma and stereotypes affected care. “And it’s [health services] already potential for violence, or violations, power dynamics, so it’s already ripe for that even in the community. But when you’re in prison they already just have this inherent stigma and disrespect for you, and they have this, ‘Well it’s your own fault you’re here so you just put up with this.’” -FG5. One participant described how staff perceptions of STBBIs impacted the treatment of a peer, who was vomiting blood and denied care: “They kept her like that for two weeks, they’re just like, ‘No, you’re fine, you’re fine.’ Because she had AIDS they didn’t want to touch her right.” -FG2.

Mistreatment generates mistrust

Bearing witness to peer mistreatment or neglect, including the death of a peer, generated severe mistrust of prison health services: “There was a girl when I was in jail who died in there of pneumonia cause they wouldn’t help her, it’s all fucked up.” -FG3. Participants shared how the experience of mistreatment caused some to become reactive or even violent: “But even in [X facility] that psychiatrist there has gotten popped so many times in the face he’s just, even the staff are like, ‘I don’t know why he hasn’t learned’. Like he doesn’t know how to correspond with us, he just provokes.” -FG4.
Participants felt health professionals in the institutions lacked experience in trauma- and violence-informed care and failed to recognize how their actions while providing care could cause emotional anguish. “None of the nurses there can do my bloodwork because from being human-trafficked, I have really bad veins. I’ve had, they can only try three times each. I’ve had four nurses go at me and be pricked fourteen times and if you don’t think that that’s not triggering…” -FG4. Participants felt health care providers working in the prison context should have training to work with populations with unique and complex health needs. “A lot of people who are coming into the system already have trauma, not saying all but a good number and the healthcare you get in there is the opposite of trauma-informed, they do not understand, like you know, so it makes it even harder, more of a barrier.” -FG6. Shaken by the extent of ethical violations, one participant shared her mistrust was so deep, she did not believe the prison nurses were actually licensed to practice: “Prison nurses, I’m convinced they’re not even real nurses.” -FG1.

Contraceptive coercion

Some participants believed health care providers in the institutions would be unwilling to provide family planning services (downward contraceptive coercion), because the context restricts and/or prohibits sexual activity: “I don’t think they would administer birth control to you while you were there, even if you asked, I don’t think they would, cause you’re not really sexually active while you’re there and I don’t think they really care if you’re trying to do your own type of thing.” -FG3. Some participants reported experiences of upward contraceptive coercion, both inside the prison and in community healthcare settings, where they felt pressured or forced to use birth control. One participant recounted how even when she felt her contraception was making her unwell, her request for its removal was denied: “I was in jail for a while, and I couldn’t get it [IUD] taken out so it [messed] up my stomach and it really hurt so I had to go to the hospital when I was in jail so that kind of sucked. […] I kept telling them I was like, ‘Hey, I need to get this out!’ and they were basically like, ‘Well, we’re not going to do anything.’” -FG2. Upward coercion contributed to participants feeling apprehensive and distrusting of family planning care providers. “They push it on you. […] In general birth control. […] The nurses bring it up.” -FG4. While some felt family planning to be an afterthought, others described contraceptive coercion from prison health professionals: “The healthcare is harder in jail than it is on the street. It can be hard, but they push birth control anyway, the doctors.” -FG3.

Theme 4: health knowledge gaps

Abortion and stigma

People in prison described limitations to accessing health information due to restrictions against Internet use, the expense of phone communications, and limited access to health professionals; this was particularly true for sexual and reproductive health (SRH) needs.

Person 1: “I kind of think it’s a little funny that anywhere you go, in the houses, in the jail, healthcare, wherever, you can always find condoms and lube but don’t know how to get a hold of any kind of contraception or to get an abortion, and there’s no pamphlets on any of this stuff. There’s nothing that we can educate ourselves with, except for the books in the library.”

Person 2: “That are twenty years outdated.”
Person 1: “And there’s nothing about abortion.”

Person 2: “Nothing about birth control, any contraception, nothing.” -FG5.

Although abortion is completely decriminalized in Canada, participants felt uncertainty about access to it while incarcerated. Describing a peer with an unintended pregnancy, several focus group participants shared their uncertainty about what was legally possible while incarcerated:

Person 1: “I didn’t know when she got picked up that [abortion] was even an option to get one when she was in there…”

Person 2: “I had no idea.”

Person 3: “I didn’t know that.”

Person 4: “I didn’t know that either.” -FG1.

Participants explained that although a common experience for most at some point in their lives, abortion was stigmatized among prisoners and not discussed openly. “Pretty much everybody gets one, we just don’t talk about it.” -FG1. This elevated stigma was attributed to the grief and loss of women and gender diverse people separated (temporarily or permanently) from their existing children. One participant expressed: “You bring up their child, they will melt. So, if you start talking about, ‘Oh I’m pregnant but I want to have an abortion,’ you’re putting yourself in a bad position, really bad.” -FG1. One participant feared disclosing abortion would result in mistreatment from peers. Offences against children were not tolerated. “My fear, if I say I’m going to get an abortion, I’m killing a child in their eyes.” -FG1. As a result of shame and fear, participants believed little information about abortion circulated in prisons: “It’s terrible and I think that’s why maybe the access to the information is hard for women to get it, is because of the stigmatization.” -FG1. Further, participants believed access to abortion depended on the beliefs of the correctional officers or health professionals who were gatekeepers to services: “If they’re pro-life, like guards and healthcare could withhold information to access. So, you could be a pregnant inmate just thinking well there’s no recourse, there’s no pills, or other inmates could tell you oh no you can’t get those in prison and you would just stop looking for the truth. So, if you come across that misinformation that’s deliberate, because they don’t believe in abortion.” -FG5. One participant considered how a prisoner may have wanted an abortion to avoid losing a future child to the child protection system, but did not have the information needed to get the care: “I know of two girls where they came in pregnant, knew they were pregnant and didn’t want the baby but ended up having the baby and then it was taken away, you know, and I think that’s what they didn’t want.” -FG5. Participants who used substances said they were particularly afraid to seek help in pregnancy, because they were concerned about the consequences of their perinatal substance use: “Well yeah, if someone gets pregnant, if they’re using, they get scared and they don’t know what to do and they don’t know the resources out there that’s provided.” -FG6.

Misinformation

In several of the focus groups, participants discussed how, in the absence of information about family planning resources, misinformation circulated in its place. For example, several shared a belief that “a lot of people” get pregnant with an IUD: “That’s what stressed me out because I was like, how would I know if I got pregnant?
I had a friend who got pregnant with an IUD, and she had a tubal pregnancy and she came real close to, you know, so I was always so scared of that.” -FG2. Or that contraception “causes infertility”: “So, my first stint was in [X institution] there was a girl that got put on Depo-Provera so this is like thirteen years ago, the shot, and she was on it for like three years and she can’t have any kids anymore because of it.” -FG4. Available, or lack of available, information impacted participants’ contraceptive decision making. “I didn’t know IUD was like contraception, I just think about like condoms. I’m like well I wasn’t using any of those.” -FG2. Many expressed a desire to know more about their family planning and reproductive options, both while in prison and in transitional housing in community.

Theme 5: challenges to care-seeking in community

A final theme emerged that we determined was outside the scope of this paper, as it pertained to the complex logistics and stigma experienced when accessing health care, period, once released. These barriers include lack of discharge planning, feeling punished by health professionals for not having appropriate paperwork, being stereotyped or stigmatized for a history of criminalization, and the impact of bail or parole conditions that restrict where a person can live, with whom they can have sexual or personal relationships, and what medications they can take or substances they use. Despite the difficulty of accessing healthcare in community, participants consistently reported that accessing care in community was much easier than accessing care in prison: “It’s definitely a lot easier to access stuff now that we’re out […] It’s not like begging a corrections officer.” -FG2.
Most participants described how, with scarce access to health services in prison, they felt forced to prioritize their most emergent needs. As one participant described, it was difficult enough to access over-the-counter medications for a headache or menstrual cramps: “Like you can barely get a Tylenol in there let alone birth control or abortion.” -FG2. Participants concluded that given the difficulty they experienced accessing basic health care needs, they expected sexual and reproductive health needs would not be addressed. “They don’t care about anything , so even pregnancy is far off the radar there. Like you’re lucky if you get just your basic care.” -FG1. Sexual and reproductive health was perceived as less serious, given emergencies are commonplace in prison and must take priority. “There was a bunch of emergency situations people have all the time , like crazy shit and nothing happens. -FG3. Rather than health-promoting care like family planning, participants felt care was focused on “sedation”. “[Birth control is] not something that’s brought up , if anything they’re coming in to do psych evaluations to see what medications they can put you on to slow you down , not to help your problems or whatever , they want you to , I don’t know sedate you as much as possible.” -FG6. Focus group conversations frequently turned to unmet mental health and substance use needs. Participants repeatedly expressed the importance of these concerns, stating that in prison, people have “actual things” (FG3) and can’t get care, therefore they believed nobody would get reproductive healthcare: “Because the doctor is also the methadone doctor too so when she comes on Wednesdays , she also has to see the 25 people on methadone or suboxone.” - FG4. Further, the urgency of mental health and substance use concerns were exacerbated by the prison environment: “After a while the stress on you of just being there , being stuck in that room , imagine being in there day in and day out , […] You’d eventually fucking snap , we all would for sure , you would be losing it.” -FG3. Just as sexual and reproductive care concerns were crowded out by mental health and substance use concerns in prison, sexual and reproductive care concerns in the community were crowded out by basic material needs. One participant explained: “So if you’re in active addiction or you’re off doing whatever , […] all you’re thinking every day is about how you’re going to get your drugs and what you’re going to do to get them , every day. They’re not thinking about the doctor , they’re not thinking about getting a prescription for birth control , they’re not thinking about taking it , so unless they have something like the Mirena already or they’ve had their tubes tied , like that’s the least of their worries.” -FG1. Participants needed to secure housing, food, and income support before they started to seek healthcare: “Once you have your basic needs met you can focus on other things.” -FG2.
Requesting care Like assumptions about the need for contraception in facilities designated for women, participants described how institutional norms and procedures presented barriers to family planning care, and care generally. For example, to request health services, they were first required to submit a written request to a correctional officer. “So basically the guards are assessing our medical needs , which should not happen.” -FG6. Participants explained their paper requests were frequently lost, ignored, or inappropriately triaged: “Their [correctional officer] negligence becomes our problem 100% of the time.” -FG5. Participants recounted prolonged delays between submission of a request and being seen by a health care professional: “The doctor would come weekly , but then they have like 10-minute time slots to see you and they’re like , ‘No you’re fine , you’re fine’. Like it’s like a running joke in that jail where they’re , for stuff like that , cause you could literally be bleeding to death and they’re like , ‘Put in a request in and we’ll see you in a seven days.’” -FG4. Participants described how elevated security classification further exacerbated delays. “Yeah in max you put a request in , it can take like up to four or five days before they answer you back , if they answer you back.” -FG4. Participants characterized the non-response as stressful: “And it’s stressful cause you have to write like a bunch of requests to get anything and a lot of them they don’t answer.” -FG3. Several participants recounted trying themselves to support peers while waiting extensively for staff to respond: “Do you know how many times I’ve called the guards to say , ‘She needs this , she needs that , ’ and they’re like , ‘Oh well.’ She was having a seizure in the bathroom , it took them 10 minutes to get there , I’m sitting there trying to get her out of it” -FG6. One participant described the multi-step process for seeking emergent care: “O.k. so if anything happens you pick up the house phone , the house phone calls main control , main control then sends the guards over , they assess the situation then go based off of there , so it’s like 15–20 minutes after the fact of the situation before an ambulance actually gets called.” -FG6. Expecting disbelief, delays, and non-response, participants described feeling they had to escalate or exaggerate their health complaints in order to be taken seriously. One participant advised: “So always lie. Always lie and tell them it’s an emergency.” -FG4. Some participants feared that submitting multiple requests for health care may lead to being labeled as “difficult”, or punishment like segregation or solitary confinement. As one participant described: “You’re getting punished for trying to get help , trying to get your healthcare.” -FG2. Participants explained navigating the challenge of communicating a health need was serious and needed attention, without appearing frenetic or causing aggravation: “I was pretty much saying I’m thinking about killing myself , I was pretty much saying that without saying it , […] Without pissing them off enough that they put you in seg with a fucking smock right. They will.” -FG2. Online and brief health appointments Participants felt prison health services were inappropriately organized to support patients with complex, competing needs. The transition to virtual care as a COVID-19 protection in prisons had, in many places, not shifted back, and all appointments were very short. 
“Most times when you get in to see a doctor you’ve got like ten things you need to talk to her about , chances are you make it through three , if that.” -FG4. When asked what would improve health care in prison, participants expressed the importance of time with providers: “Schedule more time for each inmate , seriously you’ve got to have , because like if you’re only seeing inmates once every three months , like I feel like there should be more cause ten minutes isn’t enough.” -FG4. External care During external appointments or hospital stays, participants explained the impact of institutional procedures such as shackling (ankle restraints) and observation by correctional officers. “I had to undress while they were standing outside watching me and stand there half naked getting these x-rays done while there’s another officer just behind the partition , expecting me to run. Where am I going to go?” - FG5. One participant described giving birth while in restraints and under surveillance: “But even then you get taken to hospital , you’re shackled , you’re cuffed , you’re uncomfortable , you’re irritated , you’re going through labour pains and you have these two people that are like attached to you already while you’re trying to push a watermelon out.” -FG5.
Like assumptions about the need for contraception in facilities designated for women, participants described how institutional norms and procedures presented barriers to family planning care, and care generally. For example, to request health services, they were first required to submit a written request to a correctional officer. “So basically the guards are assessing our medical needs , which should not happen.” -FG6. Participants explained their paper requests were frequently lost, ignored, or inappropriately triaged: “Their [correctional officer] negligence becomes our problem 100% of the time.” -FG5. Participants recounted prolonged delays between submission of a request and being seen by a health care professional: “The doctor would come weekly , but then they have like 10-minute time slots to see you and they’re like , ‘No you’re fine , you’re fine’. Like it’s like a running joke in that jail where they’re , for stuff like that , cause you could literally be bleeding to death and they’re like , ‘Put in a request in and we’ll see you in a seven days.’” -FG4. Participants described how elevated security classification further exacerbated delays. “Yeah in max you put a request in , it can take like up to four or five days before they answer you back , if they answer you back.” -FG4. Participants characterized the non-response as stressful: “And it’s stressful cause you have to write like a bunch of requests to get anything and a lot of them they don’t answer.” -FG3. Several participants recounted trying themselves to support peers while waiting extensively for staff to respond: “Do you know how many times I’ve called the guards to say , ‘She needs this , she needs that , ’ and they’re like , ‘Oh well.’ She was having a seizure in the bathroom , it took them 10 minutes to get there , I’m sitting there trying to get her out of it” -FG6. One participant described the multi-step process for seeking emergent care: “O.k. so if anything happens you pick up the house phone , the house phone calls main control , main control then sends the guards over , they assess the situation then go based off of there , so it’s like 15–20 minutes after the fact of the situation before an ambulance actually gets called.” -FG6. Expecting disbelief, delays, and non-response, participants described feeling they had to escalate or exaggerate their health complaints in order to be taken seriously. One participant advised: “So always lie. Always lie and tell them it’s an emergency.” -FG4. Some participants feared that submitting multiple requests for health care may lead to being labeled as “difficult”, or punishment like segregation or solitary confinement. As one participant described: “You’re getting punished for trying to get help , trying to get your healthcare.” -FG2. Participants explained navigating the challenge of communicating a health need was serious and needed attention, without appearing frenetic or causing aggravation: “I was pretty much saying I’m thinking about killing myself , I was pretty much saying that without saying it , […] Without pissing them off enough that they put you in seg with a fucking smock right. They will.” -FG2.
Participants felt prison health services were inappropriately organized to support patients with complex, competing needs. The transition to virtual care as a COVID-19 protection in prisons had, in many places, not shifted back, and all appointments were very short. “Most times when you get in to see a doctor you’ve got like ten things you need to talk to her about , chances are you make it through three , if that.” -FG4. When asked what would improve health care in prison, participants expressed the importance of time with providers: “Schedule more time for each inmate , seriously you’ve got to have , because like if you’re only seeing inmates once every three months , like I feel like there should be more cause ten minutes isn’t enough.” -FG4.
During external appointments or hospital stays, participants explained the impact of institutional procedures such as shackling (ankle restraints) and observation by correctional officers. “I had to undress while they were standing outside watching me and stand there half naked getting these x-rays done while there’s another officer just behind the partition , expecting me to run. Where am I going to go?” - FG5. One participant described giving birth while in restraints and under surveillance: “But even then you get taken to hospital , you’re shackled , you’re cuffed , you’re uncomfortable , you’re irritated , you’re going through labour pains and you have these two people that are like attached to you already while you’re trying to push a watermelon out.” -FG5.
Disrespect and neglect Participants described experiencing mistreatment and unethical practices while seeking care in prison, such as routine breaches of confidentiality from health services to correctional officers. This felt especially uncomfortable in the context of family planning services: “The guards just chit-chat among themselves , like the whole fucking place knows what your information. It’s like a big gossip factory.” -FG1. For participants, healthcare is the “ biggest thing going wron g” (FG2) inside prisons, when it should be a site of respect. “It’s such a disrespectful and imbalanced power dynamic inside of prison anyway , that if there’s one area where we should be treated with respect is indeed our healthcare , if they just chose one area.” -FG5. For some, the disrespect amounted to a tool of punishment: “There’s a lot of disrespect for inmates in the system anyways and so when you throw in healthcare it’s just another way to disrespect you.” -FG5. Participants recounted how stigma and stereotypes affected care. “And it’s [health services] already potential for violence , or violations , power dynamics , so it’s already ripe for that even in the community. But when you’re in prison they already just have this inherent stigma and disrespect for you , and they have this , ‘Well it’s your own fault you’re here so you just put up with this.’” -FG5. One participant described how staff perceptions of STBBIs impacted the treatment of a peer, who was vomiting blood and denied care: “They kept her like that for two weeks , they’re just like , ‘No , you’re fine , you’re fine.’ Because she had AIDS they didn’t want to touch her right” - FG2. Mistreatment generates mistrust Bearing witness to peer mistreatment or neglect, including the death of a peer, generated severe mistrust of prison health services: “There was a girl when I was in jail who died in there of pneumonia cause they wouldn’t help her , it’s all fucked up. ” -FG3. Participants shared how the experience of mistreatment caused some to become reactive or even violent: “But even in [X facility] that psychiatrist there has gotten popped so many times in the face he’s just , even the staff are like , “I don’t know why he hasn’t learned”. Like he doesn’t know how to correspond with us , he just provokes.” -FG4. Participants felt health professionals in the institutions lacked experience in trauma and violence-informed care and failed to recognize how their actions while providing care could cause emotional anguish. “None of the nurses there can do my bloodwork because from being human-trafficked , I have really bad veins. I’ve had , they can only try three times each. I’ve had four nurses go at me and be pricked fourteen times and if you don’t think that that’s not triggering…” -FG4. Participants felt health care providers working in the prison context should have training to work with populations with unique and complex health needs. “A lot of people who are coming into the system already have trauma , not saying all but a good number and the healthcare you get in there is the opposite of trauma-informed , they do not understand , like you know , so it makes it even harder , more of a barrier.” -FG6. Shaken by the extent of ethical violations, one participant shared her mistrust was so deep, she did not believe the prison nurses were actually licensed to practice: “Prison nurses , I’m convinced they’re not even real nurses.” -FG1. 
Contraceptive coercion

Some participants believed health care providers in the institutions would be unwilling to provide family planning services (downward contraceptive coercion), because the context restricts and/or prohibits sexual activity: “I don’t think they would administer birth control to you while you were there, even if you asked, I don’t think they would, cause you’re not really sexually active while you’re there and I don’t think they really care if you’re trying to do your own type of thing.” -FG3. Some participants reported experiences of upward contraceptive coercion, both inside the prison and in community healthcare settings, where they felt pressured or forced to use birth control. One participant recounted how even when she felt her contraception was making her unwell, her request for its removal was denied: “I was in jail for a while, and I couldn’t get it [IUD] taken out so it [messed] up my stomach and it really hurt so I had to go to the hospital when I was in jail so that kind of sucked. […] I kept telling them I was like, ‘Hey, I need to get this out!’ and they were basically like, ‘Well, we’re not going to do anything.’” -FG2. Upward coercion contributed to participants feeling apprehensive and distrusting family planning care providers. “They push it on you. […] In general birth control. […] The nurses bring it up.” -FG4. While some felt family planning to be an afterthought, others described contraceptive coercion from prison health professionals: “The healthcare is harder in jail than it is on the street. It can be hard, but they push birth control anyway, the doctors” -FG3.
Abortion and stigma

People in prison described limitations to accessing health information due to restrictions against Internet use, the expense of phone communications, and limited access to health professionals; this was particularly true for sexual and reproductive health (SRH) needs. Person 1: “I kind of think it’s a little funny that anywhere you go, in the houses, in the jail, healthcare, wherever, you can always find condoms and lube but don’t know how to get a hold of any kind of contraception or to get an abortion, and there’s no pamphlets on any of this stuff. There’s nothing that we can educate ourselves with, except for the books in the library.” Person 2: “That are twenty years outdated.” Person 1: “And there’s nothing about abortion.” Person 2: “Nothing about birth control, any contraception, nothing.” -FG5. Although abortion is completely decriminalized in Canada, participants felt uncertainty about access to it while incarcerated. Describing a peer with an unintended pregnancy, several focus group participants shared their uncertainty about what was legally possible while incarcerated: Person 1: “I didn’t know when she got picked up that [abortion] was even an option to get one when she was in there…” Person 2: “I had no idea.” Person 3: “I didn’t know that.” Person 4: “I didn’t know that either.” -FG1. Participants explained that although a common experience for most at some point in their lives, abortion was stigmatized among prisoners and not discussed openly. “Pretty much everybody gets one, we just don’t talk about it.” -FG1. This elevated stigma was attributed to the grief and loss of women and gender diverse people separated (temporarily or permanently) from their existing children. One participant expressed: “You bring up their child, they will melt. So, if you start talking about, ‘Oh I’m pregnant but I want to have an abortion,’ you’re putting yourself in a bad position, really bad.” -FG1. One participant feared disclosing abortion would result in mistreatment from peers, as offences against children were not tolerated: “My fear, if I say I’m going to get an abortion, I’m killing a child in their eyes.” -FG1. As a result of shame and fear, participants believed little information about abortion circulated in prisons: “It’s terrible and I think that’s why maybe the access to the information is hard for women to get it, is because of the stigmatization.” -FG1. Further, participants believed access to abortion depended on the beliefs of the correctional officers or health professionals who were gatekeepers to services: “If they’re pro-life, like guards and healthcare could withhold information to access. So, you could be a pregnant inmate just thinking well there’s no recourse, there’s no pills, or other inmates could tell you oh no you can’t get those in prison and you would just stop looking for the truth. So, if you come across that misinformation that’s deliberate, because they don’t believe in abortion.” -FG5. One participant considered how a prisoner may have wanted an abortion to avoid losing a future child to the child protection system, but did not have the information needed to get the care: “I know of two girls where they came in pregnant, knew they were pregnant and didn’t want the baby but ended up having the baby and then it was taken away, you know, and I think that’s what they didn’t want.” -FG5.
Participants who used substances said they were particularly afraid to seek help in pregnancy, because they were concerned about the consequences of their perinatal substance use: “Well yeah, if someone gets pregnant, if they’re using, they get scared and they don’t know what to do and they don’t know the resources out there that’s provided.” -FG6.

Misinformation

In several of the focus groups, participants discussed how, in the absence of information about family planning resources, misinformation circulated in its place. For example, several shared a belief that “a lot of people” get pregnant with an IUD: “That’s what stressed me out because I was like, how would I know if I got pregnant? I had a friend who got pregnant with an IUD, and she had a tubal pregnancy and she came real close to, you know, so I was always so scared of that.” -FG2. Or, that contraception “causes infertility”: “So, my first stint was in [X institution] there was a girl that got put on Depo-Provera so this is like thirteen years ago, the shot, and she was on it for like three years and she can’t have any kids anymore because of it.” -FG4. Available, or lack of available, information impacted participants’ contraceptive decision making. “I didn’t know IUD was like contraception, I just think about like condoms. I’m like well I wasn’t using any of those.” -FG2. Many expressed a desire to know more about their family planning and reproductive options both while in prison and in transitional housing in community.
A final theme emerged that we determined was outside the scope of this paper, as it pertained to the complex logistics and stigma experienced when accessing health care of any kind after release. These barriers included lack of discharge planning, feeling punished by health professionals for not having appropriate paperwork, being stereotyped or stigmatized for a history of criminalization, and the impact of bail or parole conditions that restrict where a person can live, with whom they can have sexual or personal relationships, and what medications they can take or substances they use. Despite the difficulty of accessing healthcare in community, participants consistently reported that accessing care in community was much easier than accessing care in prison: “It’s definitely a lot easier to access stuff now that we’re out […] It’s not like begging a corrections officer.” -FG2.
The intention of this qualitative study was to explore family planning care experiences among women and gender diverse people who have experienced incarceration. To our knowledge, this is the first study among formerly incarcerated women in community to focus on abortion and contraception experiences and needs. We found barriers to care-seeking included competing health needs, institutional procedures, mistreatment by health professionals, health knowledge gaps, and persistent challenges on release. Women and gender diverse people who have experienced incarceration have complex health histories, and describe prioritizing mental health and substance use treatment over family planning. Yet mental illness and substance use have significant physiological and social impacts on pregnancy and parenting. Further, despite high lifetime rates of pregnancy, unintended pregnancy, unmet contraceptive need and abortion among incarcerated women and gender diverse people, family planning may not be prioritized by staff in prisons designated for women because of the domination of other health needs and institutional restrictions on sexual activity. We were surprised that the subordination of family planning care continued upon release, associated with protracted disruptions to income, housing, and health services. We expected institutional procedures to present barriers, as has been described in earlier studies and internationally. While some participants brought up the violations they felt in restraints and under observation by correctional officers in the context of reproductive services, they placed greater emphasis on the harm presented by ineffective systems to request care. Submitting requests on paper through correctional officers resulted in lost requests, lack of or delayed response, violations of confidentiality, and a described need to exaggerate symptoms to prompt action. Perceiving emergent needs would go unmet, and having experienced deaths of peers in custody, participants were unconvinced any appeals for family planning would be heard. Further, and most disturbingly, mistreatment when seeking other services caused participants to fear health professionals in the institutions. While dual loyalty and institutionalization of prison-based health professionals are described extensively in the literature, these participants also experienced discrimination and stigma from health professionals in community settings after release. Stereotypes about criminalization, substance use, and drug-seeking persist among health professionals. Research has found even health professionals working in prisons lack preparatory education about the circumstances and needs prisoners experience, suggesting a deep need to augment the inclusion of prison health in health training programs. Women and gender diverse people in prison lack confidential and trusted sources of information about their rights to care and pathways to service, both inside the institutions and on release. Despite decriminalization, abortion remains stigmatized and mythologized in the broader Canadian society, allowing for information gaps and for misinformation to circulate. Our research suggests the restrictive environment of the prison not only further bars factual dialogue and information sharing, but grief and loss from separation from existing children, and intolerance of violent offences against children, may deepen the silence about abortion.
In a routinely restrictive and punitive environment, prisoners may expect restrictions and/or punishment for seeking abortion services, even if access to such services is not specifically restricted and is actually affirmed by provincial, federal and international law. Finally, the impact of bail and parole conditions on seeking family planning care is underappreciated. Relationships, housing, work activities and even medication use may all be restricted, and thus monitored by transitional housing staff and/or parole officers and police. Family planning professionals may not be aware of how these restrictions mediate reproductive decision-making.

Strengths and limitations of the work

The strengths of this study include the contributions of lived experience experts to the research process, the partnerships with frontline organizations for recruitment, the national scope of inclusion, and participants’ ability to express themselves freely in a post-incarceration context. Conducted with formerly incarcerated women and gender diverse people, our focus groups were able to shed light not only on experiences while in prison, but also on those afterwards. This study had several limitations. It collected data from people with recent histories of incarceration now living in community; people who are currently incarcerated may have different experiences. We did not formally collect participants’ information about their gender identity, age, race, or other factors, although many did volunteer this information. People who volunteered and were willing to participate in focus groups may not be representative of all people with recent histories of incarceration who are now living in the community, and the experiences they chose to relate may not be representative of all relevant experiences. Some participants may have felt uncomfortable sharing their experiences of abortion in the company of peers in the focus group context. All focus groups were conducted in English, limiting the participation of people for whom English is not their first language.

Recommendations for future research

Family planning care experienced while incarcerated is a highly under-researched area. Future studies should examine health professionals’ knowledge and understanding of the needs of people who have experienced criminalization and examine clinician biases and behaviours towards incarcerated and formerly incarcerated people.

Implications for policy and practice

Family planning professionals may support improvements to sexual and reproductive health experiences and outcomes among people who have or are experiencing incarceration by recognizing the disproportionate burdens of mental illness and substance use; anticipating the impact of prior negative experiences of care on care-seeking; accepting the limitations to health education and information in the prison context; and appreciating post-release challenges such as displacement and housing precarity. By routinely including care for people in prison in health professional curricula, future care providers may better address these needs as well as challenges to care provision, such as institutional policies and procedures like restraint use and the presence of officers. Health facilities and professional organizations should develop policies and position statements to support ethical and comprehensive service delivery for people experiencing incarceration.
Family planning is foundational to health and social equity, and service provision should affirm humanity and dignity, including for people in prisons. Future research should address health care provider attitudes and practices and health institutional policies with respect to patients who are or have been incarcerated.
Women and gender diverse people who have experienced incarceration in Canada described multiple barriers to accessing family planning services. Health needs considered more urgent, such as mental health and substance use, may crowd out care-seeking for sexual and reproductive health care. Institutional policies and procedures impede care-seeking, such as the requirement to request care via correctional officers, and the use of restraints and surveillance while receiving reproductive care. Mistreatment by health professionals, such as violations of privacy and confidentiality, denial of service and coercion, generates mistrust and avoidance of care. Although a common experience among incarcerated women and gender diverse people, abortion is stigmatized, and information about abortion and contraception is not readily available in Canadian prison contexts.
Anchorage loss of the posterior teeth under different extraction patterns in maxillary and mandibular arches using clear aligner: a finite element study | 69ff154e-ac9c-456d-87a3-ba9ea431d26e | 11465488 | Dentistry[mh] | Bimaxillary dentoalveolar protrusion patients often turn to the orthodontists complaining about unsatisfied appearance and easily injured incisors . To improve the lateral profile, extracting the premolars is recommended. Whether to extract the 1st or the 2nd premolars should consider a combination of factors, including arch crowding, lateral profile protrusion, vertical dimension, and endodontic diseases . It’s crucial to use all the premolar extraction spaces for anterior retraction without undesired mesial movement of the molars . Clear aligner treatment (CAT) is gaining popularity due to its improved aesthetic value and comfort. By virtue of its reliability and validity , CAT is also suitable for the malocclusion with complex mechanical principles , such as premolar extraction cases. However, clear aligners (CAs) close the extraction spaces through shortening the length of aligners. The contraction force generated by the terminal of aligners makes the posterior teeth tip mesially, especially when the 2nd premolars are selected . This phenomenon is known as the roller coaster effect. Enhancing the anchorage management through overcorrection is the cornerstone to deal with this side effect. Previous studies adopted multiple strategies to control of anterior torque. For instance, adding intrusive activation for the incisors, designing power ridge with a specific height . In order to compensate the delayed tooth movement with aligners, optimization of posterior anchorage management should be strengthened simultaneously, taking the form of presetting distal tipping for the posterior teeth before retraction process . To date, although there is consensus relating to posterior anchorage preparation in premolar extraction cases using CAs, the specific value under different extraction patterns is still controversial . Clinical observation finds that the roller coaster effect is more serious in the maxillary arch. Liu et al. demonstrated that the maxillary posterior teeth experienced greater levels of tipping than the mandibular teeth during retraction . To our knowledge, the density of the maxilla and mandible is different . Base on this, a hypothesis is raised that posterior anchorage design varies in the maxillary and mandibular arches owing to the anatomical structure. Finite element analysis (FEA) is an effective discretized numerical computation technology that simulates the in vivo situation by controlling a number of experimental conditions, including finite elements, nodes, and degrees of freedom . Over recent years, FEA has been widely used in the orthodontic biomedical field to investigate the initial displacement and stress distribution following the application of force . This is the first study to compare the posterior anchorage loss between two premolar extraction patterns using CAs, with different parameters of the maxilla and mandible by FEA. Then the required distal tipping of the posterior teeth is inferred. Furthermore, we elucidate the potential biomechanical effects of premolar extraction cases when applying CAs. Finally, we aim to derive relatively specific anchorage preparation value and provide guidance for clinical orthodontic treatment.
Original 3D model design

To construct a standard bimaxillary model, cone-beam computed tomography (CBCT) data (HiRes3D-Plus; Largev, Beijing, China) were derived from a healthy volunteer with well-aligned dentition and normal axial inclination of the anterior teeth. Each CBCT slice had a thickness of 0.15 mm, resulting in a total of 640 horizontal slices. The volunteer willingly participated in this experiment and provided written informed consent. This study was approved by the Ethics Committee of The Air Force Medical University (IRB Reference: KQ-YJ-2024-012). The CBCT data were imported into Mimics software (version 20; Materialise Software, Leuven, Belgium). Then, by applying the threshold procedure, we generated mask layers for the maxillary and mandibular arches, jaws, and temporomandibular joint. Next, we used GEOMAGIC Studio 2014 software (3D Systems, Rock Hill, North Carolina) to optimize the original 3-dimensional (3D) models and organize the surface structure. NX 1911 software (Siemens, Nuremberg, Germany) was then utilized to construct basic models of the periodontal ligament (PDL), cortical and cancellous bone, and attachments. We extended the outer surface of the tooth roots by 0.25 mm to generate a preliminary model of the PDL, which was assumed to be linearly elastic, homogeneous, and of uniform thickness. Although the width of the PDL varies among locations and individuals, a realistic PDL geometry shifted the center of resistance towards the alveolar margin by only 2.3% for linear material models, so the uniform-thickness assumption was considered valid. The elastic modulus of the mandibular PDL was set to be twice that of the maxillary PDL, reflecting the difference between arches. The jaw surfaces were also shifted inwards by 1.3 mm to build the cancellous bone models, and Boolean subtraction operations were then used to build the two 1.3-mm-thick cortical layers surrounding the cancellous core. This study adopted an attachment design that has been shown to be highly effective for root-control movement: vertical rectangular attachments (2 × 3 × 1 mm) were set on the canines and premolars, while horizontal rectangular attachments (3 × 2 × 1 mm) were set on the molars for aligner retention. The tooth crowns and attachments were extended outwards by 0.5 mm to simulate a 0.5-mm-thick CA membrane (the membrane was assigned the material properties of thermoplastic polyurethane). Figure shows the geometry of the components in the two models. All components were imported into ANSYS Workbench 2019 software (Ansys, Canonsburg, Pa) to build the 3D finite element models and perform FEA.
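As a minimal illustration of the shell-generation step described above (the authors' actual workflow used Geomagic/NX, not this code), the sketch below offsets a triangulated surface along area-weighted vertex normals; the same operation, with distances of 0.25 mm, 1.3 mm, or 0.5 mm, yields preliminary PDL, cancellous, and aligner-membrane geometry. The function name and toy usage are illustrative assumptions.

```python
# Sketch: uniform offset of a triangulated surface along vertex normals.
# Illustrative only -- not the study's Geomagic/NX pipeline.
import numpy as np

def offset_surface(vertices: np.ndarray, faces: np.ndarray, distance: float) -> np.ndarray:
    """Move each vertex of a triangle mesh along its area-weighted normal.

    vertices: (n, 3) coordinates in mm; faces: (m, 3) vertex indices;
    distance: offset in mm, e.g. 0.25 for the PDL shell, negative for inward.
    """
    tri = vertices[faces]                                  # (m, 3, 3)
    face_normals = np.cross(tri[:, 1] - tri[:, 0],         # area-weighted normals
                            tri[:, 2] - tri[:, 0])
    vertex_normals = np.zeros_like(vertices)
    for k in range(3):                                     # accumulate per vertex
        np.add.at(vertex_normals, faces[:, k], face_normals)
    lengths = np.linalg.norm(vertex_normals, axis=1, keepdims=True)
    vertex_normals /= np.maximum(lengths, 1e-12)           # avoid divide-by-zero
    return vertices + distance * vertex_normals

# Toy usage: offset one triangle outwards by 0.25 mm (the PDL thickness).
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.array([[0, 1, 2]])
print(offset_surface(v, f, 0.25))
```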
The models were meshed with 3D 10-node tetrahedral elements. All structures were set as linear, elastic, isotropic, and homogeneous (Table ), as previously reported. The mechanical response was significantly influenced by the assumed elastic properties of the maxillary and mandibular cancellous bone. The elastic modulus of type III bone, which consists of tightly arranged trabeculae, is approximately 6-fold higher than that of type IV bone, which has sparsely arranged trabeculae. Based on this, we pioneered setting the elastic modulus of the mandibular cancellous bone 6-fold higher than that of the maxilla.
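The material assignments can be summarized as in the sketch below. The two ratios (mandibular PDL = 2 × maxillary PDL; mandibular cancellous = 6 × maxillary cancellous) come directly from the text, but the numeric moduli shown are placeholder values typical of the aligner-FEA literature, not the entries of this study's Table.

```python
# Hedged summary of the linear-elastic material model (E in MPa, nu = Poisson's
# ratio). Numeric values are literature-typical placeholders, NOT the study's
# Table entries; the 2x and 6x ratios are taken from the text.
MAXILLARY_PDL_E = 0.05           # MPa, placeholder
MAXILLARY_CANCELLOUS_E = 1370.0  # MPa, placeholder

materials = {
    "tooth":                {"E": 19600.0, "nu": 0.30},
    "cortical_bone":        {"E": 13700.0, "nu": 0.30},
    "cancellous_maxilla":   {"E": MAXILLARY_CANCELLOUS_E,     "nu": 0.30},
    "cancellous_mandible":  {"E": 6 * MAXILLARY_CANCELLOUS_E, "nu": 0.30},  # stated 6x ratio
    "pdl_maxilla":          {"E": MAXILLARY_PDL_E,            "nu": 0.45},
    "pdl_mandible":         {"E": 2 * MAXILLARY_PDL_E,        "nu": 0.45},  # stated 2x ratio
    "aligner_tpu":          {"E": 528.0,   "nu": 0.36},  # thermoplastic polyurethane
    "attachment_composite": {"E": 12500.0, "nu": 0.36},
}
```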
The upper edge of the maxilla and the lower edge of the mandible were set as the upper and lower boundaries, respectively. Bonded contact was set on the cancellous-cortical bone, cortical bone-PDL, PDL-tooth, and tooth-attachment interfaces. Adjacent teeth were allowed a small amount of frictionless sliding on their contact surfaces. The contact interfaces between the aligners and the crown surfaces and attachments were assigned a friction coefficient of 0.2, making it possible to move the teeth and retain the appliance. The interaction between the maxillary and mandibular arches was neglected.
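Expressed as data, the contact scheme above looks roughly like the following; these are illustrative structures summarizing the text, not ANSYS Workbench API calls.

```python
# Hedged sketch of the contact definitions described above (illustrative data
# structures only; the study configured these in ANSYS Workbench 2019).
contact_pairs = [
    # (surface A,         surface B,              behaviour,      friction mu)
    ("cancellous_bone",   "cortical_bone",        "bonded",       None),
    ("cortical_bone",     "pdl",                  "bonded",       None),
    ("pdl",               "tooth",                "bonded",       None),
    ("tooth",             "attachment",           "bonded",       None),
    ("tooth_i",           "tooth_i_plus_1",       "frictionless", None),  # adjacent crowns
    ("aligner",           "crown_and_attachment", "frictional",   0.2),
]
```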
Two different extraction groups were established (Fig. ). Model 1 simulated the extraction of the maxillary and mandibular 1st premolars (4/4), while Model 2 simulated the extraction of the 2nd premolars (5/5). The axial inclination of the maxillary central incisors was 110° (U1-SN = 110°) and that of the mandibular central incisors was 100° (L1-MP = 100°). In both models, the anterior teeth were set with a sagittal inward step of 0.25 mm to simulate one retraction step. We assumed that the axial inclinations of the maxillary and mandibular central incisors would be reduced to normal values after orthodontic treatment (U1-SN = 105°; L1-MP = 95°).
FEA was used to investigate the initial force system under static loading. To simulate the clinical situation of treating patients with bimaxillary dentoalveolar protrusion, the incisors were set to be retracted sagittally by 0.25 mm and the canines were shifted distally by 0.25 mm (in Model 2, the 1st premolars were set to move distally instead), thus deforming the anterior region of the CAs. The loading force was then applied by the mismatch between the aligners and the initial dentition. The finite element models were assembled using the discrete numerical method, and the number of elements and nodes for each model is shown in Table .

Reference coordinate system

Three separate coordinate systems were established to investigate the orthodontic tooth movement (OTM) of individual teeth and the entire dentition. The global Cartesian coordinate system was defined for the whole dentition (Fig. a). The X-axis represented the coronal plane, with positive values denoting the left side and negative values denoting the right side. The Y-axis represented the sagittal plane, with positive values denoting the posterior side and negative values denoting the anterior side. The Z-axis represented the vertical plane, with positive values denoting the superior side and negative values denoting the inferior side. The origin of the local Cartesian coordinate system for each tooth lay at the center of mass of the clinical crown and was defined as follows: the x-axis (positive and negative values denoted mesial and distal, respectively), the y-axis (positive and negative values denoted lingual and buccal, respectively), and the z-axis, for which the apical direction was positive in the maxillary arch and the incisal/occlusal direction was positive in the mandibular arch (Fig. b). A cylindrical coordinate system was used to define the rotation angle of each tooth; its origin was located at the center of mass of the tooth, and the y-axis represented the rotation axis. Positive values indicated rotation in the lingual or mesial direction; negative values indicated the buccal or distal direction (Fig. c).

Calculation and analysis

The displacement direction and amount (based on the Cartesian coordinate systems), the rotation angle of each tooth (based on the cylindrical coordinate system), the hydrostatic stress distribution in the PDL, and the deformation and von Mises stress of the CAs were evaluated to assess the anchorage loss of the maxillary and mandibular arches under the different extraction patterns.
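To illustrate how a per-tooth rotation angle can be recovered from FEA output, the sketch below fits the rigid-body rotation of one tooth's nodes between the pre- and post-loading states using the Kabsch algorithm. This is a generic post-processing approach under our own assumptions, not necessarily the exact ANSYS procedure, and it returns only the total rotation magnitude rather than its decomposition about the cylindrical y-axis.

```python
# Sketch: best-fit rigid-body rotation angle of one tooth from nodal
# coordinates before (pre) and after (post) loading. Generic Kabsch fit,
# not the study's exact ANSYS post-processing.
import numpy as np

def rigid_rotation_angle_deg(pre: np.ndarray, post: np.ndarray) -> float:
    """pre, post: (n, 3) coordinates of the same nodes of one tooth."""
    p = pre - pre.mean(axis=0)                # center on the tooth's centroid
    q = post - post.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ q)         # Kabsch: SVD of H = P^T Q
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # best-fit rotation matrix
    cos_theta = np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```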
The effect of tooth extraction patterns on the rotation angle of the anterior teeth in the sagittal direction

In both models, the anterior teeth exhibited a similar lingual inclination tendency. The central incisors exhibited the lowest lingual inclination tendency, while the canines in Model 1 and the 1st premolars in Model 2 exhibited the most severe distal tipping tendency, indicating the greatest anchorage loss (Fig. ). Compared between the two models, the inclination/tipping tendency was more notable in Model 1, and the difference was most obvious for the central incisors. With regard to the maxillary arch, the lingual inclination angle of the central incisor was 0.1172° in Model 1, compared with 0.0442° in Model 2. The canines were distally tipped with the same tendency (Model 1: 0.2966°; Model 2: 0.2423°) (Table ). The lingual inclination angle and sagittal movement value of the maxillary central incisor in Model 1 were about 2.7-fold higher than those in Model 2 (Fig. ; Table ).
As the anchorage-providing units, the posterior segments behind the extraction spaces underwent a mesial tipping tendency in both models (Fig. ). By superimposing the pre- and post-loading models on each tooth's center of mass, the mesial tipping tendency of each posterior tooth in Model 2 was more pronounced than in Model 1 (Fig. ). For the maxillary arch, the 1st molar in Model 2 tipped mesially by 0.2672°, over 1.4-fold larger than in Model 1 (0.1888°). For the mandibular arch, the 1st molar in Model 2 tipped mesially by 0.1596°, over 1.7-fold larger than in Model 1 (0.0927°) (Fig. ; Table ). These data suggest that the anchorage loss of each posterior tooth was more remarkable in Model 2. As shown in Table , the 1st premolars in Model 2 exhibited the most severe distal movement tendency, while the 2nd molars in Model 1 exhibited the slightest mesial movement tendency. In Model 1, the 2nd premolars were adjacent to the extraction spaces, and their angulation was crucial to anchorage protection. The tendency for mesial tipping was ranked as follows: 2nd premolars > 1st molars > 2nd molars, indicating that the closer a tooth was to the extraction space, the more pronounced its tipping tendency and the greater the need for an anti-tipping design. In Model 2, the 1st premolars and molars tipped towards the extraction spaces. The anchorage values of the 1st premolars were smaller than those of the 1st molars based on the PDL area; therefore, a greater tipping tendency occurred in the 1st premolars under the reciprocal force.

The differences of anchorage loss between the maxillary and mandibular arches

The maxillary and mandibular arches exhibited the same tipping tendency but to a different extent (Fig. ; Table ). The total mesial movement value of the maxillary posterior segments was greater in Model 2, while for the mandibular posterior segments it was greater in Model 1. The tipping tendency of each maxillary tooth was more obvious than that of the corresponding mandibular tooth (cylindrical coordinate system). With regard to the anterior segments, the largest difference was for the central incisors in Model 1: the lingual inclination of the maxillary central incisor was 0.1172°, 3.5-fold higher than that of the mandibular central incisor (0.0335°). The same tendency was found in the posterior arches. In Model 1, the maxillary 2nd premolar, 1st molar, and 2nd molar tipped mesially by 0.2859°, 0.1888°, and 0.1466°, almost 1.6-, 2.0-, and 2.3-fold higher than the corresponding mandibular values, respectively. In Model 2, the maxillary 1st molar and 2nd molar tipped mesially by 0.2672° and 0.2010°, almost 1.7- and 1.9-fold higher than the corresponding mandibular values, respectively. The most prominent difference between the maxillary and mandibular arches was detected for the 2nd molars in Model 1.

PDL hydrostatic stress in the anterior and posterior segments

Compared between the maxillary and mandibular arches, the PDL hydrostatic stress was higher in the maxilla (Fig. ). Compared between the two models, the PDL hydrostatic stress was more evenly distributed in Model 1, and the closer to the extraction spaces, the higher the PDL hydrostatic stress. In Model 1, the highest compressive stress was concentrated on the distal cervical region and the mesial apex of the canines (maxillary: -0.1116 MPa; mandibular: -0.1624 MPa).
The highest tensile stress was detected in the mesial cervical region and the distal apex of the canines (maxillary: 0.1136 MPa; mandibular: 0.1407 MPa). The PDL hydrostatic stress in Model 2 exhibited relatively higher compressive and tensile values. The highest compressive stress was concentrated on the distal cervical region and the mesial apex of the 1st premolars (maxillary: -0.1117 MPa; mandibular: -0.1310 MPa). The highest tensile stress was detected in the mesial cervical region and the distal apex of the 1st premolars (maxillary: 0.0811 MPa; mandibular: 0.1331 MPa).

The total deformation and von Mises stress of CAs

The maximum total deformation occurred in the incisor region of the CAs, while the minimum total deformation was found in the molar region in both models, in accordance with the loading process. The stress in the CAs was relatively evenly distributed over the incisor and molar regions and concentrated in the regions adjacent to the extraction spaces (Fig. ).
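For readers less familiar with these two measures, both are standard invariants of the stress tensor; in terms of the principal stresses, with negative hydrostatic values denoting compression (the PDL pressure side) and positive values denoting tension:

$$\sigma_{\text{hyd}}=\frac{\sigma_{1}+\sigma_{2}+\sigma_{3}}{3},\qquad \sigma_{\text{vM}}=\sqrt{\frac{(\sigma_{1}-\sigma_{2})^{2}+(\sigma_{2}-\sigma_{3})^{2}+(\sigma_{3}-\sigma_{1})^{2}}{2}}.$$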
FEA is regarded as a reliable method for investigating orthodontic biomechanics by estimating the initial displacement tendency and stress distribution in vitro. However, it is hard to specify the anchorage type involved in the retraction process, because FEA reveals only the instantaneous effect. Innovatively, we observed the initial tipping tendency and took it as the anchorage loss. Clinically, the final anchorage preparation value need only be obtained by multiplying the per-step tipping by the number of personalized orthodontic steps, in order to achieve maximal retraction of the anterior teeth with no mesial movement of the posterior teeth. CAT has clear advantages in terms of aesthetics and comfort. However, there remains a large discrepancy between the predicted and achieved OTM, especially in premolar extraction cases. Clinical observation finds that, without an additional anchorage management design, CAT is more prone to the roller coaster effect than fixed orthodontic treatment because of the aligners' lower rigidity and stress relaxation, further emphasizing the importance of anchorage management. Miniscrews are widely used as temporary anchorage devices (TADs), which provide strong anchorage. However, CAs close the extraction spaces by shortening the aligner length, and the tooth movement always lags behind the CAs. TADs alone cannot make the actual OTM match the aligners exactly, so the roller coaster effect will still occur. Deriving from the modern Tweed-Merrifield sequential directional force treatment philosophy, the concept of anchorage preparation is gradually emerging in CAT. However, Cheng et al. reported that a severe mesial tipping tendency of the posterior teeth occurred notwithstanding the power ridge added as anterior torque compensation. Therefore, it is necessary to strengthen posterior anchorage management. Based on the questions raised by the existing literature, we focused particularly on the posterior anchorage design in premolar extraction cases. Assuming a total of 60 orthodontic steps, the anti-tipping design for each posterior tooth was inferred (Fig. a; Table ); a numeric sketch of this per-step multiplication is given below. Previous studies have concentrated on the anchorage management of the 1st molar. Align Technology proposed the G6 protocol, which presets 4° of mesial angulation for the maxillary 1st molar when the overbite is deeper than 2 mm. Dai et al. reported that a distal tipping angle of 6.6° should be planned for the maxillary 1st molar to achieve bodily retraction. Feng et al. suggested that a distal tipping of 8.7° should be designed for the maxillary 1st molar to prevent mesial tipping. Another study prescribed overcorrection (distal tipping) of 2.9° for the 1st molar to counter the anchorage loss; however, the 1st molars still experienced mesial movement (2.2 mm) with a great level of mesial tipping (5.4°). Based on exact anchorage loss values, the results can therefore provide significant clinical guidance.
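The following sketch reproduces the multiplication logic described above, using the per-step initial tipping values of the 1st molars reported in the Results. It is a linear extrapolation of the instantaneous FEA tendency under the 60-step assumption from the text; the study's actual inferred values are in its Table, and these figures are illustrative, not a validated clinical prescription.

```python
# Sketch of the stated inference: total anti-tipping (anchorage preparation)
# = per-step initial mesial tipping (deg, from the FEA Results) x planned
# number of aligner steps. Linear extrapolation for illustration only; the
# study's inferred values are reported in its Table.
PER_STEP_MESIAL_TIP_DEG = {
    # (arch, extraction pattern): 1st molar tipping per 0.25 mm step
    ("maxilla",  "Model 1 (4/4)"): 0.1888,
    ("maxilla",  "Model 2 (5/5)"): 0.2672,
    ("mandible", "Model 1 (4/4)"): 0.0927,
    ("mandible", "Model 2 (5/5)"): 0.1596,
}

def anchorage_preparation_deg(per_step_tip: float, n_steps: int = 60) -> float:
    """Distal pre-angulation needed to offset the accumulated mesial tipping."""
    return per_step_tip * n_steps

for (arch, model), tip in PER_STEP_MESIAL_TIP_DEG.items():
    print(f"{arch:8s} {model}: {anchorage_preparation_deg(tip):5.1f} deg over 60 steps")
```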
Up to now, no literature has been available on the potential biomechanical effects of CAs under an anchorage preparation design. This process involves two systems of forces (Fig. b). Firstly, the posterior segments achieve anchorage preparation through the distal deformation of the CAs before retraction. As the posterior teeth receive the distally directed force, a forward counter-force on the posterior aligner regions is produced. Because the CA is one continuous structure, the anterior regions receive the transmitted counter-force and exert a labial force on the anterior teeth. Secondly, as the CAs contract, the arches receive forces directed towards the extraction spaces. Since the CAs only wrap the crowns, the force generated by the CAs can create only minimal counter-moments, making the teeth tip rather than move bodily. With regard to the posterior segments, the two forces remain in conflict, and the anchorage preparation force serves to strengthen the posterior anchorage. With regard to the anterior segments, the retainer contraction force opposes the transmitted counter-force. In cases requiring maximum posterior anchorage, the anchorage preparation must offset the contraction force on the posterior aligner regions while the anterior aligner regions still exert their contraction force. A more severe roller coaster effect is consistently observed in 2nd premolar extraction cases treated with CAs. According to this research, the anchorage loss of each posterior tooth was more severe when the 2nd premolars were extracted, in agreement with clinical observation. Three factors may be responsible for the different anchorage loss values between the two extraction patterns. Firstly, as the ratio of posterior to anterior units is smaller when the 2nd premolars are extracted, greater posterior anchorage loss will occur. Secondly, the posterior segments in Model 1 exhibit a stronger anti-tipping ability owing to the larger PDL area estimated by Jepsen's PDL-area ratio. Thirdly, the 2nd premolar extraction space is bound by a molar and a premolar, whereas the 1st premolar extraction space is bound by a premolar and a canine; the moment-to-force ratio applied to the teeth thus produces more tipping in the molars because of the more obvious disparity between premolars and molars. In clinical practice, a larger posterior anchorage preparation value should therefore be designed in 2nd premolar extraction cases for better anchorage management. In addition, the selection of the premolar extraction pattern can be influenced by the degree of arch crowding, the lateral protrusion profile, and the vertical dimension. Shaweesh noted that the mesial-distal diameter of the 1st premolar is larger than that of the 2nd premolar, thus providing more space for decrowding. A previous study of mandibular arches found that the average mandibular reciprocal relative anchorage loss was 25% or 40% for extraction of the 1st or 2nd premolar, respectively, in CAT. These data suggest that extraction of the 1st premolars is the superior choice in cases that require the greatest improvement in the lateral protrusion profile. Enhancing the anchorage management of the molars is also a safeguard against deterioration of the vertical dimension. Theoretically, owing to Christensen's phenomenon, mesial movement of the anchorage molars will reduce the vertical dimension. As the mesial movement of the posterior segments is larger in 2nd premolar extraction cases, the anterior facial height will exhibit a greater reduction. In contrast, several studies have failed to identify a wedge effect; they concluded that the amount of anchorage loss might play a greater role in the vertical dimension than the location of the premolar extractions, highlighting the importance of anchorage preparation. As described above, the clinical indications of the two extraction patterns are summarized in Fig. . Based on the FEA results, the tipping tendency of each maxillary posterior tooth was 1.6- to 2.3-fold higher than that of the corresponding mandibular tooth.
Moreover, the farther a tooth was from the extraction space, the greater the divergence (Fig. and Table ). This can be explained by the following reasons. First, there are differences in the anatomical structure of the maxilla and mandible. The maxilla is composed of a thin cortex and slender trabeculae, whereas the mandible has a thicker cortex and stouter trabeculae. During OTM, alveolar bone resorption occurs ahead of the moving tooth while reconstruction occurs behind it. The maxilla, with its lower density, is more prone to resorption and reconstruction; therefore, the maxillary posterior teeth tip mesially more easily than the mandibular posterior teeth. Second, the variation in tooth size should be taken into consideration. The mandibular central incisor is the smallest tooth in either arch, so the mandibular arch exhibits a larger difference in anterior-posterior anchorage than the maxillary arch. Third, according to Andrews' six keys to normal occlusion, the maxillary molars tip mesially by 5° and the mandibular molars by 2°; this initial physiological angulation makes the maxillary posterior teeth more prone to losing anchorage control. Finally, the target torque of the maxillary and mandibular anterior teeth differs (maxillary central incisor: +7°; mandibular central incisor: -1°). Thus, the mandibular anterior teeth can tolerate a greater degree of lingual inclination, whereas the maxillary anterior teeth cannot. Taken together, stronger anchorage management is required for the maxillary posterior teeth. It is important to highlight that the roller coaster effect is not entirely unfavorable, and the indications need to be taken into account when designing the anchorage preparation: the lingual inclination of the incisors during anterior retraction can compensate for a mild open bite. There are some limitations to this study that need to be taken into account. FEA cannot completely simulate the real situation in vivo. Although we tried to simulate the real elastic moduli of the maxilla, mandible, and PDL, it is impossible to completely replicate the physiological situation. For instance, bone density differs between the anterior and posterior regions of the maxilla and mandible, which may influence the actual anchorage loss. The properties of the maxilla, mandible, and PDL require further investigation to better simulate the clinical condition. Moreover, the anchorage preparation values in CAT require individualized adjustment based on the clinical situation, such as the type of membrane material, patient age, and number of orthodontic steps. Animal experiments are planned to validate these results.
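To make the scaling rule mentioned above concrete (total anchorage preparation = per-step tipping tendency x number of planned steps), a minimal sketch follows. The per-step values are hypothetical placeholders, not the FEA results reported in the Table; only the 60-step assumption comes from the text.

```python
# Minimal sketch of the anchorage preparation scaling rule:
# total anti-tipping to preset = per-step mesial tipping tendency x number of steps.
# The per-step values below are hypothetical placeholders, not the paper's FEA data.

N_STEPS = 60  # total planned orthodontic steps, as assumed in the text

per_step_tipping_deg = {   # hypothetical mesial tipping tendency per step (degrees)
    "2nd premolar": 0.05,
    "1st molar": 0.08,
    "2nd molar": 0.06,
}

for tooth, tip in per_step_tipping_deg.items():
    total_prep = tip * N_STEPS  # distal anti-tipping to preset for this tooth
    print(f"{tooth}: preset about {total_prep:.1f} degrees of distal tipping")
```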
In this study, we used FEA to evaluate the rotation and displacement tendencies of the anterior and posterior teeth in two models. Under the different extraction patterns, the posterior teeth tipped mesially in the same direction but to different degrees when CAs were used. Anchorage preparation should be preset for the posterior teeth to achieve maximal retraction of the anterior teeth without undesired mesial tipping of the posterior teeth, and the values require individualized adjustment. With the same anterior retraction design, the anchorage loss of each individual posterior tooth was significantly higher in 2nd premolar extraction cases than in 1st premolar extraction cases. For the same tooth site, the anchorage loss was larger in the maxilla than in the mandible. When adopting CAT in premolar extraction cases, the posterior anchorage preparation design should therefore be enhanced for the maxillary teeth. From a clinical point of view, it is important to select the extraction pattern based on the desired movement ratio of the anterior and posterior teeth. If the goal is to improve the lateral profile protrusion to the greatest extent, extraction of the 1st premolars is the superior choice, as it allows more retraction.
The effect evaluation of advanced penlight

Pupil size and reactivity assessment is a routine part of health and nursing care in general wards, emergency rooms (ERs), and intensive care units (ICUs) [ – ]. Measuring pupillary contraction and the change in pupil size after light stimulation can serve as a window on the brain and a means of evaluating autonomic nervous system function. In addition, pupillometry can be used for the early diagnosis of related diseases, to assess disease severity, to guide treatment and nursing care strategies, and to predict disease outcomes [ – ]. Pupil diameter measurement is also applied in studies of recognition memory: the pupils become significantly enlarged when subjects see old, familiar items. Under normal conditions, the pupil diameter should be the same in both eyes and ranges from 1.5 to 6.0 mm. Light stimulation of the pupil causes its contraction, which is known as the pupillary reflex. A penlight provides a light source and has become the most commonly used tool for assessing pupil diameter. Asymmetry of pupil constriction in response to light, in which one pupil constricts while the other remains dilated or constricts more slowly, may indicate dynamic anisocoria, a Marcus Gunn pupil (relative afferent pupillary defect, RAPD), or temporal lobe herniation in the brain [ – ]. A pupil measurement ruler is usually attached to the side of a general penlight (GPL). The pupil diameter can be measured only after removing the GPL and placing the ruler close to the eye, which gives an approximate, indirect estimate. Using the GPL therefore not only extends the evaluation time but also reduces the accuracy of pupil diameter measurement. A number of studies have examined the effects of different equipment on pupil diameter measurements [ – ]. Measurement accuracy and the operation time required are important characteristics of any new instrument or system. However, the bulky designs of the refractometers or computer systems used for pupil diameter measurement in the above-mentioned studies complicate their use in clinical conditions and on patients in critical and intensive care environments. Although the GPL is convenient, health caregivers and nurses generally have less confidence in pupil diameters measured with it. Hence, it is important to redesign the GPL to solve these difficulties and improve the accuracy of pupil diameter measurement in patient care. The purpose of this study was to compare the accuracy and operational time of pupil diameter measurement using the GPL and a newly designed penlight.
The design of the advanced penlight

To improve the accuracy and convenience of pupil diameter measurement, our research team designed an advanced penlight (APL) ( ). The proposed APL incorporates two innovations: a perspective measurement ruler (PMR) printed with standard pupil sizes, and a rotary attachment design. The PMR and the rotary design were made in several steps. Eight standard pupil diameters (from 2 mm to 9 mm) were printed on a transparent plastic plate to form the PMR, which is 5 cm long and 1 cm wide. A two-piece metal snap attaches the PMR to the penlight: the bottom part of the snap is attached to the top of the PMR, and the top part is then attached to the bottom side of the plate. Next, the hook side of a Velcro fastener was attached to the bottom of the PMR, and the loop side was attached to the opposite bottom side of the penlight to hold the PMR in place. The PMR can be placed close to the eyes to directly measure the pupil diameter before and after pupillary contraction. The metal snap serves as a rotary joint that fixes the PMR and allows the pupil diameter to be measured at the desired angle. The APL bulb is rated at 2.2 V/0.25 A. The APL was prototyped by 3D printing ( ).

Experimental design

A one-group post-test, single-blind study design was used. The research was approved by an institutional review board (17MMHISO41e) and was carried out between August 2017 and January 2018. Purposive sampling was used to recruit ninety nursing students from a college in northern Taiwan. The inclusion criterion was experience operating a penlight during the nursing internship. The exclusion criteria were serious eye disease affecting eyesight and feeling anxious or panicked in dark environments. Participants were recruited through leaflets; those willing to join the study contacted the researchers directly to arrange a time for the experiment. After the researcher explained the research purpose, process, and possible benefits and risks, all participants signed a written consent form. For participants under 20 years of age, a guardian or parent also signed a consent form. The standard pupil diameter of each subject was measured using a refractometer (RM) (Topcon Auto Kerato-refractometer; 75–1, Hasunuma-Cho, Itabashi-Ku, Tokyo, 174–8580, Japan), and the average values were calculated. The ninety participants each measured the subject's pupil diameter separately with the GPL and the APL using a standard eight-step procedure [ – ] ( ). The GPL was a Spirit brand penlight, model CK-907D, with a 3.0 V LED bulb. Ambient lighting was measured during each operation and kept at 118 lux. Although at least a 1-minute break between light stimulations has been suggested [ – ], a 10-minute break was used in this research to restore the sensitivity of the subject's pupil to light stimulation. The pupil diameters before and after pupillary contraction were recorded separately for each participant; pupillary contraction was defined as the maximal constriction after light stimulation. The average times to perform the eight steps with the GPL and the APL were also computed and recorded without informing the participants, to avoid the Hawthorne effect. After the experiment, participants completed a questionnaire about their opinions.

Estimation of sample size

G*Power was used to estimate the sample size.
On the basis of a power of 0.95, an effect size of 0.5, and a significance level of 0.05, the required sample size was calculated to be 45. Allowing for a 20% loss rate, the sample size was set to at least 54.

Questionnaire on user opinions

A questionnaire was created to investigate participants' opinions on using the APL and the GPL. A four-point Likert scale was used (1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree), with higher scores representing a more positive opinion. The Cronbach's alpha coefficient of the questionnaire, determined from 30 pretest participants, was 0.82. Content validity was evaluated by three experts: a nurse with 7 years of ICU experience, an assistant professor of nursing, and a physician. Each question was rated on the same four-point scale for correctness, feasibility, appropriateness, and completeness. The average expert ratings ranged from 3.7 to 3.9 points, indicating good validity.

Analysis methods

The collected data were analyzed using SPSS for Windows, version 20.0 (IBM Corp., Armonk, NY). Descriptive statistics were used to summarize the basic characteristics of the participants. The mean differences in pupil diameter and operation time between the GPL and APL measurements were analyzed using t-tests. Bland-Altman plots and one-sample t-tests were used to assess potential dependency between the means and differences of the GPL, APL, and RM measurements. Participants' opinions after using the APL and the GPL were compared using independent t-tests.
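Returning to the sample-size estimation above, the calculation can be reproduced without G*Power; the minimal Python sketch below uses statsmodels. The test family (a paired/one-sample t-test) and the one-tailed alternative are our assumptions, since the paper does not state them; with these settings the solver returns approximately 45, matching the reported value.

```python
import math
from statsmodels.stats.power import TTestPower

# Assumed settings: paired/one-sample t-test with a one-tailed alternative.
# Power, effect size, and alpha are as stated in the text.
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05,
                             power=0.95, alternative='larger')
n_required = math.ceil(n)                  # about 45 participants
n_with_loss = math.ceil(n_required * 1.2)  # add 20% for attrition -> 54

print(n_required, n_with_loss)
```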
Characteristics of the participants

There were ninety participants, 83% of whom were female senior nursing students. The mean age of the participants was 20.01 (SD = 0.47) years. Approximately 78% (N = 70) of the participants had uncorrected visual acuity between 0.5 and 0.9. The mean duration of the clinical nursing internship was 7.12 (SD = 0.48) months, and the mean frequency of GPL use during the internship was 52.31 times (SD = 8.2). Approximately 88.0% of the participants acknowledged that the design quality of the penlight is important for monitoring disease progression ( ).

Comparison of the GPL and the RM

The left pupil diameter before pupillary contraction (LPD BPC) was 1.52 mm larger than after pupillary contraction (APC) under light stimulation with the GPL, and the right pupil diameter before pupillary contraction (RPD BPC) was 1.47 mm larger than APC. With the RM, the LPD BPC was 2.23 mm larger than APC, and the RPD BPC was 1.87 mm larger than APC. The average pupil diameters measured by the RM were significantly larger than those measured by the GPL both before and after pupillary contraction, indicating that the pupil diameters measured by the GPL and the RM differed.

Comparison of the APL and the RM

With the APL, the LPD BPC and the RPD BPC were each 1.86 mm larger than APC. With the RM, the LPD BPC was 2.23 mm larger than APC, and the RPD BPC was 1.87 mm larger than APC. There were no significant differences between the average pupil diameters measured by the APL and the RM, indicating that the two instruments gave very similar values.

Analysis of Bland-Altman plots

One-sample t-tests showed that the mean differences between the GPL and the RM before (t = 12.626, p < 0.001) and after (t = 9.028, p < 0.001) pupillary contraction were statistically significant, whereas the mean differences between the APL and the RM before (t = 1.481, p = 0.142) and after (t = 0.712, p = 0.487) pupillary contraction were not. Bland-Altman plots of the GPL versus the RM before and after pupillary contraction (Figs and ) revealed significant differences, whereas the corresponding plots of the APL versus the RM (Figs and ) revealed no significant differences.

Comparison of operational time for the APL and the GPL

The operational times (in seconds) of the eight standard procedures ( ) performed with the GPL and the APL were measured separately. The average operational time of the eight steps was 14.81 seconds with the GPL and 6.12 seconds with the APL, 8.72 seconds shorter (t = -3.81; p = 0.001) than with the GPL ( ).

Comparison of participants' opinions regarding the use of the APL and the GPL

The average scores on all five questions were significantly higher for the APL than for the GPL ( ). For the item on confidence in judging the pupil diameter, the mean score was 1.23 points higher for the APL than for the GPL (t = 11.85; p < 0.001). For the item on reduced time to judge the pupil diameter, the mean difference between the APL and the GPL was 1.13 points (t = 9.67; p < 0.001).
All of the participants considered that the convenience of, and their confidence in, pupil diameter measurement were higher when using the APL rather than the GPL, and they were therefore more inclined to use the APL.
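As a companion to the agreement analysis reported above, the following minimal sketch shows how the Bland-Altman statistics (mean difference and 95% limits of agreement) for paired measurements can be computed; the arrays are hypothetical placeholders, not the study data.

```python
import numpy as np

# Hypothetical paired pupil-diameter readings in mm (not the study data).
apl = np.array([4.1, 3.8, 4.5, 3.9, 4.2])  # advanced penlight
rm  = np.array([4.0, 3.9, 4.6, 3.8, 4.3])  # refractometer (reference)

diff = apl - rm                    # per-subject differences
bias = diff.mean()                 # mean difference (bias)
sd = diff.std(ddof=1)              # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.3f} mm, LoA = ({loa[0]:.3f}, {loa[1]:.3f}) mm")
# A one-sample t-test of `diff` against zero (e.g., scipy.stats.ttest_1samp)
# corresponds to the significance tests reported above.
```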
In our study, the mean pupil diameters measured by the GPL differed significantly from the standard values measured by the RM. This result is similar to that of Couret et al., who found that, compared with a hand-held electronic monocular pupillometer, standard penlight assessment of pupil size and the pupillary reflex yields inaccurate data. Although electronic, automated pupillometry is a more reliable instrument for pupil assessment, the GPL is still frequently used in clinical care to measure pupil diameter and the pupillary reflex of the autonomic nervous system. It is therefore worthwhile to improve the GPL in a cost-effective way to increase the accuracy of pupil diameter measurement during the pupillary reflex. The mean pupil diameters measured by the APL were much closer to the standard values measured by the RM. Previous research found that using a penlight with a gauge improves the consistency of pupil size assessment compared with using no gauge. In the APL, the gauge was redesigned as a movable PMR that allows direct comparison with the pupil, so the consistency and accuracy of pupil diameter measurement can be enhanced. In addition, the time-saving convenience and accurate measurement design of the APL met the expectations of participants with nursing backgrounds, and the APL could be used by health and nursing care practitioners for pupil assessment and disease monitoring in the future. In our results, the average pupil diameters measured by the RM were larger than those measured by the GPL and the APL both before and after pupillary contraction. Pupil diameter is influenced by the state of both eyes and the degree of retinal illumination [ – ]. The subjects' eyes were very close to the RM during measurement, so less light reached the pupil and it dilated, which may account for the larger values in our measurements. The visually direct measurement method of the APL can significantly improve the accuracy of pupil diameter measurements. The design of the PMR allows it to be placed very close above the pupil, so that the examiner can look through the scale and accurately estimate the pupil diameter before and after pupillary contraction without moving the APL. Accordingly, the pupil diameters measured by the APL were all significantly correlated with the values measured by the RM. Pupil diameter changes within a few seconds before and after pupillary contraction. The differences between the average pupil diameters before pupillary contraction measured by the RM and the APL were 0.21 mm and 0.08 mm in the left and right eyes, respectively; after pupillary contraction, the corresponding differences were -0.106 mm and 0.07 mm. By contrast, the differences between the RM and the GPL for the same eyes were 1.23 mm, 1.08 mm, 0.52 mm, and 0.68 mm. The design of the APL thus allows detection of slight changes in pupil diameter, and its measurements were closer to the values obtained with the RM. The operation time was 8.72 seconds shorter with the APL than with the GPL. The first reason is that operating the APL does not require moving a pupil measurement ruler, located on the side of the GPL, close to each eye.
The second reason is that the direct measurement provided by the APL increases users' confidence in the pupil diameter values and decreases the time spent on repeated measurement and interpretation. To date, no research has examined the influence of performance confidence and operational time on pupil diameter measurements; our results indicate that confidence in the pupil diameter values can significantly decrease the operation time. Studies also rarely compare users' subjective views of different pupillometry methods. In our results, the mean differences in the questionnaire answers can be interpreted alongside the physiological measurements. For example, the mean score for the item on accuracy in judging the pupil diameter was 1.08 points higher with the APL than with the GPL (t = 8.76, p < 0.001), consistent with the pupil diameter measurements obtained by the APL and the RM. Compared with the GPL, the APL has a more innovative design and evaluation method for pupil diameter measurement, and its cost is low: we estimate the price of the APL at approximately US$9.5. Moreover, the APL can be handmade from readily available raw materials, the production technology is simple, and the potential for mechanization is high, which is beneficial for future mass production. The APL is easy to use, and no extra training is required. Pupillometry has many applications: in addition to the evaluation of nursing care quality, medical diagnosis, and disease prognosis, changes in pupil diameter could become a means of understanding and measuring cognitive processes. So far, there is no similar product, so there will be little competition on the market, and the potential market is large. At present, general wards, intensive care units, and emergency rooms are equipped with GPLs; as long as the GPL is slightly modified and improved into the APL, it can not only achieve precise and confident pupil diameter measurement but also save measurement time for health and nursing care practitioners. Several limitations need to be mentioned. First, owing to budget and instrument limitations, we chose the RM, an ophthalmological instrument, to measure the standard pupil size; thus, the illumination conditions differed, and the resulting pupil sizes differed between the penlight and the RM. A comparable device is a handheld infrared pupillometer, such as the NPi-200 (NeurOptics), whose accuracy has already been demonstrated and which shows the temporal change of the pupil diameter after LED light stimulation. Second, the participants were purposively sampled and recruited from the same college, and most were female, which may have affected the results. To increase the generalizability of the results, we suggest that future studies use gender-matched samples and recruit participants from diverse sites. We found only that the modified design of a conventional LED penlight with a pupil gauge differed statistically from the GPL; we cannot confirm its clinical significance because the subjects were in good health. We suggest that future experiments apply the APL to patients with brain or eye diseases to confirm its clinical significance.
Compared with the GPL, the average pupil diameter measured by the APL was closer to the standard pupil diameter measured by the RM, and the average operational time of the APL was significantly shorter. All of the participants believed that the convenience of, and their confidence in, pupil diameter measurement were higher with the APL than with the GPL, and they were more inclined to use the APL in clinical health and nursing care settings. The use of the APL can significantly enhance the accuracy and efficiency of pupil diameter measurements.
S1 Consent (DOCX)
Metabolomic disorders caused by an imbalance in the gut microbiota are associated with central precocious puberty

Introduction

Precocious puberty is defined as the onset of secondary sexual characteristics in girls before the age of 8 and in boys before the age of 9. According to its pathogenesis, precocious puberty can be classified into three types: central precocious puberty (CPP), peripheral precocious puberty (PPP), and partial precocious puberty. Most cases of CPP are categorized as idiopathic CPP (ICPP), as they lack identifiable predisposing factors. CPP results from premature activation of the hypothalamic-pituitary-gonadal (HPG) axis due to increased secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus, leading to earlier sexual development. In recent years, the prevalence of CPP has risen significantly, reaching 0.5-2% in China, with a higher incidence in girls than in boys ( – ). CPP seriously affects the growth and mental health of children and has attracted attention in both society and the medical community. Therefore, research on the etiology and pathogenesis of ICPP will help us better understand the disease and establish a foundation for its early diagnosis and treatment. The pathogenesis of ICPP is complex and multifaceted and may result from a combination of genetic, metabolic, and environmental factors. Previous studies have shown that genetic factors play a significant role in the onset and progression of ICPP. Kisspeptin, encoded by the KISS1 gene, interacts with hypothalamic GnRH neurons by binding to the G protein-coupled receptor GPR54. This interaction stimulates GnRH-dependent secretion of luteinizing hormone (LH) and follicle-stimulating hormone (FSH), initiating the onset of puberty ( , ). Other genes associated with ICPP, such as thyroid-specific transcription factor-1 (TTF1) and cut homeobox-1 (CUX1), were identified in our previous studies ( , ). TTF1 encodes a transcription termination factor, and CUX1 encodes a member of the homeodomain family of DNA-binding proteins. These genes are thought to regulate sexual development by modulating the Kiss1/GPR54 system; however, their effects appear to be transient and insufficient to fully control the activation of GnRH neurons ( , ). In addition, the potential role of environmental factors, such as environmental endocrine disruptors, in increasing the incidence of CPP by affecting the HPG axis remains an important area of investigation ( ). Recent findings indicate that obese girls are at higher risk of developing CPP, suggesting that energy or amino acid metabolic pathways may regulate the hypothalamic neuroendocrine network, thereby prompting the activation of GnRH neurons and contributing to the onset of CPP. However, the precise mechanisms underlying these associations remain unclear, highlighting the need for further research. The gut microbiota refers to the collection of microorganisms residing in the human intestine. Previous studies have shown associations between the gut microbiota and various conditions, including diabetes, obesity, Alzheimer's disease, and depression. Additionally, the gut microbiota is closely connected to the neuroendocrine system and plays a crucial role in the brain-gut-microbiome axis ( – ).
Exploring the relationship between metabolic disorders resulting from gut microbiota imbalances and host diseases is crucial for advancing disease prevention and treatment. In recent years, the relationship between the gut microbiota, its metabolites, and sexual development has attracted attention from researchers ( , ). In our previous research, based on urine sample analyses, we identified three major metabolic pathways that were altered in children with CPP: catecholamine metabolism, serotonin metabolism, and the tricarboxylic acid cycle. Significant changes were observed in the urinary levels of 4-hydroxyphenylacetic acid, 5-hydroxyindoleacetic acid, indoleacetic acid, 5-hydroxytryptophan, and 5-hydroxykynurenamine in the CPP group. These findings suggested that the development of CPP may be related to metabolic disorders resulting from alterations in the gut microbiota ( ). However, the precise causal relationships and underlying mechanisms linking these metabolic disturbances to CPP remain elusive. In this study, 16S rDNA high-throughput sequencing revealed that the main differences in gut microbiota composition between patients with CPP and healthy controls were an increased abundance of Faecalibacterium and a decreased abundance of Anaerotruncus at the genus level. Metabolomic analysis further demonstrated significant differences in metabolite composition between the CPP and control groups: a total of 51 differentially expressed metabolites were identified, 32 significantly upregulated and 19 significantly downregulated in the CPP group. Spearman correlation analysis further showed that imbalances in the gut microbiota can affect the metabolic patterns of CPP patients, as the gut microbiota is involved in regulating phenylalanine and tyrosine biosynthesis and metabolism, the citrate cycle (TCA cycle), glyoxylate and dicarboxylate metabolism, and tryptophan metabolism. Our findings provide novel insights into the mechanisms underlying the onset and progression of CPP.
Materials and methods

2.1 Patients and samples

A total of 50 stool and serum samples were collected from girls diagnosed with ICPP at Shanghai Children's Hospital, affiliated with Shanghai Jiao Tong University. Stool and serum samples were also collected during the same period from 50 healthy children matched with the ICPP group by age, gender, ethnicity, and region. The study was approved by the ethics committee of Shanghai Children's Hospital, and informed consent was obtained from all participants. Fresh stool samples were immediately frozen at -80°C to prevent degradation from repeated freeze-thaw cycles. Peripheral blood samples (4 mL) were obtained from each participant and centrifuged at 3000 rpm for 10 minutes, after which the serum was collected, aliquoted into 0.5 mL portions, and stored at -80°C.

2.2 DNA extraction, polymerase chain reaction amplification, and Illumina MiSeq sequencing

Microbial DNA was extracted from CPP and control stool samples using the Fast DNA Stool Mini Kit (51604, Qiagen, Germany) according to the manufacturer's instructions. Universal primers 341F and 806R were used to amplify the V3-V4 region of the bacterial 16S rDNA gene. When designing the specific primers, index and adapter sequences compatible with the Illumina MiSeq PE250 platform were added to the 5' ends of the universal primers. The primer sequences were as follows:
Forward primer (5'-3'): CCTACGGGRSGCAGCAG (341F)
Reverse primer (5'-3'): GGACTACVVGGGTATCTAATC (806R)
PCR amplification was performed using the KAPA HiFi HotStart ReadyMix PCR kit, which contains a high-fidelity enzyme. Amplicons were extracted from 2% agarose gels and purified with the AxyPrep DNA gel recovery kit (Axygen Biosciences, USA). The purified PCR products were checked with a Thermo NanoDrop 2000 microspectrophotometer and on 2% agarose gels.

2.3 16S rDNA gene sequence analysis

Qubit 2.0 (Invitrogen, USA) was used for library quantitation. Paired-end sequencing was performed on Illumina's MiSeq PE250 sequencer (Illumina, USA). The paired-end reads were merged with PANDAseq software ( https://github.com/neufeld/pandaseq , version 2.9) to obtain long reads covering the highly variable region for 16S analysis. The resulting raw reads were filtered as follows: 1) maximum number of N bases = 3; 2) minimum average quality score of each read = 20; 3) read length between 250 bp and 500 bp. Clean reads were thereby obtained. Reads with 97% identity were clustered into operational taxonomic units (OTUs) using UPARSE ( http://drive5.com/uparse/ ). A representative sequence of each OTU was assigned to a taxonomic level in the Ribosomal Database Project (RDP, http://rdp.cme.msu.edu/ ) database using 0.8 as the minimum confidence threshold. Alpha and beta diversity were calculated using QIIME software (version 1.9.1) with default parameters. α-diversity describes the diversity within a single sample and was reflected by the observed species index, Chao1 index, Simpson index, Shannon index, and PD whole tree index computed in QIIME. β-diversity measures differences in microbiota structure between groups and was assessed with UniFrac distances, both unweighted and weighted. Both the weighted and unweighted UniFrac distance matrices were visualized by principal coordinate analysis (PCoA), and analyses of similarities (ANOSIM) were performed; the higher the index, the greater the difference between groups.
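To make the α-diversity indices concrete, the following minimal sketch (not the QIIME implementation) shows how the Shannon and Chao1 indices can be computed from a single sample's OTU count vector; the counts are hypothetical placeholders.

```python
import numpy as np

def shannon(counts, base=2):
    """Shannon diversity H = -sum(p_i * log(p_i)); QIIME reports log base 2."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * (np.log(p) / np.log(base))).sum())

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

otu_counts = [120, 53, 8, 2, 1, 1, 0, 37]  # hypothetical OTU counts for one sample
print(shannon(otu_counts), chao1(otu_counts))
```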
The linear discriminant analysis (LDA) effect size (LEfSe) method was used to identify bacterial taxa that were differentially abundant between CPP patients and healthy controls. LEfSe analysis is mainly used to discover biomarkers and genomic features, such as genes, metabolic pathways, and taxa, that differ between two or more groups. It applies LDA to detect differentially abundant features between groups at the phylum, class, order, family, and genus levels; taxa with LDA scores greater than the set threshold (minimum 2) were considered statistically significant biomarkers. The abundances of functional categories in the Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologs were predicted by Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt).

2.4 Quantitative analysis of microbial metabolomics

Fecal samples were thawed in an ice bath to diminish degradation. About 10 mg of each sample was weighed and transferred to a new 1.5 mL tube, 25 μL of water was added, and the sample was homogenized with zirconium oxide beads for 3 min. Then 185 μL of acetonitrile/methanol (8:2) was added to extract the metabolites, and the sample was centrifuged at 18,000 g for 20 min. The supernatant was transferred to a 96-well plate, and the following procedures were performed on a Biomek 4000 workstation (Biomek 4000, Beckman Coulter, USA). Freshly prepared derivatization reagent (20 μL) was added to each well; the plate was sealed, and derivatization was carried out at 30°C for 60 min. After derivatization, 350 μL of ice-cold 50% methanol was added to dilute the sample. The plate was then stored at -20°C for 20 minutes and centrifuged at 4000 g and 4°C for 30 min. Supernatant (135 μL) was transferred to a new 96-well plate containing 15 μL of internal standards in each well, serial dilutions of derivatized stock standards were added to the remaining wells, and the plate was sealed for LC-MS analysis. An ultra-performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) system (ACQUITY UPLC-Xevo TQ-S, Waters Corp., Milford, MA, USA) was used to quantitate the microbial metabolites; this analysis was performed by Metabo-Profile Biotechnology (Shanghai) Co., Ltd. The optimized instrument settings were as follows. For HPLC: column, ACQUITY UPLC BEH C18 1.7 μm VanGuard precolumn (2.1 × 5 mm) and ACQUITY UPLC BEH C18 1.7 μm analytical column (2.1 × 100 mm); column temperature, 40°C; sample manager temperature, 10°C; mobile phases, A = water with 0.1% formic acid and B = acetonitrile/IPA (70:30); gradient, 0–1 min (5% B), 1–11 min (5–78% B), 11–13.5 min (78–95% B), 13.5–14 min (95–100% B), 14–16 min (100% B), 16–16.1 min (100–5% B), 16.1–18 min (5% B); flow rate, 0.40 mL/min; injection volume, 5.0 μL. For the mass spectrometer: capillary voltage, 1.5 kV (ESI+) or 2.0 kV (ESI-); source temperature, 150°C; desolvation temperature, 550°C; desolvation gas flow, 1000 L/h. The metabolites were identified using the STD method with the Q300 kit (Metabo-Profile, Shanghai, China). This method enables the quantitative detection of a wide array of metabolites, including amino acids, phenols, phenyl or benzyl derivatives, indoles, organic acids, fatty acids, sugars, and bile acids, in biological samples of varying concentrations on the same microtiter plate.
The Q300 kit utilizes 60 internal standards, such as L_Arginine_15N2, Hippuric acid_D5, TCDCA_D9, D_Glucose_D7, Carnitine_D3, C5:0_D9, and Citric acid_D4, along with 306 one-to-one standards for accurate quantification. The derivatization reaction was carried out using 3-nitrophenylhydrazine as the derivatization reagent and 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide as the catalyst. Quality control (QC) was carried out on the samples to ensure high-quality instrumental analysis. The raw data files generated by UPLC-MS/MS were processed using QuanMET software (v2.0, Metabo-Profile, Shanghai, China) to perform peak integration, calibration, and quantitation for each metabolite. Mass spectrometry-based quantitative metabolomics determines the concentration of a substance in an unknown sample by comparing it with a set of standard samples of known concentration (i.e., a calibration curve). For metabolomics studies, two types of statistical analysis are commonly performed: 1) multivariate statistical analyses, such as principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), and orthogonal partial least squares discriminant analysis (OPLS-DA); and 2) univariate statistical analyses, including Student's t-test, the Mann-Whitney-Wilcoxon U-test, ANOVA, and correlation analysis. PCA is an unsupervised modeling method commonly used to detect data outliers, clustering, and classification trends without a priori knowledge of the sample set. The first principal component (PC1) explains more variation than the second principal component (PC2), which in turn explains more variation than PC3, and so on. PLS-DA and OPLS-DA are extensively used for multi-class classification and for identifying differentially altered metabolites. In the current project, PLS-DA modeling was used as a multi-class classifier to visualize differences in the global metabolic profiles among groups, providing information beyond what can be gleaned from PCA. OPLS-DA, an improved PLS-DA method, was used for modeling and for further screening of differential metabolites between the CPP and control groups.

2.5 Serum sex hormone detection

Serum samples (25 μL) were analyzed to measure the levels of LH, FSH, E2, and other hormones using a chemiluminescence method. The preparation, calibration, dilution, quality control, correction, and analysis procedures were conducted in strict accordance with the operation manual of the chemiluminescence instrument (Beckman, USA).

2.6 Spearman correlation analysis

Spearman correlation analysis was performed on the 16S rDNA sequencing and metabolomics data to investigate associations between differential gut microbiota and metabolites. LDA > 2 and P < 0.05 were used as criteria for screening the differential gut microbiota and related functional data, after which the differential metabolomics data were extracted. The results of the Spearman correlation analysis were visualized in a heatmap: for each metabolite, data were included if its correlation with at least one gut microbial taxon had a P value < 0.05 and an absolute correlation coefficient (R) > 0.3.

2.7 Statistical analysis

SAS software (version 9.2) was used for statistical analysis. Age and body mass index (BMI) were compared between the two groups using the Mann-Whitney Wilcoxon test, with statistical significance defined as P < 0.05.
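The screening rule in section 2.6 (retain a metabolite for the heatmap if it correlates with at least one differential taxon at P < 0.05 and |R| > 0.3) can be expressed as a minimal sketch; the data frames below are hypothetical placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# hypothetical subject-by-feature tables (rows = subjects)
taxa = pd.DataFrame(rng.random((100, 3)),
                    columns=["Faecalibacterium", "Prevotella", "Roseburia"])
mets = pd.DataFrame(rng.random((100, 4)),
                    columns=["M1", "M2", "M3", "M4"])

retained = []
for m in mets.columns:
    for t in taxa.columns:
        r, p = spearmanr(mets[m], taxa[t])
        if p < 0.05 and abs(r) > 0.3:  # screening rule from section 2.6
            retained.append(m)
            break  # one qualifying taxon is enough

print("metabolites retained for the heatmap:", retained)
```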
Patients and samples In our study, a total of 50 stool and serum samples were collected from girls diagnosed with ICPP at Shanghai Children’s Hospital, affiliated to Shanghai Jiao Tong University. Meanwhile, stool and serum samples were collected from 50 healthy children matched with the ICPP group by age, gender, ethnicity and region during the same period. The study was approved by the ethics committee of Shanghai Children’s Hospital, and informed consent was obtained from all participants. Fresh stool samples were immediately frozen at -80°C to prevent degradation from repeated freeze-thaw cycles. Peripheral blood samples (4 mL) were obtained from each participant, centrifuged at 3000 rpm for 10 minutes, after which the serum was collected, aliquoted into 0.5 mL portions, and stored at -80°C.
DNA extraction, polymerase chain reaction amplification, and Illumina MiSeq sequencing Microbial DNA was extracted from CPP and control stool samples using the Fast DNA Stool Mini Kit (51604, Qiagen, Germany), according to its instruction manual. Universal primers 341F and 806R were used to amplify the V3-V4 region of the bacterial ribosomal 16S rDNA gene. When designing specific primers, the index sequence and connector sequence suitable for Illumina MiSeq PE250 should be added to the 5’ end of the universal primer. The primer sequences used are as follows: Forward primer (5’-3’): CCTACGGGRSGCAGCAG (341F) Reverse primer (5’-3’): GGACTACVVGGGTATCTAATC (806R) PCR amplification was performed using Kapa Hifi Hotstart Readymix PCR kit with high fidelity enzyme. Amplicons were extracted from 2% agarose gels and purified with AxyPrep DNA gel recovery kit (Axygen Biosciences, USA). The purified PCR products were tested by Thermo Nanodrop 2000 microspectrophotometer and 2% agarose gels.
16S rDNA gene sequence analysis Qubit 2.0 (Invitrogen, USA) was used for library quantitation. Paired-end sequencing was performed using Illumina’s MiSeq PE250 Sequencer (Illumina, USA). Paired-end data obtained by sequencing was spliced with PANDAseq software ( https://github.com/neufeld/pandaseq , version 2.9), and long Reads with high variability were obtained for 16S analysis. The resulting raw reads were filtered as follows: 1) maximum number of N base = 3; 2) minimum average quality score of each read = 20; 3) the length of reads between 250bp and 500bp. Clean Reads are finally obtained. The reads with 97% identity were clustered into Operational Taxonomic Units (OTUs) using UPARSE ( http://drive5.com/uparse/ ). A representative sequence of each OTU was assigned to a taxonomic level in the Ribosomal Database Project (RDP, http://rdp.cme.msu.edu/ ) database using 0.8 as the minimum confidence threshold. Alpha and beta diversity were calculated using QIIME software (version 1.9.1) with the default parameters. α-diversity represents an analysis of diversity in a single sample reflected by parameters including Observed species index, Chao 1 index, Simpson index, Shannon index and PD whole tree index using QIIME. β-diversity is used to measure the microbiota structure between different groups. The results of Unifrac are used to measure β-diversity, which are generally divided into Unweighted Unifrac and Weighted Unifrac. Both the weighted and unweighted Unifrac distance matrices were plotted in the principal coordinate analysis (PCoA), and analyses of similarities (ANOSIMs) were performed. The higher the index, the greater the differences between groups. The linear discriminant analysis (LDA) effect size (LEfSe) method was used to analyze the differentially expressed bacterial taxa at different levels between CPP patients and healthy controls. LEfSe analysis is mainly used to find and identify two or more biomarkers and genomic characteristics, such as genes, metabolic pathways and taxonomy. LEfSe analysis used LDA to detect differential abundance and characteristics between groups at the phylum, class, order, family, and genus levels. Bacterial taxa with LDA scores greater than the set threshold (the lowest was 2) were considered biomarkers with statistical differences. The abundances of functional categories in the Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologs was predicted by Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt).
Quantitative analysis of microbial metabolomics Feces samples were thawed on ice-bath to diminish degradation. About 10 mg of each sample was weighted and transferred to a new 1.5 ml tube. Then 25 μl of water was added and the sample was homogenated with zirconium oxide beads for 3 min. 185 μl of ACN/Methanol (8/2) was added to extract the metabolites. The sample was centrifuged at 18000 g for 20 min. Then the supernatant was transferred to a 96-well plate. The following procedures were performed on a Biomek 4000 workstation (Biomek 4000, Beckman Coulter, USA). 20 μl of freshly prepared derivative reagents was added to each well. The plate was sealed and the derivatization was carried out at 30°C for 60 min. After derivatization, 350 μl of ice-cold 50% methanol solution was added to dilute the sample. Then the plate was stored at -20°C for 20 minutes and followed by 4000 g centrifugation at 4°C for 30 min. 135 μl of supernatant was transferred to a new 96-well plate with 15 μl internal standards in each well. Serial dilutions of derivatized stock standards were added to the left wells. Finally, the plate was sealed for LC-MS analysis. An ultra-performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) system (ACQUITY UPLC-Xevo TQ-S, Waters Corp., Milford, MA, USA) was used to quantitate the microbial metabolite in this study by Metabo-Profile Biotechnology (Shanghai) Co., Ltd. The optimized instrument settings are briefly described as follows. For HPLC, column: ACQUITY HPLC BEH C18 1.7 × 10−6 m VanGuard precolumn (2.1 × 5 mm) and ACQUITY HPLC BEH C18 1.7 × 10−6 m analytical column (2.1 × 100 mm), column temp.: 40°C, sample manager temp.: 10°C, mobile phases: A = water with 0.1% formic acid; and B = acetonitrile/IPA (70:30), gradient conditions: 0–1 min (5% B), 1–11 min (5–78% B), 11–13.5 min (78–95% B), 13.5–14 min (95–100% B), 14–16 min (100% B), 16–16.1 min (100-5% B), 16.1–18 min (5% B), flow rate: 0.40 mL min−1, and injection vol.: 5.0 μL. For mass spectrometer, capillary: 1.5 (ESI+), 2.0 (ESI-) Kv, source temp.: 150°C, desolvation temp.: 550°C, and desolvation gas flow: 1000 L h−1. The metabolites were identified using the STD method, employing the Q300 kit (Metabo-Profile, Shanghai, China). This method enables the quantitative detection of a wide array of metabolites, including amino acids, phenols, phenyl or benzyl derivatives, indoles, organic acids, fatty acids, sugars, and bile acids in biological samples of varying concentrations on the same microtiter plate. The Q300 kit utilizes 60 internal standards, such as L_Arginine_15N2, Hippuric acid_D5, TCDCA_D9, D_Glucose_D7, Carnitine_D3, C5 0_D9 and Citric acid_D4, along with 306 one-to-one standards for accurate quantification. The derivatization reaction was carried out using 3-nitrophenylhydrazine as the derivatization reagent and 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide as the catalyst. Quality control (QC) on the samples were carried out in order to ensure high quality analysis of samples by the instrument. The raw data files generated by UPLC-MS/MS were processed using the QuanMET software (v2.0, Metabo-Profile, Shanghai, China) to perform peak integration, calibration, and quantitation for each metabolite. Mass spectrometry-based quantitative metabolomics refers to the determination of the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration (i.e., calibration curve). 
For many metabolomics studies, two types of statistical analysis are extensively performed: 1) multivariate statistical analyses such as principal component analysis (PCA), partial least square discriminant analysis (PLS-DA), orthogonal partial least square discriminant analysis (OPLS-DA) and so on; 2) univariate statistical analyses including student t-test, Mann-Whitney-Wilcoxon (U-test), ANOVA, correlation analysis, etc. PCA is an unsupervised modeling method commonly used to detect data outliers, clustering, and classification trends without a priori knowledge of the sample set. The first principal component (PC1) expresses more variation than the second principal component (PC2), which, in turn, expresses more variation than PC3, and so on. PLS-DA and/or OPLS-DA has been extensively used for multi-class classification and identification of differently altered metabolites. In the current project, PLS-DA modeling is used as a multi-class classifier to visualize the difference between global metabolic profiles among the groups that provides more valuable information beyond what can be gleaned from PCA. The OPLS method is an improved PLS-DA method for modeling and further screening of differential metabolites between the CPP group and the control group.
Serum sex hormone detection

Serum samples (25 μl) were analyzed to measure levels of LH, FSH, E2 and other hormones using a chemiluminescence method. The preparation, calibration, dilution, quality control, correction, and analysis procedures were conducted in strict accordance with the operation manual of the chemiluminescence instrument (Beckman, USA).
Spearman correlation analysis

Spearman correlation analysis was performed on the 16S rDNA sequencing and metabolomics data to investigate associations between differential gut microbiota and metabolites. An LDA score > 2 and P < 0.05 were used as criteria for screening and extracting differential gut microbiota and related functional data, followed by extraction of differential metabolomics data. The results of the Spearman correlation analysis were visualized in a heatmap. For each metabolite, data were included in the heatmap if its correlation with at least one microbial taxon had a P value < 0.05 and an absolute correlation coefficient (R) > 0.3.
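A minimal sketch of this taxon–metabolite screen is shown below, assuming an abundance matrix of differential taxa and a matrix of differential metabolites (both simulated). Only metabolites with P < 0.05 and |R| > 0.3 against at least one taxon are retained for the heatmap, mirroring the criteria above.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
taxa = rng.normal(size=(100, 8))     # hypothetical differential taxa
mets = rng.normal(size=(100, 51))    # hypothetical differential metabolites

# Pairwise Spearman correlations (metabolite x taxon)
R = np.zeros((mets.shape[1], taxa.shape[1]))
P = np.ones_like(R)
for i in range(mets.shape[1]):
    for j in range(taxa.shape[1]):
        R[i, j], P[i, j] = spearmanr(mets[:, i], taxa[:, j])

# Keep a metabolite if any taxon meets both thresholds
keep = ((P < 0.05) & (np.abs(R) > 0.3)).any(axis=1)
print(f"{keep.sum()} of {mets.shape[1]} metabolites enter the heatmap")
```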
Statistical analysis

SAS software (Version 9.2) was used for statistical analysis in this study. Age and body mass index (BMI) data between the two groups were compared using the Mann-Whitney-Wilcoxon test, with statistical significance defined as P < 0.05.
Results

3.1 Clinical data

A total of 50 children with ICPP were recruited from Shanghai Children’s Hospital. The inclusion criteria were as follows: (1) onset of secondary sexual characteristics in girls before 8 years of age; (2) GnRH stimulation test showing a peak LH level (LHP) ≥ 5 mIU/mL and an LHP/FSHP ratio > 0.6; (3) ovarian volume ≥ 1 mL; (4) exclusion of secondary CPP due to other causes; and (5) no history of drug treatment related to CPP, including Chinese herbal medicines. The exclusion criteria were: (1) presence of pituitary tumors or other organic lesions; (2) use of traditional Chinese medicine within 1 month prior to enrollment; (3) use of antibiotics, probiotics or prebiotics within 1 month prior to enrollment; and (4) coexisting gastrointestinal diseases or impaired liver function. At the same time, 50 healthy children matched with the CPP group in age, sex, ethnicity, and region were recruited as controls. None of the participants in either group had a history of other diseases. The mean age of the children in the CPP group was 8.137 years, while the mean age of the control group was 7.902 years. The average BMI of the CPP group was 16.294 kg/m², compared to 15.720 kg/m² for the control group. There were no significant differences in either age or BMI between the two groups (P > 0.05).

3.2 Discrepancies in the structure and diversity of the gut microbiota between CPP and control groups

To explore the correlation between CPP and gut microbiota, fecal samples from girls diagnosed with CPP and healthy controls were analyzed using 16S rDNA high-throughput sequencing. A total of 3,545,161 effective sequences were obtained, with an average of 35,451.61 ± 1,883.29 tags per sample, ranging from 30,067 to 38,945 tags. Sequence lengths were predominantly between 407 and 422 bp, with an average length of 412.65 ± 3.01 bp ( ). Sequences clustered at 97% similarity yielded 638 OTUs, with 467 OTUs shared by both the CPP and control groups. In addition, the results showed that 128 OTUs were unique to the control group, corresponding to 41 individuals (82% of the control subjects), while 43 OTUs were unique to the CPP group, corresponding to 34 patients (68% of CPP patients), as shown in the Venn diagram ( ). These results suggested significant differences in OTU distribution between the two groups. A representative sequence of each OTU was assigned to a taxonomic level in the RDP. The microbial abundances of the two groups at the phylum, class, order, family and genus levels were analyzed. The results showed that at the genus level, the abundance of Faecalibacterium in the CPP group was higher than that in the control group, whereas the abundances of Prevotella and Roseburia were reduced in the CPP group ( ). To assess the differences in the diversity and richness of the gut microbiota in the CPP and control groups, we analyzed the α-diversity index. The Shannon index showed that the diversity and richness of the gut microbiota in the CPP group were significantly lower than in the control group ( P = 0.044) ( ). β-diversity analysis, combined with PCoA, indicated substantial differences in fecal microbial composition between CPP patients and controls (Adonis P = 0.012, R² = 0.019) ( ).
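As a concrete illustration of the Shannon α-diversity comparison above, the following Python sketch computes the index per sample from a simulated OTU count table and compares the groups nonparametrically. The data and test choice are illustrative only; 16S pipelines typically compute diversity on rarefied counts.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def shannon(counts):
    """Shannon index H = -sum(p * ln p) over non-zero OTU proportions."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(2)
otu = rng.integers(0, 500, size=(100, 638))    # hypothetical OTU table
groups = np.repeat(["control", "CPP"], 50)

H = np.array([shannon(row) for row in otu])
stat, p = mannwhitneyu(H[groups == "CPP"], H[groups == "control"])
print(f"Shannon index, CPP vs control: U = {stat:.0f}, P = {p:.3f}")
```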
Further, we carried out LEfSe and Wilcoxon tests to identify specific microorganisms in the gut microbiota that differed between the CPP and control groups. Features with an LDA score cut-off of 2 were considered significant. LEfSe analysis showed that, at the phylum level, Synergistetes and Euryarchaeota were less abundant in the CPP group compared to the control group. At the genus level, Faecalibacterium and Klebsiella were significantly more abundant in the CPP group, while Prevotella, Anaerotruncus, Dialister, Veillonella, Methanobrevibacter, Cetobacterium and Clostridium XVIII were significantly reduced in the CPP group ( ). Wilcoxon test results further confirmed significant differences between the two groups (P < 0.01) at the genus level, with Faecalibacterium significantly enriched and Anaerotruncus and Pyramidobacter significantly decreased in the CPP group ( ). Collectively, these results showed that the increased Faecalibacterium and decreased Anaerotruncus were important characteristics of the disordered gut microbiota in patients with CPP.

3.3 Altered metabolism in CPP patients

In this study, UPLC−MS/MS was used to analyze metabolomic data from stool samples in both the CPP and control groups, aiming to identify the differentially expressed metabolites. First, unsupervised PCA was used to evaluate within-group clustering, detect any outliers, and assess group separation. Then, a supervised analysis method (PLS-DA) was used to reduce the influence of individual variation within each group. The results of PLS-DA showed that there were significant differences in the composition of metabolites between the CPP and control groups ( ). Furthermore, this study used the Mann−Whitney U test ( P value and fold change [FC] value), a univariate statistical method, to identify metabolites with significantly different levels between the two groups. As shown in , there were differences in the levels of small-molecule metabolites between the CPP and control groups. Compared with the control group, a total of 51 differentially expressed metabolites were identified, with 32 showing significant upregulation ( P ≤ 0.05, FC > 1) and 19 showing significant downregulation ( P ≤ 0.05, FC < 1) in the CPP group. These differentially expressed metabolites included amino acids, benzenoids, carbohydrates, fatty acids, indoles, organic acids, phenylpropanoic acids and phenylpropanoids ( ). Among them, the most representative differentially expressed metabolites in the CPP group, ranked by smallest P value and largest FC, were as follows: increased 3-(3-hydroxyphenyl)-3-hydroxypropanoic acid (HPHPA), 3,4-dihydroxyhydrocinnamic acid, homovanillic acid, 3-hydroxyphenylacetic acid, acetoacetic acid, isocitric acid, cis-aconitic acid, citric acid, formic acid, and glycolic acid, and decreased 4-hydroxyphenylpyruvic acid, L-tryptophan, phenylpyruvic acid, and phenylacetic acid. Among these metabolites, the largest fold changes were seen with 3,4-dihydroxyhydrocinnamic acid (FC = 10.432) and HPHPA (FC = 7.803), both of which had small P values ( , ). A heatmap ( ) visually clusters these metabolites, suggesting potential inter-metabolite interactions. Next, we used the hsa library, based on the KEGG database, to perform metabolic pathway enrichment analysis (MPEA) in order to identify the most relevant metabolic pathways associated with these differentially expressed metabolites. Based on the comprehensive P value and impact value, the analysis identified several metabolic pathways altered in CPP patients, including phenylalanine metabolism; glyoxylate and dicarboxylate metabolism; aminoacyl-tRNA biosynthesis; the citrate cycle (TCA cycle); tyrosine metabolism; phenylalanine, tyrosine and tryptophan biosynthesis; valine, leucine and isoleucine biosynthesis; and tryptophan metabolism ( , ).
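The following Python sketch illustrates, with simulated data, the two computational steps just described: a per-metabolite univariate screen (fold change plus Mann–Whitney U test) followed by a hypergeometric over-representation test per pathway. The pathway names, sizes, and hit counts are invented, and dedicated MPEA tools additionally report topology-based impact values that this sketch does not reproduce.

```python
import numpy as np
from scipy.stats import mannwhitneyu, hypergeom

rng = np.random.default_rng(3)
ctrl = np.abs(rng.normal(1.0, 0.3, size=(50, 306)))  # control concentrations
cpp = np.abs(rng.normal(1.0, 0.3, size=(50, 306)))   # CPP concentrations
cpp[:, :30] *= 3.0                                   # planted group effect

# Step 1: univariate screen (P <= 0.05; FC > 1 up, FC < 1 down)
fc = cpp.mean(axis=0) / ctrl.mean(axis=0)
pvals = np.array([mannwhitneyu(cpp[:, k], ctrl[:, k]).pvalue
                  for k in range(ctrl.shape[1])])
n_diff = int((pvals <= 0.05).sum())
print(f"{((pvals <= 0.05) & (fc > 1)).sum()} up-regulated, "
      f"{((pvals <= 0.05) & (fc < 1)).sum()} down-regulated")

# Step 2: over-representation of differential metabolites per pathway
N = 306                      # measured metabolites (background)
pathways = {                 # hypothetical: name -> (size, differential hits)
    "Phenylalanine metabolism": (12, 6),
    "Citrate cycle (TCA cycle)": (10, 4),
    "Tryptophan metabolism": (15, 4),
}
for name, (K, k) in pathways.items():
    p = hypergeom.sf(k - 1, N, K, n_diff)  # P(X >= k)
    print(f"{name}: {k}/{K} hits, P = {p:.4f}")
```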
3.4 Correlation analysis of gut microbiota imbalance and metabolite changes in CPP patients

To explore the potential role of the gut microbiome in influencing the onset and progression of CPP through metabolic pathways, we conducted Spearman correlation analysis. The results indicated that several metabolites altered in CPP patients were significantly related to changes in gut microbiota composition. Notably, metabolites involved in phenylalanine and tyrosine biosynthesis and metabolism, including HPHPA, 3,4-dihydroxyhydrocinnamic acid, homovanillic acid, 3-hydroxyphenylacetic acid, and acetoacetic acid, which were significantly increased in CPP patients, showed negative correlations with Anaerotruncus. The decreased metabolites 4-hydroxyphenylpyruvic acid and phenylacetic acid exhibited significant positive correlations with Anaerotruncus ( ). Additionally, metabolites involved in the TCA cycle and glyoxylate and dicarboxylate metabolism, including isocitric acid, cis-aconitic acid, citric acid, formic acid and glycolic acid, were significantly increased in the feces of CPP patients. Correlation analysis indicated that isocitric acid, cis-aconitic acid and citric acid were positively correlated with Faecalibacterium, which was enriched in the CPP group, while exhibiting negative correlations with Anaerotruncus. Formic acid showed a negative correlation with Prevotella, and glycolic acid was negatively correlated with Anaerotruncus ( ). Furthermore, L-tryptophan, a crucial metabolite in tryptophan metabolism, was significantly reduced in the feces of CPP patients compared to controls, while the oxoadipic acid level was notably elevated. Correlation analysis showed that Faecalibacterium was negatively correlated with L-tryptophan and positively correlated with oxoadipic acid ( ). Collectively, these results indicated that the altered metabolites observed in patients with CPP were correlated with specific gut microbiota profiles.

3.5 Correlation analysis of altered gut microbiome and metabolites with serum hormones in CPP patients

We further analyzed the correlations of the changed gut microbiota and metabolites with serum hormones, including baseline LH, FSH, and E2, and the peak values of LH, FSH, and E2 after the GnRH stimulation test. The results showed that Anaerotruncus was negatively correlated with the peak value of FSH following the GnRH stimulation test, while Faecalibacterium exhibited a positive correlation with the peak value of LH after the GnRH stimulation test ( ). Moreover, L-tryptophan showed a negative correlation with the peak value of LH, and acetoacetic acid showed a positive correlation with basal LH and FSH levels ( ).
Discussion

The gut microbiota is a general term for the microorganisms residing in the human intestine. The gut microbiota is closely linked to the neuroendocrine system and significantly influences the brain-gut-microbiome axis. Numerous clinical studies have demonstrated bidirectional interactions within this axis. Gut microbes interact with the central nervous system through neural, endocrine, and immune signaling pathways. In turn, the brain can modulate gut microbiota composition and function ( ). Recent studies have noted that the gut microbiota in CPP girls resembles that of obese cohorts ( , ). Dong et al. identified an association between the gut microbiota in CPP girls and short-chain fatty acid (SCFA) production ( ). Additionally, the gut microbiota and its derived SCFAs have been shown to reverse obesity-induced precocious puberty in female rats by regulating the HPG axis ( ). Furthermore, an imbalance of the gut microbiota can alter nitric oxide synthesis, which is closely associated with the progression of CPP ( ). These studies highlight the gut microbiota as a significant regulatory “organ” of the HPG axis. In contrast to previous research, our study found novel disordered gut microbiota in CPP patients, particularly involving Faecalibacterium and Anaerotruncus. These changes can influence essential metabolic pathways, including phenylalanine and tyrosine biosynthesis and metabolism, the TCA cycle, glyoxylate and dicarboxylate metabolism, and tryptophan metabolism. Our findings provide valuable insights into the pathogenesis of CPP by integrating specific microbiota profiles and metabolic disruptions.

4.1 Distinct gut microbial signature in patients with CPP

Our findings revealed that the most important characteristics of the disordered gut microbiota in patients with CPP were an increased abundance of Faecalibacterium and a decreased abundance of Anaerotruncus. Faecalibacterium, belonging to the phylum Firmicutes, resides in the human gut and plays a role in various host metabolic processes. The sole species within this genus, Faecalibacterium prausnitzii (F. prausnitzii), functions to produce butyrate. Butyrate, one of the most abundant SCFAs in the colon, serves as an energy source for colonocytes and plays an important role in maintaining intestinal health ( ). F. prausnitzii has been associated with many endocrine diseases, such as type 2 diabetes and polycystic ovary syndrome, with studies noting significant changes in its abundance in the feces of affected patients ( , ). While direct studies on the relationship between F. prausnitzii and CPP are lacking, research indicates that its abundance correlates with hormone levels, such as LH and FSH. F. prausnitzii could impact the secretion of gut-brain mediators such as ghrelin and peptide YY (PYY) by producing SCFAs. Ghrelin is a peptide that can lead to adiposity by enhancing appetite and reducing fat utilization. PYY is co-localized with GLP-1 in the L-cells of the distal gut. Alterations in ghrelin and PYY levels subsequently impact the secretion of sex hormones (LH, FSH, etc.) by influencing kisspeptin neurons through the HPG axis ( , ). This is consistent with our finding that Faecalibacterium is positively correlated with the peak value of LH. Previous studies have shown that Anaerotruncus is associated with Parkinson’s disease, obesity, and other diseases. Zhang et al. found that estrogen deficiency induced by ovariectomy led to an increase in the level of Anaerotruncus in the gut of rats ( ).
These results indicate a connection and interaction between serum hormones and Anaerotruncus. In our study, we found that the abundance of Anaerotruncus in the CPP group was significantly lower than that in the control group, and Anaerotruncus was negatively correlated with the peak value of FSH. We speculate that the decreased Anaerotruncus might lead to precocious puberty by causing an increase in the level of FSH. However, the mechanism of Anaerotruncus-mediated regulation of HPG axis activation needs to be further explored.

4.2 Patients with CPP have different metabolite profiles that are related to different gut microbiota

Through the fecal metabolomic analysis and the correlation analysis with differential gut microbes, this study identified different metabolite profiles and metabolic pathways in CPP patients that were related to the specific gut microbiome. First, the biosynthesis and metabolism pathways of phenylalanine and tyrosine were significantly disrupted in the CPP group. This disruption was evidenced by changes in the levels of various metabolites: HPHPA, 3,4-dihydroxyhydrocinnamic acid, homovanillic acid, 3-hydroxyphenylacetic acid and acetoacetic acid were significantly increased, while 4-hydroxyphenylpyruvic acid and phenylacetic acid were decreased in the CPP group. Correlation analysis suggested that these altered metabolites were significantly correlated with Anaerotruncus. Our previous study found lower levels of phenylalanine and tyrosine, precursors of catecholamines, alongside higher levels of their major end products (homovanillic acid and vanillylmandelic acid) in the urine samples of CPP subjects compared to healthy controls ( ). This suggests that the metabolism of phenylalanine, tyrosine and catecholamines is disordered in children with CPP. Catecholamines are critical neurotransmitters in vivo and play a significant role in regulating GnRH secretion by hypothalamic neurons ( – ). In this study, we identified disordered metabolic pathways of tyrosine and phenylalanine, with a significantly elevated level of homovanillic acid, the major end metabolite of catecholamines, consistent with previous findings. Therefore, we hypothesize that phenylalanine and tyrosine metabolism may regulate GnRH secretion in the hypothalamus by affecting catecholamine metabolism. In addition, among the differential metabolites, HPHPA (FC = 7.803) and 3,4-dihydroxyhydrocinnamic acid (FC = 10.432) exhibited the largest fold changes compared to the control group. HPHPA is an abnormal product of bacterial phenylalanine catabolism. When phenylalanine is metabolized by gut microorganisms, it first generates a tyrosine analogue, m-tyrosine, which is further metabolized to HPHPA ( , ). We suggest that HPHPA may act as a catecholamine analogue and potentially modulate the catecholamine signaling pathway. However, the precise mechanisms by which HPHPA influences the catecholamine signaling pathway remain unclear. Further investigation is required to validate this hypothesis and explore the underlying mechanisms of HPHPA’s role in the pathophysiology of CPP. 3,4-dihydroxyhydrocinnamic acid, also known as dihydrocaffeic acid (DHCA), is a metabolic product of gut microorganisms known to activate PI3K and Akt phosphorylation, promote insulin secretion, increase the clearance of peripheral glucose, and affect the body’s energy balance ( ). There is no direct evidence that 3,4-dihydroxyhydrocinnamic acid is associated with CPP.
Considering that energy metabolism is recognized as a significant regulator of the kisspeptin/Kiss1r system ( , ), we hypothesize that the upregulation of 3,4-dihydroxyhydrocinnamic acid may affect pubertal development through its effects on energy metabolism. However, further studies are needed to validate this hypothesis. Next, we determined that the TCA cycle and glyoxylate and dicarboxylate metabolism pathways were upregulated in CPP patients, as evidenced by altered levels of isocitric acid, cis-aconitic acid, citric acid, formic acid and glycolic acid. Correlation analysis suggested that these changed metabolites were positively correlated with Faecalibacterium. The TCA cycle, also known as the citric acid cycle, is essential for energy production. The glyoxylate cycle, which is unique to plants and microorganisms, converts fat into sugar to provide energy and synthesizes dicarboxylic acids to supplement the TCA cycle ( , ). Energy metabolism plays an important role in the onset of puberty, as it can affect pubertal development through the kisspeptin/Kiss1r signaling pathway. In conditions of excess energy, such as increased food intake, there is an upregulation of Kiss1 mRNA expression in the hypothalamus, leading to elevated LH levels ( , ). In our previous study, we observed that prepubertal female rats with overnutrition experienced earlier onset of puberty, characterized by decreased expression of ghrelin and increased expression of GnRH and KISS-1/kisspeptin in the hypothalamus compared to malnourished rats. This suggests a link between energy balance and pubertal development ( , ). Thus, this study provides further evidence that energy metabolism plays a significant role in the development of CPP. Additionally, we revealed that L-tryptophan, a key component of the tryptophan metabolism pathway, was significantly lower in CPP patients compared to the control group, and L-tryptophan was negatively correlated with Faecalibacterium. Tryptophan is an essential amino acid that is initially converted to 5-hydroxytryptophan by tryptophan hydroxylase; 5-hydroxytryptophan is then further converted to serotonin (5-HT). Tryptophan crosses the blood−brain barrier and plays a crucial role in the synthesis of 5-HT in the central nervous system. 5-HT neurons in the central nervous system can communicate with GnRH neurons through synaptic transmission, where 5-HT binds to various receptors on GnRH neurons, eliciting either inhibitory or excitatory effects. The presence of multiple 5-HT receptors, including 5-HT1A, 5-HT2A, 5-HT2C, 5-HT4, and 5-HT7, allows for complex modulation of GnRH activity in a time- and dose-dependent manner. For instance, binding to the 5-HT1A receptor on neuronal cells activates the Gi protein, resulting in hyperpolarization and inhibition of rhythmic intracellular GnRH release ( – ). Moreover, a previous study found that the level of 5-hydroxytryptophan was significantly lower in the urine of CPP subjects, while levels of 5-hydroxyindoleacetic acid and 5-hydroxykynurenamine were elevated, suggesting upregulation of the 5-HT metabolic pathway in the CPP population ( ). Our correlation analysis further indicates that L-tryptophan is negatively correlated with the peak value of LH. Based on these findings, we hypothesize that the gut microbiota in CPP subjects may modulate the tryptophan-5-HT pathway, reducing 5-HT synthesis in the hypothalamus and thus diminishing its inhibitory effect on GnRH neurons.
This could lead to increased expression of LH and related sex hormones, facilitating the initiation of pubertal development.
Conclusions

Our study revealed the gut microbial and metabolite characteristics associated with CPP by integrating microbiomic and metabolomic approaches. The most important characteristics of the disordered gut microbiota, at the genus level, were an increased abundance of Faecalibacterium and a decreased abundance of Anaerotruncus. These gut microbiota changes appear to influence various metabolites and participate in the regulation of several key metabolic pathways, including phenylalanine and tyrosine biosynthesis and metabolism, the TCA cycle, glyoxylate and dicarboxylate metabolism, and tryptophan metabolism. These findings suggest that the gut microbiome may be involved in the onset and progression of CPP by altering the metabolic profile. However, there are some limitations to our study. First, our current findings are purely associative and do not establish a causal relationship between the observed alterations and CPP. Furthermore, we cannot exclude the possibility that the changes in the gut microbiome and metabolite profiles might be secondary to, or coincident with, CPP or other symptoms associated with the condition; this warrants further investigation. Additionally, this study only included female participants. Given that pubertal development differs between the sexes, the underlying mechanisms may differ between girls and boys, and it is therefore uncertain whether the findings and conclusions of this study are applicable to boys. Future studies are needed to investigate potential sex-specific differences in the mechanisms of CPP. Overall, this study contributes to the understanding of the interplay among the gut microbiota, its metabolites, and CPP, which will be of great significance for the clinical diagnosis and treatment of CPP in the future.
The impact of splinting timepoint of mobile mandibular incisors on the outcome of periodontal treatment—preliminary observations from a randomized clinical trial | b40555a4-1283-4f95-8911-90d1feae8f97 | 8311063 | Dental[mh] | The primary features of periodontitis include the loss of periodontal tissue support, clinical attachment loss (CAL) and alveolar bone loss, presence of increased periodontal pocket depth, and gingival bleeding . Disease progression may lead to pathological tooth mobility which can result from acute periodontal inflammation, traumatic occlusion, and an apical shift of the rotational center of the tooth as it occurs in advanced alveolar bone loss. Patients with severe periodontitis often have a combination of these conditions, and the increased mobility can cause inconveniences for the patient. The new classification of periodontal disease states that teeth with progressive mobility may require splinting therapy to improve patient comfort . Recent evidence also indicates a trend toward additional improvement for the Oral Health-Related Quality of Life (OHRQoL) of periodontitis patients by splinting mobile incisors as part of periodontal therapy , and retrospective studies show high survival rates and periodontal stability of splinted teeth during long-term supportive periodontal therapy (SPT) . However, it is unclear which timepoint during systematic periodontal treatment is optimal for splinting of mobile teeth. There is only limited evidence on this topic, and so dentists often decide according to individual preferences or due to the request of the patient. Patients affected by increased tooth mobility are often afraid of tooth loss and expect swift improvements after therapy which argues for splinting mobile teeth at the beginning of the systematic periodontal treatment. Furthermore, it is manually easier to perform subgingival debridement at non-mobile teeth. The idea that splinting reduces potential scaling-induced trauma is also widely accepted . In addition, there is literature that indicates a possible influence of baseline tooth mobility on clinical outcomes of regenerative treatment of deep intrabony defects, with better outcomes at teeth with low mobility . On the other hand, the elimination of periodontal inflammation and the correction of occlusal pre-contacts can favor regeneration of the surrounding tissues and thereby reduce tooth mobility. Furthermore, changes in tooth position caused by swelling are also reversible. From this point of view, there is also a strong rationale for splinting mobile teeth after active periodontal treatment. Hence, the aim of the study is to evaluate the impact of splinting periodontally compromised mobile mandibular incisors with unfavorable prognosis on the Oral Health-Related Quality of Life (OHRQoL) and on the change of periodontal parameters on the splinted teeth after periodontal therapy in a prospective and randomized study design over a period of 5 years. The results presented are short-term results 12 months after FMD. This is the second publication in the scope of the study.
The study participants were recruited between November 2016 and December 2018 from patients of the authors’ department. Inclusion criteria at the patient level were the presence of periodontitis with at least 6 teeth with probing pocket depths (PPD) ≥ 4 mm, age ≥ 18 years, and the presence of ≥ 12 natural teeth. Tooth-related inclusion criteria were the presence of at least one mandibular incisor with mobility degree II or III in combination with a clinical attachment loss (CAL) ≥ 5 mm and a relative alveolar bone loss (ABL) of ≥ 50% at the affected tooth. Patients with a cross or head bite, stress-induced bruxism, an implant in the mandibular anterior region, or active periodontal therapy (APT) within the last 2 years were excluded from the study. Primary outcome variables were the mean CAL and mean PPD of teeth 33 to 43 before systematic periodontal treatment (baseline, BL) and 12 months after full-mouth disinfection (FMD) (T2). The randomization of the included patients was performed via sealed envelopes using block randomization with a 1:1 ratio for the assignment to group A or group B. Patients of group A received splinting of teeth 33 to 43 prior to FMD and patients of group B 7 months after FMD.
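For illustration, the Python sketch below generates a 1:1 block-randomized allocation sequence of the kind described above. The block size of 4 is an assumption (the paper does not report it), and in the trial the sequence was concealed in envelopes prepared in advance.

```python
import random

def block_randomize(n_patients: int, block_size: int = 4, seed: int = 42):
    """Generate a 1:1 allocation list (groups 'A'/'B') in shuffled blocks."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)        # each block stays balanced 1:1
        allocation.extend(block)
    return allocation[:n_patients]

print(block_randomize(34))  # e.g. ['B', 'A', 'A', 'B', ...]
```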
Active periodontal therapy

APT of all patients was performed according to the department’s concept of systematic periodontal therapy (PT). This comprised a total of nine sessions (visit 1 to visit 9) for each study participant during the oral hygiene phase and non-surgical periodontal therapy. The BL periodontal status and the medical history (including smoking status and presence of systemic diseases) were assessed at visit 1. BL oral hygiene indices were assessed at visit 2. Non-surgical periodontal therapy (performed by a dental hygienist) included the removal of all subgingival deposits at visits 5/6 according to a modified concept of FMD as described previously and the adjustment of occlusion in case of premature contacts. If necessary, adjunctive antibiotic administration (500 mg amoxicillin and 400 mg metronidazole, three times per day for 7 days) was performed according to the current recommendations of the German Society for Periodontology. The outcome of non-surgical periodontal therapy was re-evaluated 3 months after FMD (visit 9). Remaining pockets of 4 mm with bleeding on probing (BOP) and pockets of 5 mm were re-instrumented at this visit and at following supportive periodontal therapy (SPT) sessions. Patients with remaining sites ≥ 6 mm and/or furcation involvement were recommended to undergo further surgical interventions (visit 9b). Mandibular incisors and canines were excluded from additional surgical treatment. After completion of APT, patients were referred to SPT every 4 months (visits 10 and 11). The oral hygiene indices recorded at all sessions of the oral hygiene phase and SPT were the plaque control record (PCR) and the gingival bleeding index (GBI). Patients were scheduled for re-examination by a blinded examiner 12 months after FMD and completion of APT (T2; 12 months ± 8 weeks after FMD). The procedure of the study is shown in Fig. .

Assessment of periodontal parameters and mobility degrees

Periodontal status was assessed with PPD and CAL measured at 6 sites per tooth (PCP-UNC15 probe, Hu-Friedy, Frankfurt, Germany). Except for one patient, all BL periodontal status assessments were performed and documented (ParoStatus®, ParoStatus.de, Berlin, Germany) by one of two calibrated examiners. At T2, patients were followed up by an examiner who was blinded to the group affiliation. Measured against a reference model, the relative agreement of all examiners (measurement accuracy of ± 1 mm) was 89.3–96.0% for CAL and 94.6–99.3% for PPD. The horizontal deflection of the mandibular incisors was measured in millimeters using a new method as described previously. All measurements were performed by the same examiner and then converted to a modified Lindhe and Nyman degree classification (degree I: pathological mobility ≤ 1 mm in labio-oral direction; degree II: mobility of > 1–2 mm; degree III: mobility exceeding 2 mm in labio-oral direction and/or in vertical direction).
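The examiner-calibration metric above (relative agreement within ± 1 mm of a reference model) can be computed as in the short sketch below; the probing readings are invented for illustration.

```python
import numpy as np

reference = np.array([3, 5, 4, 6, 2, 7, 3, 5, 4, 6])  # reference model (mm)
examiner = np.array([3, 6, 4, 5, 2, 7, 4, 5, 4, 8])   # examiner readings (mm)

# Share of probing sites where the reading is within +/- 1 mm of reference
within_tolerance = np.abs(examiner - reference) <= 1
relative_agreement = within_tolerance.mean() * 100
print(f"Relative agreement (±1 mm): {relative_agreement:.1f}%")
```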
Assessment of Oral Health-Related Quality of Life

The patients’ Oral Health-Related Quality of Life (OHRQoL) was assessed using the German short version of the Oral Health Impact Profile (OHIP-G14). The OHIP-G14 questionnaires were self-completed by the participants at BL and T2. The responses of the OHIP-G14 are summed to give the total OHIP-G14 summary score, which can range from 0 to 56, with a higher score indicating a poorer OHRQoL.

Splinting

Patients of group A received splinting of the mobile mandibular incisors prior to FMD (visit 4), while patients of group B received splinting therapy 7 months after FMD (visit 10). In all patients, teeth 33 to 43 were splinted using composite (Tetric EvoCeram/Flow, Ivoclar Vivadent, Ellwangen, Germany) and a fiber-reinforced composite (FRC) strand (everStick Perio, GC Germany, Bad Homburg, Germany) (Fig. ). Canines were included in the splints for stability; mobility of a canine was not an exclusion criterion. Even in the case of canine mobility, the splint extended only from tooth 33 to tooth 43. All splints were inserted by the same dentist according to a standardized protocol as described previously. The occlusion of teeth 33–43 was adjusted in case of premature contacts, if necessary. Patients were instructed in how to clean the splinted teeth (adaptation and handling of interdental brushes).

Statistical analysis

Statistical analysis was based on the set of patients who completed follow-up at T2. As this study was explorative, no formal sample size calculation was performed. The sample size of 34 was chosen based on considerations of feasibility and was considered sufficient to obtain first estimates of group differences regarding the various variables. The recorded periodontal status data were exported from the documentation software (ParoStatus®, ParoStatus.de, Berlin, Germany) to a spreadsheet program (Excel®, Microsoft). All other data and the oral hygiene indices were entered manually into the same program independently by two different persons. Any discrepancies were corrected after the original documents were reviewed again. Descriptive statistics for periodontal parameters and oral hygiene parameters were assessed by calculating means, standard deviations, medians, first and third quartiles, minima, and maxima. BL values and BL-T2 differences were compared between the two groups using the chi-square test for binary variables and the Mann–Whitney U test for all other variables. Corresponding 95% confidence intervals (CI) for the proportion difference (binary variables) or the median of differences (other variables) are given. The patient was considered the statistical unit. The two timepoints BL and T2 were compared within groups using the Wilcoxon signed-rank test, with corresponding 95% confidence intervals of the median of differences. A possible association of the periodontal situation (mean CAL_overall, mean CAL_33-43, mean PPD_overall, mean PPD_33-43) with possible influencing factors was analyzed using multivariate regression with covariates (group [A/B], smoking status [non-smoker/smoker], antibiotic therapy [not received/received], systemic factors [not present/present], surgical intervention [not received/received]). All p-values are to be interpreted descriptively; thus, no adjustment for multiple testing was performed. p-values below 0.05 were regarded as notable. Third molars and dental implants were excluded from the analysis. Analysis was done using the statistical software R v. 4.0.1 (The R Project, The R Foundation).
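A minimal Python sketch of the covariate-adjusted regression described above is shown below, modeling the BL-to-T2 change in mean CAL of teeth 33–43 on group and the listed binary covariates. All data are simulated; the trial itself used R 4.0.1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "cal_change": rng.normal(-0.7, 0.5, 26),     # hypothetical BL-T2 change
    "group": rng.choice(["A", "B"], 26),
    "smoker": rng.integers(0, 2, 26),
    "antibiotics": rng.integers(0, 2, 26),
    "systemic": rng.integers(0, 2, 26),
    "surgery": rng.integers(0, 2, 26),
})

# Multivariate linear regression with the five covariates from the paper
model = smf.ols(
    "cal_change ~ C(group) + smoker + antibiotics + systemic + surgery",
    data=df,
).fit()
print(model.summary().tables[1])   # coefficient estimates and p-values
```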
APT of all patients was performed according to the department’s concept of systematic periodontal therapy (PT). This meant a total of nine sessions (visit 1 to visit 9) for each study participant during the oral hygiene phase and non-surgical periodontal therapy. BL periodontal status and the medical history (including smoking status and presence of systemic diseases) were assessed at visit 1. BL oral hygiene indices were assessed at visit 2. Non-surgical periodontal therapy (performed by a dental hygienist) included the removal of all subgingival deposits at visits 5/6 according to a modified concept of full-mouth disinfection (FMD) as described previously and the adjustment of occlusion in case of premature contacts. If necessary, an adjunctive antibiotic administration (500-mg amoxicillin and 400-mg metronidazole, three times per day for 7 days) was performed according to the current recommendations of the German Society for Periodontology. The outcome of non-surgical periodontal therapy was re-evaluated 3 months after FMD (visit 9). Remaining pockets of 4 mm and bleeding on probing (BOP) and pockets of 5 mm were re-instrumented at this visit and at following supportive periodontal therapy (SPT) sessions. Patients with remaining sites ≥ 6 mm and/or furcation involvement were recommended to undergo further surgical interventions (visit 9b). Mandibular incisors and canines were excluded from additional surgical treatment. After completion of APT, patients were referred to SPT every 4 months (visits 10 and 11). The oral hygiene indices recorded at all sessions of the oral hygiene phase and SPT were the plaque control record (PCR) and the gingival bleeding index (GBI) . Patients were scheduled for re-examination 12 months after FMD and completion of APT by a blinded examiner (T2; 12 months ± 8 weeks after FMD). The procedure of the study is shown in Fig. .
Periodontal status was assessed with PPD and CAL measured at 6 sites/tooth (PCP-UNC15 probe, Hu-Friedy, Frankfurt, Germany). Except for one patient, all periodontal status at BL were assessed and documented (ParoStatus®, ParoStatus.de, Berlin, Germany) by one of two calibrated examiners. At T2, patients were followed up by an examiner who was blinded to the group affiliation. Measured against a reference model, the relative agreement of all examiners (measurement accuracy of ± 1 mm) was 89.3–96.0% for the CAL and 94.6–99.3% for the PPD. The horizontal deflection of the mandibular incisors was measured in millimeters using a new method as described previously . All measurements were performed by the same examiner and then converted to a modified Lindhe and Nyman degree classification (degree I: pathological mobility ≤ 1 mm in labio-oral direction, degree II: mobility of > 1–2 mm, degree III: exceeding 2 mm in labial-oral direction and/or in vertical direction).
The patients’ Oral Health-Related Quality of Life (OHRQoL) was assessed using the German short version of the Oral Health Impact Profile (OHIP-G14) . The OHIP-G14 questionnaires were self-completed by the participants at BL and T2. Responses of the OHIP-G14 are summed to give the total OHIP-G14 summary score and can range from 0 to 56 with a high score indicating a poorer OHRQoL.
Patients of group A received splinting of mobile mandibular incisors prior to FMD (visit 4), while patients of group B received splinting therapy 7 months after FMD (visit 10). In all patients, teeth 33 to 43 were splinted using composite (Tetric EvoCeram/Flow, IvoclarVivadent, Ellwangen, Germany) and a fiber-reinforced composite (FRC) strand (everStick Perio, GC Germany, Bad Homburg, Germany) (Fig. ). Canines were included into splints for stability. The mobility of a canine was not an exclusion criterion. Even in the case of mobility of a canine, the splinting was inserted only from 33 to 43. All splints were inserted by the same dentist according to a standardized protocol as described previously . The adjustment of occlusion in case of premature contacts of teeth 33–43 was performed, if necessary. Patients were instructed how to clean the splinted teeth (adaption and handling of interdental brushes).
Statistical analysis was done based on the set of patients who completed follow-up at T2. As this study is explorative, no formal sample size calculation was performed. The sample size of 34 was chosen based on considerations of feasibility and was considered sufficient to obtain first estimates of group differences regarding the various variables. The recorded periodontal status data were exported from the documentation software (ParoStatus®, ParoStatus.de, Berlin, Germany) to a spreadsheet program (Excel®, Microsoft). All other data and the oral hygiene indices were entered manually into the same spreadsheet program independently by two different persons. Any discrepancies were corrected accordingly after the original documents were reviewed again. Descriptive statistics for periodontal parameters and oral hygiene parameters were assessed by calculating means, standard deviation, median, first and third quartiles, minimum, and maximum. BL values and BL-T2 differences were compared between the two groups using the chi-square test for binary variables and the Mann–Whitney U test for all other variables. Corresponding 95% confidence intervals (CI) for the proportion difference (binary variables) or the median of differences (other variables) are given. The patient was considered the statistical unit. The two timepoints BL and T2 were compared within groups using the Wilcoxon signed-rank test. Corresponding 95% confidence intervals of the median of differences are given. A possible association of the periodontal situation (mean CAL_overall, mean CAL_33-43, mean PPD_overall, mean PPD_33-43) and possible influencing factors was analyzed using multivariate regression with covariates (group [A/B], smoking status [non-smoker/smoker], antibiotic therapy [not received/received], systemic factors [not present/present], surgical intervention [not received/received]). All p -values are to be interpreted descriptively; thus, no adjustment for multiple testing was performed. p -values below 0.05 were regarded as noteworthy. Third molars and dental implants were excluded from analysis. Analysis was done using the statistical software R v. 4.0.1 (The R Project, The R Foundation).
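To make the analysis pipeline concrete, the following is a minimal sketch in R (the software stated above) of the main tests; the data frame, column names, and values are hypothetical, not study data:

```r
# Hypothetical per-patient data (columns are illustrative)
d <- data.frame(
  group  = factor(c("A", "A", "A", "B", "B", "B")),
  smoker = c(TRUE, FALSE, FALSE, TRUE, TRUE, FALSE),
  cal_bl = c(4.9, 5.3, 4.6, 4.2, 4.8, 4.4),  # mean CAL_33-43 at BL (mm)
  cal_t2 = c(3.8, 4.1, 3.9, 3.9, 4.5, 4.0)   # mean CAL_33-43 at T2 (mm)
)

chisq.test(table(d$group, d$smoker))            # binary variables between groups
wilcox.test(cal_t2 - cal_bl ~ group, data = d,
            conf.int = TRUE)                    # Mann-Whitney U on BL-T2 changes
wilcox.test(d$cal_bl, d$cal_t2, paired = TRUE,
            conf.int = TRUE)                    # within-group Wilcoxon signed-rank
summary(lm(I(cal_t2 - cal_bl) ~ group + smoker,
           data = d))                           # regression with covariates
```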
A total of 34 patients met the inclusion criteria and agreed to participate. Until T2, a total of eight patients dropped out: two because of health reasons, five who discontinued therapy for unknown reasons, and one who withdrew consent. Accordingly, 26 study participants could be included in the statistical analysis (group A: 12 patients, group B: 14 patients). Two patients were re-examined (T2) 2 to 3 months later than planned due to the SARS-CoV-2 pandemic. Thus, re-examinations at T2 took place between February 2018 and August 2020.
Descriptive statistics of the study cohort are shown in Table .
At BL, all patients except one had all mandibular incisors and canines. In one patient of group A, one incisor was missing due to aplasia of the tooth and gap closure was present. Thus, at BL a total of 155 teeth were present in the area of teeth 33–43 (group A: 71 teeth, group B: 84 teeth). Due to the hopeless prognosis (according to Kwok and Caton ), two patients of group A underwent removal of one incisor prior to splinting therapy. Accordingly, in three patients of group A, one tooth was missing in the mandibular anterior region, and only five teeth were splinted together at visit 4. In the two extraction cases, the missing tooth was replaced with an adhesively fixed pontic. The distribution of BL mobility of the mandibular incisors is given in Table . In two patients of group B, the mobility decreased so much by visit 10 that splinting was no longer indicated (mobility degree decreased from II and III to 0 and I). At T2, none of teeth 33 to 43 were lost. In one patient of group A, debonding of an incisor from the splint occurred 12 months after splinting. The affected splint was repaired prior to blinded re-examination at T2. No other complications or fractures of the splints were observed.
The mean values of the periodontal status and oral hygiene indices at both examination points are shown in Figs. and . The distribution of PPD at teeth 33 to 43 is shown in Fig. . No group differences were found at either examination point for periodontal parameters or oral hygiene indices, although the PCR_overall at T2 tended toward a difference ( p = 0.060). In both groups, the mean CAL and PPD of the overall dentition and at teeth 33–43 improved significantly from BL to T2 (all p ≤ 0.005). Regression analyses of CAL and PPD changes (from BL to T2) for the overall dentition and the local changes at teeth 33–43 show a positive association with adjunctive antibiotic administration for PPD_33-43, PPD_overall, and CAL_overall. For the changes in area 33–43, the regression analysis also shows a tendency toward a higher reduction of periodontal parameters within group A compared to group B (PPD_33-43: − 0.91 vs. − 0.27 mm; CAL_33-43: − 1.02 vs. − 0.47 mm) (Table ).
At T2, the mean OHIP-G14 summary score of the entire study population is 10.7 ± 7.7 (median: 8.5; range: 0–25). For group A, the mean OHIP-G14 score is 10.5 ± 6.8 (median: 10; range: 0–18) and for group B 10.7 ± 7.7 (median: 8.5; range: 0–25).
This study prospectively investigates the 12-month outcome of PT at mobile mandibular incisors which were splinted from canine to canine either prior to FMD or 7 months after FMD. The periodontal situation was significantly improved by PT. Patients who received adjunctive antibiotic therapy showed a higher reduction of the overall CAL and PPD and of the PPD at teeth 33 to 43. It has already been demonstrated that adjunctive antibiotic therapy leads to better therapy outcomes, especially in the reduction and proportion of PPD at initially deep pockets ≥ 5 mm . In the present study, the proportion of deep pockets was relatively high at teeth 33–43, which may explain the better outcome in this area in patients with adjunctive antibiotic administration. For the patients who received splinting after FMD, there was a tendency for a smaller reduction of the overall CAL and PPD. This might also be due to the higher proportion of diabetics in this group, in which the outcome of periodontal therapy can be negatively affected by the systemic conditions . Furthermore, the initial periodontal situation in group B was better compared to that in group A. At T2, both groups were then at a similar periodontal level. Thus, the improvement in group A was higher than that in group B. This difference could also be caused by the “regression to the mean” effect. In two patients who should have received splinting therapy after FMD, splinting was no longer indicated due to a significant decrease in mobility. No splinted tooth was lost during the observation period, and only one splint showed debonding of a single tooth. Thus, high survival rates were observed for both the splinted teeth and the splints. Although there are only a few other prospective studies investigating the survival rates of splinted teeth and splints, the results are quite different. In accordance with our results, Kumbuloglu et al. also found a remarkably high survival rate for splints in their prospective observation of 19 periodontitis patients who had splinting therapy with FRC strands and composite from mandibular canine to canine. After 4.5 years, the survival rate of splints was 94.8%, and none of the splinted teeth was lost during the observation period. In contrast, Sekhar et al. observed a relatively high number of splint fractures in their prospective study: during a period of 12 weeks, eleven out of 20 splints showed fractures. In the current literature on splinting therapy of periodontally compromised and mobile teeth, retrospective studies present the largest patient cohorts. Here, remarkably high survival rates of splinted teeth were also observed over periods of 11 to 12 years . Thus, Graetz et al. found that splinted teeth were not at higher risk for tooth loss compared to non-splinted teeth. They included 57 patients with 227 splinted teeth over a mean observation period of 11 years in their analysis. Only 26 splinted teeth were lost during the mean observation time, but 75.3% of all splints required repair. It should be noted that all types of teeth were included in this study and that splints on lower anterior teeth required fewer repairs, while repairs tended to be more likely in posterior teeth. Sonnenschein et al. also observed no tooth loss in 39 patients with 162 splinted mandibular anterior teeth within the first 3 years after splint placement in a retrospective study. After 7 (24 patients, 98 splinted teeth) and 12 years (16 patients, 71 splinted teeth), one splinted tooth was lost in each case.
In contrast to the study by Graetz et al. , the latter study found a high survival rate of splints: a total of 74.4% of the original splints were still intact after 3 years and 67.3% after 10 years. The discussed studies do not address the question of whether splinting is more beneficial before or after subgingival instrumentation, but Alkan et al. investigated this question. They examined ten patients who received splinting of mandibular incisors before non-surgical subgingival debridement and eleven patients who received splinting therapy after the subgingival debridement. There were no differences in the outcome of periodontal therapy after 6 months, and the authors concluded that splinting of periodontally compromised teeth prior to non-surgical subgingival debridement, and thus the elimination of potential scaling-induced trauma, has no additional effect on the outcome of PT. In the present study, however, the primary intention of splinting was not to eliminate tooth mobility in order to improve oral comfort and the patient’s chewing and biting function, but rather to determine whether immobilization by splinting provides better healing and thus a better therapeutic outcome. In contrast, other studies indicate a possible influence of baseline tooth mobility on clinical outcomes of regenerative treatment, with better outcomes at teeth with low mobility . In the presented study, the patients who received splinting of teeth 33 to 43 before FMD showed a tendency toward a better outcome of PT in the splinting area. A possible explanation for this is better healing due to stabilization of teeth, preventing early disruption of the blood clot from the root surface. Thus, the timepoint at which mobile teeth are splinted during systematic periodontal treatment could potentially have an impact on the therapy outcome. A common intention of splinting therapy is to improve the oral comfort of patients affected by severe tooth mobility. An earlier analysis of the 3-month results of the presented study population investigated the impact of splinting on OHRQoL and found a trend toward better OHRQoL in patients who had additional splinting therapy compared to the non-splinted control group . Twelve months after FMD (both groups had received splinting), the mean OHIP-G14 summary score, and thus the OHRQoL, is almost identical in both groups. It can therefore be assumed that OHRQoL improves more quickly with earlier splinting but that after a short time there is no difference compared to patients who received splinting later. As already shown 3 months after FMD, high plaque scores are also found after 12 months, especially on splinted teeth. The question therefore remains whether splinting leads to reduced effectiveness of oral hygiene at home. The increased plaque scores are also reflected in a high number of sites with BOP. Therefore, gingivitis is still observed in many patients despite the significant decrease in PPD. The follow-up will show whether this situation will improve during further SPT. The strength of the study is its prospective and randomized design with blinded re-examination. Possible limitations of the study are the small sample size and the different distribution of diabetes between the groups. Furthermore, the different initial periodontal situation could have an influence on the results. Future studies can use the preliminary results for sample size calculation. For example, one might choose CAL_33-43 at 12 months after FMD as the primary endpoint, analyzed with an ANCOVA adjusted for BL CAL_33-43.
The observed group difference was 0.55 mm (Table ) and the standard deviation not higher than 2 mm (Fig. ). Employing a correlation of 0.8 between BL and 12-month values (supported by the data), a significance level of 0.05, and a power of 0.8, the required sample size for an ANCOVA analysis would be 76 patients per group. However, a greater group difference might be seen after a longer follow-up period. In summary, the presented study found a tendency for better outcomes of periodontal parameters after systematic periodontal treatment when splinting mobile mandibular incisors before FMD compared to splinting 7 months after FMD. The study also shows that splinting after FMD enables detection of remission of tooth mobility and thus the opportunity to avoid splinting. Based on the results of the study, it is not possible to state at which point during systematic periodontal therapy splinting of mobile teeth is more beneficial. On the one hand, there seems to be a tendency toward a higher reduction of periodontal parameters when splinting prior to FMD and a faster improvement of the OHRQoL. On the other hand, a wait-and-see approach enables detection of remission of tooth mobility. Independent of the timepoint of splinting therapy, it seems that more intense oral hygiene instructions and short SPT intervals are required to compensate for the shortcomings of limited personal oral hygiene efficiency due to splinting. Future research will show how periodontal parameters and the survival rates of splinted teeth and splints develop in the long term and whether recommendations for the timepoint of splinting can be derived on this basis.
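As a cross-check of the feasibility estimate above, the quoted figure of 76 patients per group can be reproduced in R by shrinking the residual standard deviation by the variance reduction an ANCOVA gains from the baseline covariate; this is a minimal sketch of the standard calculation, not the authors' original code:

```r
# ANCOVA with a baseline covariate correlated at rho with the outcome
# reduces the residual SD by a factor of sqrt(1 - rho^2).
delta  <- 0.55                        # observed group difference (mm)
sd_raw <- 2                           # assumed upper bound of the SD (mm)
rho    <- 0.8                         # assumed BL/12-month correlation
sd_adj <- sd_raw * sqrt(1 - rho^2)    # residual SD after adjustment (= 1.2)

power.t.test(delta = delta, sd = sd_adj, sig.level = 0.05, power = 0.8)
# n of approximately 76 per group, matching the estimate quoted above
```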
Uptake of pharmacist recommendations by patients after discharge: Implementation study of a patient-centered medicines review service | fa216cc7-2f93-42d4-ab4b-c6bef5698918 | 10061906 | Patient-Centered Care[mh] | Polypharmacy, defined here as the taking of five or more medicines concurrently, is associated with a high prevalence of potentially inappropriate medicine (PIM – defined in supplementary Table ) use and occurs frequently in those aged 65 years or over [ – ]. PIM use results in poor outcomes including falls, emergency department visits, increased costs, adverse events, and functional decline . Deprescribing - the patient-centred, supervised process of dose reduction or cessation of PIMs - has been identified as part of good prescribing but as limited and reactive rather than proactive, generally occurring because of an adverse event . Deprescribing does not appear to be part of current hospital inpatient practice . Yet the simple count of prescribed medicines at discharge has been shown to outperform complex indicators of therapy quality, such as Beers’ list 2019 and STOPP criteria Version 2 when identifying people at risk and predicting poor outcomes . In Australia, up to 30% of hospital admissions for patients over 75 years of age have been found to be medicine-related, with up to three-quarters potentially preventable, the single most important predictor being the number of medicines taken . The risk of harm and of poor adherence rises with the addition of each new medicine , with harm described to be at epidemic proportions . Transitions from hospital to primary care further increase the risk for reasons that include increased medicine sensitivity due to deconditioning and ongoing recovery from acute illness, inaccuracies in medicine reconciliation, insufficient patient education, poor communication with primary care and unexplained medicine changes [ – ]. As many as 44% of patients do not follow medicine changes initiated in hospital, continuing to take discontinued medicines, failing to implement dosage changes or to take newly prescribed medicines , which may themselves be potentially inappropriate . While the best strategies to combat PIM use in primary care remain unclear [ , , ], effective transitional pharmacist-led strategies have been described [ – ]. They have included medicine reconciliation and review in the context of multidisciplinary care, patient counselling, communication with primary care providers and post-discharge follow-up. Although patient engagement in understanding and managing their medicines is strongly encouraged, it is uncommon [ , – ]. Transitional patient-centred care has been described as poorly understood and a missed opportunity for pharmacists , such care recognised as improving patient satisfaction and decision making and reducing adverse events and readmissions [ – ]. A paradigm shift in such care is needed . Australian hospital safety and quality standards state that patients and their caregivers should be actively involved in their care, and that they should receive verbal and written information in ways that are meaningful to them . Patient-directed education or coaching has been shown to be the most influential component of multicomponent interventions for successful transitions . However, there is limited research on the impact of pharmacy health coaching , or how well patient-centred care is applied to medicine management in Australian hospitals . 
Patients have been reported to arrive at hospital taking PIMs, have PIMs commenced and be discharged on PIMs . To address this problem, an implementation program for a discharge medicine review service was begun in 2006 with the development of prescribing appropriateness criteria for older Australians . This criteria set was applied in a scoping study , which found a high incidence of PIM use at our hospital. A randomised controlled trial subsequently applied the criteria during medicine review at discharge in intervention patients, with recommendations sent to patients’ general practitioners (GPs) for actioning. No significant difference in criteria-based recommendations between intervention and control groups was found at follow-up. GPs implemented a relatively low proportion (42%) of recommendations . This led to a new intervention strategy: the patient and/or caregiver was made the driver of change in reducing their use of PIMs. A patient-centred discharge medicines review service was commenced in 2016. This study aims to identify the processes, barriers and facilitators that influenced the implementation and intervention effectiveness of this service. For example, limited organisational resources and low leadership engagement have been identified as barriers to implementation of transitional care innovations, whereas adaptability of innovations and high perceived benefit by users have been identified as facilitators . Implementing research into healthcare practice can be complex and unpredictable, with failure common [ – ]. A post-implementation (post hoc) study of these factors was conducted, such studies being commonly used to analyse and explain the implementation process . A prospective audit was conducted to determine the effectiveness of the resulting patient-centred intervention.
To describe an implementation program in the development of a patient-centred medicine review service, and to assess the service’s impact on older patients’ and their caregivers’ actioning of recommendations after discharge from hospital.
Ethics approval was obtained from the Human Research Ethics Committee of The University of Sydney for each phase of the intervention process, begun in 2006 (project numbers 2011-2015/10043, 2019/209). Approval was also obtained from the Hospital’s Medical Executive Committee. Written informed consent was obtained from all individual patients or their caregivers.
Many different implementation frameworks have been developed to plan, guide, and evaluate implementation efforts [ – ]. The implementation (or process evaluation) dimensions (defined in supplementary Table ) recommended by the Cochrane Qualitative and Implementation Methods Group that determined the resulting intervention were identified by the authors post-intervention. To gain a broad understanding of determinants of practice (that is, barriers or facilitators), a checklist resulting from a synthesis of frameworks was chosen to identify determinants responsible for achieving the desired outcome. Combining different frameworks may enable a more comprehensive study . Reporting was guided by the “Standards for Reporting Implementation Studies” checklist .
The intervention, a prospective post-hospital audit of recommendations made to patients and/or caregivers at discharge, was carried out at a private, not-for-profit 55-bed hospital in Sydney, Australia. Patients were admitted for exacerbations of chronic medical conditions such as heart failure, Parkinson’s disease, chronic obstructive pulmonary disease/asthma, degenerative spinal disease, and inflammatory bowel disease; for rehabilitation after heart, spinal, joint, gastrointestinal, breast or gynaecologic surgery, or trauma from motor vehicle accidents or falls; for palliative care due to metastatic disease; and for management of infections such as cellulitis, pneumonia or urosepsis. Chronic medical conditions and medicines were representative of older Australian community patients . Patients were admitted under the care of one of three geriatricians, rehabilitation specialists or one of two palliative care physicians, supported by two staff doctors. Multidisciplinary care was provided by nursing staff, physiotherapists, occupational therapists, dieticians, social workers, and a discharge planner. The clinical pharmacist (BJB) was an experienced medicines review pharmacist.
All patients 65 years or older were eligible. There were no other exclusion criteria. Specifically, patients were not excluded if they were taking fewer than five medicines, were cognitively impaired, spoke English as a second language, were being discharged to residential or supportive care, lived distant from the hospital, had a terminal illness, or had vision or hearing impairment.
Between July 2019 and March 2020, a convenience sample of 100 patients was recruited for follow-up after discharge. Between one and four patients were discharged daily, the first alternating with the last on a non-alphabetized list being recruited each day. Where cognitive impairment was present, as determined by a Montreal Cognitive Assessment (MoCA) test score of less than 26/30, or where there were language, hearing or vision difficulties, a caregiver was recruited. Two to three days before discharge, the pharmacist explained to the patient and/or caregiver that sometimes the benefit of taking certain medicines may be unclear, or the dose may need adjustment; a safer or cheaper medicine, or even no medicine at all, may be more appropriate. Permission to review their medicines, make recommendations and follow them up was sought, an information sheet provided, and a consent form signed. A medicine list would be provided that detailed the best times to take their medicines, brand names, purpose, cost considerations, relevant side effects and easy-to-understand recommendations to assist with management. Medicines were then reconciled and reviewed utilizing validated prescribing appropriateness criteria, shown in this setting to detect approximately three quarters of all causes of medicine-related problems (MRPs) . A comprehensive medicine review was conducted according to the protocol of the Pharmaceutical Society of Australia , including opportunities for non-pharmacologic care. Patient-directed education was provided during a discharge interview, with timing facilitated by allied health staff. Patients/caregivers were encouraged to discuss with their GPs those recommendations important to them for prescription medicines, and to consider for themselves their use of non-prescription medicines. The pharmacist acted as the patient/caregivers’ advocate in proactively addressing PIM use, catering to patient/caregiver health literacy. The discharge medicine list with recommendations and pharmacist contact details was sent separately to GPs, and where appropriate to aged care facilities, community nurses and pharmacies. Where patients had no GP, support was given in finding one. Because it was necessary for all patients to have their medicines reconciled and reviewed and to receive discharge counselling, a control group was not possible. The time taken for each activity was recorded to determine the cost of the service. This included finding medical notes and walking corridors. Patients were invited to fill in a general hospital feedback form at discharge as part of standard practice. Ten to fourteen days after discharge, each patient or caregiver was contacted, either by phone or in person. Enquiry was made about the actioning of each recommendation, and the results, including GP response, recorded. Patients’ reports of changes to medicine use were accepted as truthful. Where there had been no visit to a GP or specialist doctor, support and reassurance were provided, and a repeat contact time was arranged. The patient journey consisted of six stages (Figure ), fitted into episodes of physiotherapy/hydrotherapy attendance, sleep, and mealtimes. Reporting followed the STROBE checklist for observational studies .
Data were entered into Microsoft Excel (version 2203), checked for normality, and analyzed using descriptive statistics.
Processes and determinants identifying actions taken in the implementation of a discharge medicines review service appear in Table . Processes of context, fidelity, implementer engagement, intervention quality and reach (definitions supplementary Table ) appeared in each phase, as did the following determinants: feasibility; mandate, authority, and accountability; quality assurance and patient safety systems; source of the recommendation. The most commonly occurring determinants were capacity to plan change; implementer engagement; and patient needs, beliefs, knowledge, and motivation.
The implemented service was audited between July 2019 and March 2020. Of the 166 patients recruited, 66 were excluded: 11 were transferred to other hospitals due to the occurrence of an acute medical condition such as bleeding or chest pain, or for a procedure unavailable onsite; six left before interview; no recommendations requiring follow-up were made for 33 patients; nine patients were uncontactable after discharge; three had not seen a doctor within four weeks of discharge; three were admitted to another hospital within two weeks of discharge; and one patient’s family refused follow-up, leaving 100 patients. All patients/caregivers received a discharge medicine list and review form, and all agreed to participate in a medicines discharge interview and to consider discussing those recommendations important to them with their GP. All patients were followed up. The pharmacist did not communicate directly with GPs, nor did any GP contact the pharmacist. Mean participant age was 83.1 years, the mean total number of medicines 10.4, and the mean number of medical conditions 8.9 per patient. Of 100 patients, five took fewer than five regular medicines, 48 took five to nine regular medicines, and 47 took 10 regular medicines or more - classed as hyperpolypharmacy . Fifty-six percent of patients were counselled in the presence of a caregiver. Of 368 recommendations made to 100 patients/caregivers, 351 (95%) were actioned, 284 (77% of all recommendations) were reported to be implemented, and 206 (21%) of regularly taken medicines were deprescribed – 141 ceased and 65 reduced in dose (Table ). There were 340 causes of medicine-related problems (MRPs; 3.4 per patient), classified according to a validated system . The top 10 categories represented 92% (312/340) of all causes of MRPs, the most common being: medicine not effective for the indication treated; medicine was not the most safe/effective; and indication does not warrant medicine treatment (Table ). Medicines for acid-related disorders, multivitamins, complementary and alternative medicines, and mineral supplements were the most common medicines ceased. Gabapentinoids, opiates, proton pump inhibitors and statins were the most common medicines reduced in dose. The time taken to reconcile, review and interview patients/caregivers averaged 63.6 minutes/patient. Recommendations not actioned (17, or 4.6% of the total number) occurred if patients/caregivers decided they were unimportant. Recommendations not implemented occurred because medicines were continued despite evidence provided of poor or absent effectiveness, or because GPs considered recommendations unnecessary. Examples included non-discontinuation of glucosamine and prescription of proton pump inhibitors despite apparent lack of indication. Oral feedback about the service from attending doctors and nursing staff, and written feedback from patients presented at patient care committee meetings, was consistently positive with respect to the quality and usefulness of the service. Examples of medicine management recommendations made to patients appear in supplementary Table , according to the cause of the medicine-related problem.
Continuing positive feedback and the results of this study resulted in our non-government, not-for-profit (private) hospital commencing and continuing to pay for a non-dispensing or cognitive pharmacy service. Facilitators influencing the implementation of transitional care innovations have been identified and include the benefits and usefulness of the innovation to healthcare providers; patient satisfaction resulting in high buy-in from healthcare providers and management; quality of information transfer; clear roles and responsibilities of key team members; support from allied health and administrative staff; and regular communication and feedback about the innovation . These facilitators appear in this study. Gaining the approval of the Hospital’s executive officers, board of management and medical committee was considered critical in legitimizing the clinical role of the pharmacist. The Hospital supported implementation from inception, providing organizational and policy support. Allied healthcare team support was also essential to facilitate implementation, contributing to the design and evaluation of the service at each stage. This has been found to make interventions more likely to be effective at ward level and represented a participatory action research approach . Such an approach has been used to improve care of delirium in older inpatients and to address inappropriate psychotropic medicine use in residential care . Staff understood that the pharmacist taking time to talk to patients/caregivers about medicines was fundamental to patient care. Patient-centered care appeared to be of low priority in Australian hospitals and internationally [ , , ], featuring poor delivery of information [ , – ]. Transition interventions involving caregivers also appeared uncommon [ , , ], and often with poor pharmacist involvement . Caregivers need to be recognized as partners in management to reduce communication failures and share information received by patients [ , , ]. Care delivered in this study motivated patients/caregivers to become effective facilitators of medicine management change after discharge. Educating patients/caregivers facilitated crossing the primary-secondary interface, where the pharmacist was made the person responsible for accurately determining and explaining the appropriateness of patients’ medicines and providing this information in plainly written form . Such a model of pharmacist care did not appear to be standard practice . In a realist synthesis of pharmacist-conducted medicine reviews in discharged patients , factors likely to lead to beneficial outcomes were discussed. Corresponding to these factors, this study engaged healthcare professionals, patients, and caregivers; recruited patients in a trusted environment supportive of the integral role and skill of the pharmacist; established hospital organizational support; provided a pharmacist who understood the critical role of medicine review and integration with staff; and had access to comprehensive information about patients . Handover at transitions of care involved transfer of responsibility to GPs. However, in this study, PIM use was identified and discussed with the patient/caregiver, who were asked to take it up with their GP if it concerned them. This differed from the standard practice of pharmacists making recommendations directly to GPs. GPs then had their attention directed to PIM use by a concerned patient.
This proved effective in influencing GPs’ decision-making behavior (the “nudge” strategy ) through overcoming personal cognitive biases, habits, fear of upsetting the patient, therapeutic inertia (failure to alter therapy when indicated ) or psychological reactance – a motivational state that affirms a person’s freedom of choice, even if opposite to a recommendation . The presence of MRPs after discharge was not unusual, as hospital doctors may not review long-term medicines unrelated to the current admission, viewing it as the GP’s role . After discharge, the GP may assume that medicines have been evaluated and were appropriate to continue. Lack of hospital review represented a lost opportunity, as most older Australians were willing to stop one or more of their regular medicines if their GP said they could .
The behavioural nudge featured in this study requires confirmation . Cost of the service appeared dependent upon pharmacist time per patient. Follow-up was short, although persistence of discharge medicine changes following medicine review has been demonstrated . Patients’/caregivers’ reports of medicine changes were accepted as truthful, with no further form of validation. This study was performed in a small hospital by a single pharmacist, limiting generalisability. No clinical outcomes were reported. However, the implementation process delivered a funded service judged effective by management. There were no patient exclusion criteria other than age, adding to real-world impact.
An implementation program resulted in the commencement of a paid patient-centred discharge medicine review service with an implementation rate of recommendations exceeding that of a previous effort. Failure of patient-centred care appears common in hospitals. This, combined with low rates of medicine review in those recently discharged from hospital , means that the epidemic of medicine-related harm may remain undiminished.
Wide-ranging organic nitrogen diets of freshwater picocyanobacteria | 4194ee7f-bf4e-4a02-b967-6cfec4d57ef7 | 11851481 | Biochemistry[mh] | Picocyanobacteria (< 2 μm in diameter) represent the smallest group of Cyanobacteria, yet have a large impact on global aquatic ecosystems . They thrive in a wide range of habitats , where they are major primary producers . Globally warming temperatures are expected to promote picocyanobacterial growth due to their wide thermal tolerance, increasing their influence as primary producers . Research into their ecology, evolution, and genomic capabilities has predominantly targeted marine environments . However, picocyanobacteria frequently dominate freshwater cyanobacterial communities (primarily Synechococcus and Cyanobium spp.), contributing up to 90% of total lake cyanobacteria biomass (with cyanobacteria often constituting a large proportion of total phytoplankton biomass ). This is commonly attributed to their large surface-area to volume ratio ; however, only recently have studies begun to address the question of ecological distribution and adaptation using genomic data, a framework that offers considerable insight into ecological processes . Picocyanobacteria (also known as the monophyletic clade Syn/Pro ) radiated within microcyanobacteria, a monophyletic clade containing lineages with small cell diameters (< 3 μm) . The Syn/Pro initially comprised marine taxa, though subsequent sampling has improved the phylogenetic resolution, and four sub-clusters are now recognized . Marine Synechococcus strains, prevalent throughout the global oceans, are found in sub-cluster 5.1 with 20 sub-clades further identified based on ecology and biogeography . A sister group to sub-cluster 5.1 contains Prochlorococcus , split into high-light and low-light-dwelling clades . Meanwhile, sub-clusters 5.2 and 5.3 contain picocyanobacteria from a greater diversity of habitats, encompassing fresh water, brackish, and marine strains. Widespread sampling has recently markedly expanded genomic information on freshwater picocyanobacteria, enabling greater elucidation of their adaptation to their environment, though no ecotypes are yet identified . As sources of bioavailable nitrogen (N), more attention has been paid to inorganic N (e.g. ammonium [NH 4 + ], nitrate [NO 3 − ], and N 2 -fixation) than to dissolved organic N (DON). Recent studies have shown, however, that cyanobacteria are mixotrophs and can utilize DON, and specifically amino acids (AAs), as their N source . The DON pool is a heterogeneous mixture of nitrogenous compounds with significant concentrations of urea, free AAs, oligopeptides, nucleic acids, and humic substances amid many thousands of other, primarily uncharacterized, compounds, including chitin and glyphosate . Chitin, one of the most abundant natural compounds , has been shown to be bioavailable to some cyanobacteria in its natural particulate form in addition to its DON form as chitosan . Likewise, the herbicide glyphosate is increasingly found in fresh waters and increasingly demonstrated to promote cyanobacterial growth . DON can originate from a variety of allochthonous sources, including human and livestock excretion, cellular decay, soil leachate, and atmospheric deposition . In inland waters, DON commonly represents the bulk of total dissolved N in oligo- and meso-trophic waterbodies, which picocyanobacteria tend to dominate . 
Sixty percent of the total DON pool is thought to be readily metabolized by primary producers, significantly increasing the known bioavailable N concentration and contributing to the available nutrient load . AAs are an essential bioavailable component of DON, found both as readily consumed dissolved free AAs and as dissolved combined AAs that form variously sized polypeptides. The concentration of free AAs in surface waters is typically in the nM range, yet their rapid turnover and efficient microbial uptake suggests a disproportionately large contribution to N uptake . As a proportion of total DON, the pool of total dissolved AA (combined + free AAs) in fresh waters is 5–28% , making up a greater proportion of DON than in marine environments: 1–12% . Additionally, oligotrophic waterbodies have a greater proportion of DON vs total N than eutrophic waters , amplifying the contribution of AAs to meeting N requirements in these low-nutrient environments (compared to the greater utilization of inorganic N in eutrophic habitats). The specific composition of DON is often varied and is generally characterized by the surrounding catchment and local land-use practices . It is currently unknown how the specific composition of dissolved free AAs influences the proliferation of freshwater picocyanobacteria, though elucidating this would enable the prediction of picocyanobacterial communities based on watershed management and trophic status. Understanding the role of DON in sustaining picocyanobacterial abundance in oligotrophic environments is essential for evaluating their mixotrophic capabilities. This study investigates the mixotrophic potential of freshwater picocyanobacteria and compares it to the better-studied marine picocyanobacteria. This involved, firstly, a comparative genomic analysis to identify encoded assimilation capabilities of various DON compounds and differences based on habitat in 295 freshwater and marine picocyanobacteria strains. Secondly, growth assays of axenic cultures to determine if potential DON compounds could support growth. Thirdly, quantitative proteomic analysis of Synechococcus sp. CCAP1479/10 to investigate the intracellular response to growth on selected AAs as putative N sources. We find that mixotrophic potential is widespread in freshwater picocyanobacteria, potentially contributing to their growth in oligotrophic environments.
Strains

Two freshwater (salinity < 0.5 ppt) picocyanobacteria strains were obtained: Synechococcus sp. CCY9618 (Culture Collection Yerseke; isolated from Vinkeveen, The Netherlands) and Synechococcus sp. CCAP1479/10 (Culture Collection of Algae and Protozoa; isolated from Windermere, UK). Axenic cultures of these strains were produced using fluorescent-activated cell sorting .

Taxa selection and genome datasets

Picocyanobacterial genomes ( Syn/Pro clade) were obtained from the National Center for Biotechnology Information RefSeq database in September 2023. The environment from which these strains were initially isolated was determined from the cyanobacterial metadata (e.g. GenBank , JGI, scientific literature). Genome completeness was assessed using BUSCO v5.6.1 , and genomes with a completeness score of less than 90% (commonly held as the threshold for a high-quality draft genome ) were excluded. A total of 295 high-quality cyanobacteria genomes were analysed, comprising 88 genomes from freshwater environments and 207 genomes from marine/brackish environments .

Nitrogen assimilation gene identification

An in-depth search through the scientific literature and maps of metabolic pathways identified 328 genes involved in cyanobacterial N assimilation and AA biosynthesis/degradation. These searches identified experimentally characterized proteins involved in the transport, metabolism, and biosynthesis/degradation of N. In addition, KEGG and MetaCyc pathway mapping were utilized to identify putative pathways and enzymes involved in cyanobacterial AA biosynthesis and degradation. These target genes were used in comparative genomics analyses with selected query sequences .

Comparative genomic analyses

Target genes in our genome dataset were identified using BLASTP v2.11.0+ . An E-value threshold of 1 × 10⁻⁵ was used to return the best match per genome for each query sequence (illustrated in the sketch after this section). Identified genes for each target were compiled and then aligned with MAFFT v7.520 using local pair alignment. For each gene, phylogenetic trees were estimated in IQ-TREE v2.2.5 using the -m MF option to determine the best model . Homology of target genes was checked based on their phylogeny. The presence of a target gene indicates the potential for functional capability in the strain; it does not guarantee functional activity.

Phylogenomic analysis

Evolutionary relationships of the taxa utilized in this study were estimated using phylogenomic analysis. Our genome dataset comprised 295 picocyanobacteria genomes, plus eight Synechococcus spongiarum genomes (to complete the Syn/Pro ), and an outgroup of 10 Synechococcus elongatus strains . Ortholog sequences from 143 protein-coding genes (based on previously published studies ) were compiled from each genome of our expanded dataset for phylogenomic analysis, carried out following a previously published method .

Growth rate measurements

Axenic Synechococcus sp. CCY9618 and Synechococcus sp. CCAP1479/10 cultures were grown in 150 cm² vented flasks containing 400 ml BG-11 media. After growth for 4 days at 10–20 μmol m⁻² s⁻¹ (spectral range 400 to 700 nm) from white LED light with a 16 h: 8 h light:dark cycle at 20°C, each culture was centrifuged for 5 min at 1260 × g and the pellet washed three times with N-free BG-11 medium . The cultures were then cultivated for a further 24 h in 400 ml N-free BG-11 to remove residual N.
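As an illustration of the best-hit filtering step in the comparative genomic analyses above, the following minimal R sketch parses standard BLASTP tabular output (-outfmt 6), applies the E-value threshold, and keeps the best-scoring hit per query; the input file name is hypothetical:

```r
# Standard column order of BLAST tabular output (-outfmt 6)
cols <- c("qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")
hits <- read.table("blastp_vs_genomeX.tsv", col.names = cols)

hits <- hits[hits$evalue < 1e-5, ]               # E-value threshold of 1e-5
best <- hits[order(hits$qseqid, -hits$bitscore), ]
best <- best[!duplicated(best$qseqid), ]         # best match per query sequence
```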
Triplicate 25 cm 2 vented culture flasks were prepared for each strain and N source with 11 ml of N-free BG-11 medium and 1 ml of culture inoculum, supplemented with a N source. N sources included organic (20 proteinogenic AAs, urea, chitin, glyphosate) and inorganic N (NH 4 + and NO 3 − ). These N substrates were selected based on their significant contribution to DON (AAs and urea ), their metabolic novelty (chitin and glyphosate ), and their historically common use as N sources (NH 4 + and NO 3 − ). Two N concentrations were utilized. A high concentration (250 mg N L −1 ) based on the N content of BG-11 media, to compare growth on organic N substates to NO 3 − in this commonly used medium for freshwater cyanobacteria in laboratory settings. A lower concentration (1 mg N L −1 ) was also utilized to improve the generalizability of the findings to ecological settings, using a more environmentally relevant N concentration to accurately mimic the concentrations encountered in freshwater natural environments . Each flask was incubated for 14 days under the conditions described above. Picocyanobacterial growth was determined by daily measurement of optical density (OD) at 750 nm on 200 μl aliquots using a Multiskan SkyHigh Microplate Spectrophotometer (ThermoFisher Scientific, Waltham, MA, USA). Poor tyrosine solubility necessitated a reduced high N concentration of 25 mg N L −1 for this condition. Growth rates and lag phase duration were determined using Growthcurver v3.0.1 . Statistical analysis was carried out using a two-tailed t test with FDR-adjusted P values ( Q ). Proteomic growth conditions Synechococcus sp. CCAP1479/10 was selected for subsequent proteomic analysis based on its generally shortened lag phase on the tested organic N substrates and greater number of amino acid transporters (AATs) (N-II, N-III, GltS) compared to Synechococcus sp. CCY9618 (N-II, N-III). Synechococcus sp. CCAP1479/10 was grown and harvested as described above. Following N-free incubation, 1 ml of culture was inoculated into triplicate flasks containing 11 ml N-free BG-11 supplemented with 250 mg N L −1 of a N source (NO 3 − , arginine, asparagine, glutamate, proline) or no N for a total of six conditions. The selected organic substrates were chosen to include a range of chemical properties (charge) and preferred AAT substrates in this strain. Cultures were incubated for two to 5 days, until exponential phase was reached (48 h incubation for N-starvation condition), at 10–20 μmol m −2 s −1 white LED light with a 16 h: 8 h light:dark cycle at 20°C. 2 ml aliquots were subsequently collected for protein extraction. Protein extraction, quantitative proteomics, and data analysis Protein content was extracted from each sample using a NoviPure Microbial Protein Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Protein concentration was determined using a Nanodrop Spectrophotometer 2000 (ThermoFisher Scientific, Waltham, MA, USA) and sent to the Proteomics Facility at the University of Bristol for quantitative proteomic analysis, see for details. Only proteins detected in all replicates were used for further analysis. ANOVA was used to determine significant enrichment among proteins, followed by Tukey’s Post-Hoc test (FDR-adjusted) to determine significance between conditions. Differentially expressed proteins (DEPs) were deemed statistically significant with a Q value less than 0.05 and a log 2 fold change greater than 0.5/less than −0.5. 
Proteins were functionally annotated using eggnog (see and ) and pathway enrichment analysis was carried out using KEGG and hypergeometric distribution tests.
Strains
Two freshwater (salinity <0.5 ppt) picocyanobacteria strains were obtained: Synechococcus sp. CCY9618 (Culture Collection Yerseke; isolated from Vinkeveen, The Netherlands) and Synechococcus sp. CCAP1479/10 (Culture Collection of Algae and Protozoa; isolated from Windermere, UK). Axenic cultures of these strains were produced using fluorescence-activated cell sorting.
Taxa selection and genome datasets
Picocyanobacterial genomes (Syn/Pro clade) were obtained from the National Center for Biotechnology Information RefSeq database in September 2023. The environment from which each strain was initially isolated was determined from the cyanobacterial metadata (e.g., GenBank, JGI, scientific literature). Genome completeness was assessed using BUSCO v5.6.1, and genomes with a completeness score below 90% (commonly held as the threshold for a high-quality draft genome) were excluded. A total of 295 high-quality cyanobacterial genomes were analysed, comprising 88 genomes from freshwater environments and 207 genomes from marine/brackish environments.
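For illustration only, the completeness filter reduces to a few lines of code. The sketch below is not the published pipeline; it assumes a hypothetical tab-separated summary table with one row per genome and columns named accession and busco_completeness.

```python
import csv

COMPLETENESS_THRESHOLD = 90.0  # % complete BUSCOs for a high-quality draft genome

def filter_genomes(summary_tsv: str) -> list[str]:
    """Return accessions of genomes passing the BUSCO completeness threshold.

    Expects a (hypothetical) TSV with 'accession' and 'busco_completeness' columns.
    """
    with open(summary_tsv, newline="") as handle:
        return [row["accession"]
                for row in csv.DictReader(handle, delimiter="\t")
                if float(row["busco_completeness"]) >= COMPLETENESS_THRESHOLD]
```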
Nitrogen assimilation gene identification
An in-depth search of the scientific literature and metabolic pathway maps identified 328 genes involved in cyanobacterial N assimilation and AA biosynthesis/degradation. These searches identified experimentally characterized proteins involved in the transport, metabolism, and biosynthesis/degradation of N. In addition, KEGG and MetaCyc pathway mapping were used to identify putative pathways and enzymes involved in cyanobacterial AA biosynthesis and degradation. These target genes were used in comparative genomic analyses with selected query sequences.
Comparative genomic analyses
Target genes in our genome dataset were identified using BLASTP v2.11.0+. An E-value threshold of 1 × 10−5 was used to return the best match per genome for each query sequence. Identified genes for each target were compiled and then aligned with MAFFT v7.520 using local pair alignment. For each gene, phylogenetic trees were estimated in IQ-TREE v2.2.5 using the -m MF option to determine the best model. Homology of target genes was checked based on their phylogeny. The presence of a target gene indicates the potential for functional capability in a strain; it does not guarantee functional activity.
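As a hedged, minimal sketch of this screening step (not the authors' code), the fragment below runs BLASTP against one genome's protein database and keeps the lowest-E-value hit per query. It assumes BLAST+ is installed and that makeblastdb -dbtype prot has already been run on the genome's protein FASTA.

```python
import subprocess

E_VALUE = "1e-5"  # threshold used in the study

def best_hit_per_query(query_faa: str, genome_db: str) -> dict[str, list[str]]:
    """BLASTP the query sequences against one genome's protein database and
    keep the single best-scoring (lowest E-value) hit per query sequence."""
    out = subprocess.run(
        ["blastp", "-query", query_faa, "-db", genome_db,
         "-evalue", E_VALUE, "-outfmt", "6"],
        capture_output=True, text=True, check=True,
    ).stdout
    best: dict[str, list[str]] = {}
    for line in out.splitlines():
        fields = line.split("\t")  # qseqid, sseqid, ..., evalue (col 11), bitscore
        query, evalue = fields[0], float(fields[10])
        if query not in best or evalue < float(best[query][10]):
            best[query] = fields
    return best
```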
Phylogenomic analysis
Evolutionary relationships of the taxa used in this study were estimated using phylogenomic analysis. Our genome dataset comprised 295 picocyanobacteria genomes, plus eight Synechococcus spongiarum genomes (to complete the Syn/Pro clade) and an outgroup of 10 Synechococcus elongatus strains. Ortholog sequences from 143 protein-coding genes (based on previously published studies) were compiled from each genome of our expanded dataset for phylogenomic analysis, carried out following a previously published method.
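A common way to assemble such a multi-gene dataset is to concatenate the per-gene alignments into a single supermatrix, padding taxa that lack a gene with gaps. The sketch below illustrates that step under this assumption; the cited method's exact procedure may differ.

```python
def concatenate_alignments(gene_alignments: dict[str, dict[str, str]]) -> dict[str, str]:
    """Concatenate per-gene alignments into one supermatrix.

    gene_alignments maps gene name -> {taxon: aligned sequence}.
    Taxa missing a gene are padded with gaps so all rows stay equal length.
    """
    taxa = {t for aln in gene_alignments.values() for t in aln}
    supermatrix = {t: [] for t in taxa}
    for gene, aln in gene_alignments.items():
        length = len(next(iter(aln.values())))  # alignment length for this gene
        for t in taxa:
            supermatrix[t].append(aln.get(t, "-" * length))
    return {t: "".join(parts) for t, parts in supermatrix.items()}
```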
Growth rate measurements
Axenic Synechococcus sp. CCY9618 and Synechococcus sp. CCAP1479/10 cultures were grown in 150 cm2 vented flasks containing 400 ml BG-11 medium. After growth for 4 days at 10–20 μmol m−2 s−1 (spectral range 400 to 700 nm) from white LED light with a 16 h:8 h light:dark cycle at 20°C, each culture was centrifuged for 5 min at 1260 × g and the pellet washed three times with N-free BG-11 medium. The cultures were then cultivated for a further 24 h in 400 ml N-free BG-11 to remove residual N.

Triplicate 25 cm2 vented culture flasks were prepared for each strain and N source with 11 ml of N-free BG-11 medium and 1 ml of culture inoculum, supplemented with a N source. N sources included organic (20 proteinogenic AAs, urea, chitin, glyphosate) and inorganic N (NH4+ and NO3−). These N substrates were selected based on their significant contribution to DON (AAs and urea), their metabolic novelty (chitin and glyphosate), and their historically common use as N sources (NH4+ and NO3−). Two N concentrations were used: a high concentration (250 mg N L−1), based on the N content of BG-11 medium, to compare growth on organic N substrates to NO3− in this commonly used medium for freshwater cyanobacteria in laboratory settings; and a lower, more environmentally relevant concentration (1 mg N L−1) to improve the generalizability of the findings to ecological settings by mimicking the concentrations encountered in natural fresh waters. Each flask was incubated for 14 days under the conditions described above. Picocyanobacterial growth was determined by daily measurement of optical density (OD) at 750 nm on 200 μl aliquots using a Multiskan SkyHigh Microplate Spectrophotometer (ThermoFisher Scientific, Waltham, MA, USA). Poor tyrosine solubility necessitated a reduced high N concentration of 25 mg N L−1 for this condition. Growth rates and lag phase duration were determined using Growthcurver v3.0.1. Statistical analysis was carried out using two-tailed t tests with FDR-adjusted P values (Q).
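Growthcurver is an R package that fits a logistic model to the OD time series. The following Python sketch reproduces the same logistic fit and the FDR-adjusted two-tailed t tests for illustration only; function and variable names are ours, not from the study's code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def logistic(t, k, n0, r):
    """Logistic model as used by Growthcurver: carrying capacity k,
    initial population size n0, intrinsic growth rate r."""
    return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

def fit_growth_rate(days: np.ndarray, od750: np.ndarray) -> float:
    """Fit the logistic model to daily OD750 readings and return r (day^-1)."""
    (k, n0, r), _ = curve_fit(
        logistic, days, od750,
        p0=[od750.max(), max(od750[0], 1e-4), 1.0], maxfev=10000,
    )
    return r

def fdr_adjusted_q(groups_a: list, groups_b: list) -> np.ndarray:
    """Two-tailed t tests per substrate, Benjamini-Hochberg adjusted (Q values)."""
    p_values = [ttest_ind(a, b).pvalue for a, b in zip(groups_a, groups_b)]
    return multipletests(p_values, method="fdr_bh")[1]
```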
Proteomic growth conditions
Synechococcus sp. CCAP1479/10 was selected for subsequent proteomic analysis based on its generally shortened lag phase on the tested organic N substrates and its greater number of amino acid transporters (AATs) (N-II, N-III, GltS) compared with Synechococcus sp. CCY9618 (N-II, N-III). Synechococcus sp. CCAP1479/10 was grown and harvested as described above. Following N-free incubation, 1 ml of culture was inoculated into triplicate flasks containing 11 ml N-free BG-11 supplemented with 250 mg N L−1 of a N source (NO3−, arginine, asparagine, glutamate, proline) or no N, for a total of six conditions. The selected organic substrates were chosen to cover a range of chemical properties (charge) and preferred AAT substrates in this strain. Cultures were incubated for 2 to 5 days, until exponential phase was reached (48 h incubation for the N-starvation condition), at 10–20 μmol m−2 s−1 white LED light with a 16 h:8 h light:dark cycle at 20°C. Aliquots (2 ml) were subsequently collected for protein extraction.
Protein extraction, quantitative proteomics, and data analysis
Protein content was extracted from each sample using a NoviPure Microbial Protein Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Protein concentration was determined using a Nanodrop Spectrophotometer 2000 (ThermoFisher Scientific, Waltham, MA, USA), and samples were sent to the Proteomics Facility at the University of Bristol for quantitative proteomic analysis. Only proteins detected in all replicates were used for further analysis. ANOVA was used to determine significant enrichment among proteins, followed by Tukey's post hoc test (FDR-adjusted) to determine significance between conditions. Differentially expressed proteins (DEPs) were deemed statistically significant with a Q value less than 0.05 and an absolute log2 fold change greater than 0.5. Proteins were functionally annotated using eggNOG, and pathway enrichment analysis was carried out using KEGG and hypergeometric distribution tests.
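To make the statistical thresholds concrete, the sketch below encodes the DEP criteria (Q < 0.05 and |log2 fold change| > 0.5) and a hypergeometric pathway-enrichment test. It is an illustrative reconstruction, not the analysis code used in the study.

```python
from scipy.stats import hypergeom

Q_CUTOFF = 0.05       # FDR-adjusted significance threshold
LOG2FC_CUTOFF = 0.5   # absolute log2 fold-change threshold

def is_dep(q_value: float, log2_fold_change: float) -> bool:
    """Apply the study's DEP criteria: Q < 0.05 and |log2FC| > 0.5."""
    return q_value < Q_CUTOFF and abs(log2_fold_change) > LOG2FC_CUTOFF

def pathway_enrichment_p(n_background: int, n_in_pathway: int,
                         n_deps: int, n_deps_in_pathway: int) -> float:
    """Hypergeometric test: probability of drawing at least n_deps_in_pathway
    pathway members among the DEPs, given the background of detected proteins."""
    return hypergeom.sf(n_deps_in_pathway - 1, n_background, n_in_pathway, n_deps)
```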
Uptake capabilities of amino acids and other forms of DON
There are seven AATs characterized in cyanobacteria, of which four are broad-substrate ABC-type transporters with varying substrate affinities and preferences: N-I and N-III for neutral non-polar AAs, N-II for acidic/neutral polar AAs, and Bgt for basic AAs. Though N-I is absent from the Syn/Pro clade, neutral AA uptake can occur via N-III, which is encoded in 95% of freshwater picocyanobacteria. In contrast, this neutral AAT is not as prevalent in marine picocyanobacteria: it is absent from the 5.1 and Prochlorococcus sub-clusters entirely and found only in marine Synechococcus of sub-cluster 5.2. Our comparative genomic analysis indicates that the N-II transporter is more widespread: it is found in 95% of freshwater picocyanobacteria and 90% of non-Prochlorococcus marine picocyanobacteria (only 50% of Prochlorococcus strains) and represents the sole broad-substrate AAT among the majority of marine picocyanobacteria. The Bgt transporter is currently the only known active-uptake route for basic AAs in cyanobacteria. However, this transporter is uncommon among the Syn/Pro clade, found only in sub-cluster 5.2 among freshwater (23%) and marine (18%) strains (almost all in sub-cluster 5.2B). Additional AATs found in cyanobacteria are substrate-specific, predominantly glutamate transporters, reflecting the central role of this AA in N metabolism. Of the two known sodium-dependent glutamate-specific transporters, Gtr (a TRAP-type transporter composed of three components: two integral membrane proteins (gtrA and gtrB) and a periplasmic binding domain (gtrC)) is present in marine picocyanobacteria across all sub-clusters, though gtrC (not essential for function) is absent from marine strains, whereas GltS is found more commonly in sub-cluster 5.2, especially among freshwater strains. However, these transporters are absent from the majority of picocyanobacteria, with Gtr present in only 25% of marine strains and GltS slightly more abundant, encoded by 49% of freshwater strains. In comparison, AgcS, a cyanobacterial glycine-specific transporter that has been expressed in E. coli, is prevalent in marine picocyanobacteria, especially among sub-cluster 5.1 (found in 99% of strains), and generally absent from freshwater strains (11% presence).

The uptake of other sources of organic N is also widespread among picocyanobacteria. Urea uptake through the Urt ABC transporter and urease activity is prevalent throughout the Syn/Pro clade. Other common sources of DON include oligo- and di-peptides. The Opp oligopeptide transporter is not found in picocyanobacteria; however, the Dpp di-peptide transporter is present in both freshwater and marine strains. The assimilation of chitin can take place through two pathways, which are differentially encoded among picocyanobacterial sub-clusters. Direct catabolism of chitin is more common in marine picocyanobacteria of sub-cluster 5.1, with 38% of these strains encoding chitinase (chiA) but lacking chitin deacetylase capability. In contrast, the potential for chitin deacetylation into chitosan, and subsequent catabolism of chitosan with chitosanase, is found in sub-cluster 5.2 among both freshwater (56%) and marine (55%) strains, with chitinase rarely encoded (7% of all sub-cluster 5.2 strains). Glyphosate is a novel source of organic N, with uptake enabled via the phosphonate transporter encoded by phnD. This is prevalent among most picocyanobacteria and largely absent only from freshwater strains of sub-cluster 5.3.
However, it is important to recognize that the presence of these genes alone does not confer functional activity of these pathways, as seen in our growth assays below.
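The habitat prevalences reported above reduce to simple presence/absence arithmetic. Purely as an illustration (the data frame layout and column names are assumed, not taken from the study), such percentages can be computed as:

```python
import pandas as pd

def gene_prevalence_by_habitat(presence: pd.DataFrame) -> pd.DataFrame:
    """Percent of strains per habitat encoding each gene.

    `presence` is a strains x genes 0/1 matrix plus a 'habitat' column
    ('freshwater' or 'marine'), e.g. derived from the BLASTP screen.
    """
    return (presence.groupby("habitat").mean(numeric_only=True) * 100).round(1)

# Hypothetical usage: prevalence.loc["freshwater", "natI"] would give the
# percentage of freshwater strains encoding that N-III subunit.
```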
Amino acid biosynthesis and degradation
Picocyanobacteria overwhelmingly have the capacity for AA biosynthesis, with an almost full complement of biosynthetic pathways found across all habitats and sub-clusters. The sole exception is the generation of alanine in Prochlorococcus. Alanine dehydrogenase (ald) is absent from the high-light and low-light I Prochlorococcus ecotypes, from which all known AATs are also absent, suggesting either an alternative alanine biosynthesis pathway or a requirement for extracellular alanine import through novel transporters. Of the 61 AA degradation pathways analysed, 32 were identified in picocyanobacteria, either partially (encoding the initial enzyme but lacking subsequent enzymes) or completely. Of these 32 pathways, 29 are found in freshwater (and marine) picocyanobacteria, with the remaining three (arginine, asparagine (asparaginase), and glutamate (deamination and hydroxyglutarate) pathways) found only in marine strains. Complete degradation pathways are found for nine AAs: alanine, arginine, aspartate (2 pathways), cysteine, glutamate (2 pathways), glutamine (2 pathways), glycine, methionine (2 pathways), and proline. Meanwhile, five AAs (asparagine, phenylalanine, serine, tryptophan, and tyrosine) lack components of any degradative pathway in freshwater picocyanobacteria.
Organic N bioavailability
Synechococcus sp. CCY9618 and Synechococcus sp. CCAP1479/10 encode the N-II (acidic AAs) and N-III (neutral non-polar AAs) transporters but lack Bgt (basic AAs), suggesting that basic AAs, unlike N-II and N-III substrates, would be unavailable. Nevertheless, most of the tested substrates, including basic AAs, exhibited some degree of bioavailability and supported the growth of both axenic picocyanobacteria strains at both high (250 mg N L−1) and low (1 mg N L−1) concentrations. However, two polar AAs, cysteine and threonine, did not support growth. Limited tyrosine bioavailability was demonstrated only for CCAP1479/10 at a high concentration, whereas methionine was utilized, to some extent, only by CCY9618. In contrast, glyphosate and chitin were unavailable (although growth at high chitin concentrations could not be quantified because of particulate occlusion of the spectrophotometer). Under high N concentrations, the greatest picocyanobacterial yields occurred with aspartate for CCY9618 and proline for CCAP1479/10. At the lower N concentration, yield was greatest on NO3−, whereas the greatest yield on an organic substrate occurred with proline for both strains, with yields of 49.4% for CCY9618 and 54.3% for CCAP1479/10 relative to NO3− (100%). The fastest picocyanobacterial growth rates were associated with basic AAs. At a high N concentration, histidine supported the greatest growth rates for both CCY9618 (r = 5.04 ± 2.57 day−1) and CCAP1479/10 (r = 6.41 ± 12.02 day−1); however, because of a single sharp increase in OD, the rate on this substrate is difficult to estimate accurately. The greatest reliable growth rates were achieved with arginine as a N substrate for CCY9618 (r = 4.10 ± 1.66 day−1) and aspartate for CCAP1479/10 (r = 4.82 ± 1.15 day−1). At low concentrations, basic AAs also supported high growth rates, yet the greatest rates were achieved with valine for both strains (CCY9618: r = 1.66 ± 0.36 day−1; CCAP1479/10: r = 3.20 ± 1.94 day−1).
Picocyanobacterial lag phases and N concentration
Under a high N concentration, the shortest lag phases were found in CCAP1479/10 on substrates that can be immediately incorporated into N metabolic pathways: glutamate (2.21 ± 0.12 days) and glutamine (2.38 ± 0.12 days). Growth on aspartate also occurred with a short lag (2.16 ± 0.09 days), suggesting that acidic AAs may require minimal adaptation time. There were significant differences in the duration of the lag phase between CCY9618 and CCAP1479/10 when grown under high and low N concentrations. At high concentrations, growth on five AA substrates resulted in significantly shorter lag phases in CCAP1479/10 than in CCY9618: aspartate (FDR-adjusted P value (Q) = 0.0049), histidine (Q = 0.014), valine (Q = 0.031), phenylalanine (Q = 0.0097), and proline (Q = 0.0073). In contrast, the lag phase was shorter on glycine for CCY9618 (Q = 0.022). At lower N concentrations, significant differences in lag phase duration were less prevalent.
Proteomic response to growth on amino acids
TMT proteomics for CCAP1479/10 resulted in the identification of 5720 unique peptides and 1167 proteins. Of these, only proteins detected in all three biological replicates were analysed further, resulting in a total of 5134 peptides and 836 proteins. FDR-adjusted ANOVA and Tukey test analyses identified 224 unique DEPs. The 836 proteins detected in triplicate correspond to 24.3% of the 3441 proteins predicted to be encoded in the CCAP1479/10 genome, consistent with percentages quantified in other studies, albeit towards the bottom of the expected range. The number of DEPs varied considerably among conditions. Of the 224 DEPs identified, 172 were linked to N-starvation and 160 were linked to growth across the four AA N-substrates compared with NO3−. Compared with NO3− and N-starvation, growth on glutamate (NO3−: 103 DEPs; N-starvation: 122 DEPs) and proline (NO3−: 112 DEPs; N-starvation: 116 DEPs) yielded more DEPs than growth on arginine (NO3−: 51 DEPs; N-starvation: 83 DEPs) and asparagine (NO3−: 69 DEPs; N-starvation: 90 DEPs). Of particular interest is the overlap of DEPs among AA substrate conditions. Only four up-regulated DEPs are shared among the four AAs, and approximately half of all DEPs during growth with proline (42%) and glutamate (53%) are specific to that AA. In contrast, 79 DEPs were down-regulated across all four AAs compared with NO3−.
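The overlap analysis amounts to set intersections over per-condition DEP lists. A minimal sketch, with hypothetical inputs, is:

```python
def shared_deps(deps_by_condition: dict[str, set[str]]) -> set[str]:
    """Return DEPs common to every growth condition supplied."""
    return set.intersection(*deps_by_condition.values())

# Hypothetical usage mirroring the comparison above (up_arg etc. are
# per-condition sets of up-regulated DEP identifiers):
# shared_up = shared_deps({"arginine": up_arg, "asparagine": up_asn,
#                          "glutamate": up_glu, "proline": up_pro})
# len(shared_up) would correspond to the four shared up-regulated DEPs here.
```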
Pathway enrichment
KEGG pathway-enrichment analysis identified 37 unique pathways with differential expression between growth on AAs and on NO3−. Of these, 21 pathways were associated with over-expression under AA growth, though only five pathways were up-regulated under two or more AA-substrate growth conditions, indicating a large degree of variation in nutrient response. The four AA growth conditions displayed varying degrees of pathway enrichment: arginine growth significantly up-regulated only one pathway ("cytoskeleton proteins"), whereas growth on proline resulted in the significant up-regulation of 11 pathways (including "lysine biosynthesis" and "arginine biosynthesis"). Pathways involved with AA metabolism and transporters were expected to be up-regulated in AA-grown CCAP1479/10 compared with growth on NO3−, yet this was found only during growth with glutamate and proline.
Nitrogen assimilation and amino acid associated DEPs
Compared with NO3−, AA metabolism/biosynthesis proteins involved with lysine (DapB, DapF, DapL), arginine (ArgJ, ArgB), and asparagine (GatC) were up-regulated during growth with at least one AA. All except DapB were up-regulated under proline growth, with ArgJ and DapF up-regulated under both the proline and glutamate conditions. Conversely, the only N-assimilation-associated protein up-regulated under asparagine growth was DapB, which catalyzes an earlier step in lysine biosynthesis than DapF. Growth on arginine did not result in the up-regulation of any proteins associated with N assimilation, consistent with the lack of pathway enrichment. Increased abundance of Amt1 (an NH4+ transporter) under proline growth was also identified, perhaps suggesting extracellular proline degradation and subsequent deamination to NH4+. In contrast, the periplasmic substrate-binding component of the AAT N-III (NatI) was the only differentially expressed AAT subunit, down-regulated in CCAP1479/10 grown on glutamate compared with NO3−.
Transporters
In addition to NatI of the N-III system and Amt1, multiple other transporters, characterized and novel, increased in abundance when AAs were provided as substrates. Subunits of two systems were found among non-N-associated DEPs: the substrate-binding protein of the sulphate ABC transporter (SbpA) (asparagine and proline vs NO3−) and subunits of the high-affinity bicarbonate ABC transporter, CmpC (ATPase; glutamate vs NO3−) and CmpA (substrate-binding protein; proline vs NO3−). Uncharacterized proteins associated with ABC transporters were also identified. In particular, Ga0436389_004_46165_47994, an ABC transporter ATP-binding protein of unknown substrate, was up-regulated under asparagine, glutamate, and proline growth compared with NO3−. BLAST analysis of this protein reveals an MdlB domain superfamily involved in multidrug transport, primarily efflux, of small hydrophobic molecules. This may suggest a link to hydrophobic AA export caused by a build-up of intracellular AAs, though proline is the only hydrophobic AA tested in this proteomic analysis.
Other proteins
Proteins involved in multiple physiological processes were up-regulated in AA-grown versus NO3−-grown CCAP1479/10, including those involved with translation, photosynthesis, and stress response. Translation-associated DEPs were identified during growth with asparagine, glutamate, and proline, but were absent during growth on arginine. These proteins include tRNA ligases in addition to several core components of the 50S ribosomal subunit. However, although the differential expression of tRNA ligases was limited to up-regulation, a substantial number of both 30S and 50S ribosomal proteins were down-regulated on AA substrates. This pattern also occurred with DEPs associated with photosynthesis. In comparison with NO3−, proteins involved with pigment biosynthesis (AcsF and CpcF) were up-regulated under asparagine, glutamate, and proline, whereas protein subunits of the PSI and PSII complexes were consistently down-regulated. Furthermore, FtsH1, linked to the nutrient stress response in cyanobacteria, was up-regulated in the arginine, asparagine, and glutamate conditions, but not during growth on proline. However, FtsH1 was also up-regulated in the same AA conditions when compared with N-starvation.
The dominance of picocyanobacteria in oligotrophic environments has mostly been linked to reduced cell size and the associated rapid nutrient uptake. Other factors were first proposed in marine picocyanobacteria, with ecological genomics identifying genetic characteristics behind their oceanic distribution and nutrient bioavailability, including their capacity for organic assimilation. Although knowledge of freshwater picocyanobacteria is less developed, recent large-scale freshwater picocyanobacteria sampling offers an opportunity to understand their genomic capabilities and mixotrophic potential, altering the paradigm of nutrient uptake for this keystone group.
Diversity of amino acid bioavailability
Our growth assays on axenic cultures indicate that most AAs are potential N sources for freshwater picocyanobacteria. This contrasts with non-Syn/Pro freshwater cyanobacteria, in which AA utilization is variable and often limited, demonstrating that freshwater picocyanobacteria have among the most diverse DON assimilation potential. S. elongatus PCC 6301, a model cyanobacterium, has only been successfully grown on glutamine, whereas Synechocystis sp. PCC 6714 was limited to growth on glutamine, asparagine, and arginine, unable to utilize nine other AAs. Heterocystous cyanobacteria also display a variety of capabilities, with Pseudanabaena spp. able to grow only on charged AAs and Anabaena sp. PCC 7122 able to utilize neutral AAs but unable to grow on half of the AAs tested. Only Spirulina platensis has uptake capabilities similar to those found here. As such, the contribution of freshwater organic N diversity must be considered when examining picocyanobacterial abundance, as it provides an enhanced dietary supply in oligotrophic conditions. Meanwhile, although AA bioavailability in picocyanobacteria is diverse and varied, specific metabolic pathways could not be identified for a subset of AA substrates. This highlights the current lack of understanding of cyanobacterial AA metabolism beyond the central molecules (i.e. glutamate and aspartate), elucidation of which is necessary to achieve a holistic view of the cyanobacterial community response to nutrient diversity.

In addition to widespread bioavailability, the lag phase differences observed indicate the shorter adaptation time required by CCAP1479/10 for several substrates compared with CCY9618, though for some substrates this pattern was reversed. Differing microbial communities, catchment land use, and types of nutrient input can affect AA composition, resulting in waterbodies with varying dominant total dissolved AA profiles. Intraspecific variation in the adaptation time to individual nutrient sources may shape the initial microbial composition when first exposed to a nutrient flux, influencing community dynamics and dominant microbial strains. This may have implications for the wider cyanobacterial community, with the variety of species-specific responses to individual nutrient sources of heterogeneous DON potentially being a key driver of oligotrophic micro-community composition.
Potential mechanisms for basic AA assimilation without dedicated transporters
Freshwater picocyanobacteria lack the basic AAT Bgt, yet their capacity to utilize arginine and lysine as N sources highlights the complexity behind AA assimilation. We propose three possible mechanisms for basic AA uptake without a known dedicated transporter. The first is a broader specificity of the charged N-II AAT among the Syn/Pro clade, previously suggested in marine picocyanobacteria. All AAT characterization has been carried out in non-Syn/Pro cyanobacteria; thus, the uptake capacity within picocyanobacteria may be greater than expected. Secondly, an unidentified transporter may be responsible for basic AA uptake in picocyanobacteria; however, no such transporter was identified in the basic-substrate condition of this study. Recent studies have identified putative AAT permeases in freshwater picocyanobacteria, though these remain uncharacterized, with expression and uptake properties unknown. The knowledge gap regarding the molecular capabilities of freshwater picocyanobacteria is large, owing to the lack of a model organism and the absence of experimental research on this keystone group. Thirdly, AAs may be partially decoupled from AATs, with extracellular degradation bypassing the need for dedicated AATs and instead yielding available NH4+ or NO3− for subsequent uptake. Such extracellular AA oxidase activity has previously been demonstrated in various taxa, including cyanobacteria, diatoms, and green algae, though no up-regulation of AA oxidases was detected in this study. Two mechanisms of extracellular AA degradation are known. The first involves the secretion of AA oxidases directly into the external environment, releasing NH4+ and H2O2, the latter of which acts as a cytotoxin. The second is based on the passive diffusion of AAs into the periplasm through outer membrane porins, followed by extracellular catabolism through the action of cell-surface AA oxidases. Although oxidation rates are low and highly variable in aquatic environments, the importance of extracellular N release for picocyanobacterial N uptake remains to be clarified, and further work is needed to identify the precise uptake mechanisms.
Metabolic responses to growth on amino acids
The proteomic analysis of picocyanobacterial growth on AAs may indicate the initiation of a stress response and a reduced requirement for inorganic C. Lysine biosynthesis was up-regulated under most AA N-substrates tested, and lysine accumulation has been linked to environmental stress responses throughout the biosphere. These mechanisms are thought to involve an increase in lysine biosynthesis and subsequent conversion to various metabolites, including saccharopine, cadaverine, and the compatible solute pipecolate, though DEPs associated with these were not identified in this study. Furthermore, an additional stress response protein (FtsH1) was up-regulated under the arginine, asparagine, and glutamate growth conditions. FtsH1 is involved in the cyanobacterial nutrient stress response, forming a FtsH1/3 protease complex that digests transcription factors repressing the activation of Fe, P, N, and inorganic C assimilation proteins. The conditions in this study provide an excess of nutrients, so the up-regulation of nutrient stress responses compared with NO3− is striking. Although growth on some AAs equaled or exceeded that on NO3−, it is possible that the accumulation of metabolites had negative consequences and was responsible for the stress response. In addition to this mild stress response, the proteomic analysis indicates that C skeletons from the AAs are being utilized, which may explain the down-regulation of photosynthesis proteins. The molar Redfield ratio of C to N requirements (6.6:1) is similar to the C:N ratios of glutamate and proline (5:1), which would facilitate balanced growth.
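The ratio comparison is simple molar arithmetic from the molecular formulas (glutamate C5H9NO4; proline C5H9NO2, both assumptions stated here rather than given in the text). A toy calculation illustrating it:

```python
# (C atoms, N atoms) per molecule, from the molecular formulas above
amino_acids = {"glutamate": (5, 1), "proline": (5, 1)}
redfield_c_to_n = 106 / 16  # canonical Redfield molar C:N, ~6.6:1

for name, (c, n) in amino_acids.items():
    print(f"{name}: C:N = {c / n:.1f}:1 (Redfield {redfield_c_to_n:.1f}:1)")
```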
DON assimilation mechanisms differ in freshwater and marine picocyanobacteria
The diversity of AATs in freshwater picocyanobacteria is greater than in their marine counterparts. Whereas freshwater picocyanobacteria encode two broad-specificity AATs in addition to a glutamate-specific transporter, marine picocyanobacteria (predominantly sub-cluster 5.1/Prochlorococcus strains) encode N-II and the limited function of AgcS. These genotypic differences between freshwater and marine groups may reflect their respective evolutionary environments. For example, the composition of DON in marine environments is often more autochthonous than in freshwater environments, decreasing nutrient profile heterogeneity and reducing the need for AAT diversity. In addition, other factors such as temperature and salinity can influence the available fraction of DON, affecting the solubility and bioavailability of nitrogenous compounds. The concentration of DON is consistently greater in fluvial and limnetic systems than in the open ocean, and DON heterogeneity also increases in fresh waters owing to variation in land use, land cover, and hydrology. This may promote the abundance of freshwater picocyanobacteria in their oligotrophic environments, where competition for the limited available nutrients may require greater diversity in nutrient uptake mechanisms. In contrast, the open ocean is less directly affected by anthropogenic influences and the associated nutrient diversity, reducing the necessity of wide-ranging uptake capabilities. The prevalence of the N-II AAT in most picocyanobacteria may provide insights into the role of charged AAs. The preferred substrates for N-II are glutamate and aspartate, two of the most abundant AAs in freshwater and oceanic environments. This may provide a large bioavailable N source globally for the Syn/Pro clade, demonstrated by the high uptake rate of DON in marine environments.

This study utilizes comparative genomics to identify the organic N assimilation machinery in freshwater picocyanobacteria; however, there are limitations to this approach. The ability to express identified genes cannot be taken for granted, and the presence of assimilation-associated genes does not in itself indicate functional activity. This has previously been seen in the freshwater picocyanobacterium Vulcanococcus limneticus LL, which encodes the nif operon for N fixation yet shows no evidence of its expression or capacity to fix N2. These issues can be addressed by the use of -omics techniques to identify expression (though these have limitations themselves) or by the effective replication of true environmental conditions.

We find that AA bioavailability is widespread among freshwater picocyanobacteria. Freshwater picocyanobacteria thrive in low-nutrient environments where organic forms of N dominate. The broad range of AA bioavailability observed here may support the growth of picocyanobacteria in systems where the concentration of inorganic N is low. However, expected assimilation patterns based on encoded AATs were not identified, suggesting that AATs are not the only factors to be considered, and that mechanisms for extracellular degradation (i.e. external oxidases) may be pivotal in DON utilization. In addition, potential mechanisms for organic N uptake (AAs, chitin) differ between freshwater and marine picocyanobacteria, highlighting their adaptation to different ecological niches and the influence of the nutritionally heterogeneous nature of freshwater environments.
Future research should elucidate the assimilation mechanism of basic AAs and explore in greater detail the mechanisms and effective bioavailable concentrations of other organic N sources (e.g., chitin, glyphosate), including at the lower concentrations present in oligotrophic environments. Research into organic nutrients is not limited to cyanobacteria: AAs are also bioavailable to freshwater algae; however, the full diversity of response remains untested. A greater understanding of the association between nutrient inputs and community composition will enable future community changes to be predicted and encourage effective freshwater monitoring.
Taxonomic and metabolic characterisation of biofilms colonising Roman stuccoes at Baia's thermal baths and restoration strategies

Stuccoes are traditional decorative elements for ceilings and vaults, as well as vertical surfaces, in buildings and villas of the Roman age. Before the discovery of Nero's Domus Aurea in Rome in the 15th century, the only evidence of these artworks came from written accounts, such as Pliny the Elder's Naturalis Historia. Roman stuccoes are frequently polychrome but were long assumed to be white or monochromatic; however, as with Greek statues and temples, we now know that the white coloration was generally due to the loss of pigments rather than their absence. No single recipe seems to have existed for their preparation. Recent analyses of the mineralogical composition of stuccoes from the Domus Aurea revealed the same elemental composition as plaster, i.e., the use of calcium hydroxide as binder and calcite as aggregate. Due to the delicate carving techniques, very few stuccoes have survived intact or almost intact to the present day and, in the rarest cases, have preserved the original colours. Examples can be found in the abovementioned Domus Aurea in Rome, in some tombs excavated in the area of Pozzuoli, and in the areas of Baia, Pompeii, Herculaneum, and Stabiae. Furthermore, after being uncovered by archaeological excavations, stuccoes, like all decorative features, are exposed to deterioration by abiotic and biotic agents, which alter both their structure and aesthetics. For all these reasons, they are of particular conservation interest.

In recent years, the study of deterioration caused by biological agents (biodeterioration) has been fostered by the introduction of -omics techniques. These techniques have enabled not only the taxonomic characterisation of microbial communities involved in the colonisation of substrata (metabarcoding and metagenomics) but also the characterisation of their metabolic activity (metatranscriptomics, metabolomics, proteomics). Some of these applications, like metabarcoding, are becoming routine, while others are still in their infancy. Understanding the type and activity of microorganisms is crucial for planning the best strategies for their removal and for preventing future colonisations. To date, no published studies have used -omics tools to study the biodeterioration of ancient stuccoes.

In this study, we employed a multi-omics approach combining metabarcoding of two molecular markers (16S for bacterial communities and 18S for eukaryotic communities) and untargeted metabolomics to assess the taxonomic and metabolic profiles of the bacterial and eukaryotic communities involved in the biodeterioration of Roman stuccoes in a thermal room called a laconicum in the archaeological site of Baia (Campania region, Italy). To the best of our knowledge, this is the first attempt to characterise both microorganism and metabolite diversity using -omics approaches on ancient stuccoes. Furthermore, we tested the efficacy of extracts of essential oils at different dilutions to remove the biological patinas from the stuccoes, utilising homemade tiles as test samples.
Study site and sampling

The laconicum is located on the terrace of the upper peristyle of the so-called "Sosandra sector" in the archaeological park of Baia (Campania region, Italy). The room has an "L"-shaped plan, with a main rectangular body opening eastward and a corridor that opens southward into an adjacent room, where the statue of the Venus Sosandra was found in 1953. The room is small, with a surface area of approximately 5 m². It is a balneum, a private environment with a thermal function, specifically used for steam baths (laconicum). The internal wall structure of the laconicum is made of opus reticulatum flanked by opus vittatum, and the ceiling is decorated with stuccoes (Fig. A). The narrow corridor showcases a sequence of medallions arranged along the main north–south axis, each connected to its neighbours and to the edges of the ceiling through short, straight bands. The frames are decorated with delicate pearl mouldings, triple for the medallions and double for the connecting strips (Fig. B). The medallions have a diameter of 40 cm, while the mouldings are 6 cm wide (triple) and 3 cm wide (double). Among the figures depicted are animals such as lions, swans, and marine creatures, and mythological characters like Cupid and Nereids (Fig. B). The stuccoes are made of dolomitic lime with calcium and magnesium carbonate as binders. The crystalline structure of the dolomite gives greater compactness and hardness to the lime, making it more resistant. The high quantity of dehydrated calcium sulphate on the surface of some of them can be attributed to the presence of saline efflorescence. Biological patinas were sampled with sterile scalpels by a restoration student (Sara Scamardella) under an agreement between the Suor Orsola Benincasa University and the archaeological park of Baia. Two samples (S1 and S2) were collected from the inner east side of the laconicum, while the other five were collected from the ceiling decorated with stuccoes (S3–S7). Sample S2 was close to a saline efflorescence. The samples were placed into plastic tubes and stored at -20 °C until later analyses.

Metabarcoding analyses

The bacterial and eukaryotic components of biological patinas from the stuccoes were characterised through a metabarcoding approach, amplifying the V3-V4 region of 16S rRNA and the V4 region of 18S rRNA, respectively. Total DNA was extracted using the DNeasy PowerSoil Pro Kit (Qiagen, Hilden, Germany) following the manufacturer's protocol. Qualitative and quantitative analysis of the extracted DNA was carried out through visualisation on gel electrophoresis and with a Qubit v.4 fluorometer using the dsDNA HS Assay Kit (Life Technologies, Thermo Fisher Scientific, Waltham, MA, USA). Extracted DNA was stored at -20 °C until shipment for amplification and high-throughput sequencing. Metabarcoding analyses were carried out by the Integrated Microbiome Resource (IMR; Halifax, Canada) under the conditions specified on the company website ( http://imr.bio/protocols ) and using the primer sets ("Bacteria-specific "Illumina" V3-V4, B969F and BA1406R" and "Eukaryote-specific V4, E572F and E1009R") described on the same webpage. To monitor possible biological contamination, an extraction blank (sample "ctrl") was processed as a negative control alongside the other samples until the sequencing step; in addition, PCR controls were carried out by the company. Raw data (fastq format) were processed to generate amplicon sequence variants (ASVs) using the dada2 pipeline in R.
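As a sketch of the dada2 workflow just referenced, the R code below outlines the standard steps from raw paired-end reads to a chimera-free ASV table; the file paths and filtering parameters are illustrative assumptions, not the exact settings used in this study.

```r
library(dada2)

# Paired-end fastq files (hypothetical paths and naming scheme)
path   <- "raw_reads"
fnFs   <- sort(list.files(path, pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files(path, pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path(path, "filtered", basename(fnFs))
filtRs <- file.path(path, "filtered", basename(fnRs))

# Quality filtering and trimming (parameters are illustrative)
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     truncLen = c(250, 200), maxEE = c(2, 2),
                     truncQ = 2, compress = TRUE, multithread = TRUE)

# Learn error rates and infer amplicon sequence variants (ASVs)
errF   <- learnErrors(filtFs, multithread = TRUE)
errR   <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge read pairs, build the ASV table, and remove chimeras
merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab <- makeSequenceTable(merged)
seqtab.nochim <- removeBimeraDenovo(seqtab, method = "consensus",
                                    multithread = TRUE)
```

The singleton removal, rarefaction, and taxonomy steps applied downstream of this table are described in the following paragraph.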
At the end of the pipeline, singletons were removed and, to account for differences in the number of ASVs across samples, data were normalised to the median value (20,206 for the 16S dataset and 138 for the 18S dataset) using the "rrarefy" function of the vegan R package. Taxonomic assignment of eukaryotes was carried out using a BLAST approach against the PR2 database v4.14.0 ( https://github.com/pr2database/pr2database/releases ) and, for hits not assigned, against the nucleotide (nr) database. Five hits were stored for each query, and the final assignment was determined using a lowest common ancestor (LCA) approach to identify ASVs at the lowest taxonomic level when different matches occurred at the same similarity percentage. For the latter task, we used the galaxy-tool-lca script ( https://github.com/naturalis/galaxy-tool-lca ), which is partly based on MEGAN's LCA method. Taxonomic assignment of bacterial ASVs was carried out using the naive Bayesian classifier method against the Silva reference database v138.1 ( https://zenodo.org/record/4587955#.Ylqor9NBw2w ); assignments at species level were determined by exact match (100% identity) between ASVs and sequenced reference strains, again within dada2, using the "silva_species_assignment_v138.1" database. The taxonomic composition of bacterial, eukaryotic, and combined communities was represented as barplots in RStudio using the phyloseq package and plotted with ggplot2. Venn diagrams were built using the file2meco and microeco R packages to assess whether communities with similar taxonomic composition at Domain rank (bacteria vs. eukaryotes) in the barplots were also similar at lower taxonomic levels (phylum and family). A principal component analysis (PCA) was carried out on the ClustVis webserver ( https://biit.cs.ut.ee/clustvis/ ) to detect structure in our 16S, 18S, and 16S + 18S data.

Metabolomic analysis

Approximately 10 mg of pulverized sample material was weighed into a brown vial, and the exact mass was noted. A quantity of 0.5 mL of acetone:methanol (50:50) was added to each sample. Samples were mixed using a vortex at 1650 rpm for 3 min and then placed in a glass container filled with crushed ice; the container was kept in an ultrasonic bath for 30 min. The extract was filtered into an LC vial using a 0.2 μm filter, and the vial was stored in a freezer (-80 °C) until analysis. The extraction procedure was repeated two or more times for the same sample. Extracted samples were analyzed by mass spectrometry using a Q-Orbitrap EXPLORIS 120 (Thermo Fisher Scientific, Foster City, CA, USA) and by Ultra-Performance Liquid Chromatography–Mass Spectrometry (UPLC-MS) to obtain the metabolite profile, using a VANQUISH UPLC coupled to a Q EXACTIVE mass spectrometer (both Thermo Fisher Scientific, Foster City, CA, USA) equipped with an electrospray ionization (ESI) source in positive mode. A volume of 5 µL of sample was injected into a Hypersil GOLD™ C18 column (2.1 × 200 mm, 1.9 μm, Thermo Fisher Scientific, Foster City, CA, USA). Mobile phase A was water + 0.1% TFA and mobile phase B was acetonitrile + 0.1% TFA. Gradient settings were: 0 min 5% B, 10 min 70% B, 11 min 70–95% B, then isocratic for 1 min. Total flow was 0.35 mL min−1, and the column temperature was 40 °C. Chromatographic data were also recorded using a photodiode array detector operating at a frequency of 12.5 Hz.
Metabolites were assigned to functional groups using ClassyFire ( http://classyfire.wishartlab.com ) after conversion of chemical names to several formats (ChEBI, KEGG, and InChI codes) in the Chemical Translation Service (CTS; http://cts.fiehnlab.ucdavis.edu/batch ). Metabolic profiles across samples were shown as barplots of functional groups using the packages phyloseq and ggplot2. A heatmap was plotted to visualise the abundance patterns of selected metabolites across samples; we selected metabolites that occurred in at least two samples and with abundance > 1%. A PCA was also carried out, as for the metabarcoding data, to detect possible structure in our samples. To link the metabolites to specific metabolic pathways, we used the standard KEGG compound names (C codes) previously retrieved in CTS as input for MetaboAnalyst v6.0 ( https://www.metaboanalyst.ca/MetaboAnalyst/Secure/utils/NameMapView.xhtml ), and then we used the KEGG Mapper search tool ( https://www.genome.jp/kegg/mapper/search.html ).

Metabarcoding-metabolomics associations

A PermANOVA analysis with the adonis function of the vegan package was carried out to detect significant associations between the abundances of microbial (bacterial and eukaryotic) communities and metabolite concentrations (a minimal sketch of such a test is shown below).

Removal of biological patinas with essential oils

For the removal of biological patinas, we used ESSENZIO (IBIX Biocare, Lugo, Italy), a biodegradable and biocompatible product based on a blend of essential oils (mainly extracts of Origanum vulgare and Thymus vulgaris). Before in vivo removal of the patinas, the product was first tested on homemade tiles of slaked lime and marble powder contaminated with biofilms taken in situ and grown in vitro to simulate the surface of the stuccoes. Based on the results obtained by Cennamo et al., the mixture was tested at different dilutions in demineralized water (10%, 20%, and 50%) and after different application times (30 min, 1 h, 1 h 30 min, and 2 h) on specimens prepared in the laboratory (data not shown). The effectiveness of the treatment was evaluated through a visual comparison with clean specimens. Once the most suitable times and concentrations had been identified, the treatment was extended to the entire portion of the stucco decorations where biological patinas were observed.
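For the microbiome–metabolome association test described above, the sketch below shows one way such a PermANOVA could be set up in R. The object names and the choice of example metabolite terms are assumptions (the exact model terms are not specified in the text), and vegan's current adonis2 interface is used in place of the older adonis call.

```r
library(vegan)

# asv: samples x ASVs abundance matrix; metab: data frame of per-sample
# metabolite concentrations (rows in the same sample order) -- placeholders
dist_asv <- vegdist(asv, method = "bray")  # Bray-Curtis community distances

# With only a handful of samples, only a few metabolite terms can be fitted;
# lactic_acid and sorbose are illustrative column names, not the study's model
res <- adonis2(dist_asv ~ lactic_acid + sorbose,
               data = metab, permutations = 999)
print(res)  # p > 0.05 would indicate no detectable association
```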
Metabarcoding analyses

The Illumina sequencing of the V3-V4 16S region generated 215,759 raw reads distributed across seven samples. The clean, annotated dataset contained 82,352 sequences corresponding to 265 ASVs in four samples (S1, S2, S3, and S6); after the normalization procedure, 73,039 sequences and 265 ASVs remained (Table ). The raw eukaryotic dataset based on the 18S-V4 region included 34,150 sequences across seven samples. The clean, annotated dataset contained 23,245 sequences corresponding to 23 ASVs in three samples (S4, S5, and S7); after normalization, 552 sequences and 18 ASVs remained (Table ). Details regarding the number of sequences discarded in each pre-processing step per sample are provided in Table . The negative control (sample "ctrl") yielded no sequences for the 16S marker and, for the 18S marker, 21 sequences assigned to Cladosporium and 24 to humans; since human sequences were discarded as non-target and the fungal ones were not found in any of our samples, we excluded a possible role of contaminants in our diversity estimates.

In four out of seven samples, the microbial community consisted almost exclusively of bacteria, while in the remaining three samples it consisted almost exclusively of eukaryotes (Fig. A); regarding the samples from stuccoes, three out of five (S4, S5, and S7) were dominated by eukaryotes. Bacterial taxa were absent in samples S4, S5, and S7, while eukaryotic taxa occurred in all samples but S2. Overall, 95.1% of bacterial ASVs assigned to phylum level were shared by all samples (Fig. B), while only 59.6% of ASVs at family level were shared (Fig. C). Regarding eukaryotes, only 33.8% of ASVs attributed to phylum level were shared, and the remaining 66.2% of ASVs were exclusive to sample S7 (Fig. D). At family rank, no ASVs were shared: 67.6% were exclusive to sample S7, 29.9% to sample S5, and only 2.5% to sample S4 (Fig. E).

A total of 14 bacterial families with abundance > 5% of total sequences were identified (Fig. F). Euzebyaceae was the only family present in all samples at notable abundance; members of Nitriliruptoraceae were present in only three samples (S1, S3, and S6). Not considering unclassified bacteria and families collapsed at abundance < 5%, the bacterial community of most samples was represented by 6–8 families; the only exception was sample S2, with only three families (Euzebyaceae, Thermosynechococcaceae, and Trueperaceae), of which Thermosynechococcaceae included roughly half of the ASVs. With the exception of Euzebyaceae, Pseudonocardiaceae, and Rhizobiaceae, all the other families were represented by single genera or by sequences unclassifiable at lower taxonomic levels (Table ). Most of the latter were in the family Nitriliruptoraceae, while among cyanobacteria we observed some particularly abundant taxa for which assignment was not achieved even at family level (Table ). Only five assignments at species level were obtained (Aliihoeflea aestuarii, Halomonas chromatireducens, Nocardiopsis exhalans, Pelagibacterium lentulum, and Streptomyces sodiiphilus), plus two ambiguities (Brevundimonas bacteroides/B. variabilis and Nocardiopsis exhalans/N. valliformis). The eukaryotic component was dominant in samples S4, S5, and S7 (Fig. A) but was overall taxonomically limited, accounting almost exclusively for chlorophytes and fungi (Fig. G).
Among the former, we identified Picocystis salinarum R.A.Lewin and Ctenocladus circinnatus Borzì, and assigned other ASVs to the genera Picochlorum W.J.Henley & al. and Pseudostichococcus L.Moewus (Table ); for fungi, the only attribution at genus level was Cyphellophora de Vries (Eurotiomycetes), while all the other ASVs belonged to the classes Dothideomycetes and Sordariomycetes. These taxa were not equally distributed across samples, with Picocystis salinarum occurring only in samples S1 and S3, Ctenocladus in samples S1 and S6, Picochlorum in sample S7, and Pseudostichococcus in samples S5 and S6. Similarly, ASVs assigned to fungi, although particularly abundant in sample S7, were exclusive to individual samples and never shared (Table ).

The PCA based on the bacterial dataset (Supplementary Figure a) separated sample S6 from S2, S3, and S1 along the first axis (46.5% of variance), with S1 further separated from the others along the second axis (33.7% of variance). Similarly, in the PCA based on eukaryotic data (Supplementary Figure b), sample S6 was separated from all the others on PC1 (43.3% of variance), and S1 from the remaining samples on PC2 (32%); the eukaryotic communities of samples S3, S4, and S5 were closely related. The same pattern of distinctiveness of samples S6 and S1 along PC1 and PC2, respectively, was observed in the combined dataset (Supplementary Figure c). The PCA based on metabolomic data (Supplementary Figure d) showed a different pattern, with samples S1, S4, and S6 separated from the others on the first axis (57.3% of variance), and S1 and S2 separated from the others on the second axis (17.3% of variance).

Metabolomic analyses and metabarcoding-metabolomics associations

Employing mass spectrometry, we identified and annotated 162 metabolites across six powdered material samples (Table ). Sample S7 was excluded from the analysis due to insufficient source material, which led to a failed extraction. Almost a quarter of the metabolites belonged to lipids and lipid-like (fatty acid) molecules (23.5%) and carbohydrates (21.6%), followed by organic acids and derivatives (13.6%) and amino acids, peptides, and analogues (12.3%) (Fig. A); a list of the other classes of compounds is available in Table . According to the heatmap based on the most abundant metabolites (Fig. B), sample S2 showed a different metabolic profile with respect to the others, with lactic acid constituting around 20% of all metabolites, followed by 2-hydroxyisocaproic acid (~ 7%). For the other samples, we observed a clustering compatible with the distance among samples (see also Fig. A). Regarding metabolites, samples S3 and S4 were characterized by sorbose and lactic acid (10–15%), while samples S1, S5, and S6 showed the highest abundances of the former. KEGG compound codes (C) were obtained for 129 metabolites (Table ) and attributed to the following pathways: "metabolic pathways" (85 compounds), biosynthesis of secondary metabolites (38), microbial metabolism (29), biosynthesis of amino acids (18), and carbon and protein metabolism (10 and 17, respectively) (Tables and ). No correlation was found between microbial and metabolite abundances (p > 0.05) according to the adonis test.

Removal of biofilms

The conditions found to be optimal for removing the biological patinas on the mortar test specimens were a dilution of ESSENZIO at 50% in demineralized water and an application time of an hour and a half (Fig. A, point 3).
In all the other tiles, dark spots indicative of the presence of microorganisms remained visible. After this trial, the above-mentioned treatment was applied in situ to all the stuccoes under restoration; an example of the treatment is provided in Fig. B, C, and D. Since the biofilm layers were rather thin and not uniformly distributed, it was not necessary to use a support that would allow a longer exposure time. Finally, the surface was rinsed and cleaned with a scalpel and swab. Eleven months after the treatment, no biofilms have been observed. An example of a stucco decoration before and after removal of biofilms with essential oils is provided in Supplementary Figure .
The mechanisms of biodeterioration of cultural heritage have been studied for many years. While the first, culture-based approaches often favoured the detection of a few fast-growing opportunistic species over other, less abundant ones, the introduction of culture-independent methods revealed an astonishing taxonomic diversity of microorganisms colonising monuments and historical artefacts. However, pioneer techniques such as DGGE (Denaturing Gradient Gel Electrophoresis) and ARISA (Automated Ribosomal Intergenic Spacer Analysis), to cite just a few, did not provide taxonomic information about the members of a given microbial community and were often coupled with culture-based approaches. The introduction of Next Generation Sequencing (NGS) approaches, together with their increasing affordability in terms of data processing and costs over the years, has expanded our knowledge of the microorganisms causing biodeterioration of cultural heritage in terms of taxonomic, physiological, and metabolic diversity. Among the various techniques, DNA metabarcoding has attracted great interest in the last few years by allowing the simultaneous amplification and sequencing of short fragments of established markers such as 16S, 18S, 23S, and ITS2 for the characterization of prokaryotes, eukaryotes, algae, and fungi. These studies have contributed to enriching the "taxonomic library" of microorganisms associated with different cultural heritage items, from common stone monuments, man-made artefacts, and wall and traditional paintings to peculiar materials such as ceramics.

Stuccoes fall within the latter group, being very fragile materials that rarely survive intact or almost intact to the present day. The available literature on the taxonomic diversity of microorganisms forming biofilms on such materials comes from the vault of an 18th-century Italian church, Mayan stucco masks from Guatemala, and buildings from Mexico. Across these studies, no similarities in the microbial communities of stuccoes were observed: fungi (Aspergillus, Chaetomium, Sarocladium, and Stachybotrys) were dominant in the Italian church, while cyanobacteria were the dominant taxa in the Mayan samples (especially Gloecapsopsis, Pseudoanabaena, and Rhabdoderma in the former, and Gloeocapsa, Synechocystis-like, and Xenococcus in the latter). In both Mayan cases, as also reported by Garcia de Miguel et al., eukaryotic algae except for Chlorella were absent from most samples, a trend explained by exposure to direct sunlight, which was likely responsible for desiccation.

The laconicum studied here is, on the contrary, a small, semi-confined environment with rather uniform temperature, light, and humidity across the year. High humidity (75–95%) is also favoured by its position close to a hill and by capillary rise of water from the ground and infiltration from several sides. This environment is also constantly protected from direct sunlight and excessive ventilation. This could be the reason why we found several eukaryotic algae, like Ctenocladus, Picochlorum, Picocystis salinarum, and Pseudostichococcus, in addition to different genera of cyanobacteria (Leptolyngbya, Loriellopsis, Nodosilinea, and Nodularia) and fungi (Cyphellophora). In addition to climatic factors, the diversity of taxa found on stuccoes could be explained by the porosity and water retention capacity of this material, which is known to present high bioreceptivity.
It should be taken into consideration that most of the biological patinas found here developed both in direct contact with the surface of the stuccoes and on the thin limestone encrustations present above them. Regarding bacterial colonisation of stuccoes, data are even more limited, but Agarossi et al. found Nocardia and Streptomyces to be the most abundant genera in the subterranean Neo-Pythagorean basilica of Porta Maggiore in Rome (1st century AD). We found several bacterial genera (excluding cyanobacteria) that were particularly abundant, like Chelativorans, Longispora, Nitrolancea, Phytoactinopolyspora, and Pseudonocardia, but none of them was exclusive to stuccoes, as they were also found on adjacent, non-stucco samples. Nonetheless, microbial colonization is also driven by spatially different micro-environments, including both the exposure and the physical-chemical characteristics of the stuccoes. For instance, Halomonas chromatireducens, a species of halophilic bacteria, was only found in sample S1, which is in proximity to saline incrustations, while Chelativorans, a genus of Gram-negative, strictly aerobic bacteria generally isolated from nutrient-poor environments, was abundant in all samples. In sample S1, the one close to saline incrustations, we also found bacteria typical of marine or tidal environments, such as Oceanicaulis, Aliihoeflea aestuarii, and Pelagibacterium lentulum: their occurrence could be due to the influence of nearby sea sprays (the distance of the laconicum from the sea is just 200 m), as also suggested for other samples collected in Baia and for some of the eukaryotic species mentioned above that are typical of saline environments. Salt-tolerant bacteria and archaea have been reported as colonisers of stone monuments, especially porous building materials subjected to rainwater and rising damp that contain soluble salts. We did not find any archaeal sequences in our dataset and, besides a real absence, we cannot exclude a bias due to primer choice. According to the company's protocol ( https://imr.bio/protocols.html ), the primer pair for the V3-V4 region of 16S should have moderate coverage (0–90%) for amplifying archaea. Further studies using archaeal-specific primers are needed to assess their contribution to the biodeterioration of cultural heritage sites, especially those affected by saline incrustations.

Regarding the biodeterioration potential of the microorganisms detected here, some were already known to be involved in aesthetic (e.g., discolouration) and structural (e.g., corrosion, deterioration, and decay) damage to cultural heritage items. Among the most abundant bacterial genera identified, we report Actinomycetospora, Egibacter, Loriellopsis, Pseudonocardia, Rubrobacter, Streptomyces, and Truepera. Although none of these studies focused on the biodeterioration of stuccoes, it is likely that these microorganisms could have a similar impact on this substrate. In addition, other abundant bacteria, such as Chelativorans, have been indirectly associated with wood decay, or with biopolymer degradation in marine environments, as in the case of Oceanicaulis. For eukaryotes, the available literature on biodeterioration is mostly focused on algae and fungi, especially from stone monuments. Regarding stuccoes, most studies have investigated the role of fungi, reporting Aspergillus, Chaetomium, Penicillium, Sarocladium, and Stachybotrys. Among these, in our study we only found Sarocladium, in stucco sample S6.
Other studies (see above) reported only on the presence of microorganisms, therefore allowing no inferences on their role in biodegradative phenomena. Regarding algae, the genus Ctenocladus found here was already reported on stuccoes, although its direct role in biodeterioration was not assessed. For the genera Picochlorum, Picocystis, and Pseudostichococcus, we report their occurrence on stuccoes for the first time, and we did not find any study implying their involvement in the deterioration of cultural heritage.

Metabolomic studies are still in their infancy in the field of cultural heritage, but they are gaining importance due to their capability of providing qualitative and quantitative data on the small molecules that are part of metabolic pathways. Specifically, untargeted metabolomics is particularly useful for searching for biomarkers likely involved in the biodeterioration of CH, as it collects data on hundreds to thousands of metabolites belonging to various classes of chemicals in a single analysis. According to previously reviewed studies, pathways like biosynthesis and degradation of amino acids, ubiquinone and other terpenoid-quinone biosynthesis, and pigment biosynthesis and degradation were shared among different CH objects. In our study, most metabolites mapped to biosynthesis of secondary metabolites, microbial metabolism, biosynthesis of amino acids, and carbon and protein metabolism, indicating active metabolic processes. In some cases, it was possible to link the presence of specific metabolites to particular taxa, e.g., chlorophyll a to photosynthetic eukaryotes and cyanobacteria, and diatoxanthin to Picocystis salinarum. Diatoxanthin is a pigment typical of heterokonts (especially diatoms) and, outside this group, has to date been found only in this species; because we found no diatom sequences in our samples and P. salinarum was particularly abundant in some of them, we are confident in the inferred attribution of this metabolite to this species.

However, our correlation analysis between the abundances of taxa and metabolites provided weak signals. Finding statistically significant correlations can be difficult in CH studies for several reasons. One factor could be bias in the sampling strategy of both metabolomics and metagenomics studies. Indeed, historical objects are sampled in non-invasive ways, which results in collecting very small amounts of superficial material from small portions that may not be representative of the entire surface. For instance, there is the risk that the communities living in the deeper layers of the biofilm, which are often in contact with the CH object, are not sampled in order to avoid damage to the object. In addition, statistically robust sampling often cannot be achieved because some artefacts are unique (making it impossible to compare biofilms from similar substrata) or because the number of biofilms that can be collected from the same artefact is limited (different degrees of biodeterioration). Another factor could be the choice of the barcoding marker, with broad-spectrum markers unable to capture the fine taxonomic resolution that could be relevant for linking some organisms to specific metabolites. For instance, the universal 18S-V4 region could have underestimated the diversity of some fungal groups, which are better detected with the ITS marker, by preferentially amplifying other eukaryotic taxa.
Finally, the nature of the detected metabolites could itself be responsible for such weak associations. Indeed, most of the metabolites found here and in other studies on different substrata belong to generic pathways and cannot be associated with specific communities.

The use of green biocides such as essential oil extracts represents a valid alternative to synthetic biocides, allowing restoration to be carried out with products that are low in toxicity, easy to handle, and environmentally sustainable. Indeed, in the last few years there has been an increase in the use of phyto-derivatives, like liquorice leaf extract and essential oils, as safer and eco-friendly alternatives to chemicals for use as natural biocides in the restoration of cultural heritage. Despite the wide spectrum of efficacy of essential oils across different taxa and domains, Gram-negative bacteria were found to be generally more resistant than Gram-positive bacteria, and some oils had greater effects on fungi than others. In the field of cultural heritage, attention has been paid to essential oils from the family Lamiaceae, especially from Thymus and Origanum, because they have proven effective against different microorganisms and are commercially available as ready-to-use blends. An example of the application of these essential oils concerns the restoration of artworks from the Catholic cemetery for foreigners in Rome, where up to three applications of a 5% concentration in a hydroalcoholic solution (70% ethanol and 30% water) were repeated. In other cases, such as the removal of biofilms under the tiles of the floor mosaic of Leda's House in the archaeological park of Solunto in Sicily and on the sculpture "The Silvano" in the archaeological museum of Florence, a thyme blend at 15% and a brush application at 2% in demineralized water, respectively, proved effective. Regarding the specific application of ESSENZIO by IBIX Biocare, successful restoration interventions were achieved for the mosaics in room XIX of the Insula of the Muses in Ostia Antica and for a mosaic fountain in Ravenna. From an in-depth analysis of the different application methods tested in the studies described above, we identified the most important parameters to take into consideration for the application of essential oils, and we were able to remove the biofilms on the surfaces of the laconicum. The treatment with ESSENZIO at 50% in demineralized water with an application exposure of one hour and a half was successful, and no biofilms were observed eleven months after the restoration.
Genetic and biochemical high-throughput techniques (-omics tools) have opened new paths toward the study of the microorganisms forming biofilms on cultural heritage and of the metabolic activity responsible for biodeterioration phenomena. Despite the potential of such -omics tools, which is reflected in the increasing literature on metabarcoding studies of CH objects, the number of studies using metabolomics and integrating both approaches is still small. In this study, we reported on the taxonomic and metabolite diversity of the microorganisms forming biofilms on ancient stuccoes. To the best of our knowledge, this is the first wide-spectrum intervention on ancient Roman stuccoes (besides the work by Bruno et al. on catacombs in Rome), as well as the first study integrating -omics approaches such as metabarcoding and metabolomics on such a fragile cultural heritage object. We confirmed that metabarcoding is a powerful technique to quickly and thoroughly characterise the taxonomic diversity of microorganisms in complex matrices such as biofilms compared to classical, culture-based approaches, even in fragile and delicate materials such as stuccoes. In addition, we demonstrated that it is possible to extract and isolate numerous metabolites from very low amounts of material and that such untargeted metabolomics analyses are indicative of the metabolic pathways active in the biofilms. However, we also argued that the paucity of biological material collected from stuccoes, or from cultural heritage items in general, as well as the scattered distribution of biofilms in the study system, could hamper the detection of statistically significant correlations between the abundances of taxa and metabolites. Lastly, we have proven that a treatment based on essential oils from thyme and oregano effectively removes both bacterial and eukaryotic biofilms from stuccoes, thus confirming its utility in the restoration of cultural heritage. To promote the adoption of such integrative approaches, future efforts should focus on establishing standardised strategies for the sampling, data pre-processing, and statistical analysis of cultural heritage objects. Additionally, the value of culture-based methods in enriching the taxonomic and metabolomic reference libraries essential for any -omics study should not be overlooked.
A powerful and versatile new fixation protocol for immunostaining and in situ hybridization that preserves delicate tissues

Regeneration is the ability to restore tissues or organs lost to injury, and it varies widely among metazoans. While some animals, like fish and axolotls, are capable of regenerating certain appendages and tissues, others, like planarian flatworms and Hydra, are capable of whole-body regeneration. The cellular and molecular activities that drive regeneration are not yet fully understood. Understanding the molecular changes that take place in the delicate wound epidermis and newly produced tissue is essential to revealing the molecular basis of regeneration.

RNA in situ hybridization (ISH) is a key method for studying gene expression patterns both during homeostasis and regeneration. Unlike bulk and single-cell RNA-sequencing methods, ISH provides extensive detail by visualizing gene expression patterns in their native tissue contexts. Furthermore, because this method does not require transgene expression, it can be performed on wildtype research organisms that do not yet have developed genetic toolkits. As such, it is particularly useful for research questions being pursued in diverse research organisms.

The freshwater planarian S. mediterranea can regrow a complete animal from a body fragment that is less than 1% of its original size. This remarkable capacity for regeneration has attracted the attention of generations of biologists. Its study has required the development of methods to detect, measure, and visualize the cells and molecules underpinning regeneration. ISH has been a primary tool for studying the biology of planarian stem cells and regeneration. Yet, current ISH protocols have several shortcomings. Penetration of probes into tissue for whole-mount in situ hybridization (WISH) is difficult to achieve. As such, permeability is increased through tissue digestion with proteinase K and through aggressive treatment with the mucolytic agent N-acetyl cysteine (NAC). These harsh treatments can damage or destroy delicate tissues and often result in the shredding of both the epidermis and the regeneration blastema (the fragile unpigmented tissue at the wound edge that gives rise to lost body parts). Moreover, immunological assays can be weak on samples prepared by this protocol, likely because proteinase digestion disrupts target epitopes. Other protocols have been developed for fixing whole planarians that preserve the gross anatomical structures and perform well in immunological assays, but those methods are not compatible with ISH. An ideal method would preserve delicate tissues and permit the simultaneous analysis of RNA and protein expression patterns.

Here, we present a new fixation protocol for ISH and immunofluorescence in planarians. We have combined approaches from several fixation techniques into a Nitric Acid/Formic Acid (NAFA) strategy for sample preparation that better preserves the delicate epidermis and blastema than previous methods do. This NAFA protocol does not include a protease digestion, providing increased compatibility with immunological assays while not compromising ISH signal. We also show that this protocol can be easily adapted for ISH studies in the regenerating killifish tail fin. Thus, the protocol is potentially applicable to a wide range of species and particularly facilitates the study of delicate tissues via ISH and immunofluorescence.
We sought to create a new fixation protocol for planarians that would be compatible with both ISH and antibody-based assays while preserving the structural integrity of the animals. We reasoned that combining the acid treatment strategies of a variety of protocols could make the samples compatible with multiple applications. We also included the calcium chelator ethylene glycol-bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid (EGTA) to inhibit nucleases and preserve RNA integrity during sample preparation. To determine the extent to which the new combination of acids preserved the samples, we used the integrity of the epidermis as a proxy for tissue preservation, visualizing it by immunostaining cilia with an anti-acetylated tubulin antibody. We tested a Nitric Acid/Formic Acid (NAFA) fixation and compared it against two well-established fixation protocols in the field, NA (Rompolas) and N-Acetyl-Cysteine (NAC). We found that the integrity of the epidermis was well preserved in both the NA (Rompolas) and NAFA protocols, whereas noticeable breaches of integrity were detected with the protocol using the mucolytic compound NAC (Fig. ). We concluded from these results that the NAFA protocol worked as well as the NA (Rompolas) protocol and preserved the sample considerably better than the NAC protocol did.

Given the success of the anti-acetylated tubulin antibody staining, we tested whether the NAFA protocol could be used for ISH assays. To ensure the NAFA protocol allows antisense RNA probe penetration into tissues, we chose genes known to mark the internal neoblast cell population (piwi-1) and a more external cell population, a subset of the epidermal progenitors (zpuf-6). First, we tested whether the expression of piwi-1 and zpuf-6 could be detected via chromogenic WISH (Fig. ). While the NAFA and NAC protocols produced indistinguishable patterns of expression for the two genes, we could not observe any piwi-1 or zpuf-6 signal with the NA (Rompolas) protocol (Fig. A, B). These experiments also revealed epidermal damage when NAC was used (Fig. B). To further investigate epidermal integrity and WISH signal, we performed chromogenic WISH for zpuf-6 using the NAC and NAFA protocols (Additional file 1: Fig. S1) and then sectioned the animals for histological analysis. The sections revealed that the outermost layer with zpuf-6+ cells was intact when using the NAFA protocol but was damaged by the NAC protocol (Fig. S1A and S1B). We also tested whether three different carboxylic acids (formic acid, acetic acid, and lactic acid) could be used in the NAFA protocol. We performed chromogenic WISH for piwi-1 and zpuf-6, in addition to markers of the central nervous system (pc2) and the gastrovascular system (porcupine). All showed similar expression patterns in both the NAFA and NAC protocols (Additional file 2: Fig. S2). While all three carboxylic acids can be used to determine gene expression patterns and are effective across multiple transcripts, we chose formic acid because it has the simplest chemical structure. We conclude from these findings that the new NAFA protocol both preserves epidermal integrity and can be used to detect gene expression in different planarian tissues via WISH.

Next, we investigated whether we could use the new NAFA protocol in planarians to carry out fluorescent in situ hybridization (FISH) in tandem with immunostaining.
Using confocal microscopy, we detected the neoblast and epidermal progenitor markers piwi-1 and zpuf-6 , respectively (Fig. ). The intensity of the piwi-1 fluorescent signal was indistinguishable between the NAC and NAFA protocols but much weaker for the NA (Rompolas) protocol (Additional file 3: Fig. S3). Furthermore, confocal microscopy showed that the epidermis was damaged with the NAC protocol but was not visibly affected when using the NAFA protocol (Fig. B). After whole-mount FISH, we immunostained for mitotic cells with an antibody that recognizes the Serine-10 phosphorylated form of histone H3 (anti-H3P) . While we did not observe statistically significant differences in H3P density among the protocols (Additional file 4: Fig. S4A), the anti-H3P antibody showed a brighter signal with the NAFA protocol when compared to both the NA (Rompolas) and NAC protocols (Fig. A, Additional file 4: S4B, S4C). Therefore, NAFA is highly compatible with tandem FISH and immunostaining. Next, we sought to more thoroughly characterize the ability of the three protocols to label external and internal tissues by immunofluorescence using antibodies against acetylated tubulin and Smed-6G10 . As in prior experiments, the NA (Rompolas) and NAFA protocols preserved the cilia while they were damaged in the NAC protocol (Fig. B). In the case of the muscle antibody, we observed that all three protocols produced qualitatively similar staining patterns (Fig. C). However, NAC treatment sometimes damaged the body wall musculature, resulting in inconsistent stainings when compared to the NAFA protocol (Additional file 5: Fig. S5). The NAFA protocol retained tightly packed, evenly spaced muscle fibers, including the outermost circular muscle fibers, while NAC treatment disrupted the integrity of the muscle fibers and lost the circular fibers in places (Additional file 6: Fig. S6A). To further compare the muscle staining between the NAC and NAFA protocols, we imaged the internal gut musculature and observed that the NAC protocol produced crisper stainings compared to the NAFA protocol (Additional file 6: Fig. S6B). Similarly, we evaluated both protocols' compatibility with staining protonephridia, another internal structure which is also labeled by the anti-acetylated tubulin antibody. This approach allowed us to compare external vs. internal staining using the same antibody. We observed similar staining of protonephridia in both protocols, but the epidermal cilia were damaged in the NAC protocol while the NAFA protocol preserved the cilia (Additional file 6: Fig. S6C and S6D). Thus, the NAFA protocol is well suited to studying fragile external structures and most internal structures. We then assessed if we could use the new NAFA protocol to develop two-color FISH with two different RNA probes using piwi-1 and zpuf-6 . Because the NA (Rompolas) protocol is not compatible with ISH, we only compared the NAFA and NAC protocols to each other. First, we detected zpuf-6 gene expression followed by piwi-1 . We used confocal microscopy to image the samples and observed similar expression patterns of piwi-1 in both protocols. However, the NAFA protocol showed a clearer expression pattern of the epidermal progenitor zpuf-6 , likely because the integrity of the epidermis was preserved (Fig. A, B). After the double FISH, we explored the mitotic cells in the same samples using the anti-H3P antibody. We observed comparable densities of H3P nuclei for both protocols (Fig. A, B, and Additional file 7: Fig. S7).
Therefore, NAFA is compatible with two-color FISH and immunostaining. To confirm that the NAFA protocol preserves the epidermis even after double FISH of piwi-1 and zpuf-6 , we subsequently performed immunostaining for cilia. The confocal images of the dorsal and ventral sides of planarians after two-color FISH showed well preserved cilia with the NAFA protocol. In contrast, we failed to detect the same pattern of cilia in planarians treated with NAC protocol (Fig. A, B). Hence, the NAFA protocol not only preserves the internal structures akin to the NAC protocol but also maintains epidermal integrity even after the strenuous protocol of labeling two separate transcripts and a protein. We next tested if we could use the NAFA protocol to study the wounding response during planarian regeneration without damaging the fragile epidermis or nascent blastema tissues. We performed FISH of piwi-1 and the immunostaining of cilia on trunk fragments 8 h post amputation (hpa) and at 1, 2, 4, and 8 days post amputation (dpa) to assay for epidermal integrity (Fig. A, B). Confocal images showed that epidermal integrity was compromised by the NAC protocol, while the NAFA samples had very clear staining of cilia on trunks throughout regeneration (Fig. A, B). Remarkably, while the piwi-1 FISH pattern was similar between the NAFA and NAC protocols, the NAFA-fixed fragments exhibited an area of undifferentiated tissue that could not be detected in the NAC fragments (compare white arrows in Fig. A to red arrows in Fig. B). We next imaged the blastema at higher magnification with confocal microscopy. These images reinforced that the NAFA protocol preserves the wound epidermis and the blastema, while it was heavily damaged by the NAC protocol (Fig. C and Additional file 8: Fig. S8). To independently verify preservation of the wound epidermis when using the NAFA protocol, we carried out Acid Fuchsin Orange G staining (AFOG). Cryosections of animals fixed with the NAC protocol showed extensive damage to the epidermis, while the NAFA-treated samples had well organized epidermis with tall cells and distinct basal lamina (red arrows) (Additional file 9: Fig. S9A and Fig. S9B). The wound epidermis (8 hpa) was damaged and at times lost in NAC-treated sections but was retained in NAFA-treated sections (Additional file 9: Fig. S9A). Similarly, the blastema at 4 dpa was better preserved upon NAFA treatment (Additional file 9: Fig. S9B). Taken together, the data show that the NAFA protocol is well suited to study wounding responses and blastema formation during regeneration. Given the NAFA protocol’s superior preservation of delicate tissues in planaria, we next sought to determine if it can be adapted to study regeneration responses in other organisms. Current ISH protocols have performed poorly for probing gene expression changes in large whole-mount samples, particularly those involving the establishment of wound epidermis and a regeneration blastema in adults (e.g., the teleost caudal fin). The short-lived African killifish Nothobranchius furzeri can regenerate appendages and even organs such as the heart after injury, making them ideally suited to investigate tissue regeneration in adult animals . However, WISH experiments on the regenerating killifish tail fin can be difficult due to high variability and low signal to noise ratio . 
To test whether the use of formic acid during fixation can facilitate robust WISH signal development, amputated killifish tail fins were fixed using 4% paraformaldehyde (PFA) with or without formic acid at 1 and 3 dpa (Fig. A). ISH for an early blastema gene follistatin-like-1 (fstl1) showed that the use of formic acid in the fixative increased the signal-to-noise ratio resulting in intense signal at the site of injury. In contrast, the fstl1 signal in samples fixed without formic acid was masked by background noise (Fig. B, C). Similar results were observed for a blastema gene, wnt10a , in 3 dpa samples (Additional file 10: Fig. S10). These results demonstrate that adding formic acid to the fixative can enhance ISH signals in regenerating fish fins, facilitating global analysis of gene expression dynamics. Furthermore, it highlights the robustness of the NAFA protocol and shows that it can be easily adapted to a variety of tissues and organisms. Preservation of external tissue layers is especially important for a research organism used to study regeneration, because stem cell proliferation and differentiation take place just beneath the wounding epidermis and form a blastema which grows to replace lost tissues . Current ISH protocols facilitate probe penetration with harsh chemical treatments which damage delicate tissues, such as the ciliated epidermis in planarians and blastemas in both vertebrates and invertebrates. These same treatments can also damage or eliminate epitopes necessary for immunostainings. The new NAFA protocol addresses these shortcomings and allows for performing immunofluorescence and ISH on the same samples while preserving the delicate outer cellular layers of the planarian S. mediterranea . The use of formic acid fixative also enhanced ISH results in the regenerating tail fin of the African killifish N. furzeri . The greatly improved tissue integrity and increased signal to noise ratio provided by the NAFA protocol will enable researchers to investigate gene expression changes during wound healing and blastema formation. The NAFA protocol, like the NA (Rompolas) protocol, is highly compatible with immunofluorescence. Both protocols use nitric acid during fixation, which is known to euthanize and flatten planarians while preserving the ciliated epidermis . However, use of nitric acid alone is not sufficient to enable detection of gene expression by ISH. To develop a protocol that is compatible with both ISH and immunofluorescence, we explored the use of carboxylic acids, which are widely used in a variety of fixation approaches . These methods are a subset of a broader class called coagulant fixatives which act by precipitating proteins instead of covalently crosslinking them . Acid treatments enhance immunohistochemical studies by hydrolyzing crosslinks and potentially disrupting protein complexes, in a process known as antigen retrieval . In contrast, the NAC protocol uses enzymatic proteinase K treatment to permeabilize the sample. While immunofluorescence signals can be generated from this method, these signals are much weaker at times than those produced by the NAFA or NA (Rompolas) methods, presumably due to the loss of target epitopes by enzymatic digestion. Furthermore, the harsh mucolytic NAC treatment tears the outer layers of the planarian body, making it difficult to use for studying fragile tissues such as the epidermis and regeneration blastema. 
The NAFA protocol is also highly compatible with in situ hybridization, in stark contrast to the NA (Rompolas) protocol. Three main possibilities exist to account for this compatibility: (1) that samples fixed using the NAFA protocol are more permeable to riboprobes than samples fixed by the NA (Rompolas) protocol, (2) that RNA targets are more available to ISH probes than they are in other coagulating fixation conditions, or (3) that target RNA molecules are better preserved by NAFA than they are with harsher acid treatments. Below, we evaluate the likelihood of each of these three possibilities. First, samples fixed with the NA (Rompolas) protocol are sufficiently permeabilized to allow antibodies to penetrate to internal structures detectable by immunofluorescence, yet in situ hybridization fails on these samples. While the structures of specific antisense mRNA probes are unknown, the relatively short probes used in this study still do not yield any appreciable signal with the NA (Rompolas) protocol. This suggests that sample permeability may not explain NAFA's superior performance in ISH. Because size affects diffusion rate and riboprobe penetration, a systematic study with probes of varying lengths is necessary to assess permeabilization in samples fixed by each method. Second, relative to prolonged strong acid treatments, such as the NA (Rompolas) protocol, the proteins in NAFA samples will likely not be hydrolyzed to the same extent, and will also be crosslinked, two factors which would be expected to increase the size and complexity of proteins bound to and around RNA molecules. Since NAFA fixation likely leads to target RNA molecules being bound or surrounded by networks of crosslinked proteins, we hypothesize that increased RNA availability to probes is another unlikely explanation for the compatibility of NAFA with ISH. Third, compared to the NA (Rompolas) protocol, NAFA's much briefer nitric acid treatment almost certainly results in less acid hydrolysis of RNA. Furthermore, the NAFA protocol includes EGTA to chelate calcium ions, as many RNase enzymes require these to digest RNA molecules . Of the three possibilities for the NAFA protocol's compatibility with ISH, we posit that preservation of RNA integrity is the most likely explanation. The benefits of the NAFA protocol are likely due to the unique approach of simultaneously performing crosslinking and carboxylic acid treatments. As we devised this method, we tested three carboxylic acids for their performance in ISH and chose formic acid, which is chemically the smallest and simplest carboxylic acid. Formic acid is the strongest of the three acids tested in this study. It is unknown whether other untested carboxylic acids would perform better in ISH in planarians. However, for aliphatic carboxylic acids such as the ones tested here, acid strength decreases as the carbon chain lengthens, so we expect other acids would be unlikely to produce the full benefits created by the formic acid treatment of the NAFA protocol. Furthermore, carboxylic acids with long aliphatic carbon chains have detergent-like properties, making them potentially unsuitable for fixing tissue samples. The NAFA protocol can be used for preparing whole-mount planarian samples for immunofluorescence, ISH, and tissue sections for histological stainings like AFOG.
Using a carboxylic acid like formic acid in the fixative also improved the ISH signal in the killifish tail fin, indicating how easily this protocol can be adapted to a wide variety of research organisms. Given the success of the NAFA protocol in traditional ISH protocols with long riboprobes, it is likely compatible with Hybridization Chain Reaction v3.0 (HCR), which uses multiple short RNA probes . Future studies will determine the compatibility of NAFA fixation with HCR. Because it preserves the integrity of the ciliated epidermis in planarians, this method may be useful for the study of other samples with multiciliated cells, such as the lung epithelium, oviduct, and inner ear. Future work will explore the applicability of the NAFA protocol in a diverse array of samples and research organisms. We describe a fixation protocol using nitric acid and formic acid (NAFA) which preserves fragile tissues such as the planarian regeneration blastema and epidermis. The NAFA protocol is compatible with a variety of downstream assays such as in situ hybridization, immunofluorescence, and histological stainings. The protocol was also easily adapted to probe for gene expression in the regenerating killifish tail fin. Thus, the method promises to be broadly applicable for a variety of tissues and research organisms.
Methods
Animal husbandry
Asexual Schmidtea mediterranea planarians were grown in 1 × Montjuic water in recirculating systems or static cultures in Tupperware boxes at 20 °C . When maintained in static cultures, 1 × Montjuic water was supplemented with gentamycin (50–100 µg/mL). Animals were fed with either beef liver chunks or puree, 1–3 times a week. Animals were starved for at least 1 week before use in experiments . The inbred strain GRZ of the African turquoise killifish Nothobranchius furzeri was grown at 26 °C, and caudal fin amputation was carried out as described previously . All vertebrate work was performed according to the protocols approved by the Stowers Institute for Medical Research Institutional Animal Care and Use Committee.
Riboprobe synthesis
Hapten-labeled antisense RNA probes were synthesized with a few modifications to the previously published protocol . Up to 1 μg of PCR-amplified DNA templates was used for T7-based in vitro transcription reactions to generate antisense RNA sequences. Probes were synthesized for either 2 h or overnight at 37 °C in a thermocycler using digoxigenin (DIG), fluorescein, or DNP-labeling mix. Template DNA was degraded by incubating the reaction with RNase-free DNase for 45 min at 37 °C. Riboprobes were precipitated at − 80 °C for 1 h in 0.5 volumes of 7.5 M ammonium acetate and 2 volumes of ice-cold ethanol. The RNA pellet was obtained by centrifugation at 14,000 rpm for 30 min at 4 °C, washed in 75% ethanol, and air dried before resuspending in 100 μL of deionized formamide. We generally used these riboprobes at 1:1000 dilution in ISH experiments.
NA (Rompolas), NAC, and NAFA fixation
Fixation with the NA (Rompolas) protocol was carried out as described before with the following modifications: fixation with relaxant solution was carried out for 16 h at RT. Animals were washed in PBS and post-fixed with 4% paraformaldehyde in PBS for 10 min. Samples were permeabilized in 1% IGEPAL CA-360 for 10 min and washed with PBSTx prior to carrying out ISH or immunostaining experiments. Animals were fixed using the NAC protocol as described previously . Briefly, animals were euthanized in 5% NAC for 5 min and fixed in 4% formaldehyde for 45 min.
Animals were dehydrated in methanol and stored at − 20 °C at least overnight and up to several months. When ready to use for the experiments, samples were rehydrated in PBSTx and bleached using formamide bleach for 2 h. Animals were permeabilized with proteinase K for 10 min and post-fixed with 4% formaldehyde for 10 min. Following two 10-min washes with PBSTx, samples were processed with either ISH or immunostaining procedures. In NAFA fixation, animals were euthanized in NA solution and fixed in FA solution for 45 min. Following fixation, animals were dehydrated in methanol and stored at − 20 °C until ready for use. Animals were rehydrated and bleached in formamide bleach for 2 h before continuing with either ISH or immunostaining. The detailed step-by-step protocol for NAFA fixation is provided in Additional files 11–15. All the recipes for solutions used in the protocol are described in Additional file 16. All chemicals used in the study are listed in Additional file 17: Supplementary Table 1.
ISH and immunostaining
Animals fixed with the three different methods were treated identically for ISH and immunostaining following previously published protocols . Fluorescently conjugated tyramides were synthesized from N-hydroxy-succinimidyl esters as previously described . The detailed step-by-step protocols for ISH and immunostaining are provided in Supplementary Files 1A-1E.
Histological sectioning and AFOG staining
WISH-stained animals were cryosectioned at 7 µm thickness as described previously . For Acid Fuchsin Orange G (AFOG) staining, fixed samples were embedded in paraffin and processed into 10-μm-thick sections. AFOG staining was carried out as previously described .
Imaging
Colorimetric WISH samples were imaged on a Leica M205 stereo microscope. Fluorescent images were taken on a Zeiss confocal microscope or a Nikon spinning disk microscope and processed in Fiji . For Figs. and , animals were mounted either dorsally or ventrally to capture surface ciliary patterns. H3P densities were determined from maximum intensity projections as described before . H3P intensity was determined by the brightness of each focus identified by Fiji's "Find maxima" function. Average piwi-1 intensity was calculated from maximum intensity projections.
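The Fiji-based H3P quantification just described can be approximated in Python; the following is a minimal sketch using scikit-image, with the file name, detection parameters, and pixel size chosen as assumptions for illustration, not the study's actual settings.

```python
"""Minimal sketch of the H3P quantification described above, re-expressed
with scikit-image instead of Fiji. File name, thresholds, and pixel size
are hypothetical."""
from skimage import io
from skimage.feature import peak_local_max

stack = io.imread("h3p_confocal_stack.tif")    # (z, y, x) confocal stack
projection = stack.max(axis=0)                 # maximum intensity projection

# Detect H3P+ foci, analogous to Fiji's "Find maxima" function.
foci = peak_local_max(projection, min_distance=5, threshold_abs=200)

um_per_px = 0.5                                # assumed pixel size in microns
area_mm2 = projection.shape[0] * projection.shape[1] * (um_per_px / 1000) ** 2
density = len(foci) / area_mm2                 # H3P+ nuclei per mm^2

# Focus brightness, cf. "H3P intensity ... brightness of each focus".
mean_focus_intensity = projection[foci[:, 0], foci[:, 1]].mean()
print(len(foci), round(density, 1), round(float(mean_focus_intensity), 1))
```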
Additional file 1: Supplementary Fig. S1. Epidermal integrity is preserved with NAFA protocol. Chromogenic in situ of zpuf-6 (epidermal progenitor). Transverse histology section taken anterior to the pharynx. (A) Samples fixed with the NAC protocol. Black arrows indicate damage to the epidermal layer. (B) Samples fixed with the NAFA protocol. No disruptions to the epidermis are visible.
Brightfield images were taken with a stereomicroscope. Scale bars: 100 μm.
Additional file 2: Supplementary Fig. S2. Different carboxylic acids tested in optimization of a new in situ protocol. Chromogenic WISH of piwi-1 , zpuf-6 , pc2 , and porcupine . (A) NAC, (B) formic acid (4.8%), (C) acetic acid (4.9%) and (D) lactic acid (4.2%). Brightfield images were taken with a stereomicroscope. Scale bars: 100 μm.
Additional file 3: Supplementary Fig. S3. FISH signal intensities are comparable between NAFA and NAC protocols. (A) Mean intensity of piwi-1 FISH signal was calculated from the max projections represented in Fig. . Plot shows box and whisker plot for three animals per condition. P-values were calculated with Student's t-test.
Additional file 4: Supplementary Fig. S4. NAFA protocol yields brighter H3P signal without changing the density of dividing cells. (A) Comparison of H3P+ nuclei per square millimeter. (B) Comparison of mean fluorescence intensity of H3P+ nuclei from each animal. Box and whisker plots show median values and interquartile ranges. N = 3 animals per condition. P-values for (A) and (B) were calculated with Student's t-test. (C) Representative images of H3P stainings. Top row: all images shown with same brightness/contrast settings, optimized to NAFA. Bottom row: each image shown with custom settings. Images are max projections of confocal stacks, scale bars: 50 μm.
Additional file 5: Supplementary Fig. S5. Consistent labeling of muscle fibers by the NAFA protocol. (A) Max projections of confocal image stacks of animals immunostained with the muscle antibody Smed-6G10. Images are arranged with anterior to the left for all animals. For rows 1–2 the dorsal surface is visible, while for rows 3–6 the ventral surface and mouth are visible. All six animals processed for each condition are shown, scale bars: 200 μm.
Additional file 6: Supplementary Fig. S6. Immunostaining of internal structures by the NAFA protocol. Maximum intensity projections of confocal image substacks to specifically visualize external and internal structures. (A) 40 × magnification image of the body wall musculature stained by Smed-6G10. Scale bars: 50 μm. (B) Upper: Maximum intensity projection of 3 z-stacks of whole-mount immunostaining showing gut musculature. Scale bars: 200 μm. Lower: Maximum intensity projection of sub-stacks of 40 × magnification image of gut musculature in tail stripe region, posterior to pharynx. Scale bars: 50 μm. (C) Maximum intensity projection of top 15 microns of 40 × magnification immunofluorescence images of the ventral ciliated epidermis. Upper: anti-acetylated tubulin (gray), lower: merge with DAPI (blue). (D) Similar to (C), for different substacks to highlight protonephridia staining by anti-acetylated tubulin. All scale bars for (C) and (D) are 50 μm.
Additional file 7: Supplementary Fig. S7. Densities of mitotic cells are comparable between NAFA and NAC protocols. Numbers of H3P+ nuclei were counted and divided by the area of the worm to obtain density. These numbers are from the max projection images represented in Fig. . Each dot represents an animal. P-values were calculated with Student's t-test.
Additional file 8: Supplementary Fig. S8. NAFA protocol maintains epidermal and blastema integrity during regeneration. (A) DAPI staining. (B) FISH of zpuf-6 and DAPI staining. (C) FISH of zpuf-6 and immunostaining of anti-acetylated tubulin (cilia). (B and C) White arrows show the affected epidermal layer and red arrows at the blastema show epidermal integrity during regeneration. Maximum intensity projection of confocal images (40X). Ant: anterior and Post: posterior. Regenerating trunk fragments. Scale bars are 50 μm.
Additional file 9: Supplementary Fig. S9. NAFA protocol is compatible with histological staining and maintains epidermal integrity during regeneration. (A) 8 hpa longitudinal sections stained with AFOG. Magnified images of the area around the wound marked by yellow dotted box are shown. (B) 4 dpa longitudinal sections stained with AFOG. Areas of the zoomed in images are highlighted by dashed boxes. Red arrows mark the epidermis. Brightfield images were taken with a compound microscope. Scale bars for whole-mounts are 100 μm, 50 μm for insets.
Additional file 10: Supplementary Fig. S10. Expression of wnt10a at the blastema is better observed when the tissue is fixed in the presence of formic acid. (A) Chromogenic in situ hybridization for the injury-responsive gene wnt10a. 3 dpa tail fins fixed with or without formic acid and probed for wnt10a. Amputation site is indicated by red dashed line. Brightfield images were taken with a stereomicroscope. Scale bars are 500 μm.
Additional file 11: Detailed step-by-step protocol describing colorimetric WISH with NAFA fixation.
Additional file 12: Detailed step-by-step protocol describing FISH using the NAFA fixation.
Additional file 13: Detailed step-by-step protocol describing FISH and immunostaining protocol with NAFA fixation.
Additional file 14: Detailed step-by-step protocol describing whole-mount immunofluorescence staining using the NAFA fixation.
Additional file 15: Detailed step-by-step protocol for colorimetric WISH on killifish fins using the NAFA protocol.
Additional file 16: Details of solutions used for fixation, WISH, and immunofluorescence.
Additional file 17: Supplementary Table 1 – Vendor information and catalog numbers for all the reagents required to carry out NAFA fixation, WISH, and immunofluorescence. |
Global trends in tumor-associated neutrophil research: a bibliometric and visual analysis | 26fd20b7-a4bf-4868-aa35-8f1e9f615d9b | 11949894 | Neoplasms[mh] |
Introduction
Neutrophils, originating from Granulocyte-Monocyte Progenitor (GMP) cells in the bone marrow ( , ), play a key role in the innate immune system of the human body: they are involved not only in defending against infections, regulating inflammatory responses, and repairing tissue ( – ), but also in tumorigenesis and tumor progression, a role that the scientific community has increasingly recognized in recent years. In the complex and dynamically changing environment of the tumor microenvironment (TME), neutrophils are recruited and transformed into tumor-associated neutrophils (TANs) with varying degrees of maturity and tissue affinity in response to a variety of chemokines secreted by tumor cells ( , ), such as C-X-C motif chemokines [e.g., interleukin-8 (IL-8)], and exhibit significant heterogeneity ( ). TANs have a diversity and plasticity that enables them to exhibit anti-tumor or pro-tumor properties depending on the signals in the TME and are accordingly classified into anti-tumor N1 TANs and protumorigenic N2 TANs ( ). However, no specific surface markers have been identified to differentiate between these two subtypes, and thus TANs are mainly defined by their functional phenotype ( ). TANs exert their anticancer effects through different mechanisms, including antibody-dependent cell-mediated cytotoxicity (ADCC), direct cytotoxicity through the release of cytotoxic mediators such as ROS and myeloperoxidase (MPO), and the activation of adaptive anti-tumor immune responses ( – ); some studies have shown that, in comparison to N2 TANs, anti-tumor N1 TANs produce higher levels of tumor necrosis factor-α (TNF-α), MIP-1a, H2O2, and NO, and are cytotoxic to tumor cells in vitro and in vivo ( , ); TANs and neutrophil extracellular traps (NETs) also interact to promote immune evasion in a PD-L1/PD-1-dependent manner, a phenomenon widely recognized in pancreatic cancer ( , ). The formation of TANs is a finely regulated process involving multiple cytokines, signaling pathways, and microenvironmental factors. In terms of phenotypic polarization, the formation of N1-type TANs may be associated with exposure to cytokines such as interferon (IFN) and TNF-α ( , ), which promote the anti-tumor immune response. In contrast, the formation of N2-type TANs is associated with factors such as transforming growth factor-β (TGF-β) ( ), which tend to promote angiogenesis and immune escape, thereby favoring tumor progression. In the TME, neutrophils may also be affected by other cytokines, direct contact with tumor cells, and interactions with other immune cells, which together result in a shift in neutrophil phenotype ( – ). Specific signaling pathways, such as phosphatidylinositol 3-kinase (PI3K)/Akt, MAPK, and NF-κB ( ), may play a key role in this process. Meanwhile, microenvironmental factors such as hypoxia, acidosis, and nutrient supply may also affect neutrophil phenotype and function by influencing neutrophil metabolism and signaling ( ). The double-edged nature of TANs has important implications for tumor progression and patient prognosis. Although early studies of TANs focused on anticancer effects, a growing body of research suggests that TANs often exhibit a pattern in the TME that is more similar to the N2-type tumor-promoting phenotype.
The fact that tumor cells themselves mediate neutrophil recruitment to the site of tumorigenesis by secreting CXC chemokines also strongly suggests that TANs are not an effective means of host antitumor activity. Both neutrophil depletion and inhibition of neutrophil accumulation at tumor sites have been shown to prevent the formation of tumor-trophic vasculature ( – ) and to inhibit tumor growth. Meanwhile, some clinical studies have shown that the presence of neutrophils leads to a poor prognosis. For example, elevated levels of polymorphonuclear neutrophils (PMNs) in the bronchioloalveolar space of patients with bronchioloalveolar carcinoma were significantly associated with poor prognosis ( ). These findings illustrate that although TANs have the potential to fight tumors by activating the immune system, they are more likely to be manipulated by tumor cells to promote tumor growth and spread. Evaluating their role as potential therapeutic targets for tumors requires a deeper exploration of the dual role of TANs in tumor immunity. How to reduce the generation of N2-type TANs while increasing the expression of the N1-type may be a hot research direction in the future. The plasticity and heterogeneity of TANs allow them to promote or inhibit tumor growth and progression through a variety of complex pathways; therefore, quantitative evaluation and analysis of the current status of research, focus areas, and development trends of TANs are essential for understanding their role in tumor development. Bibliometrics is a cross-cutting science that quantitatively analyses all knowledge carriers using mathematics and statistics ( ). It combines mathematics, statistics, and bibliography into a comprehensive body of knowledge that focuses on quantification and allows for the evaluation of systematic criteria in the field of medical research ( ). Bibliometrics can provide researchers with a comprehensive and objective perspective. Not only can it identify the history and future trends of a specific field, but it can also systematically assess the research progress of different countries, institutions, and researchers ( , ). The aim of this study was to explore past research on tumor-associated neutrophils through bibliometrics, to provide new perspectives and directions for future research on TANs, and to identify the next research hotspots in the field.
Methods
2.1 Search strategies
Data were extracted from the Web of Science Core Collection (WOSCC) database, one of the most widely used sources for academic and bibliometric analyses ( ). The search formula was TS= (“tumor associated Neutrophil*”) OR (“tumor-associated Neutrophil*”) OR (“tumor associated Neutrophil*”) OR (“tumor-associated Neutrophil*”) OR (“ cancer associated Neutrophil*”) OR (“cancer-associated Neutrophil*”). The publication period was set to 2000–2024 and the language was limited to English. The publication type was limited to article and review article, and a total of 615 articles were retrieved. All retrieved documents were exported as full records with cited references and stored as plain-text files. The search was completed on March 21, 2024, to ensure the accuracy of the data and to prevent bias due to database updates; all data were retrieved and collected on that date.
2.2 Data collection
Raw data were extracted from selected publications, including Abstract, Author(s), Title, Source, Times Cited Count, Accession Number, Authors Identifiers, ISSN/ISBN, PubMed ID, Conf. Info/Sponsors, Addresses, Affiliations, Document Type, Keywords, WOS Categories, Research Areas, WOS Editions (print only), Cited References, Cited Reference Count, Hot Paper, Highly Cited, Usage Count, Funding Information, Publisher Information, Open Access, Page Count, Source Abbrev. Number, Language, Publication year, and References. Researchers' h-indices, Journal Impact Factors (IF), and Journal Citation Report (JCR) divisions were obtained from Web of Science. The productivity of a paper was measured by its number of citations. Duplicate articles were merged into one element and misspelled words were corrected manually. Cleaned data were exported for further analysis.
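To make these screening and cleaning steps concrete, the sketch below parses a WOS plain-text export and applies the criteria of Sections 2.1–2.2. It is a minimal illustration rather than the pipeline actually used in this study: the file name savedrecs.txt is hypothetical, and the parser assumes the standard WOS tagged format (two-letter field codes, indented continuation lines, records terminated by an ER line).

```python
"""Minimal sketch of screening a WOSCC plain-text export (Sections 2.1-2.2).
Assumptions, not taken from the paper: file name and tagged-format layout."""

def parse_wos_plaintext(path):
    records, current, tag = [], {}, None
    with open(path, encoding="utf-8-sig") as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if line.startswith("ER"):            # end of one record
                if current:
                    records.append(current)
                current, tag = {}, None
            elif line[:2].strip():               # a new field tag (PT, TI, PY, ...)
                tag = line[:2]
                current[tag] = line[3:].strip()
            elif tag and line.strip():           # indented continuation line
                current[tag] += "; " + line.strip()
    return records

def keep(rec):
    """Screening criteria: 2000-2024, English, article or review."""
    year_ok = rec.get("PY", "").isdigit() and 2000 <= int(rec["PY"]) <= 2024
    lang_ok = rec.get("LA", "").lower() == "english"
    type_ok = rec.get("DT", "").lower() in {"article", "review"}
    return year_ok and lang_ok and type_ok

records = parse_wos_plaintext("savedrecs.txt")
screened = [r for r in records if keep(r)]
# Merge duplicates by normalized title, mimicking the manual cleaning step.
unique = list({r.get("TI", "").lower(): r for r in screened}.values())
print(f"{len(unique)} records retained after screening")
```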
2.3 Bibliometric analysis and visualization
Bibliometric analysis and visualization are important tools for revealing trends and knowledge structures in research areas. By using tools such as R software, VOSviewer, and CiteSpace, we can analyze large amounts of bibliometric data in depth to derive valuable insights. We used R software to perform Lotka's Law analysis to explore the distribution of authors' publication frequencies ( ). Lotka's Law reveals the relationship between a small number of highly prolific authors and a large number of less prolific authors in scientific fields ( ). The statistical analysis function of the R software allows us to test the applicability of this law in the study of TANs and to identify high-productivity authors and key literature, which is important for understanding the knowledge production and dissemination patterns of TANs. By using the VOSviewer tool, we constructed a scientometric network graph that presents elements of the literature data such as keywords, authors, institutions, or journals as nodes in the network graph ( ). The size of a node is determined by its frequency of occurrence in the literature or the number of publications, while the connections between nodes reflect their relevance ( ). By looking at the clustering of the nodes, we can identify closely related research topics or areas, while the thickness of the connections shows the strength of collaboration or citation between these nodes. In addition, Total link strength (TLS) represents the number of co-occurrences, which to some extent can reflect the collaborative exchange relationships between countries, organizations, and authors. By calculating and comparing the Total link strength, we can identify the key nodes that have an important position in the research field of TANs; these key nodes may be the leaders, core institutions, or influential journals in the field. Finally, with CiteSpace software, we can visualize and analyze the literature citation relationships in the research field ( ). CiteSpace can help us track the evolution of research hotspots and identify key nodes and citation bursts in the research field. By analyzing the citation network and co-occurring keywords, we can reveal the development trend of TANs-related research and potential research opportunities.
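As a minimal illustration of how VOSviewer-style link strengths are derived, the sketch below counts country co-occurrences per paper and sums them into a total link strength per node; the country lists are invented for demonstration.

```python
"""Sketch of VOSviewer-style link strength and total link strength (TLS).
`papers` holds one set of countries per publication; values are invented."""
from collections import Counter
from itertools import combinations

papers = [
    {"China", "USA"},
    {"China", "Germany", "USA"},
    {"Israel"},
    {"Germany", "Israel"},
    {"China", "USA"},
]

link_strength = Counter()
for countries in papers:
    for a, b in combinations(sorted(countries), 2):
        link_strength[(a, b)] += 1        # one co-occurrence per paper

# TLS of a node = sum of the weights of all links attached to it.
tls = Counter()
for (a, b), weight in link_strength.items():
    tls[a] += weight
    tls[b] += weight

for country, strength in tls.most_common():
    print(country, strength)
```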
Results
Between 2000 and 2024, a total of 691 articles were published in the field. Based on the exclusion criteria, we finally included 615 eligible original articles in our study. The specific flow chart is shown in .
3.1 Trends in literature publishing output
The number of papers published in each period reflects the research trends in the field. As shown in , research on TANs from 2007 to 2024 shows a steady overall increase, although the number of articles published fluctuated in some periods. From 2009, when the N1/N2 functional classification of TANs was formally proposed by Fridlender ZG et al. ( ), to 2015, the output of TANs-related literature was extremely low, with fewer than 20 articles per year, suggesting that research remained stagnant ( ). From 2015 to 2024, the number of publications increased exponentially, with 548 articles published on TANs, representing 89.1% of the output of the last two decades. This surge indicates that TANs research has entered a period of rapid development in recent years, which may be related to the fact that TME-related studies have become hotspots and neutrophils have gradually gained importance in tumors. We collected 615 relevant studies in the field of TANs research between January 2000 and 2024 from the Web of Science database. Collectively, the global citation score (GCS) was 37,374, with an average of 60.77 citations per item; annual citations peaked at 8,716 in 2022, which may mark a breakthrough in this field of research. After 2015, this research entered a rapid development stage, and the number of annual publications gradually increased.
3.2 Distribution of countries/regions
Of the 51 countries/regions involved in TANs research, shows the 10 with the highest number of publications and the corresponding citation frequency and centrality. Among them, China published the most papers (N=193), followed by the USA (N=164) and Germany (N=60), and the highest citation frequency belonged to the USA (N=15679), followed by China (N=7463) and Germany (N=4297). The average numbers of citations of Israel, England, and the USA topped the list; although China published the most papers, its average number of citations was lower than that of most countries/regions. shows that research is mainly concentrated in the Northern Hemisphere, and it is worth noting that the links between countries/regions run mainly between North America and East Asia, while Oceania is also relatively strongly linked to North America and East Asia. The total link strength of a country/region measures the importance of its position in the network. In summary, despite the large number of papers published in this area, the USA maintains its dominant position in research; and although Israel published few articles, they are on the whole of a high academic standard. illustrates the distribution of corresponding authors' countries based on the number of documents, distinguishing between Single Country Publications (SCP) and Multiple Country Publications (MCP). China, the USA, and Germany are the leading countries in terms of total publications, with China having a higher number of Single Country Publications (SCP), while the USA shows a notable presence of Multiple Country Publications (MCP). Other countries like Italy, Japan, and Canada also contribute significantly to the research output. This chart highlights the global collaboration in the field, with a considerable portion of publications involving multiple countries.
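A minimal sketch of how the SCP/MCP split described above can be derived from per-paper country data follows; the record structure and values are hypothetical, not the study's actual pipeline.

```python
"""Sketch of the SCP/MCP split: each record is reduced to the corresponding
author's country plus the set of all author countries (invented values)."""
from collections import Counter

papers = [
    {"corr": "China", "countries": {"China"}},
    {"corr": "USA", "countries": {"USA", "Germany"}},
    {"corr": "China", "countries": {"China", "USA"}},
    {"corr": "Italy", "countries": {"Italy"}},
]

scp, mcp = Counter(), Counter()
for paper in papers:
    if len(paper["countries"]) == 1:
        scp[paper["corr"]] += 1           # single country publication
    else:
        mcp[paper["corr"]] += 1           # multiple country publication

for country in sorted(set(scp) | set(mcp)):
    print(f"{country}: SCP={scp[country]}, MCP={mcp[country]}")
```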
3.3 Distribution of institutions
lists the top 10 institutions in terms of number of publications, frequency of citations, and corresponding centrality. The top three institutions with the highest number of publications are Fudan Univ (N=22), Univ Penn (N=15), and Univ Duisburg Essen (N=14). Among the top ten most productive institutions, 50% belonged to China, followed by two in Italy and one each in the USA, Germany, and Israel. The three institutions with the highest citation frequency are Univ Penn (N=4860), Stanford Univ (N=2096), and Harvard Univ (N=1929). It is worth noting that Brigham & Womens Hosp, Harvard Med Sch, Fudan Univ, and Univ Naples Federico-II showed a high total link strength, indicating that these institutions occupy a more important position in TANs research and may be key nodes in the field. Taken together, Univ Penn has a much higher citation frequency, a high number of publications, and a relatively high total link strength, which means that its research work on TANs has high visibility and influence in the academic community. Research institutions were analyzed to understand the global distribution of research related to TANs and to provide opportunities for collaboration. In VOSviewer, institutional collaborations are categorized into 11 closely related blocks ( ). shows the ratio obtained by dividing the number of TANs-related papers published by each institution in the last five years by the total number of papers published by that institution from 2007 to 2024, i.e., each institution's share of output in the last five years. A color biased toward red means a high ratio, indicating that these institutions are emerging forces in the field; a color biased toward blue means a low ratio, indicating that these institutions have done relatively little research on TANs in recent years. The results show that the number of studies conducted by institutions such as Univ Penn, Humanitas Univ, and Shanghai Jiao Tong Univ has increased significantly over the past five years. In contrast, institutions such as Fudan Univ, Univ Duisburg Essen, and Hebrew Univ Jerusalem conducted relatively few studies in the past five years. Ranking the research institutions by Total Link Strength, also shows the top ten institutions with the highest Total Link Strength. Compared to the top ten most prolific institutions, the USA shows a significant increase in representation, with a total of five institutions on the list; three institutions from Italy are also ranked, and one institution from China made the list.
3.4 Distribution of authors
A total of 3763 authors were involved in the study of tumor-associated neutrophils. Scientific productivity based on Lotka's law showed that 86% of the authors published only one paper ( ) ( ). The author with the most publications was Jablonska, Jadwiga (University of Duisburg Essen) (N=18), followed by Fridlender, Zvi G. (Univ Jerusalem) (N=16), Galdiero, Maria Rosaria (University of Naples Federico II) (N=11) and Granot, Zvi (Hebrew University of Jerusalem) (N=10).
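As an illustration of the Lotka's-law analysis mentioned above, the sketch below fits the exponent of the productivity distribution by least squares on a log-log scale. The counts are invented, but chosen to be consistent with the totals reported here (3763 authors, 86% with a single paper).

```python
"""Sketch of a Lotka's-law fit for author productivity (invented counts)."""
import numpy as np

# productivity x -> number of authors with exactly x papers
author_counts = {1: 3236, 2: 340, 3: 110, 4: 40, 5: 17,
                 6: 9, 7: 5, 8: 3, 10: 2, 11: 1}

x = np.log10(sorted(author_counts))
y = np.log10([author_counts[k] for k in sorted(author_counts)])

# Lotka's law: y(x) = C / x**n, i.e. log y = log C - n * log x
slope, intercept = np.polyfit(x, y, 1)
n, C = -slope, 10 ** intercept

total = sum(author_counts.values())
print(f"{total} authors, {author_counts[1] / total:.0%} with one paper")
print(f"fitted exponent n = {n:.2f}, constant C = {C:.0f}")
```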
VOSviewer shows collaboration between authors of literature related to TANs ( ), which provides the opportunity for researchers to find research partners in their own research field and to identify research partners and industry authorities in the field. Granot, Zvi and Marone, Gianni are the central figures of this collaborative network. As we can see from the , Granot, Zvi is associated with Fridlender, Zvi G. and Jablonska, Jadwiga, and Marone, Gianni is actively collaborating with Mantovani, Alberto; other than that, the clusters are relatively independent of each other and not closely connected. This relatively decentralized collaborative network may indicate that authors in the field of TANs have collaborated within stable groups for many years, that there is little cross-national and cross-institutional co-research, or it may signal that the field of TANs has not yet gained widespread research attention. Co-cited author analysis refers to two authors whose literature is simultaneously cited by a third author ( ). A higher co-citation frequency indicates a higher degree of consistency between these authors in terms of academic interest and depth of research. By analyzing the authors with the highest number of publications and co-citation frequency, the research strength of the authors and the research hotspots related to TANs can be visualized. gives the top 10 authors in terms of the number of publications, citations, and co-citation frequency, respectively. The most cited author is FRIDLENDER ZG (Univ Jerusalem) (N=4807), followed by ALBELDA SM (University of Pennsylvania) (N=3637), and the author with the highest co-citation frequency is Fridlender, Zvi G. (Univ Jerusalem) (N=540), followed by Mantovani, A (Humanitas University) (N=289). It is noteworthy that Fridlender, Zvi G. has a high impact in this field both in terms of citations and co-citations. The H-index, G-index, and M-index are measures of the academic impact of a researcher, an academic journal, or an institution. An author or country/region has an H-index of H if H of its papers have each been cited at least H times. The key feature of the H-index is that it takes into account both the number of papers and the number of citations, which can reflect a scholar's academic influence more comprehensively ( ). The G-index helps to identify the highly cited papers of scholars, and thus reflects a scholar's academic achievements more accurately ( ). The M-index is mainly used to compare the academic influence of different scholars in the same field, especially when the distribution of citation counts is similar ( ). Combining the three indices, FRIDLENDER ZG and JABLONSKA J are two scholars with high academic influence in the field of TANs; the two scholars are from Israel and Germany, respectively. Notably, three of the scholars with H-indices in the top ten are from Israel, three from Italy, two from Germany, and one each from China and the United States.
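The three indices discussed above can be computed directly from a list of per-paper citation counts; the sketch below uses a hypothetical author with ten papers and a 12-year career, not data from this study.

```python
"""Sketch of the h-, g-, and m-index definitions discussed above."""

def h_index(citations):
    # largest h such that h papers each have at least h citations
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    # largest g such that the top g papers together have >= g**2 citations
    ranked = sorted(citations, reverse=True)
    running, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        running += c
        if running >= rank * rank:
            g = rank
    return g

def m_index(citations, career_years):
    # h-index divided by the number of years since the first publication
    return h_index(citations) / career_years

cites = [120, 84, 60, 33, 20, 15, 9, 6, 3, 1]   # hypothetical author
print(h_index(cites))                            # 7
print(g_index(cites))                            # 10
print(round(m_index(cites, 12), 2))              # 0.58
```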
3.5 Journal publication analysis
Journals ranked in the top 25% (inclusive) of the impact factor are located in JCR quartile 1 (Q1), and journals ranked in the top 25%-50% (inclusive) of the impact factor are located in JCR quartile 2 (Q2). lists the top 10 journals in terms of number of articles and their corresponding IF (JCR2023). The journal with the highest number of publications was Frontiers in Immunology (5.7, Q1) ( ), followed by CANCERS (4.5, Q1) ( ), INTERNATIONAL JOURNAL OF MOLECULAR SCIENCES (4.9, Q1) ( ) and FRONTIERS IN ONCOLOGY (3.5, Q2) ( ). There are 12 journals in the top ten in terms of publications, seven journals distributed in JCR Q1, and only five journals with an IF of 5 or more. Among these journals, the most frequently cited are Frontiers in Immunology, Cancers, and International Journal of Molecular Sciences. SEMINARS IN IMMUNOLOGY had the highest impact factor (IF=7.4), followed by ONCOIMMUNOLOGY (IF=6.5). It is worth noting that ONCOTARGET, despite having a large number of publications, has not been indexed by SCI since 2018, so the papers retrieved from this journal all predate 2018; this may mean that the journal failed to meet the appropriate academic standards or quality requirements, leading to a decline in academic recognition of the research results published in it. Most of the top 10 co-cited journals in the 2023 Journal Citation Report (JCR) are located in the Q1 region, with the exception of Journal of Immunology. The impact of academic journals depends on the number of times they are co-cited, which indicates their importance in a particular research area ( ). The journals with the highest co-citation frequency were Cancer Res (12.5, Q1) (2182) and Journal of Immunology (3.6, Q2) (1888). Nine of the top 10 journals in terms of co-citation frequency are distributed in JCR Q1, and seven journals have an IF of more than 10. The visualization map generated by VOSviewer shows the various types of journals involved in research in the field of TANs and their interconnections with each other. These journals are grouped into different clusters based on the similarities between them ( ), and are generally divided into 4 categories: the blue cluster focuses on research in autoimmunity and cell biology (Journal of Leukocyte Biology, Immunobiology, Cells, etc.); the green cluster focuses on research in immunity and cancer (Frontiers in Immunology, Cancers, International Journal of Molec, etc.); the red cluster focuses on oncology (Frontiers in Oncology, Oncoimmunology, etc.); and the yellow cluster focuses on clinical research and treatment and molecular-biology-related fields (Febs Journal, Cancer and Metastasis Reviews, etc.). Based on the co-citation frequency, these journals were categorized into four groups, which tended to have similar research directions ( ). The red cluster focuses on cancer-related fields (Cancer Research, Clinical Cancer Research, etc.); the green cluster focuses on immunology (Frontiers in Immunology, Journal of Clinical Investigation, etc.); the blue cluster is mainly in biochemistry and molecular biology (Nature, Cell, Science, etc.); and the yellow cluster is mainly in the field of translational medicine (Nature Communications, Science Translational Medicine, etc.). presents the annual heat map of journals for the past decade. The data can be roughly divided into three modules. In 2015-2016, the most highly cited journals were concentrated in the following outlets: Seminars in Immunology (IF=7.4), Cancer Cell (IF=48.8), and International Journal of Cancer (IF=5.7). All three are JCR Q1 journals with high impact factors. In the 2017-2018 period, the focus was on the JOURNAL OF LEUKOCYTE BIOLOGY (IF=3.6), PLOS ONE (IF=2.9), and SCIENTIFIC REPORTS (IF=3.8), which have relatively low impact factors. Subsequently, in the period following 2019, the focus shifted back to high-impact-factor journals such as Nature Communications (IF=14.7), Cancers (IF=4.5) and Cells (IF=5.1).
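As a minimal illustration of how journal co-citation counts such as those above are obtained, the sketch below extracts cited journals from the WOS CR field, whose entries typically follow the pattern "AUTHOR, YEAR, JOURNAL, Vxx, Pyy, DOI", and counts journal pairs cited by the same article. It reuses the hypothetical `records` list from the parser sketch in the Methods section.

```python
"""Sketch of journal co-citation counting from the WOS CR field; the CR
element order (journal as the third comma-separated part) is an assumption
about the export format, not the study's documented pipeline."""
from collections import Counter
from itertools import combinations

def cited_journals(record):
    journals = set()
    for ref in record.get("CR", "").split(";"):
        parts = [p.strip() for p in ref.split(",")]
        if len(parts) >= 3 and parts[2]:
            journals.add(parts[2].upper())
    return journals

cocitation = Counter()
for rec in records:
    for a, b in combinations(sorted(cited_journals(rec)), 2):
        cocitation[(a, b)] += 1          # the pair is co-cited by one more paper

for pair, count in cocitation.most_common(10):
    print(pair, count)
```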
We used knowledge-flow analysis to explore the evolution of citations and co-citations among journals ( ). The journal dual-map overlays show the thematic distribution of scholarly journals, changes in citation trajectories and shifts in research centers, with the labels on the left representing the citing journals and the labels on the right the cited journals ( , ). The colored curves pointing from the citing map to the cited map show the citation connections. Citing journals are mainly from fields regarded as research frontiers, such as MOLECULAR, BIOLOGY, and IMMUNOLOGY, whereas cited journals are mainly from fields regarded as knowledge bases. It is worth noting that the citing and cited journals belong to the same label, which suggests that research on TANs is still concentrated in certain areas and has not expanded to others.
3.6 Keyword analysis
Keywords play a crucial role in academic papers, as they concisely summarize the core topic, objectives, target audience, and methodology of the research. A systematic analysis of keywords can reveal the trends and evolution of research in a particular academic field, as well as the focus of research at a given time ( ). Keywords are not only a quick way to grasp the main idea of a paper but also an important indicator of the concerns and research hotspots in an academic field ( ). shows the top 20 keywords in order of frequency. The most frequent keyword is "neutrophil" (179), followed by "tumor-associated neutrophils" ( ). In addition, "tumor microenvironment" ( ) and "tumor" ( ) occur frequently, indicating that their corresponding fields are popular in TANs-related research. also shows the specific diseases appearing in the TANs research field. Breast cancer and colorectal cancer each appeared more than 20 times; breast cancer had both the highest number of occurrences and the highest total link strength, and it is worth noting that lung cancer, which appeared fewer than 20 times, had a relatively high total link strength. We used VOSviewer to extract 76 keywords from the titles and abstracts of the 615 articles and, setting the minimum number of citations to 4, retained a total of 60 keywords. We drew a visualization map of these 60 keywords ( ) and found that they fall roughly into 7 major categories, representing 7 different research directions in the field of TANs: the red cluster relates to inflammatory response and tumor development, the green cluster to tumor immunity, the purple cluster mainly to immune regulation, the blue cluster to immune-suppressing mechanisms in the tumor microenvironment, the cyan cluster to tumor immunotherapy, the yellow cluster to the role of TANs in tumors, and the orange cluster to tumor metastasis, angiogenesis and metabolic regulation mechanisms.
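The occurrence counts and "total link strength" values used above follow VOSviewer's co-occurrence logic: a keyword's total link strength is the sum of its co-occurrence counts with all other keywords. The following minimal Python sketch, with hypothetical keyword lists, shows how such statistics can be derived.

```python
# Minimal sketch of VOSviewer-style keyword statistics: occurrences,
# pairwise co-occurrence links, and total link strength per keyword.
# The keyword lists below are hypothetical.
from itertools import combinations
from collections import Counter

papers = [
    {"neutrophil", "tumor microenvironment", "breast cancer"},
    {"neutrophil", "tumor microenvironment", "immunotherapy"},
    {"neutrophil", "breast cancer"},
]

occurrences = Counter(kw for kws in papers for kw in kws)

links = Counter()
for kws in papers:
    for a, b in combinations(sorted(kws), 2):
        links[(a, b)] += 1  # one co-occurrence per paper per pair

total_link_strength = Counter()
for (a, b), n in links.items():
    total_link_strength[a] += n
    total_link_strength[b] += n

print(occurrences["neutrophil"])          # 3 occurrences
print(total_link_strength["neutrophil"])  # 5: 2+2+1 co-occurrences
```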
 shows the impact analysis of keywords, represented by the average number of normalized citations for articles containing a given keyword. A redder keyword is a hotter one, meaning that articles with this keyword have a high average number of citations; a bluer color indicates the opposite. This analysis shows that pancreatic cancer, tumor, chemokine, angiogenesis, immune evasion, and neutrophil-to-lymphocyte ratio are among the most popular keywords. Notably, these keywords are all related in some way to tumor immune escape, which implies that more and more studies tend to explore the tumor-promoting functions of TANs. Inflammation, cxcl1, nets, immunosuppression, neutrophil, prognosis, T cell therapy, and glioma follow in research popularity; these keywords reflect the attention paid to inflammatory responses, immune regulation, tumor biology, and related therapeutic approaches in TANs research, revealing hotspots in pathophysiological mechanisms, disease progression, and therapeutic strategies. shows the annual popularity (annual citations/total annual citations) of the keywords from 2007 to 2024. The annual popularity of the keywords HCC, MYELOID-DERIVED SUPPRESSOR CELLS, and IMMUNOMODULATION has been relatively low in recent years. In contrast, the annual popularity of keywords such as NETS, COLORECTAL CANCER, IMMUNOTHERAPY, and HETEROGENEITY has been relatively high, indicating that these keywords represent emerging frontiers. shows the keyword popularity correlation, where keywords with high popularity in similar time periods are grouped into clusters marked with different colors. The results show nine clusters: the pink cluster (NETS, TUMOUR MICROENVIRO, TUMOR IMMUNOLOGY, CANCER, CD66B, CXCR2, NON-SMALL CELL LUN), the purple cluster (IMMUNOMODULATION, INFLAMMATION, CANCER METASTASIS), the orange cluster (MELANOMA, IMMUNOSUPPRESSION, POLARIZATION), the blue cluster (NEUTROPHIL, IMMUNE MICROENVIRO, PD-1), the cyan cluster (MACROPHAGES, PANCREATIC CANCER, TUMOR MICROENVIRON, MYELOID CELLS, THERAPY, HYPOXIA, T CELLS, INNATE IMMUNITY, MYELOID-DERIVED SU, TUMOR), the green cluster (INNATE IMMUNITY, MYELOID-DERIVED SU, TUMOR, COLORECTAL CANCER, GASTRIC CANCER, PD-L1, ADAPTIVE IMMUNITY, CHEMOKINE, CYTOKINES, ANGIOGENESIS, TUMOR-ASSOCIATED M), the dark blue cluster (ANGIOGENESIS, TUMOR-ASSOCIATED M, APOPTOSIS, SURVIVAL, BREAST CANCER, CANCER IMMUNOTHERA, G-CSF), the yellow cluster (TANS, HCC, PROGNOSIS), and the red cluster (NEUTROPHIL POLARIZ, CHRONIC INFLAMMATI, GRANULOPOIESIS, NETOSIS, IMMUNOTHERAPY, HETEROGENEITY, IMMUNE CELLS). The areas where the cyan and green clusters overlap are filled with light green, and the areas where the green and dark blue clusters overlap are filled with light blue; keywords within the same cluster have high popularity within the same time period. It is worth noting that the keywords in the green cluster partially intersect with those in the cyan and dark blue clusters, respectively; we have labeled these in dark red in the figure. They are INNATE IMMUNITY, MYELOID-DERIVED SU, TUMOR, ANGIOGENESIS, and TUMOR-ASSOCIATED M. These five keywords show high popularity across multiple time periods, suggesting that they are especially favored by researchers. illustrates the relationships between various research fields, with the size of each node representing the prominence of the field. "Oncology" stands out as the most central and influential field in this network, indicating its significant role in the broader research landscape. Other fields, such as "Immunology," "Biochemistry & Molecular Biology," "Gastroenterology & Hepatology," and "Pharmacology & Pharmacy," show strong connections, highlighting the interdisciplinary nature of research in these areas. Interestingly, the relatively small fields of "Nanoscience & Nanotechnology" and "Biophysics" have high centrality, reflecting their bridging role.
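The annual-popularity metric used above (annual citations divided by total annual citations across all keywords) can be sketched as follows in Python; the keyword names and counts are hypothetical.

```python
# Minimal sketch of the annual-popularity metric: for each year, a
# keyword's popularity is its citations that year divided by the total
# citations of all keywords that year. The counts below are hypothetical.

annual_citations = {  # keyword -> {year: citations}
    "NETs":          {2022: 120, 2023: 180},
    "immunotherapy": {2022: 200, 2023: 260},
    "HCC":           {2022: 40,  2023: 30},
}

years = sorted({y for per_kw in annual_citations.values() for y in per_kw})
for year in years:
    total = sum(per_kw.get(year, 0) for per_kw in annual_citations.values())
    for kw, per_kw in annual_citations.items():
        popularity = per_kw.get(year, 0) / total
        print(year, kw, round(popularity, 3))
# 2022: NETs 0.333, immunotherapy 0.556, HCC 0.111, etc.
```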
3.7 Highly cited reference analysis
 lists the top ten most frequently cited articles. The most frequently cited is "Polarization of Tumor-Associated Neutrophil Phenotype by TGF-β: N1 versus N2 TANs" (Fridlender, ZG, et al., 2009) (N=2220) ( ) ( ), which shows how TGF-β blockade shifts the phenotype of tumor-associated neutrophils (TANs) from a pro-tumorigenic "N2" phenotype to an anti-tumorigenic "N1" phenotype, slowing tumor growth and enhancing the activation of CD8+ T cells. Notably, this is the first paper in the field to present a detailed concept and systematic study of TANs, which laid the foundation for its high citation rate. Next is "The prognostic landscape of genes and infiltrating immune cells across human cancers" (Gentles, AJ, et al., 2015) (N=2025) ( ), which is also the article with the highest average citations per year. By comprehensively analyzing the expression profiles and overall survival data of approximately 18,000 human tumors, this paper reveals the prognostic landscape of genes and tumor-infiltrating immune cells across cancer types, identifying the FOXM1 regulatory network as a major predictor of poor prognosis and noting that the expression of genes such as KLRB1 is associated with a positive prognosis in tumor-associated leukocytes, which include tumor-associated neutrophils. shows the top 25 references with the strongest citation bursts. The first two bursts occurred in 2010 and 2011, for "Polarization of Tumor-Associated Neutrophil Phenotype by TGF-β: N1 versus N2 TANs" and "Tumor-associated Neutrophils: New Targets for Cancer Therapy". Notably, "Tumor-associated neutrophils stimulate T cell responses in early-stage human lung cancer", published by Eruslanov EB et al. in The Journal of Clinical Investigation in 2014, showed a strong citation burst (Strength=26.91) lasting from 2014 to 2019. "Hyperglycemia Impairs Neutrophil Mobilization Leading to Enhanced Metastatic Seeding", published by Sagiv JY in Cell Reports, also showed a strong burst (Strength=19.75). The results show that 2016 had the highest number of new citation bursts ( ), followed by 2012 ( ), indicating that the many bursts in these two years triggered a related research boom. Article co-citation analysis examines the relationships between articles through their co-citation frequency. These relationships were displayed in CiteSpace, and the authors and years of the 25 most frequently cited articles are shown in . The results show that "Polarization of Tumor-Associated Neutrophil Phenotype by TGF-β: N1 versus N2 TANs", published by Fridlender, ZG et al. ( ) in 2009, was the most cited, which may relate to its early citation burst. It was followed by the most cited articles from 2014 ( ) and 2016 ( ), which served as links. Ultimately, the majority of articles were cited in "Neutrophil diversity and plasticity in tumor progression and therapy" by Jaillon, S et al., 2020 ( ). In bibliometric studies, the analysis of local citations helps us understand the research dynamics of a specific field or discipline, identify core authors and key literature, and assess the scholarly contribution of a research organization or an individual. shows the ten articles with the highest local citations in the WOSCC.
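Local citations differ from the global citation counts used earlier: they count only citations coming from other papers within the retrieved corpus. A minimal Python sketch of this metric, with a hypothetical corpus, follows.

```python
# Minimal sketch of local-citation counting: a paper's local citations
# are citations it receives from other papers *within the corpus*, as
# opposed to global citations from the whole database.
# Hypothetical corpus: paper id -> list of corpus ids it cites.
from collections import Counter

corpus_refs = {
    "Fridlender2009": [],
    "PaperA2014": ["Fridlender2009"],
    "PaperB2016": ["Fridlender2009", "PaperA2014"],
    "PaperC2020": ["Fridlender2009", "PaperB2016"],
}

corpus_ids = set(corpus_refs)
local_citations = Counter(
    ref for refs in corpus_refs.values() for ref in refs if ref in corpus_ids
)
print(local_citations.most_common())
# [('Fridlender2009', 3), ('PaperA2014', 1), ('PaperB2016', 1)]
```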
It is worth noting that the 2009 article by Fridlender, ZG et al. is again at the top, and three of the ten are from 2016. The clustering is based on the degree of association between the literature and is divided into 19 categories, indicated by different colors ( ). The category with the highest number of published articles is #0, and the most common keyword in these articles is trogocytosis. Chronologically, the earliest areas of TANs research were stand-alone clusters: #19 cd66b, and #9 hcc, which was also a relatively early cluster and developed into #2 extracellular matrix. #2 extracellular matrix then developed into #0 trogocytosis, #4 hypoxia-inducible factor, #7 immune-related adverse events, #11 netosis, and #15 cytokine; in addition, #17 azd5069 later became a stand-alone research cluster. #16 developed primarily into cluster #10, and #10 g-csf, #13 er stress, #16 tumor-infiltrating lymphocytes, and #18 tumor markers developed together into the #1 cancer immunotherapy and #2 extracellular matrix clusters; #0 and #1 in turn developed together into #6. After 2012, #16 tumor-infiltrating lymphocytes and #18 tumor markers became closely related and developed into three relatively independent groups: #0 trogocytosis, #1 cancer immunotherapy, and #2 extracellular matrix. Subsequently, the closeness of the links between these research areas declined further, with the emergence of several relatively independent clusters, including #7 immune-related adverse events, #11 netosis, and #17 azd5069. In scientometrics and network analysis, centrality serves as an indicator of the relative importance or influence of nodes within a network. A high centrality value signifies that a node (e.g., a document or author) holds significant connections or interactions with other nodes. Generally, nodes with high centrality play a key role in disseminating information, influencing decision-making, and shaping knowledge networks. These nodes, often regarded as 'hubs,' are central to the flow of information and can guide or promote the dissemination of knowledge. In academic literature networks, a high centrality score may indicate that a document has substantial influence in a specific research area or that an author's work has garnered widespread academic attention. In the field of tumor-associated neutrophils (TANs), three references stand out with centrality scores greater than 0.1, signaling their pivotal contributions to the development of the field and potential scientific breakthroughs ( ). The article "Tissue-infiltrating neutrophils constitute the major in vivo source of angiogenesis-inducing MMP-9 in the tumor microenvironment" has the highest centrality. This groundbreaking study first demonstrated that TANs, which can rapidly release pre-stored contents, are key contributors to the highly angiogenic MMP-9 in tumors. The second article, "Neutrophil function: from mechanisms to disease," is a comprehensive review that systematically explores the role of neutrophils in diseases, particularly in tumors, establishing it as a leading work in the field. The third article, "TRPM2 Mediates Neutrophil Killing of Disseminated Tumor Cells," identifies the mechanism by which neutrophils kill disseminated tumor cells in a Ca2+-dependent manner via TRPM2, shedding light on how neutrophils limit metastatic spread.
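The centrality described above corresponds to betweenness centrality on the co-citation network: nodes lying on many shortest paths between other nodes score highest. The following minimal sketch computes it with the networkx library on a small hypothetical edge list; node names are placeholders, not the actual network analyzed here.

```python
# Minimal sketch of betweenness centrality on a co-citation network
# using networkx (pip install networkx). The edge list is hypothetical;
# in the analysis above, nodes are cited references and edges are
# co-citation links between them.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("MMP-9 paper", "Review A"), ("MMP-9 paper", "TRPM2 paper"),
    ("MMP-9 paper", "Paper X"), ("Review A", "Paper X"),
    ("TRPM2 paper", "Paper Y"),
])

# Normalized betweenness centrality; CiteSpace flags references with
# centrality > 0.1 as pivotal "hubs" in the network.
centrality = nx.betweenness_centrality(G, normalized=True)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```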
Discussion
Tumor-associated neutrophils (TANs) are integral to the tumor microenvironment, where they interact with tumor cells to sustain cancer properties and form a complex network of interactions between cancer cells and immune cells. To delineate the temporal and spatial distribution of the field, its key contributors and central publications, and to identify research hotspots and frontiers, we analyzed 615 publications on tumor-associated neutrophils using CiteSpace 6.3.R2 Advanced, VOSviewer 1.6.18, and the R package bibliometrix. This analysis was based on data extracted from Web of Science for the period from 2000 to 2024. The upward trend in the annual publication count underscores the significant potential of TANs in cancer research, indicating that this field is burgeoning and ripe for further exploration.
4.1 General information
The present study analyzed 615 TANs-related articles sourced from the Web of Science Core Collection (WOSCC) database. These articles were authored by 3763 researchers affiliated with 934 institutions across 51 countries and were published between January 1, 2000, and March 21, 2024. The exponential growth in the number of articles over this period highlights the escalating interest in TANs within the scientific community. The concept of TANs was formally introduced in 2009 by Fridlender ZG and colleagues, marking the inception of focused research in this domain. Since then, the field has seen a notable increase in scholarly output, particularly over the past decade. Notably, the number of publications in 2022 was approximately ten times that of 2012, a statistic that underscores the vibrant and burgeoning nature of TANs research in recent years. In the national and regional analysis, the metrics of publication count and total link strength are pivotal for evaluating a country or region's role within the global research cooperative network. A higher total link strength suggests that the country or region may serve as a hub within the research field. From a global perspective, as illustrated in and , China and the United States emerge as the central research hubs in this domain. China leads in the number of publications, with the second-highest citation frequency, while the United States follows closely with the second-highest publication count and the highest citation frequency. The United States also boasts the highest total link strength of 109, outpacing all other nations, with Germany ranking second at 54, signifying its significant role in global research collaboration. Furthermore, countries such as China, Italy, Israel, and the United Kingdom have actively engaged in research and cooperation in this field. Among the top 10 institutions, five are Chinese and two are Italian. Notably, the University of Pennsylvania (Univ Penn) in the United States stands out with a significantly higher citation frequency than other institutions. Its high citation rate and total link strength are indicative of the high quality of its research and its recognition within the academic community. Additionally, global research institutions have established cooperative networks, with emerging entities like Humanitas University and Shanghai Jiao Tong University showing a marked increase in their research output in this field. This surge suggests that these institutions may be poised to become new leaders in the field.
Conversely, institutions such as Fudan University and the University of Duisburg-Essen have exhibited relatively lower research activity in recent years, which could imply a shift in their research focus or a commitment to more in-depth, long-term research endeavors. Based on this analysis, China and the United States play a pivotal role in the global research landscape on TANs. China demonstrates robust research output capacity, evidenced by its leading position in publication numbers, while the United States asserts its leading role and extensive influence in global research cooperation, as reflected in its high citation frequency and total link strength. Furthermore, while Univ Penn in the United States is among the top research institutions, there is also a notable rise of new institutions making significant contributions to the field. In the realm of author contributions, the seminal work of Jablonska, Jadwiga; Fridlender, Zvi G.; Galdiero, Maria Rosaria; Granot, Zvi and others indicates their pivotal role in the field of TANs. As depicted in , Fridlender, Zvi G., from the University of Jerusalem, Israel, stands out with the highest co-citation count, significantly outpacing his peers. This preeminence is largely attributed to his groundbreaking research on the phenotypic polarization of TANs, published in 2009. His study, which contributed the functional classification of TANs into N1 and N2 subtypes, has provided novel insights into the multifaceted role of TANs within the tumor microenvironment. This work has not only catalyzed further research but has also become the most frequently cited literature in the field, solidifying Fridlender's status as a leading figure with the highest citation frequency, among the highest publication counts, the highest H-index, and the strongest total link strength among all authors. Furthermore, the research of the GENTLES, A. J. team, presented in 'The prognostic landscape of genes and infiltrating immune cells across human cancers,' published in 2015, has the second-highest citation frequency and the top average annual citation rate. Their large-scale data analysis shed light on the significant interplay between gene expression and tumor-infiltrating immune cells in cancer prognosis. Subsequently, COFFELT, S. B.'s 2016 publication, 'Neutrophils in cancer: neutral no more,' which ranks third in both citation frequency and average annual citations, has also made a substantial impact on the understanding of the cancer immune microenvironment. In conclusion, scholars such as Jablonska, Jadwiga, and Fridlender, Zvi G. have exerted considerable influence in the TANs research domain; Fridlender's pioneering work is particularly esteemed and widely acknowledged. Concurrently, the contributions of GENTLES, A. J. and COFFELT, S. B. have been instrumental in advancing our knowledge of the cancer immune microenvironment. As detailed in and illustrated in , 'Frontiers in Immunology' emerges as a leading journal in the field, boasting both the highest publication count and a prominent position among the top 10 journals by co-citation frequency. 'Cancer Research' distinguishes itself with the highest co-citation frequency while ranking ninth in citation frequency overall, a distinction largely attributable to the substantial number of highly impactful articles it features.
It is noteworthy that among the top 10 journals by co-citation frequency, specialized journals are strongly represented: three are dedicated to oncology ('Cancer Research', 'Cancer Cell', 'Clinical Cancer Research'), two to immunology ('Journal of Immunology', 'Frontiers in Immunology'), and three are multidisciplinary scientific journals ('Nature', 'Cell', 'Proceedings of the National Academy of Sciences, USA' or 'PNAS'). Additionally, 'Nature Medicine' and 'Journal of Biological Chemistry' are associated with biology and immunology, respectively, while 'Nature', 'Cell', and 'PNAS' are also linked to molecular biology. This distribution aligns with the dual-map analysis presented in , which underscores the interdisciplinary nature of the research landscape. The journal heat map reveals the dynamics of publishing in the field: an initial tendency to publish in prestigious, high-impact-factor journals, a later preference for widely accessible open-access journals, and ultimately a renewed focus on high-impact journals. This fluctuation reflects the progression of TANs research, with high-quality work generating attention and open-access journals helping to disseminate the field, in turn contributing to the emergence of higher-quality research. The analysis of specific publications reveals their key role in TANs research. As shown in and , 'Polarization of Tumor-Associated Neutrophil Phenotype by TGF-β: N1 versus N2 TANs' is undoubtedly the most authoritative article in this area: it has the highest citation frequency, is closely linked to other highly cited literature, and was the first to show a citation burst. 'Tumor-associated neutrophils stimulate T cell responses in early-stage human lung cancer' and 'Hyperglycemia Impairs Neutrophil Mobilization Leading to Enhanced Metastatic Seeding' were the two articles with the strongest and longest-lasting citation bursts, suggesting that they also have high impact. 'Neutrophil diversity and plasticity in tumor progression and therapy' is a highly cited review that itself cites many highly cited articles, making it a good entry point for readers seeking to understand TANs research.
4.2 Hotspots and frontiers
Keyword analysis is instrumental in discerning the frontiers and focal points within a field of study. In this research, a comprehensive keyword analysis was performed to delineate the predominant trends and temporal shifts in the domain of tumor-associated neutrophils (TANs). The principal keywords identified include 'neutrophil', 'tumor-associated neutrophils', 'tumor microenvironment', 'tumor', 'immunotherapy', 'NETs' (neutrophil extracellular traps), 'metastasis', 'inflammation', and 'tumor-associated macrophages' (as listed in ). These keywords predominantly pertain to the biological functions of TANs, their roles within the tumor microenvironment, and their implications for immunotherapy and inflammatory responses, indicating that these topics are currently at the forefront of TANs research. The co-occurrence network diagram shows that high-frequency keywords such as 'immunology', 'oncology', and 'inflammation' have been central to past research. Within the immunology sphere, keywords like 'immune cells', 'immune evasion', 'immune checkpoints', 'prognosis', and 'macrophage' have been prominent.
Similarly, in oncology, 'tumor-associated neutrophils', 'metastasis', 'anti-tumor immunity', and 'tumor-associated macrophages' have garnered significant attention, and in the context of inflammation, 'neutrophil', 'inflammation', 'innate immunity', and 'granulopoiesis' have been frequently discussed. The recurring terms 'neutrophil' and 'tumor-associated neutrophils' underscore the pivotal role of neutrophils in the tumor microenvironment, as well as their potential contributions to tumor progression and immunomodulation. Furthermore, the recurrent mention of 'tumor microenvironment' and 'immunotherapy' underscores the significance of the immune context in cancer therapeutics. Notably, breast and colorectal cancers have emerged as the most active disease areas within TANs research; their high frequency and total link strength indicate a central position in neutrophil-related studies. This prominence may be attributed to the elevated incidence and mortality rates associated with these cancers, which draw considerable research focus. Using VOSviewer for visual mapping, we categorized TANs research into seven major directions. These clusters not only highlight the diversity of the research but also suggest potential interconnections between different research avenues; for instance, the interplay between inflammatory responses and tumor development, along with the part played by immunomodulation in tumor immunity, are areas meriting deeper exploration in future studies. Keyword impact and heat map analyses also shed light on the evolution of research trends and shifts in scientific interest. Keyword impact analysis shows that the tumor-promoting functions of TANs are a very hot topic. Heat map analysis, in turn, reveals emerging research areas in TANs research (e.g., NETS, IMMUNOTHERAPY), as well as keywords appearing in multiple clusters, such as INNATE IMMUNITY and MYELOID-DERIVED SU, which not only show consistently high interest in a specific area but also connect different research topics. The heat of these keywords suggests that they may represent critical nodes in the disease process or potential targets for therapeutic intervention. For example, INNATE IMMUNITY relates to the first response of the intrinsic immune system, whereas MYELOID-DERIVED SU is associated with immunosuppression in the tumor microenvironment; TUMOR and TUMOR-ASSOCIATED M are directly linked to tumor progression and metastasis, whereas ANGIOGENESIS is a key process in tumor growth and spread.
4.3 Hot research and future prospects of TANs
The following aspects of current research on tumor-associated neutrophils (TANs) are discussed below: the origin of TANs, keyword hotspots in the field, the anti-tumor and pro-tumor functions of TANs, and the pathogenic factors underlying the polarization and functional differences of TANs. Neutrophils, the most abundant polymorphonuclear leukocytes ( ), are crucial for the innate immune system and play a significant role in the tumor microenvironment (TME) ( ). Originating from granulocyte-monocyte progenitors (GMPs) in the bone marrow ( ), neutrophils mature and are mobilized into the bloodstream by granulocyte colony-stimulating factor (G-CSF) ( ). TANs arise when neutrophils develop and infiltrate into the TME, a process primarily controlled by the C-X-C chemokine receptor type 2 (CXCR2) axis ( ).
In the TME, various cellular constituents release CXCR2 ligands, such as CXCL1-8, creating a chemotactic gradient that guides neutrophils to the tumor ( , ). Kyle J. Eash et al. have shown that CXCR4 and CXCR2 are essential for controlling neutrophil release, with only fully differentiated neutrophils expressing CXCR2 gaining entry into the bloodstream and subsequently infiltrating target tissues ( , ). Additional studies corroborate that SMAD4 deletion promotes the recruitment of TANs through the CXCR2 axis ( ). SMAD4 deletion also promotes the expression of C-C motif chemokine ligand 15 (CCL15) in colorectal cancer (CRC) and recruits CCR1+ TANs (the CCL15-CCR1 axis) with arginase-1 (ARG-1) and matrix metalloproteinase 9 (MMP-9) activities, thereby forming a pre-metastatic niche for disseminated tumor cells (e.g., in the lungs) ( , , ). Interleukin-8 (IL-8), overexpressed in multiple cancer types, recruits neutrophils to the TME through CXCR1 and CXCR2, influencing TAN formation ( – ). Jing He et al. showed that IL-8 secretion induced by METTL3 disruption promotes TAN recruitment and regulates tumor growth ( ). Yang's research highlights IL-8's role in TAN recruitment and JAG2 expression, and blockade of CXCR2 signaling reduces tumor growth and TAN numbers while enhancing CD8+ T cell activity ( ). Collectively, these studies underscore the pivotal role of the CXCR2-IL-8 axis in mediating the recruitment of TANs within the TME. Metastatic tumors, on the other hand, induce chemotaxis of circulating neutrophils by secreting large amounts of G-CSF; these recruited neutrophils are mostly immature and immunosuppressive, promoting cancer metastasis ( ). Although only fully differentiated neutrophils expressing CXCR2 are normally permitted to enter the circulatory system and infiltrate the corresponding tissues, G-CSF has been demonstrated to facilitate the proliferation of neutrophil precursor cells, resulting in the expansion and infiltration of immature neutrophils in the peripheral blood; this does not contradict the CXCR2-dependent release of fully differentiated neutrophils. Myeloid-derived suppressor cells (MDSCs) are a population of myeloid-derived non-lymphoid immunosuppressive cells that are enriched in cancer patients ( ). G-MDSCs, which share surface markers such as CD11b with TANs, have been observed to differentiate into CD11b+/CD66b+ TANs in gastric cancer, a process linked to immunosuppression and tumor metastasis ( ). This may suggest that, under emergency conditions, G-MDSCs, originating from the expansion of immature myeloid cells (IMCs) in the bone marrow, migrate to peripheral tissues where they are transformed into TANs by cytokines such as TGF-β present in the TME. Myeloid-related proteins (MRPs), specifically S100A8 and S100A9, are implicated in neutrophil migration, with high expression levels observed in the TME and the pre-metastatic niche ( , ). The spleen is also a significant source of TANs: it mobilizes immature myeloid cells that differentiate into tumor-associated macrophages (TAMs) and TANs, which promote tumor growth and metastasis through cytokine secretion ( , ). Although current studies have not directly demonstrated the impact of splenic regulation of TANs on tumor therapy, they highlight the critical role of the spleen in generating tumor-associated immune cells and its importance as a potential therapeutic target.
The transition of neutrophils into N2 TANs is intimately linked to their role in facilitating tumor growth, irrespective of whether this occurs via the bone marrow-circulation-TME axis or the splenic route. Our keyword analysis of the existing literature highlights ‘chemokine’ and ‘immune evasion’ as prominent terms, underscoring the pivotal role of TANs in tumor immune evasion. Although the number of studies on the induction and development of N1 TANs is relatively limited, the phenomenon of immunosculpting, or immunoediting (the crosstalk between immune cells and tumor cells), indicates the potential for such interactions to alter tumor biological phenotypes. This suggests that the limited research on N1 TANs may be overlooking a crucial aspect of how these cells contribute to the dynamic immune-tumor interface ( ). It has been demonstrated that tumor cells can influence the secretion of molecules by neutrophils, which in turn promote tumor growth. To illustrate, breast cancer cells secrete GM-CSF, which induces neutrophils to produce oncostatin M, a protein that boosts VEGF production and cancer cell invasion ( ). However, under effective immunotherapy, the number of neutrophils present in tumors increases, accompanied by the expression of interferon-stimulated genes (ISGs), and these neutrophils exhibit antitumor functions. The transcription factor IRF1 in neutrophils is pivotal for an efficacious antitumor response; in its absence, immunotherapy loses efficacy ( ). Furthermore, a study identified crosstalk between TANs and CRC cells through the AGR2-CD98hc-xCT axis, which enhances CRC cell migration and creates a feedback loop driving metastasis ( ). These results point to the potential of TANs in cancer therapy, suggesting that they can be mobilized against cancer cells rather than simply promoting tumor growth. Strategies to modulate TANs may therefore offer a new way to improve the efficacy of cancer immunotherapy. As illustrated in , the relationship between TANs and “Cancer” is one of the most rapidly evolving areas of research in recent years. As a critical component of the tumor microenvironment, TANs exert a profound influence on tumor progression and metastasis ( – ). TANs have been demonstrated to exert a regulatory influence on tumor growth, and their presence in a wide range of solid tumors, including metastatic melanoma ( ), bronchoalveolar carcinoma ( ), renal carcinoma ( ), head and neck squamous cell carcinoma (HNSCC) ( ), pancreatic cancer ( ), and gastric cancer ( ), has been identified as a marker of poor prognosis in a number of clinical and laboratory studies. In these specific types of cancer, neutrophils have been observed to exhibit tumor-promoting properties that may be harmful to the host. Among the cancers studied, hepatocellular carcinoma, pancreatic cancer, and gastric cancer have emerged as key diseases in our identified hot research areas. The term ‘TME’ has been referenced 85 times in the keywords over the past fifteen years, which illustrates its significance. The TME exerts control over neutrophil recruitment through specific molecular mechanisms; conversely, the accumulation of TANs in the TME is intimately linked to tumor invasiveness and metastatic progression. Song et al.
revealed that in the hepatocellular carcinoma (HCC) TME, cancer-associated fibroblast (CAF)-derived cardiotrophin-like cytokine factor 1 (CLCF1) increased the paracrine secretion of CXCL6 and TGF-β in tumor cells, thereby promoting the infiltration and polarization of TANs ( ). In clinical samples, upregulation of the CLCF1-CXCL6/TGF-β axis was strongly associated with the emergence of cancer stem cells, increased “N2”-polarized TANs, high tumor stage, and poor prognosis ( ). A transcriptional study in NSCLC identified a TAN cluster characterized by overexpression of high mobility group box 1 (HMGB1); this cluster is hypothesized to interact with the TME via HMGB1-TIM-3 interactions, potentially suppressing antitumor immunity and facilitating immune evasion through the GATA2/HMGB1/TIM-3 signaling axis ( ). These findings collectively indicate that TANs and the TME influence each other in multifaceted ways and may share common pathologic mechanisms across diverse cancer types. The neutrophil-to-lymphocyte ratio (NLR), one of the hottest keywords we identified, is a valuable indicator for assessing the prognosis of cancer patients because it reflects the immune status of the TME. A higher NLR is associated with a poorer prognosis in numerous types of cancer ( , – ). In a study of uroepithelial carcinoma of the bladder, the presence of neutrophils and the NLR were associated with high-grade uroepithelial tumors; TANs were associated with tumor grade and stage, whereas tumor-associated lymphocytes (TALs, especially CD8 T cells) and the NLR were more closely associated with progression of tumor invasion ( ). Chen et al. demonstrated that a low N1/N2 ratio was associated with poorer tumor differentiation, a greater propensity for lymph node metastasis, and a higher TNM stage ( ); conversely, a high N1/N2 ratio was identified as an important prognostic indicator for overall survival (OS) and recurrence-free survival (RFS). Additionally, tumor-associated N1/N2 neutrophils exhibited an inverse correlation with tumor-infiltrating CD8+ T cells and Tregs. The inverse correlation between TANs and lymphocytes may thus facilitate a deeper understanding of the immune system and its functioning, and merits further investigation.
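Because the NLR recurs as a prognostic keyword throughout this literature, a minimal sketch of its computation from a routine blood count may be helpful. The counts and the cutoff below are illustrative assumptions only; reported cutoffs vary considerably across studies and cancer types.

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts (10^9 cells/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Illustrative example: 4.2 x 10^9/L neutrophils, 1.4 x 10^9/L lymphocytes.
ratio = nlr(4.2, 1.4)        # -> 3.0
CUTOFF = 3.0                 # hypothetical; cutoffs are study-specific
print(f"NLR = {ratio:.1f} ({'high' if ratio >= CUTOFF else 'low'})")
```

The same pattern applies to the N1/N2 ratio discussed above, with subtype-specific counts in place of total neutrophil and lymphocyte counts.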
The anti-tumor and pro-tumor mechanisms of TANs have now been demonstrated more extensively. Firstly, it is established that TANs can directly kill tumor cells through direct contact and cytotoxic effects ( ). Notably, this killing effect is associated with local hypoxia rather than with T cells: TAN recruitment is reduced after hypoxia is relieved, and TANs under these conditions are more capable of killing tumor cells ( ). TANs also have indirect anti-tumor effects, as they can stimulate adaptive anti-tumor immune responses by promoting the recruitment of other immune cells, and they possess antigen-presenting potential themselves ( , ). Vono et al. demonstrated that neutrophils isolated from vaccine-draining lymph nodes of rhesus monkeys expressed HLA-DR and were capable of presenting vaccine antigens to autologous antigen-specific memory CD4+ T cells in vitro ( ). This suggests that neutrophils may function as antigen-presenting cells (APCs), leveraging their abundance in the immune system to regulate antigen-specific T cell responses. Neutrophils can recruit and activate T cells by secreting cytokines such as TNFα, and cathepsin G promotes T cell proliferation and cytokine production ( , ). For instance, TANs secrete human mast cell chymase and human neutrophil cathepsin G, both of which readily cleave two interleukin-1 (IL-1)-family alarmins, interleukin-18 (IL-18) and interleukin-33 (IL-33), as well as the cytokine interleukin-15 (IL-15), which is important for T-cell homeostasis ( ). TANs also kill tumor cells by generating reactive oxygen species (ROS), with hypochlorous acid (HOCl) playing a major role; recognition of the target cell surface and tumor cell lysis are mediated by a mechanism dependent on leukocyte function-associated antigen 1 (LFA-1) ( ). In addition, a distinctive adhesion pathway mediated by the upregulation of CD11b/CD18 on activated neutrophils allows these cells to adhere to the vascular endothelium and form a sequestered microenvironment in which oxidants and proteolytic enzymes accumulate locally at concentrations sufficient to cause endothelial damage and matrix degradation ( , ). Another interesting study has shown that neutrophil-produced H2O2 activates the transient receptor potential cation channel TRPM2, resulting in the uptake of lethal levels of calcium ions by tumor cells ( ); furthermore, TRPM2 expression is upregulated in cancerous tissues, making these cells more susceptible to the cytotoxic effects of neutrophils ( ). Beyond ROS toxicity, TANs also induce tumor cell death by promoting the expression of inducible nitric oxide synthase (iNOS) and the release of nitric oxide (NO) via hepatocyte growth factor (HGF) ( ). Notably, superoxide itself is not directly involved in cell killing; rather, catalase (which converts H2O2 to H2O and O2) completely inhibits killing, implicating H2O2 as the effector ( ). In addition, TANs can directly kill tumor cells via antibody-dependent cellular cytotoxicity (ADCC): neutrophils express Fc receptors that mediate ADCC and may mechanically disrupt tumor cell membranes through interactions between signal regulatory protein α (SIRPα) and CD47 ( , ). This phenomenon has been observed in a variety of cancers, including non-Hodgkin’s lymphoma, breast cancer, and B-cell lymphoma ( – ). In a mouse model of cervical adenocarcinoma, TANs secreted proteases that induced tumor cell detachment from the basement membrane, thereby inhibiting tumor growth and metastasis. Despite the evidence from these studies that TANs can exert anti-tumor functions, neutrophils are primarily known for their immunosuppressive effects ( ). Once TANs are activated within the tumor microenvironment, they markedly enhance the inflammatory milieu and drive tumor progression through a series of complex mechanisms. The release of large quantities of interleukin-8 (IL-8) by inflammatory cells has two main effects: it promotes the survival of TANs, and it attracts more neutrophils to the tumor site, thereby exacerbating the inflammatory response ( ). Upregulation of IL-8 and neutrophil enrichment have been demonstrated in KRAS-mutant CRC tissues, suggesting that exosomes may transfer mutant KRAS to recipient cells and trigger increased IL-8 production, neutrophil recruitment, and NET formation, eventually leading to the deterioration of CRC ( ).
In contrast to the anti-tumor function of ROS described above, TANs can increase tissue sensitivity to carcinogens by releasing ROS and reactive nitrogen species (RNS) and thereby mediating genotoxicity. Research by Stefanie K. Wculek and colleagues indicates that neutrophils amplify the genotoxicity of ethyl carbamate in lung cells through the generation of ROS, and that this process directly facilitates tumor transformation, with the ROS-dependent DNA damage temporally confined to ethyl carbamate exposure and distinctly unrelated to extensive tissue damage or inflammation ( ). In 2019, Veronika Butin-Israeli identified a novel mechanism of genotoxicity that, interestingly, does not rely on ROS: TANs promote the formation of double-strand breaks (DSBs) in epithelial DNA through the release of microparticles carrying pro-inflammatory microRNAs (miR-23a and miR-155), and the accumulation of DSBs in injured epithelial cells subsequently results in genomic instability, impaired tissue healing, and the promotion of tumorigenesis ( ). Prostaglandin E2 (PGE2) and neutrophil elastase (NE) can directly promote the proliferation of tumor cells. A. McGarry Houghton and colleagues demonstrated that NE degrades insulin receptor substrate-1 (IRS-1) in tumor cell endosomes; as IRS-1 was degraded, the interaction between phosphatidylinositol 3-kinase (PI3K) and the potent mitogen platelet-derived growth factor receptor (PDGFR) increased, skewing the PI3K axis toward tumor cell proliferation ( ). The release of MMP-9 is associated with the promotion of tumor angiogenesis and plays an important role in extracellular matrix (ECM) remodeling and membrane protein cleavage ( ). A study of prostate cancer revealed a molecular mechanism by which MMP-9 regulates tumor cell invasion and metastasis: MMP-9 enhances prostate cancer cell invasion by specifically degrading the serpin protease nexin-1 (PN-1), thereby relieving the inhibitory effect of PN-1 on urokinase plasminogen activator (uPA) ( ). In the study by Lukas et al., neutrophil-derived MMP-9 was found to mediate the release of the larger VEGF isoforms not by cleaving VEGF itself but by cleaving heparan sulfate, releasing biologically active VEGF165 from the ECM of colon cancer cells and thereby promoting tumor angiogenesis ( ). In addition, the immunosuppressive capacity of neutrophil subpopulations has also been associated with tumorigenesis ( ). In conclusion, TANs play important roles in several key aspects of tumor biology: malignant transformation, progression, extracellular matrix remodeling, angiogenesis, cell migration, and immunosuppression. They act by degrading the extracellular matrix, inhibiting immune responses, stimulating tumor cell proliferation, increasing metastatic potential, and promoting angiogenesis, all of which drive tumor progression. The dual effect of TANs can also be observed in another keyword: neutrophil extracellular traps (NETs). On the one hand, TANs participate in anti-tumor immune responses by releasing NETs. NETs are capable of capturing and confining tumor cells, and they contain antimicrobial proteins and enzymes (e.g., myeloperoxidase (MPO) and neutrophil elastase (NE)) that can directly kill tumor cells ( , , ). Moreover, NETs facilitate tumor immune surveillance by stimulating dendritic cells and augmenting T cell-mediated immune responses ( , ).
Conversely, NETs may also contribute to tumor progression by promoting tumor cell invasion and migration. The reticular structure of NETs can provide a physical adhesion platform for circulating tumor cells (CTCs), thereby promoting tumor cell colonization and metastasis in distal organs ( ). Moreover, the enzymes present within NETs are capable of degrading the extracellular matrix, facilitating the spread of tumor cells ( , ). It is noteworthy that the oncogenic role of NE has been demonstrated in lung, prostate, and colon cancer ( , – ). With respect to tumor immune evasion, NETs may help tumor cells evade immune surveillance by forming a physical barrier that impedes immune cell recognition and attack. Additionally, NET release may modify chemical signals within the tumor microenvironment, influencing immune cell polarization and functionality and, consequently, the equilibrium of the tumor immune response ( ). As previously discussed, the contrasting roles of N2 TANs in promoting tumor formation and N1 TANs in exerting antitumor effects have been delineated with reasonable clarity. However, the underlying factors that mediate these dichotomous effects of TANs remain unclear; investigating them will constitute a pivotal focus of future TAN research. The hypothesis that TANs can be classified into N1/N2 types has been corroborated by further research. The study by Mareike Ohms et al. successfully polarized human neutrophils into N1/N2 types in vitro and demonstrated functional and phenotypic differences between neutrophils cultured in the presence of N1- or N2-polarizing cocktails ( ). To date, studies have identified a number of molecules that can be used to differentiate between N1 and N2. N1 markers include intercellular adhesion molecule-1 (ICAM-1), inducible nitric oxide synthase (iNOS), C-C motif chemokine ligand 3 (CCL3), and TNF-α, among others; N2 markers include CCL17, CCL2, arginase, CCL5, and vascular endothelial growth factor (VEGF) ( , ). Transforming growth factor-β (TGF-β) signaling within the TME has been implicated in promoting the pro-tumorigenic neutrophil phenotype (N2) ( ), whereas type I interferon (IFN) signaling, or blockade of TGF-β signaling, directs neutrophils toward the antitumor phenotype (N1) ( ). The significance of these two pivotal inducing factors is further underscored by the data presented in . Moreover, ongoing research continues to uncover additional factors that modulate the polarization and functional profile of TANs. For instance, Chung et al. revealed that Smad3 activation in TANs is associated with a predominantly N2 polarization status and poor prognosis in non-small cell lung carcinoma (NSCLC) patients, and they proposed CD16b/iNOS and CD16b/CD206 as markers to identify human N1 and N2 TANs ( ). This discovery may resolve the previous inability to distinguish the two subtypes by surface markers, but further experiments are required to validate it. Luo et al.’s study disclosed that the expression of N2-specific marker genes was significantly reduced in TANs following pretreatment with 4-phenylbutyric acid, suggesting that the pro-tumorigenic capabilities of TANs may be diminished when endoplasmic reticulum stress is not activated.
It is therefore plausible that activation of endoplasmic reticulum stress is implicated in the phenotypic shift of TANs toward the N2 state ( ). Wang et al. showed that HCC cell-derived CXCL9 promotes N1 polarization of neutrophils in vitro, and that the specific CXCR3 inhibitor AMG487 significantly blocked this process ( ). These findings provide further evidence for the dual effects of TANs and suggest that TANs may directly or indirectly affect patient survival and prognosis. Future studies covering multiple cancer types would help to explore the heterogeneity of the phenotypic distribution of TANs and deepen our understanding of their functional and clinical relevance. Although the two TAN typologies are now generally recognized, recent studies suggest that a simple dichotomy of immune cells in cancer may not describe TANs comprehensively. A study utilizing cytometry by time-of-flight (CyTOF) demonstrated the existence of at least seven subpopulations of mature neutrophils that differ in surface markers and function in individuals with cancer ( ). Different phenotypes of mature neutrophils may thus exhibit a variety of anti-tumor and pro-tumor effects in the context of TANs. A unique subset of HLA-DR+ TANs with anti-tumor capacity has also been detected in early stages of human lung cancer; this subpopulation, which exhibits characteristics of both granulocytes and antigen-presenting cells such as dendritic cells and macrophages and has been termed ‘hybrid TANs’, is capable of effectively inducing T-cell responses, encompassing both tumor antigen-specific and non-specific immunity ( , ). Notably, the number of such hybrid TANs was decreased in large tumors, apparently owing to the associated hypoxic TME ( ). Given the considerable heterogeneity and plasticity of TANs within the TME, accurate subpopulation analysis of TANs has become an important research focus ( ). However, neutrophils in cancer are not limited to TANs; they also include numerous subpopulations in the bone marrow and circulation ( ). To date, the extensive heterogeneity of neutrophils in cancer remains a topic worthy of further study. The interaction of TANs with a variety of other cell types in the tumor microenvironment, including TAMs, platelets, natural killer (NK) cells, and T cells, forms a complex network that influences tumor development and metastasis. The terms “TAMs” and “T cell” are both prominent keywords on the keyword hotspot map. Although there is no direct evidence that TANs and TAMs interact via MPO and the macrophage mannose receptor (MMR), large MPO-positive neutrophil infiltrates have been found in colorectal ( ) and lung cancers ( ), alongside high MMR expression by M2-like macrophages ( ), and MPO binding to MMR induces secretion of reactive oxygen intermediates, IL-8, TNF-α, and GM-CSF in chronic inflammatory environments (e.g., rheumatoid joints) ( ). This may suggest that TANs and TAMs coexist in a specific way in the tumor microenvironment, jointly promoting an inflammatory response. A study revealed a correlation between elevated NLR and elevated CCL2 expression in tumor tissues ( ).
Additionally, TAN-conditioned medium, as well as recombinant CCL2 and CCL17, enhanced the migration of macrophages derived from HCC patients or mice. These findings collectively indicate that TANs and TAMs interact through chemokines such as CCL2 and jointly promote tumor growth and metastasis. Interestingly, the recruitment of TAMs by TANs to the appropriate regions in turn regulates the function of TANs, similar to the interaction between neutrophils and macrophages in inflammatory environments. In addition to directly inhibiting T cells via ROS, iNOS, and mediators such as ARG1, TANs can suppress T cell anti-tumor immunity by recruiting TAMs and regulatory T cells (Tregs) to remodel the TME via CCL17 and CCL2 ( ). TANs also inhibit T cells by expressing programmed cell death-ligand 1 (PD-L1), suppressing the anti-tumor response; conversely, blockade of PD-1/PD-L1 reduces this immunosuppression and enhances T cell infiltration and activation. Zhang et al. found that, after tumorigenesis, TANs displayed an N2-like state and secreted the cytokine IL-10 to promote activation of c-Met/STAT3 signaling, while the transcription factor STAT3 increased PD-L1 levels in tumor cells and promoted neutrophil polarization toward the N2-like state, creating a positive feedback loop among TANs, IL-10, STAT3, and PD-L1 ( ). Inhibiting any step of this positive feedback pathway may prove beneficial for treatment and prognosis. Michaeli et al. reported that TANs promote immunosuppression by strongly inducing CD8+ T cell apoptosis, leading to tumor progression, and that the TAN-induced CD8+ T cell death mechanism involves the TNF signaling pathway and NO production ( ). In contrast, TANs have been reported to promote CD8+ T-cell recruitment and activation by producing T-cell chemoattractants (e.g., CCL3, CXCL9, and CXCL10) and proinflammatory cytokines (IL-12, TNF-α, and GM-CSF) ( ). How to enhance the production of T cell-promoting factors by TANs while reducing the production of suppressors remains an open question. In early-stage lung cancer, crosstalk between TANs and activated T cells significantly upregulated the co-stimulatory molecules CD54, CD86, OX40L, and 4-1BBL on the neutrophil surface, which promoted T cell proliferation in a positive feedback loop. This suggests that upregulation of co-stimulatory molecules on TANs enhances T cell immunity, whereas upregulation of PD-L1 suppresses T cell responses ( ). Modulating specific signaling molecules in the microenvironment to direct TANs toward a phenotype that promotes T cell immunity, or monitoring changes in TAN surface molecules during tumor treatment, could serve as valuable strategies for assessing therapeutic efficacy and predicting alterations in the tumor’s immune response. Platelets are among the first cells to appear in the inflammatory process that accompanies cancer development; no influx of monocytes, lymphocytes, dendritic cells, or NK cells is observed in the early stages of metastasis formation ( ). Neutrophil recruitment to the tumor microenvironment appears to depend on platelet activation, as it does not occur when platelet function is impaired or platelet numbers are reduced. The function of platelets in the formation of TANs can be considered in two distinct ways.
Firstly, platelets release the chemokines CXCL5/7, which bind to CXCR2 on the neutrophil surface, thereby activating these cells and driving their migration ( ). Secondly, platelets serve as a source of TGF-β, which plays a pivotal role in the development of N2 TANs ( , ). Recent studies have also revealed a potential inhibitory effect of TANs on natural killer (NK) cell function during tumor development. Sun et al. showed that TANs inhibit the cytotoxicity and infiltration capacity of NK cells through the PD-L1/PD-1 axis and regulate the expression of PD-L1 and PD-1 through the G-CSF/STAT3 and IL-18 pathways, revealing how neutrophils drive NK cell dysfunction in the tumor-bearing state and the underlying molecular mechanisms ( ). Yang et al. revealed that TANs can influence macrophages, NK cells, and T cells through the IL16, IFN-II, and SPP1 signaling pathways ( ). The principal mechanism may be the release of nitric oxide (NO) and ROS, together with arginase 1 (ARG1) activity, by TANs, which inhibit NK cytotoxicity and T cell proliferation ( , ). The elucidation of these mechanisms provides new insight into the complexity of immunosuppression in the tumor microenvironment; future studies will need to explore how intervening in these pathways can enhance the anti-tumor activity of NK cells or help NK cells escape suppression. Beyond this extensive crosstalk with the cells described above, TANs are also actively involved in recruiting B cells to the TME. Merav E. Shaul et al. clarified that TNFα is the main cytokine in TAN-mediated B cell chemotaxis, that recruitment of CD45+B220+CD138- splenic B cells by TANs in vitro leads to B cell phenotypic modulation, and that TANs can induce B cell differentiation into IgG-producing plasma cells through a contact mechanism dependent on B-cell activating factor (BAFF) on the TAN surface ( ). Interestingly, TNFα tends to be associated with N1-type TANs, which may point to a novel immunoregulatory network among TNFα, N1-type TANs, and B cells, in which the interaction between TANs and B cells is critical for shaping the tumor immune response. Together with other immune cell types, such as T cells and dendritic cells, this interaction may constitute a complex network of immune responses that collectively influence tumor progression and patient response to therapy. Increasing evidence suggests that neutrophils play an active role in promoting tumor development; however, clinical application remains limited, since systemic targeting of TANs risks neutropenia ( ). Blockade of the programmed cell death 1 (PD-1)/PD-L1 immune checkpoint on neutrophils, together with targeted inhibition of CXCR2, CXCR4, G-CSF, TGF-β, and related molecules to suppress the recruitment, expansion, and polarization of tumor neutrophils, may provide avenues for neutrophil-targeted tumor therapies. In view of the above discussion, we propose a series of prospective research directions for the investigation of TANs: (1) elucidating the mechanisms that induce polarization of TANs from the N2 type to the N1 type during their chemotactic migration; (2) investigating the shared pathological mechanisms between TANs and the TME across a spectrum of cancers; (3) determining whether TAN subtypes vary among patients with different cancers; (4)
identifying additional markers to differentiate between TAN subtypes, addressing the complexity and heterogeneity of TANs; and (5) clarifying the intricate mechanisms of TAN interactions with other tumor-associated cells, such as TAMs, tumor-associated platelets, and T cells. These research avenues may provide insight into the role of TANs in tumorigenesis and inform the development of novel therapeutic strategies. We anticipate that subsequent research will leverage the full antitumor potential of TANs and integrate existing effective antineoplastic therapies with targeted neutrophil interventions, offering a promising direction toward safer and more efficacious treatment strategies.
4.4 Limitations
This study is the first to use bibliometric visualization to analyze studies related to TANs over the past 20 years. Nevertheless, it inevitably has some limitations. First, the data were drawn solely from the WOSCC database, excluding other databases such as PubMed, the Cochrane Library, and Google Scholar; despite the comprehensiveness and reliability of WOSCC, some literature may be missing from its records. Only English-language literature was included, which may bias the results, and because the search covered publications up to March 21, 2024, more recent publications were not captured. Secondly, the data may be inconsistent in several ways: the same institution may have used different names at different times, and the same author may have published papers in the field while affiliated with different institutions. Finally, although this study provides a comprehensive overview of the TANs research field, the keyword analysis has limitations of its own: it relied primarily on the titles and abstracts of the literature, which may not fully capture the depth of information in the full texts, and the minimum-citation threshold may have excluded some emerging but important research directions. Future research could address these limitations and utilize more comprehensive data analysis methods to provide deeper insights.
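To make the keyword-analysis caveats concrete, the sketch below illustrates the kind of computation that co-occurrence mapping tools such as VOSviewer perform: keywords are tallied per article, pairwise co-occurrence links are accumulated, and a minimum-occurrence threshold prunes rare terms, which is precisely the step at which emerging directions can be excluded. The input records and the threshold are hypothetical, and the code is a schematic reconstruction rather than the actual pipeline used in this study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: author keywords from titles/abstracts of 4 articles.
articles = [
    {"neutrophil", "tumor microenvironment", "NETs"},
    {"tumor-associated neutrophils", "tumor microenvironment", "immunotherapy"},
    {"neutrophil", "inflammation", "NETs"},
    {"tumor-associated neutrophils", "immunotherapy", "metastasis"},
]

MIN_OCCURRENCES = 2  # thresholding step; rare (possibly emerging) terms drop out

occurrences = Counter(kw for art in articles for kw in art)
kept = {kw for kw, n in occurrences.items() if n >= MIN_OCCURRENCES}

# Co-occurrence links between kept keywords; a link's weight is the number
# of articles in which the two keywords appear together.
links = Counter()
for art in articles:
    for a, b in combinations(sorted(art & kept), 2):
        links[(a, b)] += 1

# Total Link Strength (TLS) of a keyword: the sum of the weights of all
# links involving it.
tls = Counter()
for (a, b), w in links.items():
    tls[a] += w
    tls[b] += w

for kw in sorted(kept, key=lambda k: -tls[k]):
    print(f"{kw}: occurrences={occurrences[kw]}, total link strength={tls[kw]}")
```

Here the Total Link Strength of a keyword is simply the sum of its co-occurrence link weights, mirroring the metric used above to rank disease areas; lowering or raising MIN_OCCURRENCES directly trades sensitivity to emerging terms against noise.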
Hotspots and frontiers Keyword analysis is instrumental in discerning the frontiers and focal points within a field of study. In this research, a comprehensive keyword analysis was performed to delineate the predominant trends and temporal shifts in the domain of Tumor-Associated Neutrophils (TANs). The principal keywords identified include ‘neutrophil’, ‘tumor-associated neutrophils’, ‘tumor microenvironment’, ‘tumor’, ‘immunotherapy’, ‘NETs’ (Neutrophil Extracellular Traps), ‘metastasis’, ‘inflammation’, and ‘tumor-associated macrophages’ (as listed in ). These keywords predominantly pertain to the biological functions of TANs, their roles within the tumor microenvironment, and their implications in immunotherapy and inflammatory responses, indicating that these topics are currently at the forefront of TANs research. The co-occurrence network diagram elucidates that high-frequency keywords such as ‘immunology’, ‘oncology’, and ‘inflammation’ have been central to past research endeavors. Within the immunology sphere, keywords like ‘immune cells’, ‘immune evasion’, ‘immune checkpoints’, ‘prognosis’, and ‘macrophage’ have been prominent. Similarly, in oncology, ‘tumor-associated neutrophils’, ‘metastasis’, ‘anti-tumor immunity’, and ‘tumor-associated macrophages’ have garnered significant attention. In the context of inflammation, ‘neutrophil’, ‘inflammation’, ‘innate immunity’, and ‘granulopoiesis’ have been frequently discussed. The recurring terms ‘neutrophil’ and ‘tumor-associated neutrophils’ underscore the pivotal role of neutrophils in the tumor microenvironment, as well as their potential contributions to tumor progression and immunomodulation. Furthermore, the recurrent mention of ‘tumor microenvironment’ and ‘immunotherapy’ underscores the significance of the immune context in cancer therapeutics. Notably, breast and colorectal cancers have emerged as the most active disease areas within TANs research, with their high frequency and Total Link Strength indicating a central position in neutrophil-related studies. This prominence may be attributed to the elevated incidence and mortality rates associated with these cancers, drawing considerable research focus. Employing VOSviewer software for visual mapping, the research on TANs was categorized into seven major directions. These clusters not only highlight the diversity of research but also suggest potential interconnections between different research avenues. For instance, the interplay between inflammatory responses and tumor development, along with the part played by immunomodulation in tumor immunity, are areas meriting further in-depth exploration in future studies. Keyword impact and heatmap analysis also shed light on the evolution of research trends and the shift in scientific interest. Keyword impact analysis illustrates that tumor-promoting research on TANs is a very hot topic. Heatmap analysis, on the other hand, reveals emerging research areas in TANs research (e. g., Nets, IMMUNOTHERAP, etc.), and those keywords appearing in multiple clusters, such as INNATE IMMUNITY, MYELOID-DERIVED SU, which not only show a consistently high level of interest in a specific area, but also connect different research topics. The heat of these keywords suggests that they may represent critical nodes in the disease process or potential targets for therapeutic intervention. 
For example, INNATE IMMUNITY relates to the first response of the intrinsic immune system, whereas MYELOID-DERIVED SU is associated with immunosuppression in the tumor microenvironment. TUMOR and TUMOR-ASSOCIATED M are directly linked to tumor progression and metastasis, whereas ANGIOGENESIS is a key process in tumor growth and spread.
Hot research and future prospects of TANs The following aspects of current research on tumor-associated neutrophils (TANs) will be discussed: the origin of TANs, keyword hotspots in the field of TANs research, the anti-tumor and pro-tumor functions of TANs, and the pathogenic factors underlying the polarization and functional differences of TANs. Neutrophils, the most abundant polymorphonuclear leukocytes ( ), are crucial for the innate immune system and have a significant role in tumor microenvironments (TME) ( ). Originating from granulocyte-monocyte progenitors (GMPs) in the bone marrow ( ), neutrophils mature and are mobilized into the bloodstream by Granulocyte colony-stimulating factor (G-CSF) ( ). TANs result from neutrophil development and infiltration into the TME, primarily controlled by the chemokine receptor type (CXCR2) axis ( ). In the TME, various cellular constituents release CXCR2 ligands, such as CXCL1-8, creating a chemotactic gradient that guides neutrophils to the tumor ( , ). Kyle J. Eash et al. have shown that CXCR4 and CXCR2 are essential for controlling neutrophil release, with only fully differentiated neutrophils expressing CXCR2 gaining entry into the bloodstream and subsequently infiltrating target tissues ( , ).Additional studies corroborate that SMAD4 deletion promotes the recruitment of TANs through the CXCR2 axis ( ). SMAD4 deletion promotes colorectal cancer (CRC) expression of C-C Motif Chemokine Ligand 15 (CCL15) and recruits the CCR1 TAN (CCL15-CCR1 axis) with arginase-1 (ARG-1) and matrix metalloproteinase 9 (MMP-9) activities, thereby forming a pre-metastatic niche for disseminated tumor cells (e. g., in the lungs) ( , , ). Interleukin-8 (IL-8), overexpressed in multiple cancer types, recruits neutrophils to the TME through CXCR1 and CXCR2, influencing TAN formation ( – ). Jing He et al. showed that IL-8 secretion, induced by METTL3 disruption, promotes TANs recruitment and regulates tumor growth ( ). Yang’s research highlights IL-8’s role in TANs recruitment and JAG2 expression, and the blockade of CXCR2 signaling reduces tumor growth and TANs numbers while enhancing CD8+ T cell activity ( ). Collectively, these studies underscore the pivotal role of the CXCR2-IL-8 axis in mediating the recruitment of TANs within the TME. Metastatic tumors, on the other hand, induce chemotaxis of circulating neutrophils by secreting large amounts of G-CSF; these recruited neutrophils are mostly immature and immunosuppressive, promoting cancer metastasis ( ).Actually, only fully differentiated neutrophils expressing CXCR2 are permitted to enter the Circulatory system and subsequently infiltrate the corresponding tissues. G-CSF has been demonstrated to facilitate the proliferation of neutrophil precursor cells, resulting in the expansion and infiltration of immature neutrophils in the peripheral blood. This phenomenon does not contravene the complete differentiation of CXCR2. Myeloid-Derived Suppressor Cells (MDSCs) is a term used to describe a population of myeloid-derived non-lymphoid immunosuppressive cells that are enriched in cancer patients ( ). G-MDSCs, which share surface markers such as CD11b with TANs, have been observed to differentiate into CD11b+/CD66b+ TANs in gastric cancer, a process linked to immunosuppression and tumor metastasis ( ). 
It may suggest that when the body is in a state of urgency, G-MDSCs, originating from the expansion of immature myeloid cells (IMCs) in the bone marrow, migrate to peripheral tissues where they are transformed into TANs by cytokines like TGF-β present in the TME. Myeloid-related proteins (MRPs), specifically S100A8 and S100A9, are implicated in neutrophil migration, with high expression levels observed in the TME and pre-metastatic niche ( , ). The spleen is also a significant source of TANs, which contribute to tumor progression by mobilizing immature myeloid cells that differentiate into tumor-associated macrophages (TAMs) and TANs ( , ). These cells promote tumor growth and metastasis through cytokine secretion. Although the current study has not directly demonstrated the impact of splenic regulation of TANs on tumor therapy, it does highlight the critical role of the spleen in tumor-associated immune cell generation and demonstrates the importance of the spleen as a potential therapeutic target. The transition of neutrophils into N2 TANs is intimately linked to their role in facilitating tumor growth, irrespective of whether this occurs via the bone marrow-circulating-TME axis or the splenic route. Our keyword analysis from the existing literature highlights ‘chemokine’ and ‘immune evasion’ as prominent terms, underscoring the pivotal role of TANs in tumor immune evasion. Although the number of studies on the induction and development of N1 TANs is relatively limited, the phenomenon of immunosculpting or immunoediting – defined as the crosstalk between immune cells and tumor cells - indicates the potential for such interactions to alter tumor biological phenotypes. This concept suggests that the limited research on N1 TANs may be overlooking a crucial aspect of how these cells contribute to the dynamic immune-tumor interface ( ). It has been demonstrated that tumor cells can influence the secretion of molecules by neutrophils, which in turn promote tumor growth. To illustrate, breast cancer cells secrete GM-CSF, which induces neutrophils to produce oncostatin M, a protein that boosts VEGF production and cancer cell invasion ( ). However, under effective immunotherapy, there is an increase in the number of neutrophils present in tumors, accompanied by the expression of interferon-stimulated genes (ISGs), which exhibit antitumor functions. The transcription factor IRF1 in neutrophils is pivotal for an efficacious antitumor response; its absence precludes the efficacy of immunotherapy ( ). Furthermore, a study identified a crosstalk between tumor-associated neutrophils (TANs) and CRC cells through the AGR2-CD98hc-xCT axis, which enhances CRC cell migration and creates a feedback loop driving metastasis ( ). These results point to the potential of TANs in cancer therapy, suggesting that they can be mobilized against cancer cells rather than simply promoting tumor growth. Therefore, strategies to modulate TANs may be a new way to improve the efficacy of cancer immunotherapy. As illustrated in , the relationship between TANs and “Cancer” is one of the most rapidly evolving areas of research in recent years. As a critical component of the tumor microenvironment, TANs exert a profound influence on tumor progression and metastasis ( – ). 
TANs have been demonstrated to exert a regulatory influence on tumor growth, with their presence observed in a wide range of solid tumors, including metastatic melanoma ( ), bronchoalveolar carcinoma ( ), renal carcinoma ( ), head and neck squamous cell carcinoma (HNSCC) ( ), pancreatic cancer ( ), and gastric cancer ( ), has been identified as a marker of a poor prognosis in a number of clinical and laboratory studies. In the context of these specific types of cancer, neutrophils have been observed to exhibit tumor-promoting properties that may potentially be harmful to the host. Among these, hepatocellular carcinoma, pancreatic cancer, and gastric cancer have emerged as key diseases in our identified hot research areas. The term ‘TME’ has been referenced 85 times in the keywords over the past fifteen years, which serves to illustrate its significance. TME exerts control over neutrophil recruitment through the operation of specific molecular mechanisms. On the other hand, the accumulation of TANs in TME is intimately linked to tumor invasiveness and metastatic progression. Song et al. revealed that in hepatocellular carcinoma(HCC)-TME, cancer associated fibroblasts (CAF) -derived cardiotrophin-like cytokine factor 1 (CLCF1) increased the paracrine secretion of CXCL6 and TGF-β in tumor cells, thereby promoting the infiltration and polarization of TANs ( ). In clinical samples, upregulation of the CLCF1-CXCL6/TGF- β axis was strongly associated with the emergence of cancer stem cells, increased “N2”-polarised TANs, high tumor stage and poor prognosis ( ). A transcriptional study in NSCLC has identified a TANs cluster characterized by overexpression of high mobility group box 1 (HMGB1). This cluster is hypothesized to interact with the TME via HMGB1-TIM-3 interactions, potentially suppressing antitumor immunity and facilitating immune evasion through the GATA2/HMGB1/TIM-3 signaling axis ( ). These findings collectively indicate that TANs and TME exhibit multifaceted roles in relation to one another and that they may potentially share common pathologic mechanisms across diverse cancer types. The ratio of tumor-infiltrating neutrophils to lymphocytes (NLR) is a valuable indicator for assessing the prognosis of cancer patients, reflecting the immune status in the TME, which is also one of the hottest keywords we have found. A higher neutrophil-to-lymphocyte ratio (NLR) is associated with a poorer prognosis in numerous types of cancer ( , – ). In a study of uroepithelial carcinoma of the bladder, the presence of neutrophils and NLR were associated with high-grade uroepithelial tumors, TANs were associated with tumor grade and stage, and TALs (especially CD8 T cells) and NLR were more likely to be associated with progression of tumor invasion in this study ( ). Chen et al. demonstrated that a low N1/N2 ratio was associated with poorer tumor differentiation, easier lymph node metastasis, and a higher TNM stage ( ). Conversely, a high N1/N2 ratio was identified as an important prognostic indicator for overall survival (OS) and recurrence-free survival (RFS). Additionally, tumor-associated N1/N2 neutrophils exhibited an inverse correlation with tumor-infiltrating CD8+ T cells and Tregs. In conclusion, the inverse correlation between TANs and lymphocytes may facilitate a deeper understanding of the immune system and its functioning, and thus merits further investigation. The anti-tumor and pro-tumor mechanisms of TANs have now been more widely demonstrated. 
Firstly, it is established that tumor-associated neutrophils (TANs) can directly kill tumor cells through self-exposure and cytotoxic effects ( ). It is noteworthy that this killing effect is associated with local hypoxia but not with T cells, TANs recruitment is reduced after hypoxia is relieved and TANs under these conditions is more capable of killing tumor cells ( ). TANs also have indirect anti-tumor effects, as they can stimulate adaptive anti-tumor immune responses by promoting the recruitment of other immune cells and having antigen-presenting potential themselves ( , ). Vono et al. demonstrated that neutrophils isolated from vaccine-draining lymph nodes of rhesus monkeys exhibited HLA-DR expression and were capable of presenting vaccine antigens to autologous antigen-specific memory CD4+ T cells in vitro ( ). This suggests that neutrophils may function as antigen-presenting cells (APCs), leveraging their abundance in the immune system to potentially regulate antigen-specific T cell responses. Neutrophils can recruit and activate T cells by secreting cytokines such as TNFα, and histone G promotes T cell proliferation and cytokine production ( , ). For instance, TANs secrete human mast cell chymotrypsin (HC) and human neutrophil histone G (hCG), both of which readily cleave two interleukin-1 (IL-1)-associated alerting proteins, interleukin-18 (IL-18) and interleukin-33 (IL-33), as well as the cytokine interleukin-15 (IL-15), which is important for T-cell homeostasis ( ). TANs also kill tumor cells by generating reactive oxygen species (ROS), with hypochlorous acid (HOCl) playing a major role in recognizing the surface of target cells and mediating tumor cell lysis by a mechanism dependent on leukocyte function-associated antigen 1 (LFA-1) ( ). In addition, the distinctive adhesion pathway mediated by the upregulation of CD11b/CD18 on activated neutrophils allows these cells to adhere to the vascular endothelium and form a sub-neighborhood microenvironment, which allows for the local aggregation of oxidants and proteolytic enzymes in concentrations sufficient to cause endothelial damage and matrix degradation ( , ). Another interesting study have shown that neutrophil-produced H2O2 activates transient receptor potential cation channels (TRPM2), resulting in the uptake of lethal levels of calcium ions by tumor cells ( ). Furthermore, TRPM2 expression is up-regulated in cancerous tissues, making these cells more susceptible to the cytotoxic effects of neutrophils ( ). In addition to ROS toxicity, TANs also induce tumor cell death by promoting the expression of nitric oxide synthase (iNOS) and the release of nitric oxide (NO) via hepatocyte growth factor (HGF) ( ). Notably, superoxide itself is not directly involved in cell killing; instead, catalase (which converts H2O2 to H2O and O2) completely inhibits cell killing ( ). In addition, TANs can directly kill tumor cells via antibody-dependent cytotoxicity (ADCC), which is achieved by neutrophils through the expression of Fc receptors that mediate ADCC and may mechanically disrupt tumor cell membranes through interactions with signal-regulated protein α (SIRP α ) and CD47 ( , ). This phenomenon has been found in a variety of cancers (including non-Hodgkin’s lymphoma, breast cancer and B-cell lymphoma) ( – ). In a mouse model of cervical adenocarcinoma, TANs secrete proteases that induce tumor cell detachment from the basement membrane, thereby inhibiting tumor growth and metastasis. 
Despite the evidence from these studies indicating that TANs have an anti-tumor function, neutrophils are primarily known to have an immunosuppressive effect ( ). Once TANs are activated within the tumor microenvironment, they significantly enhance the inflammatory environment and drive tumor progression through a series of complex mechanisms. The release of large quantities of interleukin-8 (IL-8) by inflammatory cells has two main effects. Firstly, it promotes the survival of TANs, and secondly, it attracts more neutrophils to accumulate at the tumor site, thus exacerbating the inflammatory response ( ). The upregulation of IL-8 and neutrophil enrichment in KRAS-mutant CRC tissues has been demonstrated, which suggesting that exosomes may transfer mutant KRAS to recipient cells and trigger increases in IL-8 production, neutrophil recruitment and formation of NETs, eventually leading to the deterioration of CRC ( ). In contrast to the anti-tumor function of ROS described above, TANs are able to increase tissue sensitivity to carcinogens by releasing ROS and RNS and mediating genotoxicity. The research conducted by Stefanie K. Wculek and colleagues indicates that neutrophils amplify the genotoxicity of ethyl carbamate in lung cells through the generation of ROS, and this process directly facilitates tumor transformation, with ROS-dependent DNA damage being temporally confined to ethyl carbamate exposure and distinctly unrelated to extensive tissue damage or inflammation ( ). In 2019, Veronika Butin-Israeli identified a novel mechanism of genotoxicity that, interestingly, does not rely on ROS. In contrast, TANs facilitate the formation of double-strand breaks (DSBs) in epithelial DNA through the release of pro-inflammatory microRNA particles (miR-23a and miR-155), and the accumulation of DSBs in injured epithelial cells subsequently results in genomic instability, impaired tissue healing, and the promotion of tumorigenesis ( ). Prostaglandin E2 (PGE2) or neutrophil elastase (NE) can directly promote the proliferation of tumor cells. A. McGarry Houghton have demonstrated that NE induces degradation of insulin receptor substrate-1 (IRS-1) in tumor cell endosomes, as NE degraded IRS-1, there was increased interaction between phosphatidylinositol 3-kinase (PI3K) and the potent mitogen platelet-derived growth factor receptor (PDGFR), thereby skewing the PI3K axis toward tumor cell proliferation ( ). The release of MMP-9 is associated with the promotion of tumor angiogenesis and plays an important role in extracellular matrix(ECM) remodeling and membrane protein cleavage ( ). A study of prostate cancer has revealed the molecular mechanism by which MMP-9 regulates tumor cell invasion and metastasis. It has been indicated that MMP-9 enhances prostate cancer cell invasion by specifically degrading serpin protease nexin-1 (PN-1) and deregulating the inhibitory effect of PN-1 on urokinase plasminogen activator (uPA) ( ). Whereas in the study by Lukas et al. neutrophil-derived MMP-9 was found to mediate the release of larger VEGF isoforms not through cleavage but rather, and they demonstrated that MMP-9 was able to release biologically active VEGF165 from the ECM of colon cancer cells via cleavage of acetylheparin sulfate, which promotes tumor angiogenesis ( ). In addition, the immunosuppressive capacity of neutrophil subpopulations has all been associated with tumorigenesis ( ). 
In conclusion, TANs play an important role in several key aspects of tumor biology: malignant transformation, progression, extracellular matrix remodeling, angiogenesis, cell migration, and immunosuppression. These effects are achieved by degrading the extracellular matrix, inhibiting immune responses, stimulating tumor cell proliferation, increasing tumor metastatic potential, and promoting angiogenesis, which in turn drives tumor progression. The dual effect of TANs can also be observed in another keyword: neutrophil extracellular traps (NETs). On the one hand, TANs participate in anti-tumor immune responses by releasing NETs. NETs are capable of capturing and confining tumor cells, and they contain antimicrobial proteins and enzymes (e.g., myeloperoxidase (MPO) and neutrophil elastase (NE)) that directly kill tumor cells ( , , ). Moreover, NETs facilitate tumor immune surveillance by stimulating dendritic cells and augmenting T cell-mediated immune responses ( , ). Conversely, NETs may also be involved in tumor progression by promoting tumor cell invasion and migration. The reticular structure of NETs may provide a physical adhesion platform for circulating tumor cells (CTCs), thereby promoting tumor cell colonization and metastasis in distal organs ( ). Moreover, the enzymes present within NETs are capable of degrading the extracellular matrix, thereby facilitating the spread of tumor cells ( , ). It is noteworthy that the oncogenic role of NE has been demonstrated in lung, prostate, and colon cancer ( , – ). With respect to tumor immune evasion, NETs may facilitate tumor cell evasion of immune surveillance by forming a physical barrier that impedes immune cell recognition and attack. Additionally, NET release may modify chemical signals within the tumor microenvironment, influencing immune cell polarization and functionality and, consequently, the equilibrium of the tumor immune response ( ). As previously discussed, the contrasting roles of N2 TANs in promoting tumor formation and N1 TANs in exerting antitumor effects have been delineated with reasonable clarity. However, the underlying factors that mediate these dichotomous effects of TANs remain unclear; investigating them will constitute a pivotal research focus in future studies of TANs. The hypothesis that TANs can be classified into N1/N2 types has been corroborated by further research. The study by Mareike Ohms et al. succeeded in polarizing human neutrophils into N1/N2 types in vitro and showed functional and phenotypic differences between neutrophils cultured in the presence of N1- or N2-polarizing cocktails ( ). In recent studies, researchers have identified a number of molecules that can be used to differentiate between N1 and N2. The N1 markers include intercellular cell adhesion molecule-1 (ICAM-1), inducible nitric oxide synthase (iNOS), C-C motif ligand 3 (CCL3), and TNF-α, among others; the N2 markers include CCL17, CCL2, arginase (Arg), CCL5, and vascular endothelial growth factor (VEGF) ( , ). Transforming growth factor-β (TGF-β) signaling within the tumor microenvironment (TME) has been implicated in the promotion of a pro-tumorigenic neutrophil phenotype (N2) ( ). In contrast, type I interferon (IFN) signaling or the blockade of TGF-β signaling has been shown to direct neutrophils toward an antitumor phenotype (N1) ( ). The significance of these two pivotal inducing factors is further underscored by the data presented in .
Moreover, ongoing research continues to uncover additional factors that modulate the polarization and functional profile of TANs. For instance, Chung et al. revealed that Smad3 activation in TANs is associated with a predominant N2 polarization status and poor prognosis in non-small cell lung carcinoma (NSCLC) patients, and they proposed CD16b/iNOS and CD16b/CD206 as markers to identify human N1 and N2 TANs ( ). This discovery may resolve the inability to distinguish the two subtypes by surface markers, but further experiments are required to validate the conclusion. Luo et al.'s study disclosed that the expression of N2-specific marker genes was significantly reduced in TANs following pretreatment with 4-phenylbutyric acid. This observation suggests that the pro-tumorigenic capabilities of TANs may be diminished when endoplasmic reticulum stress is not activated; it is therefore plausible that activation of endoplasmic reticulum stress is implicated in the phenotypic shift of TANs toward the N2 state ( ). Wang et al. showed that HCC cell-derived CXCL9 promotes N1 polarization of neutrophils in vitro, while the specific CXCR3 inhibitor AMG487 significantly blocked this process ( ). These findings provide further evidence for the dual effects of TANs and suggest that TANs may directly or indirectly affect patient survival and prognosis. It would be beneficial for future studies to conduct comprehensive analyses covering multiple cancer types in order to explore the heterogeneity of the phenotypic distribution of TANs, which would deepen our understanding of the functional and clinical relevance of TANs. Although both typologies of TANs are now generally recognized, recent studies suggest that a simple dichotomy of immune cells in cancer may not provide a comprehensive description of TANs. A study utilizing cytometry by time-of-flight (CyTOF) analysis has demonstrated the existence of at least seven subpopulations of mature neutrophils that differ in surface markers and function in individuals with cancer ( ). It can be posited that different phenotypes of mature neutrophils may exhibit a variety of anti-tumor and pro-tumor effects in the context of TANs. A unique subset of HLA-DR-expressing TANs with anti-tumor capacity has also been detected in early-stage human lung cancer; this subpopulation, which exhibits characteristics of both granulocytes and antigen-presenting cells such as dendritic cells and macrophages and has been termed 'hybrid TANs', is capable of effectively inducing T-cell responses, encompassing both tumor antigen-specific and non-specific immunity ( , ). Notably, the number of such hybrid TANs was found to be decreased in large tumors, which appeared to be due to an associated hypoxic TME ( ). Given the considerable heterogeneity and plasticity of TANs within the TME, accurate subpopulation analysis of TANs has become an important research focus ( ). However, it is important to note that neutrophils in cancer are not limited to TANs but also include numerous subpopulations in the bone marrow and circulation ( ). To date, the extensive heterogeneity of neutrophils in cancer remains a topic worthy of further study. The interaction of TANs with a variety of other cell types in the tumor microenvironment, including TAMs, platelets, natural killer (NK) cells and T cells, forms a complex network that influences tumor development and metastasis.
The terms "TAMs" and "T cell" are both significant keywords highlighted on the keyword hotspot map. Although there is no direct evidence that TANs and TAMs interact via MPO and MMR, it has been shown that a large MPO-positive neutrophil infiltrate is found in colorectal ( ) and lung cancers ( ), together with high levels of macrophage mannose receptor (MMR) expression by M2-like macrophages ( ). MPO binding to MMR induces secretion of reactive oxygen intermediates, IL-8, TNF-α, and GM-CSF in chronic inflammatory environments (e.g., rheumatoid joints) ( ). This may suggest that TANs and TAMs co-exist in a specific way in the tumor microenvironment, jointly promoting an inflammatory response there. A study revealed a correlation between an elevated neutrophil-to-lymphocyte ratio (NLR) and elevated CCL2 expression in tumor tissues ( ). Additionally, conditioned medium from TANs, as well as recombinant CCL2 and CCL17, was observed to enhance the migration of macrophages derived from HCC patients or mice. These findings collectively indicate that TANs and TAMs interact through chemokines such as CCL2 and collectively promote tumor growth and metastasis. Interestingly, the recruitment of TAMs by TANs to the appropriate regions in turn regulates the function of TANs, similar to the interaction between neutrophils and macrophages in inflammatory environments. In addition to directly inhibiting T cells via ROS, iNOS, and mediators such as ARG1, TANs can also inhibit T cell anti-tumor immunity by recruiting TAMs and regulatory T cells (Tregs) to remodel the TME via CCL17 and CCL2 ( ). TANs further suppress the anti-tumor response by expressing programmed cell death-ligand 1 (PD-L1) to inhibit T cells; conversely, blockade of PD-1/PD-L1 reduces the immunosuppression of T cells and enhances their infiltration and activation. Zhang et al. found that after tumorigenesis, TANs displayed an N2-like state and secreted the cytokine IL-10 to promote the activation of c-Met/STAT3 signaling, while the transcription factor STAT3 increased the level of PD-L1 in tumor cells and promoted neutrophil polarization toward an N2-like state, leading to a positive feedback loop among TANs, IL-10, STAT3, and PD-L1 ( ). Inhibiting one of the steps in this positive feedback pathway may prove beneficial for the treatment and prognosis of the tumor. Michaeli et al. reported that TANs promote immunosuppression by strongly inducing CD8+ T cell apoptosis, which leads to tumor progression, and that the TAN-induced CD8+ T cell death mechanism involves the TNF signaling pathway and NO production ( ). In contrast, it has been reported that TANs can promote CD8+ T-cell recruitment and activation by producing T-cell chemoattractants (e.g., CCL3, CXCL9, and CXCL10) and proinflammatory cytokines (IL-12, TNF-α, and GM-CSF) ( ). How to enhance the production of T cell-promoting factors by TANs while reducing the production of suppressors remains a topic of discussion. In the early stages of lung cancer, crosstalk between TANs and activated T cells resulted in significant upregulation of the co-stimulatory molecules CD54, CD86, OX40L, and 4-1BBL on the surface of neutrophils, which promoted T cell proliferation in a positive feedback loop. This result suggests that the upregulation of co-stimulatory molecules on TANs enhances T cell immunity, whereas the upregulation of PD-L1 suppresses T cell responses ( ).
Modulating specific signaling molecules in the microenvironment to direct TANs toward a phenotype that promotes T cell immunity, or monitoring changes in surface molecules of TANs during tumor treatment, could serve as valuable strategies for assessing therapeutic efficacy and predicting alterations in the tumor's immune response. Platelets are among the first elements to appear in the inflammatory process that accompanies the development of cancer; no influx of monocytes, lymphocytes, dendritic cells, or NK cells is observed in the early stages of metastasis formation ( ). Neutrophil recruitment to the tumor microenvironment appears to depend on platelet activation, as this process does not occur when platelet function is impaired or when platelet numbers are reduced. The function of platelets in the formation of TANs can be considered in two distinct ways. Firstly, platelets release the chemokine CXCL5/7, which binds to CXCR2 on the surface of neutrophils, thereby activating these cells and driving their migration ( ). Secondly, platelets serve as a source of TGF-β, which plays a pivotal role in the development of N2 TANs ( , ). Recent studies have also revealed a potential inhibitory effect of TANs and neutrophils on natural killer (NK) cell function during tumor development. The study by Sun et al. showed that TANs inhibit the cytotoxicity and infiltration capacity of NK cells through the PD-L1/PD-1 axis and regulate the expression of PD-L1 and PD-1 through the GCSF/STAT3 and IL-18 pathways, revealing the effect of neutrophils on NK cell dysfunction in the tumor-bearing state and its molecular mechanisms ( ). Yang et al. revealed that tumor-associated neutrophils can influence macrophages, NK cells, and T cells through the IL16, IFN-II, and SPP1 signaling pathways ( ). The main mechanism may be the release of nitric oxide (NO) and ROS and arginase 1 (ARG1) activity by TANs, which inhibit NK cytotoxicity and T cell proliferation ( , ). The elucidation of these mechanisms provides new insights into the complexity of immunosuppression in the tumor microenvironment, and future studies will need to further explore how intervening in these pathways can enhance the anti-tumor activity of NK cells or help NK cells escape suppression. Beyond the extensive crosstalk with the aforementioned cells, TANs are also actively involved in the recruitment of B cells to the TME. Merav E. Shaul et al. clarified that TNFα is the main cytokine in TAN-mediated B cell chemotaxis, that recruitment of CD45+B220+CD138- splenic B cells by TANs in vitro leads to B cell phenotypic modulation, and that in vitro experiments confirmed the ability of TANs to induce B cell differentiation into IgG-producing plasma cells, a process dependent on a B-cell activating factor (BAFF) contact mechanism on the surface of TANs ( ). Interestingly, TNFα tends to be associated with N1-type TANs, which may point to a novel immunoregulatory network among TNFα, N1-type TANs, and B cells, in which the interaction between TANs and B cells is critical for the formation of the tumor immune response. This interaction, together with other immune cell types such as T cells and dendritic cells, may constitute a complex network of immune responses that collectively influence tumor progression and patient response to therapy. Increasing evidence suggests that neutrophils play an active role in promoting tumor development.
However, clinical application remains limited, since systemic targeting of TANs must avoid neutropenia ( ). Blockade of the neutrophil programmed cell death 1 (PD-1)/PD-L1 immune checkpoint pathway, together with targeted binding of CXCR2, CXCR4, G-CSF, TGF-β, and related molecules to inhibit the recruitment, expansion, and polarization of tumor neutrophils, may provide ideas for neutrophil-targeted tumor therapies. In view of the above discussion, we propose a series of prospective research directions for the investigation of TANs: (1) elucidating the mechanisms that induce the polarization of TANs from the N2 type to the N1 type during their chemotactic migration; (2) investigating the shared pathological mechanisms between TANs and the TME across a spectrum of cancers; (3) determining whether TAN subtypes vary among patients with different cancers; (4) identifying additional markers to differentiate between TAN subtypes, addressing the complexity and heterogeneity of TANs; and (5) clarifying the intricate mechanisms of TAN interactions with other tumor-associated cells, such as TAMs, tumor-associated platelets, and T cells. These research avenues may provide insights into the role of TANs in tumorigenesis and inform the development of novel therapeutic strategies. We anticipate that subsequent research will leverage the complete antitumor potential of TANs and integrate existing effective antineoplastic therapies with targeted neutrophil interventions, offering a promising direction that could result in safer and more efficacious treatment strategies.
Limitations
This study is the first to use bibliometric visualization to analyze studies related to TANs over the past 20 years; nevertheless, it inevitably has some limitations. First, the data used in this study were drawn only from the WOSCC database, excluding other databases such as PubMed, the Cochrane Library, and Google Scholar. Despite the comprehensiveness and reliability of WOSCC, some relevant literature may therefore be missing. Only English-language literature was included, which may bias the results, and only literature published up to March 21, 2024 was covered, so subsequent publications are not reflected. Secondly, the data may be inconsistent in several ways; for example, the same institution may have used different names at different times, and the same author may have published papers in the field while affiliated with different institutions. Finally, although this study provides a comprehensive overview of the TANs research field, the keyword analysis has limitations: it relied primarily on the titles and abstracts of the literature, which may not fully capture the depth of information in the full texts, and the minimum-citation threshold may have excluded some emerging but important research directions. Future research could address these limitations and utilize more comprehensive data analysis methods to provide deeper insights.
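To make this threshold concern concrete, the following minimal sketch (our illustration; the keyword lists, the two-occurrence cutoff, and all values are hypothetical and not taken from this study's data) shows how a minimum-occurrence filter of the kind used in co-occurrence mapping tools can silently drop rare but emerging keywords:

```python
from collections import Counter

# Hypothetical keyword lists extracted from titles/abstracts (not real study data).
records = [
    ["tumor-associated neutrophils", "NETs", "immunotherapy"],
    ["tumor-associated neutrophils", "N2 polarization"],
    ["NETs", "metastasis"],
    ["tumor-associated neutrophils", "single-cell sequencing"],  # emerging topic
]

MIN_OCCURRENCES = 2  # analogous to the minimum-threshold setting discussed above

counts = Counter(kw for record in records for kw in record)
kept = {kw for kw, n in counts.items() if n >= MIN_OCCURRENCES}
dropped = set(counts) - kept

print("kept:", sorted(kept))        # frequent, established topics survive
print("dropped:", sorted(dropped))  # rare but possibly emerging topics are lost
```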
Conclusions
In this study, we used bibliometrics and visual analytics to conduct a comprehensive review and analysis of research on tumor-associated neutrophils (TANs) between 2000 and 2024. Drawing on the Web of Science Core Collection (WOSCC) database, we systematically mapped the global trends in TANs research, identifying key publications, core authors, and research institutions, as well as research hotspots and frontiers in the field. Frontiers in Immunology and Cancers are influential journals in the field, and Fridlender, Zvi G. is a leading author. The fields of immunology, oncology, and inflammation are currently experiencing a surge in interest, with the extensive heterogeneity of TANs, the pro-tumorigenic function of the N2 type and its relationship with the TME and various cancers, and crosstalk with other immune cells emerging as popular avenues for future research. This study elucidates the basic scientific knowledge of TANs and their relationships with tumors and other immune cells, and it also provides important clues to research trends and hotspots. We hope that this study will promote academic exchange in the field of TANs research and help researchers better grasp the current trends in the field.
Structural Determination, Biological Function, and Molecular Modelling Studies of Sulfoaildenafil Adulterated in Herbal Dietary Supplement | f5860454-2e3a-4b73-a71f-430cb7ec3a9e | 7916901 | Pharmacology[mh] | Herbal medicines or dietary supplements have been popularized and advertised as natural and safe for human consumption. Nevertheless, some herbal drugs are contaminated, including with synthetic chemical compounds used to adulterate their marketed products in order to enhance the effects of their products, in which it is claimed that they are able to help treat certain chronic ailments and diseases . There have been numerous recent studies that have reported about herbal drugs for the treatment of erectile dysfunction to enhance male sexual performance . Those published adulterants include the synthetic phosphodiesterase-5 (PDE5) inhibitors that do not only include FDA-approved drugs , but also their synthetic analogues, i.e., homosildenafil , thiohomosildenafil, thiosildenafil , and thiomethisosildenafil through minor structural modifications. However, the presence of these drug analogues can cause some serious health risks and unexpected side-effects for patients, especially when their uses have not been clinically proven to be safe, resulting in unpredictable adverse effects. Numerous analogues of synthetic PDE5 inhibitors, for example sildenafil, vardenafil, and tadalafil, have been studied in the cases . It has been observed that the development of hypertension has been associated with endothelial dysfunction characterized by oxidative stress and by decreasing endothelium-derived relaxing factors, such as the consumption of nitric oxide (NO) . NO is synthesized through the activity of nitric oxide synthase (NOS) enzymes by expressing in the endothelium of arteries and the neuron cells . Current reports have proposed that, in general, analogues of sildenafil display a variety of cellular functions, including muscle relaxation, anti-inflammation, and signal transduction , in addition to displaying beneficial effects of sexual endothelial dysfunction and pulmonary hypertension. According to this situation, when sexual stimulation causes a local release of NO, the synthetic inhibitory effects of PDE5 creates a retaining intracellular cyclic guanosine monophosphate (cGMP) levels, resulting in muscle relaxation and an inflow of blood into the corpus cavernosum penis. Almost all herbal supplements were detected to contain adulterated products with sildenafil analogues, which can be obtained over the counter at regular drugstores. One of the most powerful techniques for structural determination of the isolated compounds in herbal extracts is NMR ( 1 H- and 13 C-NMR) spectroscopy . Furthermore, these synthetic compounds have also been investigated and presented structural similarities to sildenafil by means of UV spectroscopy, liquid chromatography (LC), high-resolution mass spectroscopy (MS), and X-ray structure analysis . However, in many cases, there is no information available regarding the potential toxicological or pharmacological effects on the public. Here, we demonstrated that sulfoaildenafil, a thioketone analog of sildenafil, has been detected as an adulterant in herbal aphrodisiacs. The effects of the isolated compound have been focused for the first time from both of the structural characteristics and the experimental point of view performance in vitro. 
The present study was designed to determine the effects of sulfoaildenafil on human umbilical vein endothelial EA.hy926 cells, focusing on its toxicity, NO-release levels, and the regulation of gene expression involved in NO synthesis and the PDE5 inhibitory effect. Finally, this bulk material, which displays structural similarity to sildenafil, was analyzed for PDE5 inhibitory potential using theoretical calculations.
2.1. Structural Characterization
Through the HPLC technique, the extracted solution from an herbal supplement was analyzed and purified into nine fractions, as shown in . Among all the fractions, the dominant peak of fraction-7 (F7) was isolated as pale-yellow crystals after recrystallization from dimethylformamide and diethyl ether. Although the present work originally aimed at structural determination using single-crystal X-ray diffraction analysis, routine conventional measurement of the single-crystal sample proved difficult owing to weak crystal structure refinement results. Therefore, the obtained compound was characterized structurally by comparing 1H NMR and 13C NMR spectroscopy and mass spectrometry. shows the NMR spectrometry of F7. The 1H and 13C NMR spectra of this compound are shown in . In brief, the 1H NMR spectrum revealed the characteristic signals of the dimethylpiperazine ring at δH 1.05 (d, J = 6.4 Hz, 6H). The methylene protons of the piperazine ring give signals at δH 3.64 (d, J = 9.4 Hz, 2H) and δH 1.90 (t, J = 10.9 Hz, 2H), which are characteristic of the deshielded equatorial protons of a rigid six-membered ring . The 13C NMR spectra indicated five primary carbons, five secondary carbons, five tertiary carbons, and eight quaternary carbons . Furthermore, the structural assignment was confirmed by distortionless enhancement by polarization transfer (DEPT) 90°/135° NMR and 1H-13C HSQC, as shown in . The total ion chromatogram and product ion spectrum of the F7 compound are shown in . Product ions at m/z 448, 393, 327, 315, 299, 113, and 99 were observed in the mass spectrum . The fragment ion at m/z 448 represents a moiety stemming from the decomposed piperazine ring containing a secondary nitrogen, which was observed only for sulfoaildenafil. The ion transition from m/z 505 to 393 reflects the loss of the piperazine moiety from the molecule. The product signal at m/z 299, the base peak, arises from the loss of the ethyl group from the fragment at m/z 327. The molecular ion of F7 was also observed at m/z 505 by ESI-TOF/MS analysis, corresponding to the molecular formula C23H33O3N6S2 [M + H]+. As a result, the isolated F7 compound was clearly identified as sulfoaildenafil, consistent with previous studies .
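As a quick arithmetic cross-check of the reported ions (a sketch added for illustration; the element masses are standard monoisotopic values, and the fragment assignments follow the text above rather than any additional data), the protonated molecule C23H33O3N6S2+ indeed falls near m/z 505, and the nominal neutral losses to the reported fragments can be tabulated directly:

```python
# Monoisotopic masses of the relevant elements (standard values).
MONO = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}

def monoisotopic(composition):
    """Sum the monoisotopic element masses for a composition dict."""
    return sum(MONO[element] * n for element, n in composition.items())

# [M + H]+ composition reported for fraction F7 (sulfoaildenafil).
mh_plus = {"C": 23, "H": 33, "O": 3, "N": 6, "S": 2}
print(f"[M+H]+ monoisotopic m/z ~ {monoisotopic(mh_plus):.4f}")  # ~505.2056

# Nominal mass differences between the precursor and the reported fragments,
# e.g. 505 -> 393 corresponds to the piperazine-moiety loss noted above.
precursor = 505
for fragment in (448, 393, 327, 315, 299, 113, 99):
    print(f"m/z {fragment}: nominal neutral loss of {precursor - fragment} Da")
```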
2.2. Effect of Sulfoaildenafil on the Human Umbilical Vein Endothelial Cell Line
NO release triggers vascular endothelial cells through the activity of both the inducible nitric oxide synthase (iNOS) and endothelial nitric oxide synthase (eNOS) enzymes. NO production stimulates cyclic guanosine monophosphate (cGMP) synthesis via the guanylyl cyclase enzyme in endothelial cells, which induces smooth muscle relaxation, vasodilation, and penile erection, respectively . Through a feedback loop, cGMP elevation increases phosphodiesterase type 5 (PDE5) gene expression and enzyme activity, which converts cGMP into GMP in the smooth muscle, leading to decreased penile erection . Sildenafil is an orally active PDE5 inhibitor for the treatment of penile erectile dysfunction . Firstly, the cytotoxicity of sulfoaildenafil was evaluated; more than 80% of EA.hy926 endothelial cells survived at concentrations below 12.5 µg mL−1, as seen in . Thus, a concentration of 10 µg mL−1 was chosen for the further experimental studies.
For NO determination, as shown in a, sulfoaildenafil significantly increased NO release in the concentration range of 1.25–10 µg mL−1 compared with the cell culture medium control (Ctrl). Similarly, sildenafil was found to significantly elevate NO production in endothelial cell lines ( a). As reported in previous literature , sildenafil has been shown to increase NO release in human umbilical vein endothelial cells under insulin-resistance conditions and in EA.hy926 endothelial cell lines. NO release is generated by nitric oxide synthases (iNOS and eNOS) in endothelial cells, or endothelial cells are triggered by an exogenous source such as NO-donor drugs. As expected, sulfoaildenafil, which significantly elevated NO production ( a), up-regulated iNOS and eNOS gene expression in EA.hy926 endothelial cells, in line with sildenafil used as a positive control, as shown in b. Surprisingly, sulfoaildenafil stimulated the upregulation of both iNOS and eNOS genes to significantly greater levels than sildenafil, as illustrated by the double asterisks connected with solid lines in b. Furthermore, sulfoaildenafil also significantly induced PDE5A gene upregulation, as did sildenafil, in comparison with the cell culture medium control . Altogether, these results indicate that sulfoaildenafil, like sildenafil, enhanced NO production through iNOS and eNOS gene expression, which subsequently up-regulated PDE5 gene expression. This is the first study of the biological effects of sulfoaildenafil, a thioketone analogue of sildenafil, on erectile dysfunction in an in vitro experimental approach.
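For readers unfamiliar with how such relative expression levels are commonly quantified, the sketch below illustrates the widely used 2^(−ΔΔCt) (Livak) calculation; the Ct values and the reference gene are hypothetical placeholders introduced for illustration, not data or methods reported by this study:

```python
# Illustrative only: relative gene expression via the Livak 2^(-ΔΔCt) method,
# a common way to report fold changes such as iNOS/eNOS/PDE5A upregulation.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_ctrl
    return 2 ** (-dd_ct)

# e.g., a target gene vs a reference gene, treated vs medium control (made-up Cts):
print(f"fold change ~ {fold_change(24.1, 18.0, 26.5, 18.2):.2f}")  # > 1 => upregulated
```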
2.3. Computational Studies
As described above, the characteristics of sulfoaildenafil were revealed using a combination of NMR and mass spectroscopy techniques as well as biological assays. Given that sildenafil is the most well-known PDE5 inhibitor, the model compound sulfoaildenafil, an analog of sildenafil, was used as the active material for molecular docking and molecular dynamics simulation approaches.
2.3.1. Molecular Docking Study
Based on the crystal structure of PDE5, the potential binding activity has been described by subdividing the binding site into three main regions: (i) a metal-binding pocket (M pocket), (ii) a solvent-filled hydrophilic side pocket (S pocket), and (iii) a pocket containing the purine-selective glutamine and hydrophobic clamp (Q pocket) . Here, molecular docking was first performed to predict the bioactive binding modes and affinities of the PDE5 inhibitors on the target protein. It should be noted that all models of the well-known PDE5 inhibitors were found to occupy part of the Q pocket (Gln817 and Leu804) in the immediate vicinity of the binding site with the pyrazolopyrimidinone ring of the inhibitors, suggesting that the above-mentioned drugs can be accommodated in the PDE5 protein and also exhibit PDE5 inhibitory activity . The binding modes were observed at the same site with slightly different binding conformations compared with sildenafil, a common drug used as a PDE5 inhibitor . Each compound shows favorable binding energy, with the results obtained from AutoDock Vina falling in the range of −10.2 to −8.9 kcal mol−1. shows the observed binding affinities and the common amino acid binding residues within 5 Å that were identified to play a key role in the potential activity for PDE5 inhibition (see ). Although a lower estimated binding affinity indicates stronger protein–ligand interactions, the binding energy difference among these complexes is only 0.5~1 kcal mol−1. The interactions of each drug with the potential site of PDE5 were mediated by hydrophilic/hydrophobic interactions, as supported by the findings of previous studies . Combined with the experimental results, the subtle differences in the estimated binding energies led us to further investigate the obtained complexes by comparing sildenafil and sulfoaildenafil using MD simulations.
2.3.2. Molecular Dynamics Simulations
To enhance the configurational space for sampling accessible molecular geometries, 100 ns simulations of PDE5 with and without bound sildenafil or sulfoaildenafil were performed. The structural stability of the proteins, as well as the position of the ligands in the binding site cleft, was monitored using root mean square deviations (RMSD) with respect to the optimized initial structures . Steady oscillation and small fluctuations of the RMSD were observed, indicating that the complexes were stable and underwent few conformational changes during the simulations.
Binding Free Energy Evaluation
To characterize the binding interactions of the complex systems, the relative binding free energies (ΔG binding) obtained from the MM-PBSA protocol were calculated, as listed in . The results showed that sildenafil (ΔG binding = −20.34 kcal mol−1) binds to the PDE5 protein slightly better than sulfoaildenafil (ΔG binding = −15.45 kcal mol−1), with an energy difference of ~5 kcal mol−1. The same tendency in energy values between the MM-PBSA and docking calculations was observed. This decrease in the magnitude of the binding free energy of sulfoaildenafil correlated with unfavorable shifts in (i) the van der Waals (vdW) interaction by 10.77 kcal mol−1, (ii) the intermolecular electrostatic interactions (EEL) by 41.45 kcal mol−1, and (iii) the configurational entropy by 2.38 kcal mol−1. The change in the contribution from the desolvation of non-polar groups (ENPOLAR) is almost zero. The polar solvation free energy (EPS) of sulfoaildenafil is less unfavorable by more than a factor of two relative to sildenafil, being shifted by −44.29 kcal mol−1. Nevertheless, this favorable change in the EPS interaction free energy is not sufficient to fully compensate for the unfavorable shifts in the vdW and EEL terms, so the overall affinity of sulfoaildenafil remains lower than that of sildenafil. It can be highlighted that structural inspection alone may not be sufficient for identifying the key contributions to binding affinity when the effects of the solvation term are taken into account.
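For orientation, the individual terms quoted above fit into the standard MM-PBSA decomposition of the binding free energy (a textbook form of the method, not an equation reproduced from this paper's tables):

$$
\Delta G_{\text{binding}} \;=\; \underbrace{\Delta E_{\text{vdW}} + \Delta E_{\text{EEL}}}_{\text{gas-phase MM energy}} \;+\; \underbrace{\Delta G_{\text{EPS}} + \Delta G_{\text{ENPOLAR}}}_{\text{solvation free energy}} \;-\; T\Delta S .
$$

Within this bookkeeping, the reported shifts for sulfoaildenafil (+10.77 vdW, +41.45 EEL, +2.38 entropy-related, −44.29 EPS, ~0 ENPOLAR, all in kcal mol−1) sum to a net unfavorable change, consistent in sign with its weaker overall ΔG binding.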
The contributions of the essential amino acids to the binding interaction were investigated by calculating the per-residue free energy decomposition. a presents the decomposed per-residue free energies upon binding for each complex system; negative and positive values represent favorable and unfavorable contributions, respectively. According to a, the hydrophobic amino acids (Ile665, Ile768, and Phe796) and one electrically charged residue (Arg667) of PDE5 show more favorable interactions, with stronger binding affinity, for sildenafil than for sulfoaildenafil. On the other hand, sulfoaildenafil showed more favorable contacts with three hydrophobic residues (Leu765, Leu804, and Met816) and one essential neutral amino acid (Gln817) in the active site of PDE5. In addition, we further evaluated the per-residue free energy decomposition of the key binding residues based on the vdW free energy and the sum of the electrostatic (ELE) interactions . Notably, the most favorable contributions to the binding free energy of the sulfoaildenafil-bound system came from both the vdW and ELE decomposition energies, involving Tyr612, Ile813, Met816, Gln817, and Phe820, whereas the vdW energy term was dominant for the sildenafil-bound system, involving Ile665, Ile768, Phe786, and Leu804. This precisely indicates the distinct dynamic interactions underlying the binding modes of sildenafil and sulfoaildenafil on the PDE5 protein.
Hydrogen Bond Analysis
Hydrogen bond formation was analyzed, and interactions with more than a 10% occupancy rate are listed in . Gln817 of PDE5 was shown to be the key interacting residue, with a markedly high hydrogen-bond occupancy toward both inhibitors. Arg667 of PDE5, which showed favorable binding interactions in the decomposition analysis, was found to form hydrogen bonds with sildenafil at a high occupancy rate. On the other hand, sulfoaildenafil is oriented in the potential site through negligible hydrogen bonding with the proton-accepting Ser663, whereas no such proton acceptor was found for sildenafil. b shows the final conformations of the sildenafil- and sulfoaildenafil-bound PDE5 complexes at 100 ns of simulation, with the hydrogen bond-forming residues shown in stick representation. The number of hydrogen bonds between the inhibitors and the potential residues in PDE5 alone helps to explain why the binding free energies are distinctly different from each other.
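As an illustration of how such occupancies are typically computed from a trajectory (a minimal sketch, not the authors' workflow; the file names, the ligand residue name LIG, and the geometric cutoffs are assumptions), one could use MDAnalysis:

```python
# Sketch: ligand-protein hydrogen-bond occupancy from an MD trajectory.
# All file names and selections below are hypothetical placeholders.
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds.hbond_analysis import HydrogenBondAnalysis

u = mda.Universe("pde5_complex.prmtop", "production_100ns.nc")

hbonds = HydrogenBondAnalysis(
    u,
    between=["resname LIG", "protein"],  # restrict to ligand-protein hydrogen bonds
    d_a_cutoff=3.0,                      # donor-acceptor distance cutoff (Angstrom)
    d_h_a_angle_cutoff=150.0,            # donor-H-acceptor angle cutoff (degrees)
)
hbonds.run()

n_frames = u.trajectory.n_frames
# count_by_ids() returns one row per unique (donor, hydrogen, acceptor) triple.
for donor, hydrogen, acceptor, count in hbonds.count_by_ids():
    occupancy = 100.0 * count / n_frames
    if occupancy > 10.0:  # same 10% occupancy threshold as in the text
        print(f"atoms {donor}-{hydrogen}...{acceptor}: {occupancy:.1f}% occupancy")
```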
Dynamic Cross-Correlation Matrix (DCCM) Analysis
To observe the conformational changes of the PDE5 protein upon binding of sildenafil and sulfoaildenafil, DCCM analysis was conducted to evaluate the correlated dynamic motions of residues based on the positions of the Cα atoms of free PDE5 and the ligand-bound complexes. We performed an initial visual inspection of the dynamic maps obtained over the MD simulation period. As illustrated in , the diagonal elements of the correlation maps describe the fluctuations of individual residues, while the off-diagonal elements represent inter-residue correlations (cross-correlations) . The cross-correlation coefficients range from −1 (blue-grey regions) to +1 (red to yellow regions). The correlation map shows an overall positive correlation for free PDE5, confirming that conformational changes occur after ligand binding ( c). After ligand binding, the DCCM maps revealed that both ligands affect the conformation of the PDE5 protein, as illustrated by the changes in dynamic patterns and correlations. Firstly, in the Q pocket regions (residues ~800–820), sildenafil-bound PDE5 showed a more pronounced decrease in positively correlated motion (red arrows in a) than the sulfoaildenafil-bound complex (red arrows, a in b). This result agrees well with the findings of previous studies, which reported more negative cross-correlation coefficients in the protein arising from the external perturbation of small-ligand binding . Consistent with the decomposition energies ( a), sildenafil triggered a change in correlated motion (blue-grey region), as opposed to sulfoaildenafil, in residues around 640~670 (b in ). A decrease in correlated motions was observed within residues 760~790 for both ligands, as seen in the blue region (c in ) and the grey region (d in ).
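For completeness, the cross-correlation coefficient plotted in such maps is conventionally defined over the Cα displacement vectors as (a standard definition, not reproduced from this paper):

$$
C_{ij} \;=\; \frac{\langle \Delta\mathbf{r}_i \cdot \Delta\mathbf{r}_j \rangle}{\sqrt{\langle \Delta\mathbf{r}_i^{\,2} \rangle \,\langle \Delta\mathbf{r}_j^{\,2} \rangle}}, \qquad \Delta\mathbf{r}_i = \mathbf{r}_i - \langle \mathbf{r}_i \rangle ,
$$

where the angle brackets denote averages over trajectory frames; C_ij = +1 corresponds to fully correlated motion of residues i and j (red to yellow regions) and C_ij = −1 to fully anticorrelated motion (blue-grey regions).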
Through the HPLC technique, the extracted solution from an herbal supplement was analyzed and purified into nine fractions, as shown in . Among all the fractions, the dominant peak of fraction-7 (F7) was isolated as pale-yellow crystals after recrystallization from dimethylformamide and diethyl ether. Unfortunately, the present work actually focused on structural determination using single crystal X-ray diffraction analysis, however, carrying out routine conventional measurement of the single crystal sample was difficult due to weak crystal structure refinement results. Therefore, the thus-obtained compound has been characterized in terms of its structure by comparing the 1 H NMR, 13 C NMR spectroscopy, and mass spectrometry. shows the NMR spectrometry of F7. The 1 H and 13 C NMR spectrums of this compound are shown in . In brief, 1 H NMR spectrum revealed special character of dimethyl piperazine ring at δ H 1.05 (d, J = 6.4 Hz, 6H). The methylene protons of piperazine ring signal at δ H 3.64 (d, J = 9.4 Hz, 2H) and δ H 1.90 (t, J = 10.9 Hz, 2H), which are characterized as the deshielded equatorial protons of a rigid 6-membered ring . The 13 C NMR spectra indicated five primary carbons; five secondary carbons; five tertiary carbons; and eight quaternary carbons . Furthermore, the characteristic structure was confirmed by distortionless enhancement by polarization transfer (DEPT) 90°/135° NMR and 1 H- 13 C HSQC as shown in . The total ion chromatogram and product ion spectrum for F7 compound are shown in . The product ions at m / z 448, 393, 327, 315, 299, 113, and 99 were observed in the mass spectrometry . The fragment ion at m / z 448 represents a moiety characteristic stemming from the decomposed piperazine ring that contained secondary nitrogen which was observed only for sulfoaildenafil. The compound demonstrates the loss of the piperazine moiety from the molecule, deducing the ion transition from m / z 505 to 393. The product signal at m / z 299 was defined by the loss of the ethyl group as a base peak from the fragment at m / z 327. The molecular ion chromatogram for F7 was also observed at m / z 505 by ESI-TOF/MS analysis, corresponding to the molecular formula of C 23 H 33 O 3 N 6 S 2 [M + H] + . As the results, the isolated F7 compound was clearly identified as sulfoaildenafil related to the previous studies .
The releasing NO triggers vascular endothelial cell through the activity of both inducible nitric oxide synthase ( i NOS) and endothelial nitric oxide synthase ( e NOS) enzymes. NO production stimulates the cyclic guanosine monophosphate (cGMP) synthesis via guanylyl cyclase enzyme in endothelial cells, which induce to smooth muscle relaxation, vasodilation, and penile erection, respectively . The feedback loop mechanism of cGMP elevation increases phosphodiesterase type 5 (PDE5) gene expression and enzyme activity which transform into GMP in the smooth muscle, leading to decrease the erection of penile . Sildenafil is an orally active PDE5 inhibitor for the treatment of penile erection dysfunction . Firstly, the cytotoxicity of the sulfoaildenafil was evaluated that provided more than 80% of Ea.hy926 endothelial cell lines survival rate at the concentration of less than 12.5 µg mL −1 as seen in . Thus, this compound at the concentration of 10 µg mL −1 was chosen to use in further experimental studies. For NO determination, in a, sulfoaildenafil has significantly increased the releasing of NO in the concentration range of 1.25–10 µg mL −1 compared to the cell culture medium control (Ctrl). Similarly, sildenafil was found to significantly elevate NO production in endothelial cell lines ( a). As reported by previous literatures , the material of sildenafil has been reported to increase the NO releasing in the human umbilical vein endothelial cells in insulin resistance conditions and Ea.hy926 endothelial cell lines. According to the NO releasing, it is generated by nitric oxide synthase ( i NOS and e NOS) in endothelial cells or triggered endothelial cells by itself or an exogenous source such as NO donor drugs. As expected, sulfoaildenafil, that was able to significantly elevate the NO production ( a), can up-regulate the levels of i NOS and e NOS gene expression in Ea.hy926 endothelial cells corresponding to sildenafil used as positive control as showed in b. Surprisingly, the active compound of sulfoaildenafil significantly stimulated the upregulation of both i NOS and e NOS genes at greater levels than that of sildenafil, as illustrated in double asterisks connected with solid lines in b. Furthermore, sulfoaildenafil, at the same time, significantly motivated the PDE5A gene upregulation as well as sildenafil in comparison with the cell culture medium control . Altogether, these results indicated that sulfoaildenafil, comparing to sildenafil material, enhanced NO production through the i NOS and e NOS gene expression, which also subsequently up-regulated PDE5 gene expression. This is the first time studying about sulfoaildenafil biological effects, a thioketone analogue of sildenafil, on the erectile dysfunction in the in vitro experimental approach.
As per the above results, the characteristics of sulfoaildenafil was revealed by using a combination of NMR and mass spectroscopy techniques as well as the biological activities. According to the most well-known PDE5 inhibitor of sildenafil, for this reason, the model compound of sulfoaildenafil, an analog of sildenafil, was used as an active material for molecular docking and molecular dynamics simulation approaches. 2.3.1. Molecular Docking Study Based on the crystal structure of PDE5, the potential binding activity has been described by subdividing it into three main regions, namely: (i) A metal-binding pocket (M pocket), (ii) a solvent-filled hydrophilic side pocket (S pocket), and (iii) a pocket containing the purine-selective glutamine and hydrophobic clamp (Q pocket) . Here, the molecular docking approach was firstly performed to predict the bioactive binding modes and affinity of the PDE5 inhibitor on the target protein. It should be noted that all models of the well-known PDE5 inhibitors were found to occupy part of the Q pocket (Gln817 and Leu804) at the immediate vicinity of the binding site with the pyrazolopyrimidinone ring of the inhibitors, suggesting the above-mentioned drugs can be accommodated in PDE5 protein and also present PDE5 inhibitor activity . The binding modes were observed at the same site with slightly different binding conformations compared with the sildenafil as a common drug used as a PDE5 inhibitor . Each compound shows favorable binding energy, with such results obtained from AutoDock Vina falling in the range of −10.2~−8.9 kcal mol −1 . shows an observed binding affinity and common amino acid binding residues within 5 Å that was identified to play a key role in the potential activity for PDE5 inhibition (see ). Although a lower estimated value of the binding affinity indicates stronger interactions of the protein–ligand complex, the small binding energy difference among these complexes is only 0.5~1 kcal mol −1 . The interactions of each drug with the potential site of PDE5 were mediated by the hydrophilic/hydrophobic interactions as supported by the findings in previous studies . As a combined result of experimental study, the subtle differences that were found in the estimated binding energy have led us to further investigate the obtained complex by a comparison between sildenafil and sulfoaildenafil using MD simulations. 2.3.2. Molecular Dynamics Simulations To enhance the configuration space for sampling accessibility to the molecular geometries, 100 ns long-time simulations of PDE5 with and without the addition of sildenafil and sulfoaildenafil were performed. The structural stability of the proteins as well as the position of the ligands in the binding site cleft were monitored using root mean square deviations (RMSD) with respect to their optimized initial structure . Steady oscillation and small fluctuation of RMSD were observed, indicating that the previous complexes were more stable and endured lesser conformational changes during simulations. Binding Free Energy Evaluation To demonstrate the binding interaction of the complex systems, the values of the relative binding free energy (ΔG binding ) obtained from MM-PBSA protocol were calculated as listed in . The results showed that the sildenafil (ΔG binding = −20.34 kcal mol −1 ) slightly binds to the PDE5 protein better than sulfoaildenafil (ΔG binding = −15.45 kcal mol −1 ) with an energy difference of ~5 kcal mol −1 . 
The same tendency of energy values between MM-PBSA and docking calculations were observed. This slight decrease in the size of the binding free energy of sulfoaildenafil correlated with the shifts of the unfavorable term in ( i ) the van der Waals (vdW) interaction by 10.77 kcal mol −1 , (ii) the intermolecular electrostatic interactions (EEL) by 41.45 kcal mol −1 , and ( iii ) the entropy configuration by 2.38 kcal mol −1 . The change in the contribution from the desolvation of non-polar groups (ENPOLAR) is almost zero. The polar solvation free energy (EPS) of sulfoaildenafil is less unfavorable over 2 times relative to the sildenafil, being shifted by −44.29 kcal mol −1 . Nevertheless, this difference in EPS term is not sufficient to compensate for the loss in the vdW causing drug resistance. Unfavorable shifts in EEL and vdW terms of sulfoaildenafil are overcompensated by favorable change in the EPS interaction free energy, leading to an improved affinity in comparison to sildenafil. It can be highlighted that the structural inspection alone may not be sufficient for identifying the key contributions to binding affinity, where the effects of solvation term are taken into account. The contributions of the essential amino acids to the binding interaction have been investigated by calculating per-residue free energy decomposition. a presents the decomposed per-residue free energy upon the binding of each complex system. The negative and positive values represent favorable and unfavorable contributions, respectively. According to the a, the hydrophobic amino acids (Ile665, Ile768, and Phe796) and one electrically charged residue (Arg667) of PDE5 have more favorable interactions with strong binding affinity for sildenafil than that of sulfoaildenafil. On the other hand, the sulfoaildenafil showed more favorable contact with three hydrophobic residues (Leu765, Leu804, and Met816) and one essential neutral-charged amino acid (Gln817) from the active site of PDE5. In addition, we further evaluated the per-residue free energy decomposition of the key binding residue based on the free energy of vdW and the sum of ELE interactions . It can be noted that the most favorable contribution of the binding free energy of sulfoaildenafil-bound system was essentially both the vdW and ELE decomposition energies, consisting of Tyr612, Ile813, Met816, Gln817, and Phe820, while the energy term of the vdW was dominant for sildenafil-bound systems including Ile665, Ile768, Phe786, and Leu804. This precisely indicates dynamics interactions upon the binding modes of sildenafil and sulfoaildenafil on the PDE5 protein. Hydrogen Bond Analysis Analysis of hydrogen bond formation was conducted, and more than a 10% occupancy rate was presented as listed in . Gln817 of PDE5 was shown to contribute to the key residue interaction with a markedly a high occupancy of hydrogen bonding to interact with both inhibitors. Arg667 of PDE5 that showed favorable binding interactions in the decomposition analysis was found to form hydrogen bonds with sildenafil with a high occupancy rate. On the other hand, sulfoaildenafil is oriented in the potential site through negligible hydrogen bonding with proton-accepting Ser663, while there is no proton acceptor found in the sildenafil. b shows the final conformations of sildenafil- and sulfoaildenafil-bound PDE5 complexes at 100 ns simulations, with hydrogen bond-forming residues shown in stick representation. 
Using the number of hydrogen bonds between the inhibitors and the potential residues in PDE5 alone is able to explain the reason that the binding free energy is distinctly different from each other. Dynamic Cross-Correlation Matrix (DCCM) Analysis To observe the conformational changes of PDE5 protein upon the binding effect of sildenafil and sulfoaildenafil, DCCM analysis was conducted to evaluate the occurrence of dynamic motions for residue correlations based on the positions of Cα–atoms of free PDE5 and ligand-bound complex. We perform an initial visual inspection of the dynamic maps obtained from MD simulation period. As illustrated in , the diagonal elements of the correlation maps describe fluctuation of individual residues, while the off-diagonal elements represent to an inter-residue correlation (cross-correlations) . The cross-correlation coefficients range from a value of −1 (blue-grey regions) to a value of +1 (red to yellow regions). It seems from the correlation map that an overall positive correlation is observed in the case of free PDE5, confirming conformational changes after the ligand binding ( c). After ligand binding, DCCM map revealed that both of the ligands effect on the structure conformation of PDE5 protein as illustrated by the change in dynamic patterns and correlations. Firstly, on the Q pocket regions, with residues being 800~820, sildenafil-bound PDE5 was a remarkable decrease in the positive correlated motion (red arrows, in a) than that of the sulfoaildenafil-bound complex (red arrows, a in b). This result agrees well with the findings of previous studies that found more negative cross-correlation coefficients in the protein part, which arise from an external perturbation of small ligand binding . According to the decomposition energy ( a), sildenafil triggered correlated motion change (blue-grey region) as opposed to sulfoaildenafil in residues around 640~670 (b in ). The decrease in correlated motions was observed within residues 760~790 for both of ligands, as seen in blue region (c in ) and grey region (d in ).
Based on the crystal structure of PDE5, the potential binding activity has been described by subdividing it into three main regions, namely: (i) A metal-binding pocket (M pocket), (ii) a solvent-filled hydrophilic side pocket (S pocket), and (iii) a pocket containing the purine-selective glutamine and hydrophobic clamp (Q pocket) . Here, the molecular docking approach was firstly performed to predict the bioactive binding modes and affinity of the PDE5 inhibitor on the target protein. It should be noted that all models of the well-known PDE5 inhibitors were found to occupy part of the Q pocket (Gln817 and Leu804) at the immediate vicinity of the binding site with the pyrazolopyrimidinone ring of the inhibitors, suggesting the above-mentioned drugs can be accommodated in PDE5 protein and also present PDE5 inhibitor activity . The binding modes were observed at the same site with slightly different binding conformations compared with the sildenafil as a common drug used as a PDE5 inhibitor . Each compound shows favorable binding energy, with such results obtained from AutoDock Vina falling in the range of −10.2~−8.9 kcal mol −1 . shows an observed binding affinity and common amino acid binding residues within 5 Å that was identified to play a key role in the potential activity for PDE5 inhibition (see ). Although a lower estimated value of the binding affinity indicates stronger interactions of the protein–ligand complex, the small binding energy difference among these complexes is only 0.5~1 kcal mol −1 . The interactions of each drug with the potential site of PDE5 were mediated by the hydrophilic/hydrophobic interactions as supported by the findings in previous studies . As a combined result of experimental study, the subtle differences that were found in the estimated binding energy have led us to further investigate the obtained complex by a comparison between sildenafil and sulfoaildenafil using MD simulations.
To enhance the configuration space for sampling accessibility to the molecular geometries, 100 ns long-time simulations of PDE5 with and without the addition of sildenafil and sulfoaildenafil were performed. The structural stability of the proteins as well as the position of the ligands in the binding site cleft were monitored using root mean square deviations (RMSD) with respect to their optimized initial structure . Steady oscillation and small fluctuation of RMSD were observed, indicating that the previous complexes were more stable and endured lesser conformational changes during simulations. Binding Free Energy Evaluation To demonstrate the binding interaction of the complex systems, the values of the relative binding free energy (ΔG binding ) obtained from MM-PBSA protocol were calculated as listed in . The results showed that the sildenafil (ΔG binding = −20.34 kcal mol −1 ) slightly binds to the PDE5 protein better than sulfoaildenafil (ΔG binding = −15.45 kcal mol −1 ) with an energy difference of ~5 kcal mol −1 . The same tendency of energy values between MM-PBSA and docking calculations were observed. This slight decrease in the size of the binding free energy of sulfoaildenafil correlated with the shifts of the unfavorable term in ( i ) the van der Waals (vdW) interaction by 10.77 kcal mol −1 , (ii) the intermolecular electrostatic interactions (EEL) by 41.45 kcal mol −1 , and ( iii ) the entropy configuration by 2.38 kcal mol −1 . The change in the contribution from the desolvation of non-polar groups (ENPOLAR) is almost zero. The polar solvation free energy (EPS) of sulfoaildenafil is less unfavorable over 2 times relative to the sildenafil, being shifted by −44.29 kcal mol −1 . Nevertheless, this difference in EPS term is not sufficient to compensate for the loss in the vdW causing drug resistance. Unfavorable shifts in EEL and vdW terms of sulfoaildenafil are overcompensated by favorable change in the EPS interaction free energy, leading to an improved affinity in comparison to sildenafil. It can be highlighted that the structural inspection alone may not be sufficient for identifying the key contributions to binding affinity, where the effects of solvation term are taken into account. The contributions of the essential amino acids to the binding interaction have been investigated by calculating per-residue free energy decomposition. a presents the decomposed per-residue free energy upon the binding of each complex system. The negative and positive values represent favorable and unfavorable contributions, respectively. According to the a, the hydrophobic amino acids (Ile665, Ile768, and Phe796) and one electrically charged residue (Arg667) of PDE5 have more favorable interactions with strong binding affinity for sildenafil than that of sulfoaildenafil. On the other hand, the sulfoaildenafil showed more favorable contact with three hydrophobic residues (Leu765, Leu804, and Met816) and one essential neutral-charged amino acid (Gln817) from the active site of PDE5. In addition, we further evaluated the per-residue free energy decomposition of the key binding residue based on the free energy of vdW and the sum of ELE interactions . It can be noted that the most favorable contribution of the binding free energy of sulfoaildenafil-bound system was essentially both the vdW and ELE decomposition energies, consisting of Tyr612, Ile813, Met816, Gln817, and Phe820, while the energy term of the vdW was dominant for sildenafil-bound systems including Ile665, Ile768, Phe786, and Leu804. 
This precisely indicates dynamics interactions upon the binding modes of sildenafil and sulfoaildenafil on the PDE5 protein. Hydrogen Bond Analysis Analysis of hydrogen bond formation was conducted, and more than a 10% occupancy rate was presented as listed in . Gln817 of PDE5 was shown to contribute to the key residue interaction with a markedly a high occupancy of hydrogen bonding to interact with both inhibitors. Arg667 of PDE5 that showed favorable binding interactions in the decomposition analysis was found to form hydrogen bonds with sildenafil with a high occupancy rate. On the other hand, sulfoaildenafil is oriented in the potential site through negligible hydrogen bonding with proton-accepting Ser663, while there is no proton acceptor found in the sildenafil. b shows the final conformations of sildenafil- and sulfoaildenafil-bound PDE5 complexes at 100 ns simulations, with hydrogen bond-forming residues shown in stick representation. Using the number of hydrogen bonds between the inhibitors and the potential residues in PDE5 alone is able to explain the reason that the binding free energy is distinctly different from each other. Dynamic Cross-Correlation Matrix (DCCM) Analysis To observe the conformational changes of PDE5 protein upon the binding effect of sildenafil and sulfoaildenafil, DCCM analysis was conducted to evaluate the occurrence of dynamic motions for residue correlations based on the positions of Cα–atoms of free PDE5 and ligand-bound complex. We perform an initial visual inspection of the dynamic maps obtained from MD simulation period. As illustrated in , the diagonal elements of the correlation maps describe fluctuation of individual residues, while the off-diagonal elements represent to an inter-residue correlation (cross-correlations) . The cross-correlation coefficients range from a value of −1 (blue-grey regions) to a value of +1 (red to yellow regions). It seems from the correlation map that an overall positive correlation is observed in the case of free PDE5, confirming conformational changes after the ligand binding ( c). After ligand binding, DCCM map revealed that both of the ligands effect on the structure conformation of PDE5 protein as illustrated by the change in dynamic patterns and correlations. Firstly, on the Q pocket regions, with residues being 800~820, sildenafil-bound PDE5 was a remarkable decrease in the positive correlated motion (red arrows, in a) than that of the sulfoaildenafil-bound complex (red arrows, a in b). This result agrees well with the findings of previous studies that found more negative cross-correlation coefficients in the protein part, which arise from an external perturbation of small ligand binding . According to the decomposition energy ( a), sildenafil triggered correlated motion change (blue-grey region) as opposed to sulfoaildenafil in residues around 640~670 (b in ). The decrease in correlated motions was observed within residues 760~790 for both of ligands, as seen in blue region (c in ) and grey region (d in ).
3.1. Materials

Sildenafil citrate (Viagra ® ) 100 mg tablets were purchased from Pfizer Labs (Division of Pfizer Inc., NY, NY, USA). Griess reagent was obtained from Merck (Sigma-Aldrich Pte Ltd., Singapore). HPLC-grade acetonitrile, propan-2-ol, and methanol were purchased from RCI Labscan (RCI Labscan Limited, Bangkok, Thailand). Analytical-grade acetone, methanol, and chloroform were purchased from RCI Labscan (RCI Labscan Limited, Bangkok, Thailand). Formic acid for LC/MS was purchased from Fisher Chemical (Pardubice, Czech Republic). Ammonium acetate was purchased from Ajax Finechem (part of Thermo Fisher Scientific, North Ryde, Australia). Deionized water (18 MΩ cm) was generated using a Milli-Q system (Millipore, Bedford, MA, USA). The Illustra RNAspin Mini RNA Isolation Kit was purchased from Cytiva (formerly GE Healthcare Life Sciences, Wien, Austria). The Tetro cDNA Synthesis Kit and SensiFAST™ SYBR® Lo-ROX Kit were purchased from Bioline (Singapore). Primers were obtained from Invitrogen™. Cell culture reagents were acquired from Gibco (Thermo Fisher Scientific, Life Technologies Corporation, New York, NY, USA). All chemicals used in this study were of analytical grade.

3.2. Herbal Supplement Preparations

Four capsules of the dietary supplement (600 mg/capsule) were extracted with 70% acetonitrile by ultrasonic shaking at room temperature for 60 min before centrifugation. The solid material remaining in the centrifuge tube was re-extracted three times in the same manner. The filtered supernatants were pooled and evaporated to dryness under reduced pressure on a rotary evaporator at 60 °C, yielding a dark-brown extract (360 mg). The dried extract was stored at 4 °C until further analysis.

3.3. Purification of the Extract Sample by HPLC Analysis

The extracted material from the herbal supplement was fractionated by reverse-phase high-performance liquid chromatography (HPLC). The freeze-dried material was dissolved in acetonitrile at room temperature and filtered through a 0.45 µm nylon filter (Fisher Scientific, Merelbeke, Belgium). HPLC was performed on an Agilent 1260 Infinity series instrument equipped with a binary pumping system (Agilent Technologies (Thailand) Co. Ltd., Bangkok, Thailand). A semi-preparative ZORBAX SB-C18 column (9.4 × 250 mm, 80 Å, 5 µm) was used to purify the extract. Separation was performed at a flow rate of 2.5 mL min −1 with mobile phases of (A) acetonitrile and (B) 100 mM ammonium acetate buffer (pH 6.5). Elution started at 100% (B), changed linearly to 10% (B) over 5 min, and was held at 10% (B) for 20 min. The column was re-equilibrated for 5 min before the next run. Detection was set at 226 nm. Each major peak was collected using an Agilent 1260 Infinity fraction collector, and the corresponding fractions of the crude extract were combined and recovered. Among all the fractions, the most abundant peak (F7) afforded pale-yellow crystals after recrystallization from dimethylformamide and diethyl ether, which were used for structural characterization.

3.4. Characterizations

3.4.1. NMR Analysis

Nuclear magnetic resonance (NMR) spectra were measured on a Bruker DPX 400 spectrometer (Bruker UK Limited, Coventry, UK) with a 5 mm multinuclear inverse probe at 296 K. The 1 H and 13 C spectra were recorded at 400 and 100 MHz, respectively.
Approximately 6 mg of the crystalline active compound was dissolved in chloroform- d for NMR spectroscopic analysis.

3.4.2. ESI-TOF/MS Measurement

The high-resolution mass spectrum was acquired on a MicrOTOF-QII instrument (Bruker Daltonics, Bremen, Germany). The active compound (1.0 µg mL −1 ) was infused directly into the ESI-TOF/MS spectrometer with sodium formate as an internal standard. The TOF-MS conditions were as follows: positive-ion electrospray mode, capillary exit voltage of 4.5 kV. MS data were recorded in full-scan mode over m / z 50–1000.

3.4.3. Ultra-High-Performance Liquid Chromatography-Triple Quadrupole MS Method (UHPLC/MS/MS)

UHPLC/MS/MS analysis was performed on a Dionex Ultimate 3000 separation module (Thermo Fisher Scientific Inc., MA, USA) coupled to a MicrOTOF-QII mass spectrometer (Bruker Daltonics, Bremen, Germany). The isolated compound was dissolved in acetonitrile to a concentration of 1.0 µg mL −1 . Chromatographic separation was carried out on a Luna ® C18(2) column (100 × 2.0 mm, 3.0 µm particle size, 100 Å; Phenomenex ® , Torrance, CA, USA) at 40 °C. The mobile phases consisted of 5 mM ammonium acetate with 0.1% formic acid (A) and acetonitrile (B). The gradient elution program was as follows: 10% (B) for 1 min, increased to 40% (B) over 9 min, raised to 75% (B) over 3.5 min, further increased to 80% (B) over 2.5 min and held for 5 min, then decreased to 10% (B) in 0.1 min with column re-equilibration for 3.9 min. The flow rate was 0.3 mL min −1 , and the injection volume was 5 µL. [M + H] + ions were selected as precursor ions, and MS/MS spectra were acquired. The mass spectrometer was operated in positive ionization mode with a spray voltage of 4.5 kV and a collision energy of 40 eV. Nitrogen served as the auxiliary, collision, and nebulizer gas with the following parameters: nebulizer gas at 2.0 bar, dry gas at 7.0 L min −1 , and dry temperature at 240 °C.

3.5. Cell Culture and Treatments

The human umbilical vein endothelial cell line Ea.hy926 (ATCC ® number CRL-2922) was cultured in Dulbecco's Modified Eagle Medium (DMEM) containing 10% fetal bovine serum, 2% hypoxanthine-aminopterin-thymidine (HAT), 100 U mL −1 penicillin-G sodium, and 100 µg mL −1 streptomycin at 37 °C in 5% CO 2 . For viability testing, cells seeded in 96-well plates (10,000 cells/well) were treated with phytochemicals at concentrations selected from the MTT assay; 10% DMSO was used as the positive control for cytotoxicity. For determination of NO production and the gene expression of i NOS, e NOS, and PDE5A, cells were plated in 6-well plates at a density of 50,000 cells/well and growth-arrested at 80% confluency before the experiments. Sildenafil at 10 µg mL −1 was used as a positive control in the in vitro study. After the treatment period, cell lysates were collected for determination of gene expression levels, while culture supernatants were collected for measurement of NO release.

3.5.1. Measurement of NO Production

NO production was assessed from nitrite accumulation in the culture media via the Griess reaction: 100 µL of treated-media samples or sodium nitrite standards (0–100 µM) were mixed with 100 µL of Griess reagent (1% sulfanilamide, 0.1% N -(1-naphthyl)ethylenediamine dihydrochloride in 2.5% H 3 PO 4 solution).
The mixtures were incubated for 10 min at room temperature, and the absorbance was measured at 540 nm using a microplate reader. The NO concentration in each sample was calculated from the standard curve .

3.5.2. Gene Expression Analysis Using Real-Time Reverse Transcriptase Polymerase Chain Reaction (Real-Time RT-PCR)

Total RNA was extracted using the Illustra™ RNAspin Mini RNA Isolation Kit. Five hundred nanograms of total RNA were reverse-transcribed into cDNA using the Tetro cDNA Synthesis Kit. Real-time PCR (denaturation, annealing, and extension) was conducted using the SensiFAST™ SYBR® Lo-ROX Kit on a 7500 Fast Real-Time PCR system (Applied Biosystems™, Thermo Fisher Scientific, New York, NY, USA). The specific primers used to quantify i NOS, e NOS, and PDE5A expression are shown in ; β-actin served as the constitutive reference gene. Relative expression was calculated using the 2 −ΔΔCt method .

3.5.3. Statistical Analysis

Results are presented as the mean ± standard deviation (SD) of at least three independent experiments. Differences between groups were assessed by t -test, and p ≤ 0.05 was considered statistically significant.

3.6. Computational Analysis

3.6.1. Protein and Ligand Preparation

A complex structure of the PDE5 protein containing sildenafil (SIL) was obtained from the X-ray crystal structure in the Protein Data Bank (PDB code 2H42) . To prepare the structure for docking, the ligand and all water molecules were removed, and charges and non-polar hydrogen atoms were added using the prepare_receptor4.py script from MGLTools 1.5.6 . The three-dimensional (3D) structures of the PDE5 inhibitors vardenafil (VAF), tadalafil (TAF), and sulfoaildenafil (SUF) were obtained from the National Center for Biotechnology Information PubChem compound summaries CID135400189, CID110635, and CID56841591, respectively . Each initial structure was briefly optimized to a root mean squared (RMS) gradient tolerance of 0.0100 kcal mol −1 Å −1 using Discovery Studio Visualizer 2019 (3DEXPERIENCE Company, Vélizy-Villacoublay, France) . Individual PDB files were prepared for docking using the prepare_ligand4.py script from MGLTools, using only the largest non-bonded fragment present.

3.6.2. Docking Parameters

AutoDock Vina was used for all molecular docking to anchor the PDE5 inhibitors into the active site of the PDE5 protein. In general, the docking parameters were kept at their default values. The cubic docking box measured 60 Å along each dimension ( x , y , and z ) with a grid-point spacing of 0.375 Å, centered on the ligand position in the PDB ID 2H42 complex ( x , y , z : 30.790, 119.342, 11.038). The exhaustiveness parameter, which controls the amount of sampling effort, was set to 100 with an energy range of 10 kcal mol −1 , and the maximum number of reported poses was set to 20 using the built-in clustering analysis with a 2.0 Å cut-off.

3.6.3. Molecular Dynamics Simulations and Binding Free Energy Calculation

All molecular dynamics (MD) simulations were performed with PMEMD.CUDA from the AMBER 18 suite of programs on an NVIDIA GeForce GTX 1070 Ti to accelerate the simulations. All parameters used in this study were set according to the procedures described in previous work .
Briefly, general AMBER force field (GAFF) parameters were used to generate the atomic parameters for each ligand, and Gasteiger charges were assigned to all ligands for the MD simulations. Each complex structure was solvated under periodic boundary conditions in a cubic box of TIP3P water molecules extending 10 Å in each direction from the complex, and Na + ions were added as neutralizing counterions. A cutoff distance of 12 Å was used for the non-bonded interactions. The AMBER ff14SB force field parameters were applied to describe the complex. Long-range electrostatics were treated using the particle mesh Ewald (PME) method . The SHAKE algorithm was applied to constrain bonds involving hydrogen atoms, and Langevin dynamics was used to control the temperature. The time step was 2 fs, and the trajectory was recorded every 0.2 ps. The temperature was gradually increased from 0 to 310.15 K over 100 ps of NVT dynamics, followed by 5 ns of NPT equilibration at 310.15 K and 1 atm. Finally, a 100 ns NVT-MD production phase was performed for property collection. Trajectory analyses (root mean square deviation and fluctuation, dynamic cross-correlation, hydrogen bonds) were carried out on the production-phase MD using the CPPTRAJ module of the AMBER 18 program . Binding free energies of each simulated complex were calculated from selected MD snapshots using the AMBER molecular mechanics Poisson–Boltzmann surface area (MM-PBSA) and molecular mechanics generalized Born surface area (MM-GBSA) protocols . A total of 2500 snapshots were extracted from the trajectory data. The grid spacing for the PB calculations in MM-PBSA was 0.5 Å, and the interior and exterior dielectric constants in MM-GBSA were set to 1 and 80, respectively. Structural images were rendered with DS software.

3.6.4. Dynamic Cross-Correlation Matrix Analysis

Dynamic couplings between the Cα atoms of the PDE5 protein over the simulation period were quantified as a dynamic cross-correlation matrix (DCCM), computed with the CPPTRAJ module of the AMBER 18 suite. The cross-correlation matrix elements $C_{ij}$ are defined by :

$$C_{ij} = \frac{\langle \Delta r_i \cdot \Delta r_j \rangle}{\left( \langle \Delta r_i^2 \rangle \, \langle \Delta r_j^2 \rangle \right)^{1/2}}$$

where $\Delta r_i$ and $\Delta r_j$ are the displacement vectors of residues i and j from their time-averaged positions. The dynamic diagrams are displayed as a color-coded matrix of Pearson correlation coefficients. Residue pairs moving in the same direction show positive values (up to +1), colored from light green to deep red, while pairs moving in opposite directions show negative values (down to −1), colored from grey to royal blue. Diagonal elements describe the correlation of each residue with itself and are therefore maximally positive (red), while off-diagonal elements describe inter-residue correlations (cross-correlations).
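The matrix above can be computed directly from the Cα coordinates. The following R sketch is a literal implementation of that definition; the input array and its layout are hypothetical (in practice, a package such as bio3d offers an equivalent dccm() routine).

```r
# Literal implementation of C_ij = <dr_i . dr_j> / sqrt(<dr_i^2><dr_j^2>).
# 'xyz' is assumed to be an array [n_frames, n_residues, 3] of Ca coordinates.
dccm_manual <- function(xyz) {
  n_res  <- dim(xyz)[2]
  mean_p <- apply(xyz, c(2, 3), mean)      # time-averaged position of each residue
  dr     <- sweep(xyz, c(2, 3), mean_p)    # per-frame displacement vectors
  C      <- matrix(1, n_res, n_res)        # diagonal is +1 by definition
  for (i in 1:(n_res - 1)) {
    for (j in (i + 1):n_res) {
      num <- mean(rowSums(dr[, i, ] * dr[, j, ]))   # <dr_i . dr_j> over frames
      den <- sqrt(mean(rowSums(dr[, i, ]^2)) * mean(rowSums(dr[, j, ]^2)))
      C[i, j] <- C[j, i] <- num / den
    }
  }
  C   # values in [-1, 1]: +1 fully correlated, -1 anti-correlated motion
}
```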
Here, we identified a synthetic contaminant in herbal aphrodisiacs purchased at a general drug store. The compound was identified as sulfoaildenafil, a thioketone analog of sildenafil. Analytical techniques, including HPLC, LC-MS/MS, and NMR spectroscopy, were used to isolate, purify, and characterize this compound. Sulfoaildenafil, which is structurally similar to synthetic PDE5 inhibitors, has been illegally added to dietary supplements, posing health risks to consumers. Its effects were investigated here for the first time by combined experimental and theoretical approaches to probe its selective inhibition of PDE5 activity relative to sildenafil, a commercially controlled drug. The biological results revealed that sulfoaildenafil can affect the therapeutic level of NO through upregulation of nitric oxide synthase ( i NOS and e NOS) and PDE5 gene expression. Based on the MD simulations, we suggest that both sulfoaildenafil and sildenafil are potent inhibitors of the PDE5 protein, each with a specific binding mode and set of key residue interactions. Indeed, the resolved sildenafil- and sulfoaildenafil-bound PDE5 complexes both show clear hydrogen bond formation at Gln817, and the binding free energy difference between the two compounds is only about 5 kcal mol −1 . These features of the PDE5 protein and its inhibitors, sildenafil and sulfoaildenafil, and their binding configurations are key considerations for understanding the modes of action and predicting the biological activity of PDE5 inhibitors. This report thus provides fundamental knowledge for the screening of adulterants in herbal drugs, supported by the experimental data gathered here on the biological functions of sulfoaildenafil, in particular its in vitro toxicity, NO-release levels, and effects on gene expression.
Identification of Metabolic Characteristic–Pancreatic Ductal Adenocarcinoma Associations Using Mendelian Randomization and Metabolomics | 24edbab6-f2cc-4bbf-979c-365db344a283 | 11753325 | Biochemistry[mh] | Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive malignancies, with a dismal five-year survival rate of just 9%. It is currently the third leading cause of cancer-related deaths . Although surgical resection remains the most effective treatment for PDAC, approximately 80–90% of patients present with locally advanced and metastatic diseases at diagnosis, making surgery unfeasible . Despite advances in chemoradiotherapy and targeted therapies, the long-term survival rate for PDAC patients remains disappointing . This underscores the urgent need for novel therapeutic targets and molecular markers to enhance the precision of PDAC management. The pancreas performs both endocrine and exocrine functions, essential for glucose metabolism and digestive processes . Disruptions in these processes can contribute to the development of diabetes, a common comorbidity in PDAC patients . PDAC cells undergo significant metabolic adaptations to support their rapid proliferation and survival in a typically nutrient-poor tumor microenvironment . Metabolic reprogramming in PDAC involves alterations in glycolysis, the Warburg effect, glutamine metabolism, lipid metabolism, and amino acid metabolism . Recent research has demonstrated that PDAC cells can promote tumor growth and metastasis by remodeling various metabolic pathways, leading to the exploration of several targeted metabolic therapies in preclinical studies . Understanding metabolic reprogramming is crucial for elucidating the pathogenesis and progression of PDAC. Targeting specific metabolic dysregulations in PDAC cells could pave the way for novel therapeutic strategies and potentially improve patient prognosis. Exploring PDAC from a metabolic perspective provides valuable insights into its diagnosis and treatment. However, due to the metabolic heterogeneity of PDAC cells, further studies on PDAC-specific metabolic profiles are required. Mendelian randomization (MR), a method of causal inference based on genetic variation, provides insights into the associations between exposures and outcomes by selecting single nucleotide polymorphisms (SNPs) as instrumental variables (IVs). MR leverages the random distribution of genotypes to minimize confounding factors in observational studies, thereby ensuring the reliability of causal inferences. In this study, we first utilized MR to assess the associations between plasma metabolites/metabolite ratios and PDAC. We then performed detailed annotations of these metabolites and conducted enrichment analysis for both metabolites and the shared proteins corresponding to significant metabolite ratios. Additionally, we performed serum metabolomic analysis of PDAC patients to correlate metabolite profiles with clinicopathological features. Our research aims to identify metabolite molecules associated with PDAC risk and prognosis and explore PDAC-specific metabolic characteristics, thereby providing a scientific basis for further research and developing new therapeutic strategies. Study Design The study design is illustrated in Fig. . We employed a comprehensive MR approach to investigate the potential associations between metabolite levels/metabolite ratios and PDAC risk. 
The study was conducted in several phases: (1) screening of genetic variants linked with 1091 plasma metabolites and 309 metabolite ratios; (2) selection of a genome-wide association study (GWAS) dataset for PDAC as the outcome measure; (3) application of two-sample MR to separately estimate the associations between the 1400 plasma metabolites/metabolite ratios and PDAC; and (4) annotation of significant metabolites and of the shared enzymes or transport proteins corresponding to significant metabolite ratios. Additionally, we conducted functional and metabolic pathway enrichment analyses for both the significant metabolites and the proteins linked to significant metabolite ratios. The methodological framework adhered strictly to the STROBE-MR guidelines (Table ). In parallel, we explored the effects of the metabolites and metabolic pathways identified in the MR study on PDAC prognosis using peripheral blood samples from 32 PDAC patients collected at Beijing Luhe Hospital. This involved (1) grouping patients by their clinicopathological characteristics and (2) comparing their metabolite profiles. First, patients who experienced more than 5% weight loss within six months prior to blood collection, or those with a BMI below 20 kg/m 2 and over 2% weight loss, were classified into the cachexia group ; the remaining patients were classified as non-cachexic. Second, patients were divided into early-stage (IA-IIB) and late-stage (IV) groups based on clinical staging. Third, patients who underwent radical resection were categorized into the surgery group, while those who did not have surgery or received only palliative surgery formed the control group. Fourth, within the surgical group, patients were further classified into positive-margin and clear-margin subgroups according to surgical margin status. Fifth, patients were stratified by the presence or absence of lymph node metastasis. Finally, patients were divided into embolus and non-embolus groups based on the presence of vascular tumor emboli. To further reveal potential metabolic characteristics associated with poor prognosis, differential metabolites identified in each comparison were annotated using the Human Metabolome Database (HMDB) and the LIPID MAPS Structure Database (LMSD). Metabolites that were up- or downregulated in two or more comparisons were integrated for KEGG pathway enrichment analysis. The association between each metabolite and overall survival (OS) was also analyzed.

Data Source

The plasma metabolite summary dataset was derived from a metabolomics study of 8299 participants in the Canadian Longitudinal Study on Aging (CLSA) cohort . The CLSA is a large-scale research program encompassing biomedical data from over 50,000 individuals across Canada . The GWAS conducted by Jiang et al. focused on 8299 European participants from the CLSA cohort and combined comprehensive whole-genome genotype data with circulating plasma metabolite profiles to identify 1091 plasma metabolites and 309 metabolite ratios under genetic control. The summary dataset for PDAC was obtained from the study by Jiang et al., which included 209 PDAC cases and 456,139 controls . That study applied fastGWA-GLMM, a GWAS tool based on generalized linear mixed models, to UK Biobank data.
All selected GWAS studies for this MR analysis had the necessary ethical approvals, and related documents, including informed consent forms, are available in the supplementary materials of the original publications.

Selection of Genetic Variants

Genetic variants were extracted as instrumental variables (IVs) from the GWAS pooled dataset of plasma metabolites/metabolite ratios as follows: (1) We identified SNPs significantly associated with each metabolite level using a genome-wide locus significance threshold ( P < 1 × 10 −5 ). (2) Multiallelic SNPs (>2 alleles) and SNPs located on chromosome 23 were excluded. (3) SNPs with a minor allele frequency (MAF) below 0.01 were removed. (4) We used the 1000 Genomes European reference panel to account for linkage disequilibrium (LD) between genetic variants, applying r 2 < 0.01 within a 10,000 kb window. (5) F -statistics were calculated to assess the strength of each genetic variant as an IV; higher F -statistics indicate greater explanatory power of the IV for the exposure. The F -statistic was computed as F = R 2 × ( N − 2)/(1 − R 2 ), where R 2 quantifies the proportion of variance in the exposure explained by the genetic variant, calculated as R 2 = [2 × EAF × (1 − EAF) × beta 2 ] / [2 × EAF × (1 − EAF) × beta 2 + 2 × EAF × (1 − EAF) × N × SE 2 ]. Genetic variants with an F -statistic greater than 10 were retained as IVs to minimize the influence of weak instruments on the subsequent MR analyses.

MR Analysis

We first conducted a preliminary assessment of the impact of the 1400 plasma metabolites/metabolite ratios on PDAC. For exposures with a single IV, the Wald ratio method was employed. For exposures with multiple IVs, we applied the inverse variance weighted (IVW) method, which combines the effects of the individual IVs by weighted least squares to provide a more precise estimate of the overall effect. A threshold of P < 0.05 was used to determine statistical significance. Metabolites meeting this criterion were further analyzed using complementary MR methods, including MR-Egger, weighted median, and simple median approaches. Furthermore, we conducted the MR Steiger test to verify the directionality of the relationships and mitigate the risk of reverse causation .

Sensitivity Analyses

To ensure the robustness of our results, we performed a series of sensitivity analyses. (1) We calculated Cochran's Q statistic by summing the squared residuals of each IV with respect to the outcome, followed by a chi-square test to assess the heterogeneity of the IVs. Additionally, we computed I 2 to quantify the heterogeneity of IV effects as I 2 = ( Q − df)/ Q , where df is the degrees of freedom; I 2 values above 25% and 50% were taken to indicate moderate and high heterogeneity, respectively . (2) To limit the influence of confounding on the estimated associations, we tested for horizontal pleiotropy with the MR-Egger method by fitting a regression model with an intercept term ( θ 0) and assessing the significance of the Egger intercept . (3) We used the MR-PRESSO method to detect horizontal pleiotropy and to identify and remove outlier IVs based on their regression residuals, generating adjusted estimates for more robust MR results.
(4) We employed the leave-one-out method to assess the consistency of our MR results: the causal effect was recalculated with each IV excluded in turn, and these estimates were compared with the overall result to determine whether any single IV drove the association. (5) We visualized the estimates and their confidence intervals using scatter plots and funnel plots to identify potential outliers. All statistical analyses were conducted using the TwoSampleMR and MR-PRESSO packages in R version 4.2.0 .

Functional and Metabolic Pathways Analysis

To explore the metabolic characteristics through which metabolic alterations may influence PDAC, we conducted the following analyses. First, we searched the HMDB for detailed information on the identified small-molecule metabolites and annotated the significant metabolites with their super- and sub-pathways. Second, building on the research of Chen et al., we extracted data related to the significant metabolite ratios, including the associated enzymes or transporter proteins, protein types, and shared protein genes . Third, we annotated the shared genes for cellular components (CC), biological processes (BP), and molecular functions (MF) using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) and performed pathway enrichment analyses to illuminate the functional roles of these genes within biological processes .

Patients and Samples

Serum samples were obtained from 32 patients with pathologically confirmed PDAC at the Department of Oncology, Beijing Luhe Hospital. Blood was drawn at baseline (before administration of chemotherapeutic agents), and serum was separated by centrifugation and stored at −80 °C. Tables and list the patients' characteristics. The study was approved by the Ethics Committee of Beijing Luhe Hospital, and all participants provided written informed consent in accordance with the Declaration of Helsinki. Patient treatment remained independent of the study, and all research procedures adhered strictly to applicable laws and regulations.

Untargeted Metabolomics by Liquid Chromatography (LC)-MS/MS

Sample Preparation

Prior to analysis, serum samples underwent protein precipitation. A 50 μL aliquot of serum was combined with 200 μL of cold methanol, vortexed thoroughly, and centrifuged at 14,000 rpm for 15 min at 4 °C. The supernatant was transferred to a new centrifuge tube, lyophilized, and re-dissolved in 100 μL of 20% methanol in water. The resulting solution was then analyzed in both positive and negative ion modes.

Liquid Chromatography Conditions

Liquid-phase separation was conducted on a Thermo Fisher Ultimate 3000 UHPLC system with a Waters ACQUITY UPLC BEH C8 column (2.1 mm × 100 mm, 1.7 μm). Mobile phase A was 0.1% formic acid in water, and mobile phase B was 0.1% formic acid in acetonitrile. The gradient elution program was as follows: 0–1 min, 5% B; 1.1–11 min, 5–100% B; 11.1–13 min, 100% B; and 13.1–15 min, 5% B. The flow rate was 0.35 mL/min, the column temperature was 50 °C, and the injection volume was 5 μL.
Functional and Metabolic Pathways Analysis

To explore the metabolic characteristics through which metabolic alterations may affect PDAC, we conducted the following analyses. First, we searched the HMDB to gather detailed information on small-molecule metabolites identified in the human body and annotated the significant metabolites with their super- and sub-pathways. Second, building on the research of Chen et al., we extracted data related to the significant metabolite ratios, including the associated enzymes or transporter proteins, protein types, and shared protein genes. Third, we annotated the shared genes for cellular components (CC), biological processes (BP), and molecular functions (MF) using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) and performed pathway enrichment analyses to illuminate the functional roles of these genes within biological processes.

Patients and Samples

Serum samples were obtained from 32 patients with pathologically confirmed PDAC at the Department of Oncology, Beijing Luhe Hospital. For this clinical cohort, blood was collected at baseline (before the patients received chemotherapeutic agents) and centrifuged, and the serum was preserved at −80 °C. Tables and list the patients' characteristics. The study was approved by the Ethics Committee of Beijing Luhe Hospital, and all participants provided written informed consent in accordance with the Declaration of Helsinki. Patient treatment remained independent of the study, and all research procedures adhered strictly to applicable laws and regulations.

Untargeted Metabolomics by Liquid Chromatography (LC)-MS/MS

Sample Preparation

Prior to analysis, plasma samples were subjected to protein precipitation. A 50 μL aliquot of sample was combined with 200 μL of cold methanol, vortexed thoroughly, and centrifuged at 14,000 rpm for 15 min at 4 °C. The supernatant was transferred to a new tube, lyophilized, and re-dissolved in 100 μL of 20% methanol in water. The resulting solution was analyzed in both positive and negative ion modes.

Liquid Chromatography Conditions

Liquid-phase separation was conducted on a Thermo Fisher Ultimate 3000 UHPLC system with a Waters ACQUITY UPLC BEH C8 column (2.1 mm × 100 mm, 1.7 μm). Mobile phase A was 0.1% formic acid in water, and mobile phase B was 0.1% formic acid in acetonitrile. The gradient elution program was as follows: 0–1 min, 5% B; 1.1–11 min, 5–100% B; 11.1–13 min, 100% B; and 13.1–15 min, 5% B. The flow rate was 0.35 mL/min, the column temperature was 50 °C, and the injection volume was 5 μL.

Mass Spectrometry Conditions

Mass spectrometry was performed on a Thermo Fisher Q Exactive Plus mass spectrometer equipped with an electrospray ionization (ESI) source operated in dual positive/negative ion scanning mode, with a scan range of 70–1050 m/z and a resolution of 70,000. The ionization source parameters were a spray voltage of 3.8 kV (positive mode)/−3.0 kV (negative mode), a sheath gas flow rate of 35 Arb, an auxiliary gas flow rate of 8 Arb, an ion transfer tube temperature of 320 °C, and an auxiliary gas heating temperature of 350 °C.

Metabolite Identification and Data Analysis

Quality control (QC) samples were implemented to ensure data quality. The initial screening of metabolites was based on the signal-to-noise ratio (S/N > 10) and the relative standard deviation (RSD < 30%) of the QC samples. Peaks were identified from mass (m/z) and retention time (RT) by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). Concentrated metabolite extracts were analyzed on a Vanquish UHPLC system (Thermo Fisher Scientific) coupled with an Orbitrap Q Exactive HF-X mass spectrometer (Thermo Fisher Scientific), with LC-MS/MS acquisition in both positive and negative ion modes. Differential metabolites between groups were defined as those with |log2 fold change (FC)| ≥ 1. Metabolite classifications were annotated using the HMDB and LIPID MAPS databases, and pathway enrichment analysis was conducted on the resulting metabolite lists.

Statistical Analysis

Two-sided t-tests or Mann-Whitney U tests were employed to identify differential metabolites between groups. Pearson correlation coefficients were used to evaluate the relationship between metabolites and overall survival (OS), with P < 0.05 deemed statistically significant. Functional annotation and enrichment analysis of up- and downregulated differential metabolites were performed using the HMDB and KEGG pathways.
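The screening rule (|log2FC| ≥ 1, supported by a two-sided test) and the per-metabolite correlation with OS can be written compactly in R. The sketch below is illustrative only: `mat` (a samples × metabolites intensity matrix with positive values), `group` (a two-level factor), and `os` (overall survival times) are hypothetical objects, and the switch between the t-test and the Mann-Whitney U test stands in for whatever normality assessment is applied in practice.

```r
## Two-sided test per metabolite; Mann-Whitney U when normality is doubtful.
test_metabolite <- function(x, group, normal = TRUE) {
  if (normal) t.test(x ~ group)$p.value
  else        wilcox.test(x ~ group)$p.value
}

pvals <- apply(mat, 2, test_metabolite, group = group)

## log2 fold change of group means (poorer-prognosis group vs. comparator).
log2fc <- log2(colMeans(mat[group == "poor", , drop = FALSE]) /
               colMeans(mat[group == "good", , drop = FALSE]))

## Differential metabolites: |log2FC| >= 1 (the text's definition),
## here additionally requiring P < 0.05 from the test above.
diff_metabolites <- colnames(mat)[abs(log2fc) >= 1 & pvals < 0.05]

## Pearson correlation of each metabolite with overall survival.
os_cor <- apply(mat, 2, function(x) {
  ct <- cor.test(x, os, method = "pearson")
  c(corr = unname(ct$estimate), p = ct$p.value)
})
```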
The study design is illustrated in Fig. . We employed a comprehensive MR approach to investigate the potential associations between metabolite levels/metabolite ratios and PDAC risk. The study was conducted in several phases: (1) screening of genetic variants linked to 1091 plasma metabolites and 309 metabolite ratios, (2) selection of a genome-wide association study (GWAS) dataset for PDAC as the outcome measure, (3) application of two-sample MR to separately estimate the associations between the 1400 plasma metabolites/metabolite ratios and PDAC, and (4) annotation of significant metabolites and the corresponding shared enzymes or transport proteins. Additionally, we conducted functional and metabolic pathway enrichment analyses for both the significant metabolites and the proteins linked to significant metabolite ratios. The methodological framework adhered strictly to the STROBE-MR guidelines (Table ).

In parallel, we explored the effects of the metabolites and metabolic pathways identified in the MR study on the prognosis of PDAC using peripheral blood samples from 32 PDAC patients recruited at Beijing Luhe Hospital. The detailed process entailed (1) grouping patients by their clinical and pathological characteristics and (2) comparing their metabolite profiles. First, patients who experienced more than 5% weight loss within the six months prior to blood collection, or whose BMI was below 20 kg/m² with more than 2% weight loss, were classified into the cachexia group; the remaining patients were deemed non-cachexic. Second, patients were divided into early-stage (IA-IIB) and late-stage (IV) groups based on clinical staging. Third, patients who underwent radical resection were categorized into the surgery group, while those who did not have surgery or received only palliative surgery formed the control group. Fourth, within the surgical group, patients were further classified into positive-margin and clear-margin subgroups based on surgical margin status. Fifth, patients were stratified by the presence or absence of lymph node metastasis. Finally, patients were divided into embolus and non-embolus groups based on the presence of vascular tumor emboli (these assignment rules are sketched in code at the end of this subsection). To further reveal the metabolic characteristics potentially associated with poor prognosis, the differential metabolites identified in each comparison were annotated using the Human Metabolome Database (HMDB) and the LIPID MAPS Structure Database (LMSD). Metabolites that were up- or downregulated in two or more comparisons were integrated for KEGG pathway enrichment analysis. The association between each metabolite and OS was assessed across patients.

The plasma metabolite summary dataset was derived from a metabolomics study involving 8299 participants from the Canadian Longitudinal Study on Aging (CLSA) cohort. The CLSA is a large-scale research program encompassing biomedical data from over 50,000 individuals across Canada. The GWAS conducted by Jiang et al. focused on 8299 European participants from the CLSA cohort, obtaining comprehensive whole-genome genotype data and circulating plasma metabolite profiles to identify 1091 plasma metabolites and 309 metabolite ratios under genetic control. The summary dataset for PDAC was obtained from the study conducted by Jiang et al., which included 209 PDAC cases and 456,139 controls; that study applied fastGWA-GLMM, a genome-wide association tool based on generalized linear mixed models, to UK Biobank data.
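As flagged above, the subgroup assignments are simple deterministic checks; the sketch below encodes the cachexia rule as an example. The function and variable names (`weight_loss_pct`, `bmi`) are hypothetical, and the other five groupings (stage, surgery, margin, nodal status, emboli) are analogous binary assignments drawn from the clinical and pathology records.

```r
## Cachexia rule from the text: >5% weight loss within six months, or
## BMI < 20 kg/m^2 together with >2% weight loss.
classify_cachexia <- function(weight_loss_pct, bmi) {
  ifelse(weight_loss_pct > 5 | (bmi < 20 & weight_loss_pct > 2),
         "cachexia", "non-cachexia")
}

## Example: classify_cachexia(c(6, 3, 1), c(22, 19, 18))
## -> "cachexia" "cachexia" "non-cachexia"
```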
Results

Association Between Plasma Metabolites and PDAC

We selected eligible genetic variants as IVs for the 1091 plasma metabolites and 309 metabolite ratios. Preliminary MR analysis identified 66 significant metabolites/metabolite ratios. After excluding unknown metabolites (X-11795, X-12007, X-12906, X-13684, X-17010, X-21286, X-21834, X-23974, X-23782, X-24241, and X-25520), we focused on 55 known metabolites/metabolite ratios. Of these, twenty-one plasma metabolites and five metabolite ratios were positively associated with PDAC (OR > 1, P < 0.05), while fifteen plasma metabolites and fourteen metabolite ratios were negatively associated with PDAC (OR < 1, P < 0.05).
(OR, odds ratio; CI, confidence interval. An OR (95% CI) greater than 1 with a P-value below 0.05 indicates a positive correlation between the metabolite level and PDAC risk; conversely, an OR (95% CI) less than 1 with a P-value below 0.05 indicates a negative correlation.) Eleven plasma metabolites were involved in amino acid metabolic pathways, spanning the metabolism of methionine, cysteine, S-adenosylmethionine (SAM), taurine, arginine, proline, tyrosine, alanine, aspartate, tryptophan, phenylalanine, leucine, isoleucine, and valine, as well as the urea cycle. Thirteen plasma metabolites were linked to lipid metabolic pathways, including fatty acid, steroid, sphingolipid, phosphatidylcholine, and secondary bile acid metabolism. These findings indicate that alterations in lipid and amino acid metabolism may influence PDAC. Additionally, 2′-O-methylcytosine may influence nucleotide metabolism through its involvement in pyrimidine metabolic pathways, indicating a potential association between nucleotide metabolism and PDAC (Table and Fig. ). Table presents the correlation results for all 1400 metabolites/metabolite ratios and PDAC. As shown in Tables and Supplementary Figs. , the IVs for the 55 significant metabolites/metabolite ratios effectively explain the exposure-outcome associations (F-statistic > 10), with no evidence of potential reverse causality or heterogeneity.

Functional and Metabolic Pathway Analysis of Significant Metabolites/Metabolite Ratios

We performed KEGG pathway enrichment analysis for the 36 significant metabolites. The results revealed substantial enrichment in amino acid metabolism pathways, including valine, leucine, and isoleucine metabolism; phenylalanine, tyrosine, and tryptophan metabolism; cysteine and methionine metabolism; and arginine and proline metabolism. Additionally, notable enrichment was found in lipid metabolism pathways, including linoleic acid, α-linolenic acid, glycerophospholipid, unsaturated fatty acid, and arachidonic acid metabolism (Table and Supplementary Fig. ). We retrieved the shared enzymes or transporters and the shared genes corresponding to the significant metabolite ratios identified by J. Brent Richards et al. (Table ). Functional annotation of these genes showed that the shared proteins were primarily localized to the endoplasmic reticulum membrane, basal plasma membrane, and mitochondrial matrix. These proteins exhibited enzymatic and transporter activities related to glucuronidation and to the metabolism of steroid hormones and retinol (Table ). KEGG pathway analysis indicated that, among the top 20 enriched pathways for these proteins, seven were associated with amino acid metabolism, including alanine, aspartate, and glutamate metabolism; arginine biosynthesis; phenylalanine, tyrosine, and tryptophan biosynthesis; cysteine and methionine metabolism; and tyrosine metabolism. Three pathways were related to carbohydrate metabolism: ascorbate and aldarate metabolism, butanoate metabolism, and pentose and glucuronate interconversions. Two pathways were involved in lipid metabolism, specifically steroid hormone biosynthesis and glycerophospholipid metabolism (Table and Supplementary Fig. ).

Comparative Analysis of Metabolite Differences and Functional Enrichment in PDAC Subgroups

We performed differential metabolite analysis among the 32 PDAC patients stratified by clinical criteria (Table ).
Each comparison was organized by prognostic outcome, with the poorer-prognosis group serving as the experimental group. First, in the cachexia group, 53 metabolites were significantly upregulated and 28 significantly downregulated compared with the non-cachexia group (Fig. A, B, Table ). Second, in the late-stage group, 51 metabolites were significantly upregulated and 31 significantly downregulated compared with the early-stage group; the same trends were observed when comparing the non-radical and radical surgery groups (Fig. C, D, Table ). Third, 18 metabolites were significantly upregulated and 24 significantly downregulated in the positive-margin group relative to the clear-margin group (Fig. A, B, Table ). Fourth, 27 metabolites were significantly upregulated and 53 significantly downregulated in the lymph node metastasis group compared with the non-metastasis group (Fig. C, D, Table ). Finally, 8 metabolites were significantly upregulated and 12 significantly downregulated in the embolus group compared with the non-embolus group (Fig. A, B, Table ).

We integrated the metabolites that were significantly altered in two or more comparisons. This analysis revealed that 24 metabolites were consistently upregulated and 15 consistently downregulated across multiple groups (Table ). KEGG enrichment analysis showed that the upregulated metabolites were primarily enriched in pathways related to primary bile acid biosynthesis, taurine and hypotaurine metabolism, and pyrimidine metabolism. In contrast, the downregulated metabolites were predominantly enriched in pathways associated with arginine biosynthesis, vitamin B6 metabolism, pyrimidine metabolism, arginine and proline metabolism, and D-amino acid metabolism (Fig. A, B, Table ). Tables present detailed information on the metabolites significantly up- or downregulated between subgroups. These metabolites were identified with reference to the HMDB and graded according to its chemical classification system (CLASS I-IV), whose progressively finer hierarchy characterizes each metabolite's function and biological significance.

Correlation Analysis Between Metabolites and OS of PDAC Patients

We analyzed the correlation between metabolite levels and the OS of the 32 PDAC patients. Thirty-seven metabolites showed a significant positive correlation with OS (P < 0.05, corr > 0), while 28 showed a significant negative correlation (P < 0.05, corr < 0). KEGG pathway analysis showed that the metabolites negatively correlated with OS were mainly enriched in the metabolism of taurine and its derivatives, primary bile acid biosynthesis, ascorbate and aldarate metabolism, pyrimidine metabolism, and cysteine and methionine metabolism. Conversely, metabolites positively correlated with OS were predominantly enriched in arginine and proline metabolism, D-amino acid metabolism, vitamin B6 metabolism, and histidine metabolism (Fig. C, D, Table ).

Integrated Analysis of Metabolic Pathways

We conducted an integrated analysis comparing the metabolic pathways associated with OS in PDAC patients with the pathways enriched for significantly upregulated and downregulated differential metabolites across the experimental groups.
The results showed that (1) the primary bile acid biosynthesis and taurine and hypotaurine metabolism pathways were significantly upregulated across multiple subgroups and negatively correlated with OS, and (2) arginine biosynthesis, arginine and proline metabolism, and aminoacyl-tRNA biosynthesis were significantly downregulated in multiple subgroups and positively correlated with OS. Notably, both significantly upregulated and significantly downregulated differential metabolites were enriched in the pyrimidine metabolism pathway, which was associated with poorer OS. These findings are supported by the MR-based analysis, which likewise indicated a correlation between these metabolic pathways and PDAC. Other metabolic pathways identified by the MR analysis, including glucose and glucuronic acid interconversion, bile secretion, ascorbate and aldarate metabolism, cysteine and methionine metabolism, butyrate metabolism, and cholesterol metabolism, were negatively correlated with OS; however, no significant enrichment of differential metabolites, whether up- or downregulated, was observed in these pathways (Table ).
Discussion

PDAC is a highly malignant tumor with a poor prognosis, and its five-year survival rate has remained consistently in the single digits. Despite recent advances in drug development, the treatment of PDAC continues to pose substantial challenges. PDAC cells exhibit distinctive metabolic characteristics, and various metabolic alterations are critical for disease progression and maintenance; understanding these changes can point to potential therapeutic targets and diagnostic markers for PDAC. Our previous research found that PAIP2B is generally under-expressed or absent in PDAC tissues and plays a role in regulating metabolic pathways. In the current study, we applied MR to investigate causal relationships between metabolite levels/metabolite ratios and PDAC. Twenty-one plasma metabolites and five metabolite ratios, primarily involved in amino acid and lipid metabolic pathways, were positively associated with PDAC. Glutamine, arginine, and serine/glycine metabolism play crucial roles in PDAC, influencing various aspects of tumor development and progression.
However, the metabolic characteristics associated with different clinical features of PDAC remain unclear. Patients with cachexia-sarcopenia generally have a worse prognosis and shorter survival than those without these conditions. The cachexia-sarcopenia process is characterized by increased energy expenditure, exacerbated by limits on caloric intake and absorption caused by metabolic imbalance. In this study, fatty acids and their conjugates were negatively correlated with cachexia, whereas tryptophan metabolism was positively correlated with the condition. A monoclonal antibody against growth differentiation factor 15 (GDF-15) shows promise in ameliorating cancer cachexia by modulating fatty acid oxidation and glycolysis and has entered Phase 1 clinical trials.

We analyzed the differential expression of metabolites in PDAC patients at different stages. Amino acid metabolism has a substantial impact on tumor biology, and previous studies have underscored the significant role of amino acid metabolic reprogramming in PDAC progression and prognosis. Tryptophan, alanine, aspartate, and glutamate metabolism correlated significantly with disease stage, and glutamate metabolism was significantly upregulated in advanced or unresectable PDAC. A targeted radiopharmaceutical therapy using 211At-AITM, which recognizes metabotropic glutamate receptor 1 (mGluR1), has demonstrated in vivo efficacy, eradicating mGluR1-positive human pancreatic tumors in approximately 50% of tumor-bearing mice with minimal toxicity. Perineural invasion (PNI) in PDAC often indicates a poorer prognosis and a higher risk of recurrence. Kynurenic acid, a tryptophan metabolite, acts as an NMDA receptor antagonist and may shape the tumor immune microenvironment by inhibiting T cell proliferation and cytokine secretion, thereby helping tumors evade immune surveillance. Our study confirmed increased kynurenic acid levels in the peripheral blood of late-stage PDAC patients, potentially associated with poor prognosis.

Arginine and glutamine are closely linked to PDAC prognosis and survival outcomes. As a major carbon and nitrogen source, glutamine is essential for PDAC growth. Son et al. demonstrated that KRAS mediates the reprogramming of glutamine metabolism by regulating transcription of the aspartate transaminases GOT1 and GOT2, with GOT1 inhibition promoting cancer cell death. Additionally, Lee et al. revealed PDAC's reliance on glutamine for ornithine and, consequently, polyamine synthesis, with elevated expression of pro-polyamine genes correlating with poor prognosis. Glutamine ammonia ligase (GLUL), a key enzyme in glutamine synthesis, is highly expressed in both PDAC patients and mouse models, and inhibiting GLUL activity has shown potential for mitigating KRAS G12D-mediated PDAC progression. These findings underscore the significance of amino acids in tumor metabolism and immune function, deepen our understanding of the disease, and thereby improve our ability to diagnose, assess prognosis, and identify new therapeutic targets.

Our study has several unavoidable limitations. First, the limited availability of GWAS data for PDAC patients with specific clinicopathological features restricts our ability to use MR to dissect, from a genomic perspective, the specific metabolic characteristics affecting PDAC prognosis. Second, the small sample size means our serum metabolomics results require validation in larger cohorts.
Third, both upregulated and downregulated differential metabolites were notably enriched in the pyrimidine metabolism pathway; because this pathway contained significantly altered metabolites in both directions, whether it is activated or suppressed in PDAC cannot be resolved from our data and requires further experimental validation. Fourth, given the limited sample size and the retrospective design, the metabolite comparisons and metabolite-OS correlation analyses based on the clinical subgroups need further validation. Future research should continue to examine these associations in larger, more diverse cohorts and pursue prospective studies to validate these findings and translate them into clinical practice. Furthermore, our study mainly explored metabolite profiles associated with PDAC prognosis, whereas early diagnosis is also critical for treatment. Future work could therefore focus on metabolite/metabolite-ratio patterns uniquely associated with early-stage PDAC, as compared with middle- or late-stage disease, to establish the usefulness of metabolites as early diagnostic biomarkers.

The GWAS conducted by Jiang et al. on 8299 European participants from the CLSA cohort provides valuable insights into the genetic basis of metabolic traits and their role in aging and related diseases. By leveraging genetic instruments, MR can help establish whether observed associations are likely causal, offering insights that can enhance the understanding, diagnosis, and treatment of PDAC. Future studies should continue to explore these causal pathways in larger and more diverse populations to validate and extend these findings.

Below is the link to the electronic supplementary material.
Supplementary file 1 (PDF 6163 KB)
Supplementary file 2 (XLS 533 KB)