Ethno-veterinary practice for the treatment of animal diseases in Neelum Valley, Kashmir Himalaya, Pakistan

Medicinal plants have been used across the globe for ages owing to their efficacy, availability, and cultural acceptance. Herbal remedies are an essential part of the traditional medicinal practices of the indigenous Himalayan mountain communities. Plant-based ethnoveterinary medicine is widely practiced in the Himalayan region, since livestock rearing is an integral part of local livelihoods. These traditional herbal medicines provide efficient, cheap, and commonly accessible therapies in comparison with western allopathic drugs. This ethnic knowledge is directly linked with the local biodiversity and has run deep in the fabric of rural societies for centuries. Documentation of this altruistic folk knowledge is of key importance, especially after the ratification of the Nagoya Protocol, in order to maintain cultural heritage. Growing scientific evidence suggests that this ethnic knowledge, supplemented with new scientific insights, can offer socially acceptable and eco-friendly approaches vital for the sustainable development of local communities. The western Himalayan mountains of the Kashmir region support a rich biodiversity attributed to their diverse geography and landscape, spanning from deep valley floors through terraced lands and dense forests up to snow-capped alpine peaks. This mosaic of diverse niches, habitat heterogeneity, and microclimatic variation along the altitudinal gradient harbors a bewildering floristic diversity in the region. The rural mountain communities of the Kashmir region practice an agro-pastoral, semi-nomadic lifestyle, depending mainly on livestock rearing and subsistence agriculture for their livelihood. Medicinal plants have been widely used as a primary means of prevention and control of livestock diseases in the local communities for several centuries, as the inhabitants have learned the medicinal usage of the plants growing in their close vicinity. Assessing the monetary value of this plant-based ethno-veterinary practice is also of interest, as it is linked directly with the increasing cost of livestock rearing and maintenance. Furthermore, these ethnoveterinary medicines are dynamic and multipurpose: they can treat several different types of livestock disorders, are readily available in remote areas, and are far cheaper than synthetic drugs. This precious indigenous knowledge has usually been passed from one generation to the next without proper documentation and preservation. The ethnoveterinary knowledge of the region faces a threat of erosion as the locals change their preferences owing to rapid socioeconomic transformations synchronized with environmental changes and technological advancements. Researchers have done considerable work on the ethno-medicinal applications of plants for human health, but a literature review reveals that very few studies have been carried out on ethnoveterinary applications of the local herbs in the region, indicating a significant knowledge gap. Although a handful of studies are available on indigenous ethno-veterinary practices in various parts of Pakistan, the western Himalayan mountain region of Kashmir still remains unexplored in this regard because of its remoteness, harsh climatic conditions, and rugged terrain.
The current study was designed to document the valuable ethnoveterinary knowledge of this unexplored area and thereby fill this knowledge gap. Its specific objective was to document the important ethnoveterinary applications of local plant species of the Kashmir region used by the mountain populations of the area to treat livestock ailments and disorders.
Study area
Natural geomorphological features of Pakistan range from the snow-capped peaks of the Himalaya and other mountain ranges in the north to the sandy beaches and mangrove swamps in the south, allowing different landscapes and climates with a variety of flora and fauna. This study was conducted in District Neelum of Azad Jammu & Kashmir (AJ&K), Pakistan, a hilly area with rugged topography located in the extreme north of AJ&K (map of the study area). The total area of district Neelum is 3,621 km² with a population of 1.96 million. Neelum Valley is located between 74°24′50″ and 74°31′50″ E longitude and 34°50′40″ and 35° N latitude. The elevation of AJ&K ranges from 360 meters in the south to 6,325 meters in the north. The study area lies at an altitude of 2,000 to 4,000 meters, and most of it is at high altitude. The climate is temperate with cold winters and moderate summers. The winter season starts in November and extends up to April, and the high-altitude areas remain under snow for five months. The major crop of the area is maize, while potatoes and red beans are also cultivated. The valley is rich in floral diversity. The dominant tree species in the area are Pinus wallichiana, Abies pindrow, Picea smithiana, Cedrus deodara, Acer caesium, Aesculus indica and Prunus cornuta, while the dominant shrubs include Viburnum grandiflorum, Indigofera heterantha and Rubus ellipticus. The dominant herbs are Sambucus wightiana, Artemisia vulgaris, Lindelofia stylosa, Bistorta amplexicaulis, Polygonum alpinum and Bergenia ciliata. Neelum Valley is home to different ethnic groups such as Mughal, Chaudhry, Butt, Pire, Wani, Syed, Malik, Turks, Khawaja and Rajput. These groups migrated from different areas and are now settled in Neelum Valley, and their different backgrounds account for the cultural and linguistic diversity of the area. The common languages spoken in the area include Hindko, Kashmiri, Gojri, Shina and Pashtu. The most distinctive features of district Neelum are its mountain ranges, natural lakes, waterfalls and valleys. Documentation was carried out in three sub-valleys of district Neelum, i.e. Surgan, Shounther and Guraize Valley, and in the most populated town area, Kel (map of the study area). There are very limited livelihood opportunities available to the people of Neelum Valley. Most of the pastoralists in the mountainous part of AJ&K and the farmers on the high fertile lands have practiced livestock rearing for centuries. Livestock plays a pivotal role, providing farmyard manure, rural transport, milk, meat and entertainment in sports such as polo, and it also plays a major role in the rural economy by providing income and employment to smallholder farmers and the poor. Easily accessible and available ethnoveterinary medicinal plants provide a cheaper source for the treatment of various diseases. In these communities the modern veterinary health curative system is inadequate, so the inhabitants utilize the traditional ethnoveterinary medicinal health system for animal health care. The economic condition of the farmers also restricts their use of modern allopathic drugs, which ultimately leads to poor livestock production and financial losses due to the poor health of animals.
Under such circumstances, ethnoveterinary medicines can be promoted as alternative drugs and can help alleviate poverty by empowering people to make use of their own resources for the treatment of their livestock.

Ethnoveterinary field work and interviews

Ethics statement
The code of ethics of the International Society of Ethnobiology (2008) was followed during data collection (http://ethnobiology.net/code-of-ethics/). As the data collection concerned animals, the people who were in close interaction with the animals were targeted. After complete briefings to the informants about the purpose of this research work, verbal consent was obtained in all localities from which data were collected, as most of the informants were illiterate and it was not possible to take written consent from them.

Demography and data collection
For the collection and documentation of demographic information, well-informed persons of the relevant areas were approached for interviews and group discussions in accordance with the standardized questionnaires prepared for this purpose. To collect the ethnoveterinary information, data were gathered from the informants during extensive field visits in 2012–2015 with the help of pre-planned questionnaires as standardized data-collecting protocols. Institutional Review Board (IRB) permission was not required for data collection, but formal verbal approval from the respondents was obtained before data collection at each locality. The methods employed during the present study were designed with the sole purpose of eliciting the precious wealth of information on the ethno-veterinary uses of medicinal plants practiced by the natives of the Kashmir Himalaya, following the methods reported previously. Field surveys were conducted in various localities, including Surgan, Kalay Pani, Bukwali, Kel, Arangkel, Domail Bala, Shounther, Lunda Nar, Janawaii, Phulawaii, Halmat and Taobutt. The elderly and experienced members of the tribes, locally known as 'Budhair' (aged), preferably above the age of forty, were interviewed. More often, they were accompanied to the field for identification of the plant species used in veterinary treatment and their preferred habitats. The survey targeted farmers, shepherds, pastoralists, traditional healers, gardeners, shopkeepers, and plant collectors who had knowledge of veterinary practices. The plant specimens were shown to them for authentication of relevant information, such as the mode of preparation, method of use and dosage of each medicinal plant species. To bring an element of accuracy, the information obtained from one locality was cross-checked with that of others. The distribution status of the plant species used in veterinary practices in the region (critically endangered, endangered, vulnerable and secure) was also determined on the basis of field observation and information collected from the inhabitants of the area.

Plant collection, identification and herbarium deposition
Plant specimen collection and utilization data collection were carried out in the upper part of Neelum Valley, located between 74°24′50″ and 74°31′50″ E longitude and 34°50′40″ and 35° N latitude, at an altitude of 6,500–13,000 feet (2,000–4,000 meters). Specimens were collected mostly from the wild, with the exception of a few (five cultivated species) collected from cultivated lands. No permit or permission was required to collect the samples.
Most of the collection was carried out on public land, which is the property of the State, and no formal permission is required for research work from the forest department of the State. Where data collection was required on private lands, verbal permission was sought from the landowners before data collection at each site. Specimens of medicinal plants collected from each locality were provided with a collection number for future reference and supported by checklists for inventory. The collected plant specimens were processed at the Herbarium, Department of Botany, University of Azad Jammu & Kashmir, Muzaffarabad, and then identified with the help of the available literature. The properly processed plant specimens were deposited in the Herbarium, Department of Botany, University of Azad Jammu & Kashmir, Muzaffarabad.

Data analysis
Relative frequency of citation (RFC). The frequency of citation was calculated to assess the incidence of one particular plant species used for the treatment of veterinary diseases in relation to the overall citations for all plants. The relative frequency of citation was calculated as RFC = FC/N, where FC is the number of informants reporting the use of a plant species and N is the total number of informants who took part in the study. Alternatively, RFC was expressed as the number of citations for a given species divided by the number of citations for all species; frequency of citation (%) for a particular species = (number of citations for that particular species / number of citations for all species) × 100.
Use value (UV). The use value of a species was calculated as UV = FC/N, where FC is the frequency of citation of one species and N is the total number of informants who participated in the study. The relative importance of each species was computed according to the formula UVs = ΣUVi / Ni, where UVi represents the use value for a given species among the participating informants and Ni represents the total number of informants.
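To make the index definitions concrete, the following is a minimal Python sketch of the RFC, frequency-of-citation percentage, and UV calculations. Only the total of 126 informants is taken from this study; the species citation and use-report counts in the example are hypothetical.

```python
# Minimal sketch of the quantitative ethnobotanical indices described above.
# The citation/use-report counts below are hypothetical examples,
# not values from the Neelum Valley survey.

def relative_frequency_of_citation(fc: int, n_informants: int) -> float:
    """RFC = FC / N, where FC is the number of informants citing the
    species and N is the total number of informants interviewed."""
    return fc / n_informants

def frequency_of_citation_percent(citations_species: int, citations_all: int) -> float:
    """FC% = (citations for the species / citations for all species) * 100."""
    return 100.0 * citations_species / citations_all

def use_value(use_reports: list[int], n_informants: int) -> float:
    """UVs = sum(UVi) / N, where UVi is the number of use reports
    given by informant i for the species."""
    return sum(use_reports) / n_informants

if __name__ == "__main__":
    n = 126                     # total informants interviewed in this study
    fc_species = 9              # hypothetical number of informants citing one species
    print(relative_frequency_of_citation(fc_species, n))
    print(frequency_of_citation_percent(citations_species=15, citations_all=210))
    print(use_value([1, 2, 1, 1, 3], n))
```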
In the present study, 39 plant species from 21 families were recorded for their ethnoveterinary importance in the area. A total of 126 informants were interviewed at their homes, in the field or at religious places through convenience sampling. Among these, 73 were female and 53 were male. Young informants (43) were between the ages of 30 and 45 years, 56 were aged 40–60 years, sixteen were 61–75 years old, and the remaining 11 informants were 76 or above. The majority of the informants (87) were illiterate, 26 informants had 10 to 12 years of education, and 13 informants held graduate-level degrees. During the interviews, it was observed that the illiterate and older informants had more traditional knowledge of plants than the younger and educated groups. Females above 40 years of age were found to be more informative and the true practitioners of the ethnoveterinary sector. All the informants were interviewed in the local languages Pahari/Hindko/Kashmiri. The key ethnoveterinary questions covered the local names of plants and the parts used, the mode of preparation and administration, the amount of the dose given, the disease treated and the personal experience of the informants.

Taxonomic distribution and growth form of medicinal plants
The current study reported 39 medicinal plants belonging to 21 families, which were used for the treatment of 21 livestock diseases. These include 24 herbs (62%), 10 shrubs (25%), 3 trees (11%) and 2 climbers (1%). Polygonaceae was the dominant family, contributing 7 species, followed by Crassulaceae (5 species), Asteraceae (4 species), Papilionaceae (3 species) and Lamiaceae, Apiaceae and Caprifoliaceae (2 species each). The remaining 11 families were represented by one species each (family-wise distribution of the plants used for veterinary treatments).

Plant part(s) used, formulation and use categories
The information obtained from the participants regarding the plant parts used revealed that different parts of the plants are used for the preparation of remedies. Roots were the most used part (49%), followed by aerial parts (28%), seeds (8%), fruits (8%), bark and resin (2% each), and leaves (3%) in the veterinary treatments (plant parts used to cure different diseases in the animals). The main methods of preparation of the remedies were mashed uncooked (19 species), cooked (15 species), decoction (3 species), and powder and resin (one species each). The key informants in this study reported 21 major therapeutic uses of the plants, which included enterotoxaemia, dysentery, indigestion, internal heat, dehydration, tonic, milk production, ecto-parasitism, post-delivery treatment, anti-salt, hemoglobinuria, prolapse of the uterus, peste des petits ruminants (PPR, a transboundary viral disease), dyspnea, repeat breeding, goat pox, deworming, nephritis, strangles, constipation and cough. A total of 9 species were used as tonic, 9 for indigestion, 4 species each for post-delivery treatment, deworming and constipation, 3 for dysentery, 2 each for enterotoxaemia, dyspnea, internal heat, milk production, cough and goat pox, and one each for anti-salt, dehydration, repeat breeding, nephritis, PPR, strangles, hemoglobinuria and ecto-parasitism. The medicinal plants used as tonic were Saussurea lappa, Aralia cachemiriana, Bistorta amplexicaulis, B. affinis, Helianthus annuus, Geranium wallichianum, Berberis lycium, Aesculus indica and Angelica cyclocarpa.
The plant species used for the treatment of indigestion were Aesculus indica, Thymus linearis, Saussurea lappa, Angelica archangelica, A. cyclocarpa, Rumex nepalensis, Zea mays and Viburnum grandiflorum. The plant species used for post-delivery treatments were Dipsacus inermis, Rumex acetosa, Rumex nepalensis and Taraxacum laevigatum. Each plant species is provided with its scientific name and author citation, followed by the family, local name (in italics), growth form, altitudinal range (in meters above mean sea level), distribution status in the region (critically endangered, endangered, vulnerable and secure), and lastly, in brief, the part(s) used, the mode of preparation and the dosage (wherever available). The proportion of life forms of the species is also given (proportion of life forms of the plant species used in ethnoveterinary practice).

Relative frequency of citation and use value
The relative frequency of citation (RFC) and use value (UV) of the medicinal plants were calculated, ranging from 41 to 7.32. The highest RFC was found for Saussurea lappa (7.32), followed by Rumex acetosa (6.61), Rumex nepalensis (6.43), Thymus linearis (5.0) and Angelica cyclocarpa (5.0). The lowest relative frequency of citation was recorded for Rhodiola pinnatifida, Taraxacum laevigatum and Helianthus annuus (0.89 each). The highest UV was recorded for Saussurea lappa (0.33), followed by Rumex acetosa (0.29), Rumex nepalensis (0.29), and Thymus linearis and Angelica cyclocarpa (0.22 each). The lowest use value was recorded for Rhodiola pinnatifida and Taraxacum laevigatum, at 0.04 each.
Ethnoveterinary applications of the local plant species are an important part of life for the Himalayan mountain populations of the Kashmir region, as livestock rearing plays a vital role in the local microeconomy and livelihood support. Semi-nomadic populations prefer ethno-medicines to allopathic remedies because they are cheaper and readily available. Our findings revealed that the local populations use a significant number (39 species) of locally available plants for their livestock health care. The medicinal plant species utilized for livestock treatments occupy a diverse range of habitats, from valley plains through temperate mountain forests to alpine pastures, across a wide altitudinal range of 1,800–3,700 m. It was observed that older population groups, especially females, possessed more ethnobotanical knowledge because of their closer association with the typical agro-pastoral lifestyle compared with the younger generation. The taxonomic analysis indicated the dominance of Polygonaceae, Asteraceae and Crassulaceae. These families comprise mostly herbaceous taxa in the local ethnoveterinary flora, which relates to the broader ecological amplitude and abundance of these families in the region. The routes of administration of these herbal remedies were essentially oral, and the plant root was the most widely used part, followed by the aerial parts as a whole or the leaves. Herbs were the leading growth form of the medicinal species, followed by shrubs and trees. Herbs are often used because of their frequent availability and ease of collection and application. Plant species were reported to be used in different modes of preparation to form crude drugs, as well as to be fed as food supplements to promote faster weight gain and to treat enterotoxaemia, indigestion, dehydration, ecto-parasitism, post-delivery complications, worm infestation, constipation, and respiratory and reproductive disorders. The quantitative ethnobotanical indices offer accurate estimates of plant use frequencies, which can be utilized for the conservation management of the heavily consumed threatened plants of the region. Our results identified several important plants, including Saussurea lappa, Aconogonon molle, Angelica cyclocarpa, Rumex acetosa, Geranium wallichianum, Rumex nepalensis, Angelica glauca and Thymus linearis, as having relatively higher use values in the region. The relative frequency of citation (RFC) and use value (UV) show that the highest RFC was found for Saussurea lappa, Rumex acetosa and Rumex nepalensis, while the lowest RFC was recorded for Rhodiola pinnatifida, Taraxacum laevigatum and Helianthus annuus. Similarly, the highest UV was recorded for Saussurea lappa, Rumex acetosa and Thymus linearis, and the lowest use values were recorded for Rhodiola pinnatifida and Taraxacum laevigatum. These overexploited species are prime candidates for conservation in the region and demand immediate attention. It was observed that the method of administering ethno-veterinary plant remedies varied greatly among the different ethnic communities. Different communities were recorded to use different plant species for treating the same disease and vice versa. Similarly, plants were used singly as well as in combination for treating various livestock ailments, which reflects the diversity of the ethnic knowledge and the heterogeneity of cultural practices.
The ethnic usage of indigenous medicinal plants to treat veterinary disorders and ailments makes a significant contribution to sustaining the livelihood support system of the local populations in the region. The diverse ethnic knowledge reflects the rich cultural values of the society, linked with sustainable utilization of the local plant diversity. The results provide a valuable database with practical implications for the management of natural resources in the area. These findings also provide baseline information by identifying effective herbal remedies for livestock health, which can be utilized by veterinarians and pharmacologists for the development of new therapies as well as the isolation of bioactive compounds. The results also serve as a conservationist's proxy and provide insightful scientific information for the conservation management of overexploited plant species of the region.
Indigenous communities in Neelum Valley depend on medicinal plants for ethnoveterinary use. The people used 39 medicinal plants to cure 21 livestock diseases. Knowledge of the traditional medicinal system is restricted to herders, farmers and elder community members. Some important plants, such as Dipsacus inermis, Rumex nepalensis, Angelica cyclocarpa, Saussurea lappa and Aesculus indica, have great significance in ethnoveterinary practice. Among these, Saussurea lappa and Rumex nepalensis were found to have the highest use value and frequency of citation. The younger generation is unaware of this traditional treasure and takes no interest in it owing to modernization. The current study makes an important contribution towards preserving this indigenous plant-based knowledge from extinction. New ethnoveterinary uses in the study area were found for enterotoxaemia, dehydration, indigestion, deworming and other conditions. Phytochemical and pharmacological investigations to isolate the active compounds and to test the in vitro or in vivo efficacy of the above-mentioned plants against the targeted veterinary diseases are important. In addition, critical toxicological investigations are required for the safe and secure use of the documented ethno-medicines.
S1 File. Sample of questionnaire used during field survey for obtaining ethnobotanical information. (DOCX)
S1 Fig. (JPG)
Mesonephric-type adenocarcinomas of the ovary: prevalence, diagnostic reproducibility, outcome, and value of

There has been a surge in studies of mesonephric-like adenocarcinomas involving the endometrium and ovary in recent years. Owing to the lack of an association with mesonephric remnants, McFarland et al proposed the term mesonephric-like adenocarcinoma to separate them from mesonephric-type adenocarcinoma (MA) of the uterine cervix. In 2020, this entity received its own ICD-O code (9111/3). Evidence including associated or admixed Müllerian lesions (e.g. atypical endometrial hyperplasia, endometriosis, endometrioid neoplasm, and low-grade serous carcinoma) and a shared clonal relationship supports a Müllerian origin of these tumors with Wolffian/mesonephric transdifferentiation, rather than a true mesonephric origin. The phenomenon of transdifferentiation, mostly from endometrioid lesions, would also explain the morphological overlap with endometrioid carcinomas (ECs), resulting in diagnostic problems. McFarland et al initially described this entity as an 'unusual variant of endometrioid carcinoma', leading to a dispute over whether this is a truly independent entity. Mesonephric-like adenocarcinomas of the ovary and the uterine corpus and MA of the uterine cervix share identical phenotypes, KRAS driver mutations, and overlapping proteomic and DNA methylation profiles, the latter being distinct from other Müllerian histotypes and potentially somatically acquired during transdifferentiation. Therefore, we proposed using the same term of 'mesonephric-type' adenocarcinoma (MA) and the ICD-O code 9110/3 for tumors from cervical, endometrial, and ovarian sites, regardless of the cell of origin. A four-marker immunohistochemical (IHC) panel of GATA3, TTF1, ER, and PR was used in the discovery of MA and represents a valuable ancillary test. PAX2 is another marker of benign mesonephric cell lineage, but its diagnostic utility in the context of MA has not been evaluated. The objectives of our study were first to assess the prevalence of MA in large retrospective cohorts of ovarian carcinomas using a combination of IHC screening and morphological review. Second, we evaluated the interobserver reproducibility based on morphology alone and morphology plus IHC to refine the diagnostic approach. Third, we compared the survival and biomarker expression of MA and EC.

Study cohorts
An overview of the study flow is provided in the supplementary material. We gathered 1,537 ovarian epithelial neoplasms from two existing population-based ovarian carcinoma cohorts [Ovarian Cancer in Alberta and British Columbia (OVAL-BC), n = 804, and Alberta Ovarian Tumour Type (AOVT), n = 536] and from a previously described consecutive hospital-based series (n = 197). The OVAL-BC study recruited incident cases from the cancer registries of two Canadian provinces diagnosed between 2001–2012 (BC) and 2005–2011 (AB). The AOVT study identified ovarian carcinomas from the Alberta Cancer Registry diagnosed between 1978 and 2010. Duplicate patients in the OVAL-BC and AOVT cohorts were excluded. All cases were subjected to an IHC-integrated review to confirm or reclassify histotypes. Previously assessed IHC markers (ER/PR, PMS2/MSH6, p53, WT1, PAX8, and ARID1A) were used for correlative analyses. PMS2/MSH6 and p53 were used to assign the mismatch repair deficient (MMRd) and p53 abnormal (p53abn) molecular subtypes for EC.
POLE mutation status was not available; hence, MMR-proficient and p53-normal ECs were considered to be of no specific molecular profile (NSMP). ECs were graded using FIGO grade. All cohorts received local ethics approval (HREBA.CC-16-0161, HREBA.CC-16.0159, HREBA.CC-16.0371, HREBA.CC-21-0362, and HREBA.CC-19-0444).

Immunohistochemistry and interpretation
Sections of 4-μm thickness obtained from tissue microarrays (TMAs) were used for IHC on a Dako Omnis autostainer (Agilent Technologies, Santa Clara, CA, USA) with onboard heat-induced epitope retrieval, followed by antibody incubation and use of Dako EnVision FLEX (Agilent Technologies). The antibody clones, suppliers, dilutions, and Dako-specific protocols were as follows: GATA3 (L50-823, Biocare, 1/400, H20-10M-20), TTF1 (SPT24, Leica, 1/200, H20-X-20), PAX2 (EP235, Bio SB, 1/50, H30-10R-30), CD10 [56C6, DAKO, ready-to-use (RTU), H30-X-30], and Calretinin (Dak-Calret1, Dako, RTU, H25-X-25). Markers were scored in a three-tier system based on staining distribution and categorized as 0 = absent, 1 = focal (staining in 1–50% of tumor cells), and 2 = diffuse (staining >50%). The results were dichotomized as either absent or present, except for PAX2, where diffuse staining was considered normal/retained versus abnormal being reduced or absent. ER and PR were paired and the higher expression value of the two markers was used to represent the ER/PR combination. From the four-marker panel of GATA3, TTF1, and ER/PR, four IHC groups were created: (1) MA-IHC profile, defined as GATA3 and/or TTF1 expression with complete absence of ER/PR; (2) EC-IHC profile, defined as absence of GATA3 and TTF1 with at least focal expression of ER/PR; (3) IHC double-negative, which were negative for Wolffian/mesonephric markers (GATA3, TTF1) and Müllerian markers (ER/PR); and (4) IHC double-positive, which were positive for GATA3 and/or TTF1 with at least focal ER/PR.

Identification of ovarian MA in retrospective cohorts
To identify MA, we used a two-step process. The first step was the application of a four-marker IHC screen (GATA3, TTF1, ER, and PR) followed by a morphologic review of tumors with an MA-IHC profile. Since almost all MA cases detected by the IHC screen were previously diagnosed as EC, as a second step we conducted a morphologic review of the remaining 369 ECs, followed by integration of the IHC profile (supplementary material). The latter was performed on two representative full sections that were selected during previous full-slide reviews independently by two pathologists (MK and ZA) with knowledge of the IHC profile. Neither participated in the interobserver reproducibility study, and they achieved consensus in discordant instances at a multiheaded microscope.

Interobserver reproducibility
A reproducibility set of 66 cases consisting of EC and MA, enriched for the latter, was compiled, and one representative digital slide per case was scanned at ×40 magnification using an Aperio scanner (Leica Biosystems, Vista, CA, USA). Fourteen MAs were previously assessed for global DNA methylation profiles using the Illumina Infinium EPIC (850K) BeadChip (Illumina, San Diego, CA, USA). Five gynecologic subspecialty pathologists reviewed the digital slides, blinded to clinical information, and categorized them into EC or MA in an initial round of morphology-based assessment.
The following description of MA was provided: 'the characteristic low-power architecture of MA is a compact, blue-appearing proliferation of small or long tubules with intraluminal eosinophilic (PAS+, Alcian blue-negative) secretions'; however, the latter can be a focal finding. In addition, a variety of architectural patterns is also characteristic of MA, including glandular, solid/spindled, papillary, trabecular, retiform, sieve-like, and chorded-hyalinized. The cell shape is cuboidal rather than columnar as seen in EC. The nuclear atypia is generally low to moderate, with papillary thyroid carcinoma-like nuclear features such as open/vesicular chromatin, nuclear overlapping, grooves, angulated nuclear contours/indentations, and inconspicuous nucleoli. Mitotic activity is usually conspicuous. Squamous or mucinous differentiation may be seen in EC but should be absent in MA, except in circumstances of admixed MA and EC. The subspecialty pathologists were also asked whether they would order IHC; if they indicated yes, results for the four-marker IHC panel of GATA3, TTF1, ER, and PR (but not the actual IHC slides) were provided. They then had the opportunity to revise their diagnosis based on the IHC results.

Morphological feature review of MA versus EC
A subset of 22 MAs and 71 ECs (52 EC NSMP, 6 EC MMRd, and 13 EC p53abn) underwent detailed morphological review of two representative whole H&E sections, evaluating the following features: low-power color (blue versus pale), presence of a background adenofibroma, psammoma bodies, number of architectural patterns, types of architectural patterns (sieve-like/cribriform/microcystic, glomeruloid, glandular/pseudoendometrioid, ductal/slit-like angulated, long tubular, trabecular, solid/spindled/nested, and papillary/villoglandular), the presence of colloid-like luminal material, cytologic/nuclear features (nuclear crowding/overlapping, prominent nucleoli, dense, vesicular, or open chromatin, columnar cell shape with abundant cytoplasm, cuboidal cell shape with scant cytoplasm), and mitotic count per 10 high-power fields. Squamous differentiation, mucinous differentiation, cytoplasmic clearing, and cilia were recorded as conspicuous, focal, or absent.

Statistics
Categorical data were compared using Pearson's chi-squared test and continuous data using Student's t-test or analysis of variance of means. Bonferroni correction was used to correct for multiple testing. For paired interobserver reproducibility, Cohen's kappa coefficient as well as percentage agreement was calculated and reported as an average and range over pairs of the five subspecialty pathologists. Nominal logistic regression modeling was used to calculate the area under the curve with receiver operating characteristics for morphological features in combination. Recursive partitioning was performed to establish a hierarchy for IHC markers. Kaplan–Meier survival analyses were performed to estimate 5-year survivals with disease-specific survival as the endpoint. The follow-up period was right-censored at 10 years. Differences were assessed by a log-rank test. Cox proportional hazards regression models were applied to estimate hazard ratios (HRs) with 95% confidence intervals (CIs). Multivariate Cox regression models were adjusted for age (continuous), stage (I versus II–IV), and p53 status (normal versus abnormal). JMP 17.0.0.0 was used for statistical analyses.
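To make the paired agreement analysis concrete, the following is a minimal Python sketch of how pairwise Cohen's kappa and percentage agreement could be computed over a reviewer-by-case matrix. It assumes NumPy and scikit-learn are available, and the ratings shown are simulated placeholders rather than the study's diagnostic calls.

```python
# Minimal sketch of paired interobserver agreement (Cohen's kappa and
# percentage agreement) across multiple reviewers.
# The example ratings are simulated: 0 = EC, 1 = MA.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

def pairwise_agreement(ratings: np.ndarray):
    """ratings: array of shape (n_reviewers, n_cases) with categorical labels.
    Returns mean and range of Cohen's kappa, and mean percentage agreement,
    over all reviewer pairs."""
    kappas, agreements = [], []
    for i, j in combinations(range(ratings.shape[0]), 2):
        kappas.append(cohen_kappa_score(ratings[i], ratings[j]))
        agreements.append(np.mean(ratings[i] == ratings[j]))
    return np.mean(kappas), min(kappas), max(kappas), np.mean(agreements)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 5 reviewers x 66 cases of simulated EC (0) / MA (1) diagnoses
    ratings = rng.integers(0, 2, size=(5, 66))
    mean_k, k_lo, k_hi, mean_agree = pairwise_agreement(ratings)
    print(f"kappa mean {mean_k:.3f} (range {k_lo:.3f}-{k_hi:.3f}), "
          f"agreement {mean_agree:.1%}")
```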
Recursive partitioning was performed to establish a hierarchy for the IHC markers. Kaplan–Meier survival analyses were performed to estimate 5-year survivals with disease-specific survival as the endpoint. The follow-up period was right censored at 10 years. Differences were assessed by a log-rank test. Cox proportional hazards regression models were applied to estimate hazard ratios (HRs) with 95% confidence intervals (CIs). Multivariate Cox regression models were adjusted for age (continuous), stage (I versus II–IV), and p53 status (normal versus abnormal). JMP 17.0.0.0 was used for statistical analyses.

Identification of ovarian MA in retrospective cohorts
First, a four-marker IHC screen was performed on 1,537 ovarian epithelial neoplasms, and the expression of GATA3, TTF1, ER, and PR across histotypes is shown in supplementary material, Table . EC showed the highest frequency of GATA3 expression (37/384, 9.6%; 5.7% focal, 3.9% diffuse) and TTF1 (35/384, 9.1%; 2.9% focal, 6.2% diffuse). The MA-IHC profile was present in 45/1,537 (2.9%) across all histotypes. Of the 45 tumors with an MA-IHC profile, 13 (28.8%) were deemed to be MA on morphologic review (12 initially classified as EC, 1 initially classified as clear cell carcinoma). The remaining 32 MA-IHC profiles were confirmed by morphological review as 4 ECs, 22 clear cell carcinomas, 5 mucinous carcinomas, and 1 mucinous borderline tumor. Morphological review of the remaining 369 ECs with the knowledge of their IHC profile identified an additional 20 instances with morphological MA features. By integration of their IHC profile, eight EC-IHC were classified as EC and five IHC double-negative cases as MA. Of the remaining seven IHC double-positive cases, three had high ER/PR expression and four had low ER/PR expression. Although we used GATA3/TTF1/ER/PR for our primary IHC screen, we separately assessed PAX2 as a potential mesonephric marker. PAX2 was expressed in normal Müllerian tissue including endometrium, fallopian tube, and mesonephric remnants (supplementary material, Figure ). PAX2 showed normal expression in almost all MA versus 8.0% of high-grade serous carcinomas and 14.6% of initial EC (Figure and supplementary material, Table ). Therefore, based on PAX2, five IHC double-positive cases were ultimately classified as MA (four low ER/PR and PAX2 normal, one high ER/PR and PAX2 normal, supplementary material, Figure ) and two as EC (both high ER/PR and PAX2 abnormal). The prevalence of MA among those initially classified as EC was 5.7% (22/385, supplementary material, Figure ). We identified another 7 recently diagnosed MA during a period in which 62 EC diagnoses were made, resulting in a total of 30 MAs, of which 14 had previously been subject to DNA methylation profiling; all 14 showed epigenetic profiles in keeping with mesonephric differentiation (Figure ).

Interobserver reproducibility
One representative digital slide per case from the review set of 66 tumors was evaluated by five subspecialty pathologists (supplementary material, Table ). The diagnostic agreement between MA and EC reached a fair kappa coefficient of 0.376 (range: 0.209–0.602, supplementary material, Table ). A diagnosis of MA was favored on average in 10/66 cases (15.2%, range: 5–16), and the four-marker IHC panel was requested on average in 38/66 cases (57.6%, range: 25–46) (supplementary material, Table ).
After integration of IHC (GATA3/TTF1/ER/PR) results, the interobserver agreement improved to a substantial kappa of 0.727 (range: 0.655–0.852, p = 0.00017, supplementary material, Table ). Changes were made on average in 8 cases (range: 4–13), in the majority toward MA (average 6, range: 3–11) and less toward EC (average 2, range: 1–5) resulting in an average MA diagnosis in 21.2% (14/66 cases, range: 11–19, supplementary material, Table ). By integrating IHC, consensus among the five subspecialty pathologists increased from 43/66 (65.2%, 43 ECs, 0 MA) to 53/66 (80.3%, 46 ECs, 7 MAs). Persistent diagnostic issues Even after IHC integration using the four‐marker panel, diagnostic discordances remained in 13 cases with 1/5 subspecialty pathologists deviating from the majority in 9 and 2/5 subspecialty pathologists in 4. Of these 13, 11 had previously been subject to DNA methylation profiling , all of which showed epigenetic profiles in keeping with mesonephric differentiation. Of note, this included four cases with a majority diagnosis of EC (supplementary material, Table ). Diagnostic discordance occurred when IHC data were not requested (nine instances affecting nine cases, Figure ), suggesting some MAs have subtle morphological features and/or morphological features were not well appreciated. Further, the integration of the IHC results with morphology caused difficulties in 14 instances affecting 7 cases. The latter consisted of IHC double‐negative ( n = 4), IHC double‐positive ( n = 2), and one case with an MA‐IHC profile (Figure ). One double‐positive tumor with diffuse ER expression was considered MA by a minority of subspecialty pathologists, a diagnosis supported by DNA methylation profiling results (supplementary material, Figure ). Morphological feature review of MA versus EC Among the morphological features assessed on 22 MAs and 71 ECs, 8 were not significantly different between the two histotypes, and these included solid/spindled/nested and papillary/villoglandular architecture as well as cytoplasmic clearing, cilia, chromatin pattern, mitotic count and psammoma bodies (supplementary material, Table ). After a Bonferroni correction for multiple testing, eight features remained significantly different between MA and EC, and these included blue low‐power color, colloid‐like luminal material, three or more architectural patterns, sieve‐like/cribriform/microcystic and glomeruloid architectures, columnar cell shape with abundant cytoplasm or cuboidal cell shape with scant cytoplasm, and nuclear crowding. As columnar and cuboidal cell shape were nearly mutually exclusive, only cuboidal cell shape was included in a nominal logistic regression model together with the other significant features yielding an area under the curve of 0.998 in distinguishing MA from EC with cuboidal cell shape with scant cytoplasm and blue low‐power color being the strongest contributors to the model. Clinicopathological features of MA The clinicopathological features of 30 MAs and 363 ECs were compared (Table ). Patients with EC were on average 8 years younger than those with MA. Associated endometriosis occurred at a similar rate. Despite a similar stage distribution, the 5‐year survival for patients with MA was lower (64.8%) compared with EC (85.7%) but slightly better than that of high‐grade serous carcinoma patients (45.4%, Figure ). 
In multivariable analysis adjusted for age, stage, and p53 status, patients with MA had an increased risk of earlier death (HR = 3.08, 95% CI: 1.62–5.85, p < 0.0001) compared with the reference EC group. This was a slightly higher risk than p53 abnormal EC (HR = 2.32, 95% CI: 1.30–4.14, supplementary material, Table ). The survival difference between MA and EC NSMP remained significant in a stratified analysis for stage I disease (log-rank p = 0.0002, supplementary material, Figure ).

Biomarker expression of MA compared to EC
Table compares the expression of selected IHC markers in MA with EC, with significant differences in GATA3, TTF1, PAX2, ER, PR, MMR proteins, p53, and WT1. Notably, the expression of CD10 and calretinin was not significantly different. PAX2 showed both high sensitivity (96.6%) and high (90.4%) specificity for MA when using a cutoff of >50% of tumor cells for normal expression. Using hierarchical partitioning with normal versus abnormal PAX2 expression, combined ER/PR status, and either GATA3 or TTF1 positivity achieved >98% precision (R², 0.822) in distinguishing MA from EC. Figure shows a proposed morphology and IHC-based diagnostic decision algorithm (alternative approach without PAX2 shown in supplementary material, Figure ).

Survival analysis within EC
Within EC, we performed survival analysis comparing the different IHC profiles also including the molecular subtypes (p53abn, MMRd, and NSMP). In the original EC cohort, an MA-IHC profile as well as p53 abnormal molecular subtype was associated with a shorter survival, whereas a double-positive and double-negative IHC profile showed an intermediate prognosis compared with the favorable EC-IHC profile and MMRd molecular subtype (Figure ). After removing MA from the EC pool, EC NSMP with a double-positive IHC profile, EC p53 abnormal and MA all had a similar survival (Figure ). We also noted that for individual markers, significant unfavorable prognostic associations for GATA3, TTF1, and combined ER/PR status remained for the revised EC NSMP cohort even after MA was removed (supplementary material, Figure ). However, this did not apply for normal PAX2, which was significantly associated with unfavorable prognosis in the original EC cohort, but not after MA was removed. In contrast, the nine EC NSMP with morphological features of MA but an EC-IHC profile had a favorable outcome with no patient dying of disease (supplementary material, Figure ).

Survival analysis within MA
Finally, we performed exploratory survival analysis within the 26 MAs with available outcome data. Surprisingly, stage was not associated with survival, and patients with MA confined to the ovary had a 5-year survival of 65.1% compared with 61.9% seen in MA with spread beyond the ovary (log-rank p = 0.24, supplementary material, Figure ). While grade and GATA3 status were not prognostic, there was a significant association with shorter survival for TTF1 expressing MA (log-rank p = 0.0077, supplementary material, Figure ).
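For readers who want to reproduce this kind of analysis on their own data, the survival workflow reported above (Kaplan–Meier estimates with right censoring at 10 years, log-rank testing, and a Cox model adjusted for age, stage, and p53 status) might look roughly like the sketch below; the lifelines library, the file name, and all column names are assumptions, not part of the original study, which used JMP.

```python
# Illustrative sketch of the survival workflow reported above; not the study's code.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")  # hypothetical table: one row per patient

# Disease-specific survival, right censored at 10 years
df["event"] = ((df["died_of_disease"] == 1) & (df["time_years"] <= 10)).astype(int)
df["time"] = df["time_years"].clip(upper=10)

# Kaplan-Meier 5-year estimate and log-rank comparison for MA versus EC
ma, ec = df[df["histotype"] == "MA"], df[df["histotype"] == "EC"]
km_ma = KaplanMeierFitter().fit(ma["time"], ma["event"], label="MA")
print("MA 5-year disease-specific survival:", float(km_ma.predict(5)))
print("log-rank p:", logrank_test(ma["time"], ec["time"], ma["event"], ec["event"]).p_value)

# Multivariable Cox model: MA vs EC adjusted for age (continuous),
# stage (I vs II-IV), and p53 status (normal vs abnormal)
df["is_MA"] = (df["histotype"] == "MA").astype(int)
df["stage_II_IV"] = (df["stage"] != "I").astype(int)
df["p53_abn"] = (df["p53"] == "abnormal").astype(int)
cph = CoxPHFitter().fit(
    df[["time", "event", "is_MA", "age", "stage_II_IV", "p53_abn"]],
    duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```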
From a diagnostic perspective, our study shows that MAs are underrecognized when classic morphological features are not identified. There were also difficulties in integrating morphological features with certain IHC profiles. Furthermore, IHC profiles alone are of prognostic significance. We recommend a low threshold for morphological features of MA to order an ancillary IHC marker panel consisting of PAX2, ER/PR, and GATA3/TTF1. The current understanding is that MA develops through transdifferentiation from predominantly endometrioid Müllerian lesions, which is further supported by a high frequency of associated endometriosis in MA (62% in the current study, which is nearly identical to the rate of 63% reported in a recent study on extrauterine MA ) as well as the loss of ARID1A expression observed in a subset of MA in the current study. The process of transdifferentiation from a Müllerian to a mesonephric phenotype, however, may not be a sudden categorical switch but rather a continuum, which would explain the diagnostic challenges between MA and EC. This is exemplified in our study by cases with EC phenotype by light microscopy but GATA3/TTF1 expression and varying levels of ER/PR expression (double-positive for mesonephric and Müllerian markers) showing a similar prognosis to MA. Double-positive EC occurred at a similar frequency to cases reclassified as MA. GATA3 and TTF1 remained prognostic as individual markers in EC even after MAs were removed. This suggests that these markers are an early indication of mesonephric transdifferentiation in tumors that otherwise largely retain Müllerian features. Further in-depth study of these double-positive (i.e.
positive for mesonephric and Müllerian markers) cases focusing on KRAS mutation status, DNA methylation, and copy number profiles may shed light on the question of how much ER/PR positivity is acceptable in MA. Notably, we observed one double‐positive case with >50% ER/PR expression in the reproducibility set, which by DNA methylation profiling clustered with MA . Considering the methylation profile as a well‐conserved marker of cell lineage, this most likely represents a true MA. Euscher et al also accepted 3/33 cases with >50% ER expression as ovarian MA . Comparing the methylation profile of these cases with bona fide MA and EC might reveal examples of an intermediate mesonephric‐like state of transdifferentiation between Müllerian and mesonephric differentiation. On the other hand, samples that showed morphological features of MA but an EC‐IHC profile and favorable prognosis indicate that ancillary IHC is important to correctly diagnose these cases. In general, however, MA morphology was underrecognized in the reproducibility set and the significant increase in interobserver reproducibility when integrating IHC information supports the notion that ancillary IHC is required for an MA diagnosis. According to our hierarchical decision tree, cases with normal retained PAX2 expression and absence of ER/PR with an appropriate morphological phenotype are almost certainly MA. Despite the long‐established knowledge that PAX2 is a marker of mesonephric cell lineage with expression in benign mesonephric remnants , and case reports of PAX2 expression in MA , it is somewhat surprising that PAX2 has not been further studied as a potential diagnostic marker for MA, particularly since it has been shown that PAX2 is lost in endometrioid neoplasms including the precursor stage of atypical endometrial hyperplasia . Herein, we show that PAX2 is the most sensitive and specific single marker to distinguish MA from EC based on a large number of cases. In fact, almost all MAs showed strong diffuse, normal PAX2 expression in TMA cores. GATA3 and TTF1 demonstrated limited sensitivity in our study with 79.3% and 46.7%, respectively, which for GATA3 is in line with prior studies (90.5%, 38/42) . However, the frequency of TTF1 expression in our study was lower compared with the literature (78.6%, 33/42). We believe that focal TTF1 staining missed on TMAs is at most a minor contributor and does not fully explain the discrepancy. In a prior study comparing ER IHC expression in TMA cores versus whole slide sections, the disagreement for TMAs with two or three cores (approximately half of our cases were represented by two and half by three cores) versus whole section with a cutoff of ≥1% was 3.4% and 1.1%, respectively . Therefore, the underestimation of TTF1 using TMAs should be less than 5%. Furthermore, TTF1 was highly prognostic in our analyses, despite using TMAs. Nevertheless, due to their high specificity GATA3 and TTF1 remain useful diagnostic markers in the differential diagnosis of MA and EC, whereas CD10 and calretinin are not. We recommend a low threshold for ordering ancillary IHC on cases displaying features commonly observed in MA including blue low‐power appearance, luminal eosinophilic material, and admixture of architectural patterns such as tubules or sieve‐like and cuboidal cell shape. We also emphasize that the common solid/spindled patterns, which may not directly trigger consideration of an MA, should also prompt ancillary IHC. 
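A hedged sketch of the hierarchical PAX2 / ER-PR / GATA3-TTF1 reasoning discussed above is given below; the ordering, output labels, and function name are illustrative only, the published decision algorithm in the figure should be consulted for actual diagnostic use, and compatible morphology is assumed before any markers are weighed.

```python
# Hedged sketch of the hierarchical marker reasoning described above;
# not a validated diagnostic tool and not the published algorithm itself.

def favoured_interpretation(pax2_normal: bool, er_pr_present: bool,
                            gata3_or_ttf1_positive: bool) -> str:
    """Return a working interpretation for an EC-like ovarian carcinoma."""
    if not pax2_normal:
        # Reduced/absent PAX2 argues against mesonephric differentiation
        return "favour EC"
    if not er_pr_present:
        # Retained PAX2 with complete absence of ER/PR strongly favours MA
        return "favour MA"
    # Retained PAX2 with ER/PR expression: fall back on the mesonephric markers
    return "favour MA (double-positive)" if gata3_or_ttf1_positive else "favour EC"

# Example: retained PAX2, no ER/PR, TTF1 positive -> favour MA
print(favoured_interpretation(pax2_normal=True, er_pr_present=False,
                              gata3_or_ttf1_positive=True))
```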
Features of exclusion are unequivocal squamous or prominent mucinous/seromucinous differentiation, except in the event of a distinct admixed Müllerian component. High-grade nuclear atypia characteristic of p53 abnormal tumors should be absent. At this point, we consider abnormal p53 and MMRd as being incompatible with a diagnosis of MA. However, exceptions may exist, e.g. the possibility of an acquired TP53 mutation during progression, as seen in many other cancer entities. Euscher et al accepted one case with a TP53 mutation and abnormal p53 IHC as an ovarian MA, which showed small foci of marked cytologic atypia. On the other hand, MA should show normal PAX2 expression, while other markers (GATA3, TTF1, and ER/PR) have imperfect sensitivity and require complex integration with morphology. Although case series of ovarian MA have been reported previously, herein we found the prevalence of MA among cases initially classified as EC to be 5.7%. While our cohorts are population-based, they are biased toward non-high-grade serous carcinomas, and therefore the prevalence of MA within ovarian carcinomas as a whole could not be assessed. However, assuming that 10–15% of all ovarian carcinomas are EC, the frequency of MA among all ovarian carcinomas would be estimated to be 0.6–0.9%. This is comparable to a rate of 0.7% reported in two similar studies in endometrial carcinomas but lower than in another study, again highlighting the application of different thresholds of diagnostic and exclusion criteria. Two previous studies indicated an unfavorable prognosis for ovarian MA compared with other histotypes. However, our study is the first to show statistically significant and independent prognostic implications of MA histotype compared with EC, which remained significant in a stratified analysis for stage I tumors, highlighting the importance of recognizing MA at low stage. Overall, the survival curve for MA showed a gradual protracted decline similar to low-grade serous carcinomas, with a notable crossing of clear cell carcinomas and an intermediate survival between EC and high-grade serous carcinoma. The survival of MA is similar to p53 abnormal EC. Of note, when the initial FIGO grade was applied to MA, grade was not prognostic, supporting the previous notion that MA should not be graded. A limitation of our study is the lack of KRAS mutation status; however, KRAS mutations are neither entirely sensitive (89%) nor specific (67%) for the distinction of MA from EC. Several examples considered as MA in our study showed a methylation profile consistent with MA, providing an alternative gold standard beyond light microscopy, IHC, and KRAS mutation status. Another limitation was the lack of full slide review for associations with other Müllerian lesions beyond endometriosis. MA with associated borderline tumors and as a part of mixed carcinomas have been reported in 12% and 42% of cases, respectively. Future studies of MA and associated lesions using omics techniques might shed further light on whether transdifferentiation occurs as a continuum or a categorical switch. Since our inclusion criteria were only cases with a diagnosis of the five major histotypes of ovarian carcinomas, rare cases of MA carcinosarcoma, which historically may have been diagnosed as carcinosarcoma, could not have been identified. It is also important to keep in mind that the sensitivity and specificity of our IHC markers are somewhat artificial since they have been used to define the gold standard.
While we used presence/absence for ER/PR and GATA3/TTF1 and diffuse or >50% for PAX2, these cutoffs should be further evaluated on full sections in the clinical setting. Nevertheless, in the context of distinguishing MA from EC, PAX2 has value as a screening marker for MA due to its high sensitivity, while normal p53 and MMR proficiency are specific for MA. Classic MA morphology together with an MA IHC-profile is diagnostic of MA. The occurrence of subtle MA morphology and non-MA IHC profiles in some tumors requires complex integration to arrive at a final diagnosis. Future studies using methylation profiling and copy number analyses should clarify further gray areas such as double positive cases and different components in cases demonstrating MA admixed with Müllerian lesions.

MK and C-HL conceived, designed and supervised the study. EYK, ZA and MK identified MA cases, reviewed morphological features and collected IHC scores. SL, TO, TT, LW and NJPW participated in interobserver reproducibility. LSC, GSN, CJRS, AvD and FKFK provided resources including samples, clinical data abstraction and analysis tools. MK drafted the manuscript. All authors revised the manuscript and approved the final version.

Figure S1. Flow chart of identification of 30 MA cases
Figure S2. PAX2 expression in normal Müllerian tissue
Figure S3. Illustration of MA#29 with exceptionally high ER expression (distribution 100%, intensity 3)
Figure S4. Kaplan–Meier survival analyses comparing MA and EC of NSMP stratified by stage
Figure S5. Hierarchical decision tree using combined morphologic and immunohistochemistry-based identification of MA without PAX2
Figure S6. Kaplan–Meier survival analyses within EC of NSMP
Figure S7. Kaplan–Meier survival analyses, the same as Figure 4C, except for the addition of EC cases with morphological features suggestive of MA
Figure S8. Kaplan–Meier survival analyses within MA by stage, grade, TTF1 expression and GATA3 expression
Table S1. Expression of GATA3, TTF1, ER, PR, and PAX2 across histotypes (initial diagnoses)
Table S2. Interobserver reproducibility assessment by case
Table S3. Paired kappa values from the interobserver reproducibility assessment
Table S4. Summary of changes during immunohistochemistry integration in the interobserver reproducibility assessment
Table S5. Comparison of morphological features between mesonephric-type and endometrioid carcinoma cases
Table S6. Multivariable survival analysis of mesonephric-type versus endometrioid carcinoma cases
Response to the comment on Women in ophthalmology - An upsurge! | 44ad2e1a-e2ed-47be-b3fb-9d8cbbee1af7 | 9359261 | Ophthalmology[mh] | Nil.
There are no conflicts of interest.
|
The effect of whitening toothpastes on the surface properties and color stability of different ceramic materials | 56561060-b080-4462-993b-b9941d7b9e89 | 11520786 | Dentistry[mh] | Tooth brushing is one of the most common methods used to provide oral hygiene . However, people’s expectations of toothbrushing today are not only to improve oral health, but also to achieve whiter teeth. To meet this demand, toothpastes are formulated with various ingredients to remove stains and prevent plaque build-up. Abrasives are the major cleaning agents in toothpaste formulations, according to recent studies . Unfortunately, depending on the restorative material, abrasives that play an effective role in whitening and stain removal can create undesirable surface roughness on teeth or dentures. This can damage the restorative material and cause scratches that can lead to discolouration. The amount of wear is closely related to the particle size, density and arrangement of the abrasives in the paste, as well as the frequency and force of brushing . Rough surfaced restorations are more prone to plaque accumulation and staining. An increase in bacterial retention has been reported for roughness values greater than 0.2 μm . Oral care manufacturers are constantly improving and developing new approaches to teeth whitening to meet individual expectations. Therefore, there is a wide variety of products on the market today that address the problem of tooth discolouration. Pastes containing silica, calcium pyrophosphate, sodium bicarbonate, calcium carbonate act by mechanically removing the coloured biofilm and chromophores from the enamel surface . Oxidants such as hydrogen peroxide and calcium peroxide reduce the severity of colouration by chemically changing the pigments adhering to the tooth surface . Optical whitening toothpastes containing blue covarine, which can be considered a new technology, create a colour change through a light effect, instead of eliminating or changing the pigments on the tooth surface. This light effect is achieved by coating the tooth enamel with a fine blue layer . Another whitening group that has gained popularity in recent years is toothpastes and powders containing activated charcoal . It is believed that activated charcoal has an effect on extrinsic pigments in a similar way to the abrasion caused by toothpaste . However, there are concerns in the existing literature regarding its impact on the surface. Charcoal has been described as an abrasive mineral for teeth or gingival tissue. Increasing the size of charcoal particles also adversely affects surface smoothness and increased surface roughness can also lead to caries formation and discolouration . Modern dentistry has seen a shift from traditional manufacturing methods to Computer Aided Design-Computer Aided Manufacturing (CAD-CAM) technology . This technology offers a standardized and predictable approach by eliminating traditional measurement and model acquisition methods, thereby reducing the margin of error and shortening production time . The increasing demand for aesthetics has driven the development of ceramic materials used with CAD-CAM technology. The successful combination of mechanical properties, such as wear resistance and high rigidity, with aesthetic and biological compatibility features has resulted in a growing preference for ceramic materials as restoration materials . 
Resin ceramic materials developed for CAD-CAM workflow have enabled the increased use of polymer and ceramic combinations in dental restorations and are preferred in minimally invasive dentistry applications due to their excellent processability and superior aesthetic properties . These materials can be classified as resin nanoceramics (RNCs) and polymer infiltrated ceramics (PICN). Resin nanoceramics consist of nanometric sized ceramic fillers randomly distributed in a polymer matrix . Zirconia-reinforced lithium silicates, another material compatible with CAD-CAM systems, offer advantages in terms of both superior aesthetic properties and mechanical strength due to their high glassy content, making them frequently used in clinical practice . The purpose of this in vitro research was to investigate the effect of toothpastes with different chemical properties on the surface properties and colour of CAD-CAM resin nanoceramics and zirconia-reinforced lithium silicate. The null hypotheses of the study were: (a) whitening toothpastes will not effect the surface roughness of hybrid and glass ceramic materials, (b) whitening toothpastes will not cause discolouration of the tested materials.
Preparation of samples
A total of 96 rectangular samples (2 × 10 × 12 mm³) were prepared by cutting CAD-CAM blocks (A2, HT) made of resin-based nanoceramic (Cerasmart, GC Corp., Tokyo, Japan) and zirconia-reinforced lithium silicate (Celtra Duo, Dentsply, Constance, Germany) with a water-cooled precision cutter (Micracut 201, Metkon, Bursa, Turkey) (Fig. ). Both sides of all samples were ground with silicon carbide abrasive papers (Gripo 2 V, Metkon, Bursa, Turkey) of 400, 600, 800, 1000, and 1200 grit at 100 rpm, respectively. The specimens were cleaned in an ultrasonic bath (Vitasonic-II, Vita Zahnfabrik) and dried for 60 min at room temperature. The Celtra Duo samples were then randomly separated into two groups. One group was treated with glaze paste (Celtra Duo Universal Glaze, Dentsply, Sirona) and subjected to firing (Programat P-310, Ivoclar Vivadent, Schaan, Liechtenstein) at a sintering temperature of 820 °C for 1 min with a sintering speed of 60 °C/min, while the other group remained untreated. A digital caliper (Alpha Tools, Mannheim, Germany) was used to check the final thickness of the samples. The specimens of each group were further divided into four subgroups (n = 8) for brushing with toothpastes containing different whitening ingredients.

Roughness measurement
The initial surface roughness values of the samples were measured via a contact profilometer (Surtronic 25, Taylor Hobson, Leicester, UK) with a measuring length of 4 mm, a cut-off length of 0.8 mm and a stylus speed of 1 mm/sec, then recorded in µm. The instrument was calibrated prior to each measurement. Measurements were carried out on three distinct areas of the sample surfaces and the initial surface roughness values (Ra0) were obtained by averaging. After the brushing process, the final surface roughness values (Ra1) were registered by applying the same principles. All measurements were performed by the same observer (Ş.E.G).

Color measurement
Initial colour measurements of all specimens were made using the standard D65 light source with a dental spectrophotometer (Vita Easyshade Advance, Vita Zahnfabrik, Germany) in accordance with the CIE L*a*b* colour system. Measurements were taken from the center of the specimens on a grey background. The instrument was positioned perpendicular to the sample surface. Before each measurement, the spectrophotometer was calibrated in "single tooth" mode with a white calibration plate supplied by the manufacturer. In order to avoid possible variations in colour values, all measurements were taken three times and averaged. After brushing, the final colour values were measured by using the same method. The colour difference (ΔE) was calculated using the following equation: ΔE = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2).

Mechanical brushing
A brushing simulator (DentArGe TB-6.1 Brushing Simulator, Analitik Medikal, Turkey) was used to simulate the brushing process (Fig. ). Each specimen was fixed in one of the six separate plastic containers of the simulator with condensed silicone (Zetaplus, Zhermack SpA, Badia Polesine, Italy). FDA-certified toothbrushes of medium hardness (Dipadent, Difaş, İstanbul, Turkey) were screwed to the plastic toothbrush holder arms. For all study groups, a 1:1 mixture of toothpaste and distilled water was prepared and placed in the plastic containers to cover the samples: 1 g of toothpaste was mixed with 1 ml of distilled water. A digital analytical balance (Radwag AS220R2, Poland) and a syringe were used to adjust the amounts.
Care was taken to ensure that the mixture was always present on the samples. The control group was brushed with a conventional toothpaste (Colgate Max Fresh, Colgate-Palmolive, New York, USA). The other three groups were brushed with three different whitening toothpastes: a silica-containing toothpaste (Opalescence, Ultradent Products Inc., Utah, USA) was used for the second group, an activated carbon-containing toothpaste (Curaprox Black in White, Curaden, Kriens, Switzerland) for the third, and a blue covarine-containing toothpaste (Signal White Now, Unilever, France) for the fourth group (Table ). A total of 30,000 brushing cycles, equivalent to 3 years of brushing, were performed on all samples. The speed of the brush was 250 strokes per minute with a back-and-forth motion. To ensure standardization, a new toothbrush and a fresh toothpaste mixture were prepared for each sample. Brushing was conducted at room temperature (25 °C) with a vertical force of 350 g, a stroke length of 10 mm, and a reciprocating motion at a speed of 40 mm/sec. The toothbrushes were changed every 5,000 cycles. After brushing, all samples were washed in an ultrasonic bath and dried for 24 h.

Surface morphology
The surface morphology of the samples was analyzed by scanning electron microscopy (SEM) (Quanta FEG 450; Oxford Instruments, Uedem, Netherlands) at ×2000 magnification, under low vacuum, at 20 kV with a working distance of 9.3–11.2 mm. An SEM image of one sample from each group is presented.

The obtained data were processed with SPSS V28 (IBM Corp., IBM SPSS Statistics for Windows, Armonk, NY, USA). The normality of the data was assessed with the Kolmogorov-Smirnov test. The Kruskal-Wallis test was performed to compare the data among the groups for both surface roughness and color stability. The Mann-Whitney test was used for pairwise comparison of groups. For the analysis of dependent quantitative data, the Friedman and Wilcoxon tests were used. A p-value of ≤ 0.05 was considered statistically significant.
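As a rough illustration of the nonparametric testing workflow described above (the study used SPSS V28, so SciPy, the group names, and the Ra values below are assumptions for demonstration only):

```python
# Sketch of the nonparametric testing workflow described above; toy data only.
from itertools import combinations
from scipy import stats

roughness = {
    "control":  [0.31, 0.35, 0.29, 0.33, 0.30],
    "silica":   [0.52, 0.47, 0.55, 0.50, 0.49],
    "charcoal": [0.58, 0.61, 0.54, 0.60, 0.57],
    "covarine": [0.36, 0.34, 0.39, 0.32, 0.35],
}

# Comparison among independent groups (Kruskal-Wallis), then pairwise Mann-Whitney U
h, p = stats.kruskal(*roughness.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
if p <= 0.05:
    for g1, g2 in combinations(roughness, 2):
        u, p_pair = stats.mannwhitneyu(roughness[g1], roughness[g2])
        print(f"{g1} vs {g2}: U = {u:.1f}, p = {p_pair:.4f}")

# Dependent data (e.g., Ra of the same specimens before vs after brushing)
ra_before = [0.28, 0.26, 0.30, 0.27, 0.29]
ra_after  = [0.45, 0.41, 0.48, 0.44, 0.46]
w, p_dep = stats.wilcoxon(ra_before, ra_after)
print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_dep:.4f}")
```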
Surface roughness
Surface roughness values before brushing did not differ significantly between material groups (p > 0.05) (Table ). However, brushing increased surface roughness for all materials. Notably, the roughness of the material surfaces brushed with Opalescence™ and Curaprox™ differed significantly from the other groups (Table ). The surface roughness values of CS samples brushed with both Curaprox™ and Opalescence™ were found to be higher than those of CD and CDG samples. When evaluating the effect of different toothpastes on the surface roughness of the materials, no significant difference was observed in any of the CS (p = 0.426), CD (p = 0.102), and CDG (p = 0.129) samples (Table ). A significant difference was observed between the materials brushed with Curaprox™ when analysing the change in surface roughness values of the samples before and after brushing (p = 0.008). Specifically, the change in surface roughness of the CS samples was significantly higher than that of the CD and CDG groups among the Curaprox™-brushed materials (Table ).

Color change
Among the toothpaste groups, CS showed a significant difference in colour change (p = 0.01). The control group had the highest colour change (7.54), while Curaprox™ had the lowest (4.05) (Table ). No significant difference was found among the brushed CD samples (p > 0.05) (Table ). However, a significant difference was found in CDG samples (p = 0.018). Opalescence™-treated samples exhibited the highest ΔE value (8.0), while the control group exhibited the lowest (5.65) (Table ). For the samples brushed with Curaprox™, a statistically significant difference (p = 0.034) was found between all material groups. CD and CDG colour changes were significantly higher than those of CS (p < 0.05).

Surface morphology
Figure shows surface images of all groups and nonbrushed specimens. Following the brushing process, brush marks and varying degrees of deterioration were observed on all sample surfaces. The brush marks on CS samples were more noticeable than on CD and CDG samples.
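To make the reported ΔE values easier to interpret, a worked example of the colour-difference formula given in the Methods is shown below; the CIELAB readings are hypothetical and do not correspond to any specimen in the study.

```python
# Worked example of the colour-difference formula from the Methods,
# dE = [(dL*)^2 + (da*)^2 + (db*)^2]^(1/2); the CIELAB readings are hypothetical.
import math

def delta_e(lab_before, lab_after):
    """Euclidean CIELAB colour difference between two (L*, a*, b*) readings."""
    dl, da, db = (b - a for a, b in zip(lab_before, lab_after))
    return math.sqrt(dl ** 2 + da ** 2 + db ** 2)

baseline = (78.2, 1.5, 18.3)        # mean of three readings before brushing
post_brushing = (74.9, 2.1, 22.0)   # mean of three readings after brushing
print(round(delta_e(baseline, post_brushing), 2))  # -> 4.99
```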
To maintain oral hygiene, toothbrushes and toothpastes are the most commonly used tools. Ideally, a toothpaste should effectively clean and remove external stains without damaging tooth enamel and restorations. However, various additives in these products can cause permanent changes to teeth and restorations. This research compared the effects of four different toothpastes (three whitening pastes containing silica, blue covarine, or activated carbon, and one conventional paste) on the surface roughness and colour values of CAD-CAM materials. The null hypotheses were rejected, as significant changes were observed in the roughness and colour of the materials.

All samples were treated with the same surface finishing procedures to eliminate surface irregularities and ensure standardization. Studies have shown that surface roughness values exceeding 0.2 μm lead to biofilm formation on restoration surfaces, increased adsorption of colorant particles and material wear. A study reported that surface roughness values of 0.5 μm and above could be differentiated by the individual's tongue. In line with this, our study obtained similar initial surface roughness values for all samples (CS = 0.28 ± 0.09, CD = 0.26 ± 0.07, CDG = 0.17 ± 0.04). Similarly, Siam et al. conducted a study on zirconia-reinforced lithium silicate specimens (Celtra Duo) which were divided into two subgroups, glazed and polished. No statistically significant difference was found between the two surfaces (p = 0.8204) when the surface roughness of the specimens was measured using an optical profilometer. Likewise, our study found no statistically significant difference in the surface roughness values of CD and CDG samples before brushing.

Abrasion of the restoration surface is a crucial factor affecting its clinical lifespan. Various ageing protocols are used to simulate long-term material behaviour, with thermal cycling, mastication cycles, and brushing simulations being the most common. This study observed an increase in surface roughness values for all materials after the ageing process with brushing. Similarly, Kim et al. reported increased surface roughness values for Cerasmart and Celtra Duo blocks after thermal ageing in their investigation of surface properties of different CAD-CAM blocks. In contrast, Picolo et al. found no significant difference in surface roughness after brushing for lithium silicate glass ceramics reinforced with zirconia. This variation is believed to be caused by differences in toothpaste types and dilution ratios, as well as the use of a soft toothbrush for brushing. In a study evaluating the surface roughness of chairside CAD-CAM materials after brushing, the Cerasmart group showed the lowest surface roughness values before brushing and the highest roughness values after brushing when compared to leucite-based glass ceramic and polymer infiltrated glass ceramic blocks. Sugiyama et al. investigated the surface roughness of CAD-CAM blocks with different contents after mechanical cleaning. They reported higher roughness values for composite blocks (Shofu Block) compared to lithium silicate blocks (Celtra Duo). In our study, the surface roughness values of the CS group specimens brushed with Curaprox™ and Opalescence™ were significantly different from those of the CD and CDG samples (p < 0.008). This variation is likely due to the lower hardness of resin compared to ceramic-based materials.
When examining the post-brushing colour change values, a significant difference was found between the CS and CDG groups ( p = 0.010, p = 0.018, respectively), while no difference was observed in the CD group ( p = 0.669). Pouranfar et al. investigated the colour change between ceramic polymer (Cerasmart, VITA Enamic) and lithium disilicate (IPS E-max CAD) after 12 years of brushing. Our findings are consistent with their finding that the color change of ceramic polymers was greater. The degree of staining depends on the stain resistance of the material and the bleaching agent content. The higher degree of colour change in the CS group, compared to the CD group, can be attributed to the resin content in the material. The CDG group’s higher color change compared to the CD group is believed to be due to the abrasion of the glaze material and the material’s increased susceptibility to external factors. In recent years, various teeth whitening products, including toothpaste and mouthwash, have been introduced to markets and pharmacies. However, the effectiveness and potential downsides of these non-prescripted products remain unclear. According to a study, toothpastes containing silica can erode the resin matrix, resulting in increased surface roughness. However, toothpastes with hydrated silica have lower abrasive properties . The toothpastes used in this research contained different types of silica. Opalescence™ toothpaste contains silica, while Signal™ White Now, Colgate™ Max Fresh, and Curaprox™ Black in White toothpastes contain hydrated silica. The brushing process resulted in a significant increase in the roughness values of CS samples brushed with Opalescence™, which may be attributed to its silica content. Activated charcoal is added to toothpastes to enhance their whitening properties by absorbing coloring substances. While some studies suggest that it can effectively clean teeth due to its porous and large surface area, the literature on this topic is limited . Palandi et al. reported that charcoal-containing products did not have a whitening effect and might cause negative changes in enamel topography in their brushing study with bovine teeth. Thomas et al. stated that charcoal-containing pastes showed lower whitening and higher abrasion than other alternatives. A study investigated the colour and surface properties of composite resin toothpastes containing whitening agents. The results showed that toothpastes containing activated carbon exhibited no statistical difference in colour change compared to conventional toothpastes . Our study found that charcoal-containing pastes had similarly low whitening efficiency, but were the only group that caused a significant difference in surface roughness change values compared to other paste types. The use of blue covarine in toothpastes aims to create a fine, translucent appearance on tooth enamel, reducing the yellowish color and making teeth appear whiter and brighter by shifting the shade on the scale towards white . However, while some studies in the literature contradict this claim, no research has been conducted specifically on CAD-CAM ceramics. A study was performed to examine the impact of whitening toothpastes and mouthwashes on bovine dentin. The study found that blue covarine content resulted in similar color change values to traditional fluoride toothpastes . Demir et al. stained two different composite resins and then brushed them with toothpastes having different chemical ingredients. 
They reported that the toothpaste containing blue covarine provided a partial improvement, but this was not within clinically acceptable limits. Another study investigating the effect of whitening and conventional toothpastes on the discolouration of various composite resins reported that blue covarine-containing pastes had no different effect than conventional pastes . The results also displayed that Signal™ White Now produced similar color changes to conventional toothpaste in all three material groups. The scanning electron microscope (SEM) images confirmed the changes measured with the profilometer. The evaluation of these images revealed that brushing ceramics with abrasive toothpastes caused more significant changes in surface morphology compared to conventional toothpaste. The CS samples brushed with Opalescence™ and Curaprox™ toothpastes showed deeper and more prominent lines. Some of the limitations of this research include the inability to imitate the thermal and pH cycles of the mouth and the inability to reflect nutritional habits, saliva proteins and enzymes in the experiments. In addition, the samples have flat surfaces devoid of the anatomical pits and fissures found in natural teeth, hindering the complete simulation of polishing and brushing processes. Further studies conducted in conditions closer to the oral environment, utilizing a wider variety of toothpaste products and CAD-CAM materials, would increase the accuracy of the results.
Irrespective of the toothpaste used, it was concluded that the roughness of all sample surfaces increased after brushing. Resin-containing nanoceramics exhibited greater wear compared to glazed and non-glazed zirconia-reinforced lithium silicates. It is important to note that toothpastes containing charcoal may impair surface smoothness. The impact of tooth whitening toothpastes on colour alteration varies depending on the properties of the material.
Arbuscular mycorrhizal fungal spore communities and co-occurrence networks demonstrate host-specific variation throughout the growing season

Microbial community assembly involves a series of ecological filters and processes that determine the composition of microbial communities (Morin ). Community assembly is first determined by broad scale factors like regional species pool, dispersal, and climate, followed by local level variation in abiotic and biotic conditions that influence the abundance of individual species (Vellend ; Kraft et al. ; Funk ). While the importance of broad (e.g., climate and dispersal; Barberán et al. ; Kivlin et al. ; Powell et al. ) and local level factors (e.g., soil conditions, disturbance, species interactions; Bahram et al. ; Fujita et al. ; Hopkins and Bennett ; Nemergut et al. ) for microbial community assembly has been reasonably well studied, this work is often limited to individual, static time points and does not consider the importance of seasonal variation (Shinohara et al. ; but see Lundberg et al. ; Shi et al. ; Aleklett et al. ). Because temporal variation is an important determinant of broad (e.g., seasons and successional stage; Bennett et al. ; Duhamel et al. ; Hopkins et al. ; Yang et al. ) and local level community assembly filters (e.g., nutrient availability and life history stage; Bahram et al. ; Hopkins et al. ; Mouquet et al. ), our poor understanding of seasonal variation represents a key gap in our understanding of microbial community assembly. Many microbial taxa display seasonal trends in abundance (i.e., seasonality) that are directly related to seasonal variation in community assembly filters (Harvey et al. ; Santos-Gonzalez et al. ; Buckeridge et al. ). When measured, seasonality, or sampling time, is often the greatest determinant of microbial community composition, and outweighs local effects of nutrient availability and disturbance (Hopkins et al. ; Nemergut et al. ; Shinohara et al. ). Groups such as arbuscular mycorrhizal fungi (AM; obligate mutualists of > 80% of land plants) are key examples of this, as their sporulation and fitness are closely tied to seasonal changes in plant communities and plant growth (e.g., peaks in sporulation during and just after the plant growing season; Smith and Read ; Deveautour et al. ; Hopkins et al. ). Further, AM fungi also display species-specific variation in abundance that can be tied to different host species (Bever et al. ; Eom et al. ; Kivlin et al. ) and changes in season (spring vs. summer; Pringle and Bever ; Santos-Gonzalez et al. ). Some of this variation is likely due to the different growing periods of host plants (i.e., spring ephemerals vs. warm season grasses) that allow for seasonal niche differentiation amongst AM fungal symbionts (Su et al. ; Bennett et al. ). The close connections between AM fungi and their plant hosts demonstrate the importance of considering seasonal variation in community assembly. Seasonal variation in AM fungal communities may further vary with the life history stage of the host plant (e.g., vegetative vs. flowering stage). Because the nutrient requirements of plant hosts change with life history stage (Chapin ; Römer and Schilling ; Grant et al. ), this could produce changes in the phosphorus (P) for carbon (C) exchange between host plants and AM fungal symbionts (Reynolds et al. ; Johnson et al. ; Ji and Bever ), with implications for fungal fitness and communities.
For example, when plants are actively growing, greater amounts of P are required, which could favor the fitness of AM fungal taxa that provide host plants with substantial P (Bever et al. ; Kiers et al. ). When plants flower or senesce for the year, however, plant nutrient demand is expected to decrease and correspondingly reduce C transfer to AM fungal symbionts (Lekberg et al. ). If changes in resource allocation alter the competitive ability of AM fungal symbionts, this could influence the interactions between species that shape community assembly (Bennett and Bever ; Christian and Bever ). AM fungal community assembly is also likely modified by host plant species, as plants vary in mycorrhizal responsiveness (i.e., the benefit a plant receives from association with AM fungi; Wilson and Hartnett ; Koziol and Bever ; Deveautour et al. ) and in their ability to differentiate between more versus less beneficial AM fungal symbionts (Bever et al. ; Hopkins et al. ). This means that AM fungal community assembly likely varies not only with season and plant life history stage, but also with differences in plant-AM fungal interactions. We tested how seasonal variation, plant life history stage (vegetative vs. flowering), and host plant species influenced AM fungal spore community assembly. We sampled AM fungal spore communities during the vegetative and flowering phases of two grassland species, Baptisia bracteata var. leucophaea (which flowers in late spring-early summer and is vegetative in summer) and Andropogon gerardii (vegetative during spring-summer, flowers in late summer-early fall). This allowed us to test how plant life history stage and plant species influenced: (1) AM fungal spore community composition, (2) the abundances of individual AM fungal taxa, and (3) the associations between AM fungal taxa. We hypothesized that AM fungal spore community composition and species associations would shift between plant life history stages and seasons, with greater diversity and sporulation in the fall (when plants senesce) and lower community stochasticity during the flowering stage (early summer for B. bracteata; early fall for A. gerardii). We further hypothesized that seasonal variation in AM fungal spore community assembly and species associations would vary between plant hosts.
Study system

This work was conducted at the Anderson County Prairie Preserve (38° 10’ N, -95° 16’ W; Anderson County, KS). The preserve encompasses almost 1,400 acres that are maintained with annual to biennial fire, grazing, and mowing management. This work occurred in tract 13, which is a remnant tallgrass prairie. Soils at this site are part of the Clareson-Rock outcrop complex (USDA NRCS ). The site hosts a diverse spring and summer floral assemblage of forb, legume, and graminoid vegetation, including members of Asclepias, Baptisia, Dalea, Andropogon, Helianthus, Liatris, Schizachyrium, and Amorpha (Kansas Biological Survey ). Average annual temperatures range from 7 °C to 19 °C. Average annual precipitation is 970.3 mm, with the majority occurring between April and September. In this work, Baptisia bracteata var. leucophaea (C3 forb; plains wild-indigo) and Andropogon gerardii (C4 grass; big bluestem) were used as representative prairie plants. These taxa were chosen because of their relative dominance in tract 13 and their differences in seasonality. B. bracteata is an herbaceous perennial legume that emerges and is physiologically active in early spring and flowers in mid-spring. A. gerardii is a perennial, warm-season bunchgrass that emerges in mid-spring, is physiologically active in the heat of summer, and flowers in the mid to late summer. Both plant species are responsive to AM fungi, with A. gerardii growth nearly doubling when grown with AM fungi (mycorrhizal responsiveness = 99%) and B. bracteata demonstrating an 83% increase in growth (Wilson and Hartnett ).

Plot set-up

Experimental plots (n = 10) containing pairs of B. bracteata and A. gerardii (plants within each pair separated by ≤ 1 m) were established in spring 2019. Plots were marked with plastic marker flags for easy rediscovery at each sampling time. Because of the varied fire history at tract 13 (half of the tract burned in October 2018), fire history was recorded for each plot to account for variation in management.

Field sampling

AM fungal spore community samples were collected in June 2019 (end of B. bracteata flowering) and in September 2019 (end of A. gerardii flowering). AM fungal spore communities were used for assessment of community composition because they allow for assessment of viability and sorting into morphospecies, are directly indicative of fitness (Bever et al. ; Bever ), are closely linked to plant community dynamics (Su et al. ; Middleton and Bever ), and are reliable indicators of seasonal variation in belowground communities (Pringle and Bever ). A 2 cm diameter soil corer was used to collect a single rhizosphere sample (depth of 15 cm) next to the base of each B. bracteata (n = 10) and A. gerardii plant (n = 10) at each sampling time (n = 2). This produced 20 AM fungal spore samples for each sampling time, for a total of 40 samples across the entire study period. Samples were kept cool with ice packs in the field and then stored at 4 °C within six hours of collection. The soil corer was cleaned with paper towels and sterilized with 70% EtOH between samples. AM fungal spores were extracted from soil samples within 2–4 weeks of collection using 2 mm and 38 μm sieves, followed by centrifugation with 60% sucrose solution. Extracted spores were stored in water at 4 °C until communities were quantified.

Spore community analysis

AM fungal spore communities were quantified using a Nikon dissection scope (Nikon, Tokyo, Japan) at 30x magnification.
Spores were sorted into morphotypes based on pigmentation color, size, internal lipid contents, and hyaline appearance. Counts for each morphotype, total spore count (i.e., sporulation), and diversity (inverse Simpson metric) were recorded for each sample. When possible, putative classifications were applied to morphotypes using INVAM species descriptions (INVAM ).

Statistical analyses

All analyses were conducted in R version 4.3.2 (R Core Team ). We tested how plant host ID (B. bracteata and A. gerardii) and sampling time (spring vs. summer) influenced AM fungal spore community composition using principal coordinates analysis (PCoA) and permutational multivariate analysis of variance (PERMANOVA) with the vegan package (Oksanen et al. ). Bray-Curtis dissimilarity matrices and ordinations were produced for AM fungal spore communities using the vegdist and prcomp functions. Following ordination, a PERMANOVA was used to test the effects of plant host ID, sampling time, and their interaction on AM fungal spore communities using the adonis2 function. The PERMANOVA model also accounted for prior fire history and location effects. The fire history term (i.e., presence/absence of Fall 2018 fires) was included first to account for its effect because the adonis function uses sequential sums of squares. To account for plot level variation, permutations (n = 999) were restricted to within sampling plot. Plant host ID and sampling time effects on AM fungal spore community diversity (continuous; Inv. Simpson), community beta dispersion (continuous; Bray-Curtis), sporulation (count), and morphotype abundance (count) were assessed using either type III linear mixed effect models (LMERs; continuous data) or type III generalized linear mixed effect models (GLMERs; Poisson link function, count data) fitted with the lmer or glmer functions (lme4 package; Bates et al. ), followed by the joint_tests function (emmeans package; Lenth ). LMER and GLMER models included plant host ID, sampling time, and their interaction as fixed effects, and controlled for plot and fire history. Note that the interaction term represents spore community turnover between plant host life history stages (e.g., spring growth vs. summer flowering times in A. gerardii). Following significant main effects, estimated marginal means were extracted using the emmeans function and tested with contrasts using the contrast function. Due to rarity and low sporulation, it was not possible to test changes in abundance for every spore morphotype. Plant host ID and sampling time effects on intra-community associations were tested using network analysis tools available in the NetCoMi package (Peschel et al. ). AM fungal spore community networks were first constructed for each plant host species at each sampling time using the netConstruct function. This allowed for comparison of AM fungal spore community co-occurrence networks between (e.g., summer B. bracteata – flowering stage vs. fall B. bracteata – vegetative stage) and within (e.g., summer B. bracteata vs. summer A. gerardii) sampling times. Networks were created using a matched-pairs design (controlling for plot effects), biweight midcorrelation association functions (robust to outliers), and a sparsification threshold of 0.3. Network metrics (Table ) for the largest connected component (LCC) and the entire network were measured using the netAnalyze function with the “cluster_fast_greedy” clustering algorithm.
Hub taxa were identified using combinations of node degree, betweenness, closeness, and eigenvector centrality, with the hub threshold set to 0.9 (combined values must exceed 0.9 for a taxon to be considered a hub). Network metrics were then compared using the netCompare function with permutations set to 1000.
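To make the analysis pipeline described above concrete, the sketch below shows how such a workflow could be assembled in R. The object names (spores, meta), factor labels, and several argument values are illustrative assumptions rather than the study's actual scripts or settings.

```r
## Minimal sketch of the composition, diversity, and abundance analyses.
## Assumes `spores` is a sample x morphotype count matrix and `meta` is a
## data frame with columns host, time, fire, and plot (all placeholders).
library(vegan)     # vegdist, adonis2, betadisper, diversity
library(permute)   # restricted permutation designs
library(lme4)      # lmer, glmer
library(emmeans)   # joint_tests, emmeans, contrast

bc   <- vegdist(spores, method = "bray")          # Bray-Curtis dissimilarities
pcoa <- cmdscale(bc, k = 2, eig = TRUE)           # principal coordinates

## PERMANOVA with fire history entered first and permutations restricted to plot
ctrl <- how(blocks = meta$plot, nperm = 999)
adonis2(bc ~ fire + host * time, data = meta, permutations = ctrl, by = "terms")

## Beta-dispersion as a measure of community stochasticity
disp <- betadisper(bc, group = interaction(meta$host, meta$time))

## Mixed models for diversity (Gaussian) and sporulation (Poisson counts)
meta$div   <- diversity(spores, index = "invsimpson")
meta$total <- rowSums(spores)
div_mod    <- lmer(div ~ host * time + fire + (1 | plot), data = meta)
spore_mod  <- glmer(total ~ host * time + fire + (1 | plot),
                    family = poisson, data = meta)

joint_tests(spore_mod)                                        # type III-style tests
emm <- emmeans(spore_mod, ~ host * time, type = "response")   # marginal means
contrast(emm, method = "pairwise", adjust = "tukey")
```

A two-group network comparison along the lines described above could similarly be sketched with NetCoMi; again, the grouping, association measure, and threshold settings shown here are assumptions for illustration only, not the exact configuration used in the study.

```r
## Sketch of a two-group co-occurrence network comparison with NetCoMi.
library(NetCoMi)

## Placeholder subsets: one host sampled at the two time points
spring <- spores[meta$time == "spring" & meta$host == "B. bracteata", ]
summer <- spores[meta$time == "summer" & meta$host == "B. bracteata", ]

net <- netConstruct(data = spring, data2 = summer,
                    measure = "bicor",                 # biweight midcorrelation
                    sparsMethod = "threshold", thresh = 0.3,
                    seed = 1)

props <- netAnalyze(net,
                    clustMethod = "cluster_fast_greedy",
                    hubPar = c("degree", "betweenness",
                               "closeness", "eigenvector"),
                    hubQuant = 0.9)

netCompare(props, permTest = TRUE, nPerm = 1000)       # compare global metrics
```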
Plant host ID and sampling time determine AM fungal spore community composition

B. bracteata and A. gerardii were associated with distinct AM fungal spore communities (F(1,34) = 15.3, p = 0.001, R² = 0.26; Table ; Fig. ) that displayed significant seasonal turnover between the spring and summer (F(1,34) = 8.04, p = 0.001, R² = 0.14). In total, 12 different AM fungal species were identified, with 8 taxa common to both host plant species, 1 taxon found only in B. bracteata, 2 taxa unique to A. gerardii, 3 taxa found only in spring, and 2 taxa found only in the summer. B. bracteata spore communities were less stochastic (lower beta-dispersion; F(1,26.3) = 4.8, p = 0.04; Table ; Fig. a), were marginally more diverse (F(1,26.3) = 3.9, p = 0.06; Fig. b), and had higher sporulation (F(1,Inf) = 198, p < 0.001; Fig. c) than A. gerardii spore communities. Further, B. bracteata spore communities demonstrated lower sporulation during the summer sampling time (vegetative phase; p < 0.001). A. gerardii spore communities became less stochastic (lower beta-dispersion; p = 0.04) and more diverse (p < 0.001) during the summer sampling time (flowering phase). In summary, plant host ID was the strongest determinant of AM fungal spore community composition; however, spore communities associated with each plant host species displayed significant seasonal variation.

Plant host ID and sampling time influence species abundances

The abundances of AM fungal species differed with sampling time and between plant hosts. During the spring sampling time, Scutellospora sp.1 (F(1,Inf) = 24, p < 0.001; Table ; Fig. a) and Glomerales sp.2 (F(1,Inf) = 51, p < 0.001, Fig. b) abundances were highest relative to the summer. Further, B. bracteata plants hosted greater abundances of Archaeospora trappei (F(1,Inf) = 85, p < 0.001, Fig. c), Diversisporales sp.1 (F(1,Inf) = 48, p < 0.001, Fig. d), Scutellospora sp.1 (F(1,Inf) = 16, p < 0.001), and Glomerales sp.2 (F(1,Inf) = 64, p < 0.001) relative to A. gerardii hosts. Changes in seasonal species abundances also displayed host-specific patterns. Specifically, A. trappei (F(1,Inf) = 8.6, p = 0.003) and Diversisporales sp.1 (F(1,Inf) = 12, p < 0.001) abundances decreased during the summer with B. bracteata hosts, but increased during the summer with A. gerardii hosts. To conclude, AM fungal species demonstrated seasonal variation in abundance that was modified by plant host species.

Plant host ID and sampling time structure AM fungal spore community networks

AM fungal spore community network structure varied between sampling times, and this effect was influenced by plant host species. B. bracteata networks displayed substantial changes in structure between the spring and summer. In the spring, the largest connected component (LCC) for B. bracteata was smaller (p = 0.03; Table ; Fig. a and b), displayed a higher degree of clustering (p = 0.01), was less modular (p = 0.01), had a greater edge density (p = 0.004), greater natural connectivity (p = 0.002), and different topography (p = 0.01) than in the summer. At the whole network scale, modularity was higher in summer (p = 0.01), and network topography (i.e., graphlet correlation distance; p = 0.04) differed between the spring and summer. A. gerardii networks, however, did not vary in topography or structure between the spring and summer (Table ; Fig. c and d). Network structure also varied between B. bracteata and A. gerardii; however, this effect was largely restricted to the spring.
Spring B. bracteata LCCs were smaller (p = 0.03; Tables and ), had higher edge densities (p = 0.008), greater natural connectivity (p = 0.008), smaller path lengths (p = 0.03), and different topographies (p = 0.02) than A. gerardii networks. At the whole network scale, spring B. bracteata networks were more modular (p = 0.006) and had different topographies (p = 0.01) than A. gerardii networks. In the summer, however, network topography and structure did not differ between these plant hosts, with the exception of greater edge densities in B. bracteata LCCs (p = 0.04). To summarize, AM fungal spore community network structure varied between the spring and summer; however, this effect was influenced by plant host ID.
AM fungal spore community seasonal dynamics were closely linked to plant host species and life history stage. While AM fungal communities demonstrated strong turnover between the spring (e.g., higher sporulation) and late summer (e.g., higher diversity), the strength of these changes was modified by host plant species. B. bracteata generally hosted a larger, more diverse spore community than A. gerardii; however, the abundances of two AM fungal species were linked to host plant flowering times. Specifically, A. trappei (A. gerardii) and Diversisporales sp.1 (B. bracteata and A. gerardii) abundances were highest during host flowering periods. Furthermore, AM fungal species associations also varied between seasons and plant hosts, with B. bracteata-associated AM fungal networks becoming less modular and less clustered in the summer versus the spring, and A. gerardii-associated networks remaining relatively stable between seasons and life history stages. This builds on past work identifying the importance of seasonal variation (Pringle and Bever ; Santos-Gonzalez et al. ; Bennett et al. ; Deveautour et al. ) and plant host species (Bever et al. ; Eom et al. ; Koziol and Bever ) in AM fungal community assembly by demonstrating how AM fungal seasonal community dynamics both vary between plant hosts and can be linked to plant life history stages. Because AM fungal community seasonal dynamics are closely linked to plant host ID and life history stage, it is critical that soil microbial ecologists consider both sampling time and host-plant life history stage when assessing microbial community assembly. Seasonal variation in AM fungal community assembly was closely linked to host plant species and life history stage. AM fungal community composition remained distinct between B. bracteata and A. gerardii throughout the growing season, with B. bracteata hosting a larger (greater sporulation) and more diverse AM fungal symbiont community than A. gerardii. Some of these differences are likely due to the leguminous nature of B. bracteata (plants are less N limited), which can favor plant resource allocation to AM fungal symbionts, reduce competition among AM fungal taxa, and potentially explain the higher levels of sporulation and diversity (Bennett and Bever ; Johnson et al. , ). While AM fungal spore community diversity was generally lower in A. gerardii hosts relative to B. bracteata, it is worth noting that diversity did increase when A. gerardii flowered, whereas no change in diversity was observed for B. bracteata. This may reflect shifts in A. gerardii nutrient requirements during the flowering stage (e.g., lower N and P requirements; Chapin ; Grant et al. ) and corresponding reductions in plant C allocation to AM fungal symbionts (Smith and Read ). If preferential allocation of plant C to specific AM fungal symbionts is reduced during A. gerardii flowering stages, this could alter the competitive abilities of AM fungal taxa and allow for increased diversity in AM fungal communities (Bennett and Bever ; Bever et al. ; Kiers et al. ; Bever ; Christian and Bever ; Hopkins et al. ). The differences in AM fungal diversity and sporulation between the two host plant species demonstrate the importance of the host plant in the seasonal dynamics of AM fungal community assembly. Seasonal variation in AM fungal community networks differed between plant host species.
B. bracteata-associated networks became less modular, and largest connected component (LCC) size increased, during the vegetative stage (summer) versus the flowering stage (spring). The higher level of between-species associations (lower modularity and larger LCCs) during B. bracteata’s vegetative stage would correspond with less plant physiological activity (lower photosynthate production) and greater competition for plant C among AM fungi (Johnson et al. , ). Conversely, A. gerardii-associated network structure did not change between the vegetative (spring) and flowering stages (summer) despite concomitant changes in community composition, diversity, and sporulation. This implies that A. gerardii hosts a relatively stable AM fungal network throughout the growing season, with some taxa increasing or decreasing during different host life history stages (Bennett et al. ; Deveautour et al., 2020; Santos-Gonzalez et al. ). Consideration of additional plant species and functional groups is nevertheless required to determine whether the observed trends in network structure can be extended to other grass, forb, and legume species. AM fungal spore community seasonal dynamics mirrored seasonal resource partitioning in plant hosts. B. bracteata, the spring ephemeral, is active and grows in the early spring and then flowers by early summer, whereas A. gerardii, the warm-season specialist, is most active in the summer and flowers in the early fall. By growing and flowering at different times during the growing season, B. bracteata and A. gerardii (as well as other cool- and warm-season specialists) partition the growing season and reduce interspecific competition (Weltzin and McPherson ; Ford ; Doležal et al. ). In this study, AM fungal spore community dynamics closely followed this trend, as AM fungal species that were active in the early spring sporulated in early summer (as B. bracteata flowered) and AM fungal communities that were active in the summer demonstrated higher spore diversity when A. gerardii flowered (relative to its growth phase). This means that, like their plant hosts, AM fungi can also partition the growing season (i.e., cool- vs. warm-season specialists) and that seasonal differences in AM fungal physiology may contribute to AM fungal coexistence and diversity (Bever et al. ). This supports prior work in grassland systems where AM fungal taxa (Gigaspora gigantea, cool season, and Acaulospora capsicula, warm season) displayed distinct seasonal sporulation patterns (Schultz et al. ; Pringle and Bever ). While not considered in this study, seasonal sporulation patterns may also be influenced by dispersal via aerial propagules (Chaudhary et al. ) or animals (Paz et al. ), with the contribution of dispersal increasing during times of increased sporulation such as the late spring and summer. Additionally, seasonal variation in sporulation may also be affected by environmental conditions such as drought; however, sporulation responses to low rainfall and arid conditions are known to vary, so more work is required to understand the effect of drought on AM fungal spore communities (Al-Karaki et al. ; Deveautour et al. ). Finally, summer AM fungal network structure did not differ between host-plant species. This suggests that environmental conditions may dominate plant host controls on network structure and favor greater connectivity and associations between AM fungal symbionts at the end of the growing season (Kaisermann et al. ; Bastías et al. ).
If this effect was due to the low rainfall conditions experienced in summer 2019, this may be evidence of stressful conditions favoring both higher levels of interaction among soil microbes (i.e., the stress gradient hypothesis; David et al. ; Hesse et al. ) and greater sporulation (Daniels and Skipper ). Nevertheless, more work is required to test how water stress influences AM fungal networks across a large set of plant hosts. In conclusion, AM fungal community assembly displayed strong seasonal trends that differed strongly between host plant species. This work is the first to test how the seasonal dynamics of AM fungal community assembly vary between plant host species, and builds on prior work demonstrating the importance of seasonal variation (Santos-Gonzalez et al. ; Deveautour et al. ), changes in AM fungal network structure (Bennett et al. ), and plant host species (Bever et al. ; Wilson and Hartnett ; Koziol and Bever ) contributions to AM fungal community assembly. By observing how the seasonal trajectories of AM fungal spore communities varied between host plant species, we demonstrated how between species associations (i.e., biological filters) influence the ongoing seasonal dynamics that determine AM fungal community assembly. Future work should identify how seasonal trends in AM fungal community assembly vary between plant functional groups, environmental conditions, and disturbance regimes. Because temporal dynamics are an important determinant of community assembly, consideration of the processes that shape microbial community assembly over time can help us better understand soil microbial roles in above- and belowground ecosystems.
Atypical rat bite fever associated with knee joint infection in a Chinese patient: a case report

Rat bite fever (RBF) is a rare zoonosis transmitted from rodents to humans through bites and scratches . The typical symptoms include fever, rash, and polyarthralgia. The disease is primarily caused by Streptobacillus moniliformis ( S. moniliformis ) or Spirillum minus ( S. minus ) infection. RBF cases show regional distribution patterns, with S. moniliformis infections being more prevalent in Europe and North America and S. minus infections predominantly occurring in Asia . RBF was first reported in the United States in 1839, while two cases of infection with S. minus were reported in China in 1913 . Herein, we report a rare case of RBF with unilateral knee joint infection caused by S. moniliformis in a Chinese patient. The patient did not exhibit typical RBF symptoms but presented with disease confined to a single joint, without fever or rash. This is the first instance of using knee arthroscopy to elucidate the intraarticular histopathological changes caused by S. moniliformis infection. The findings provide clinical and histopathological insights to help clinicians identify atypical RBF.
On 18 July 2024, a 77-year-old man sought medical care at the Haikou Orthopedic and Diabetes Hospital of Shanghai Sixth People’s Hospital, Haikou, Hainan Province, China, for left knee joint pain. The patient had experienced recurrent pain and swelling in his left knee joint for three years but did not go to hospital for treatment. During this period, he purchased painkillers and took them intermittently for short periods; the specific medications are unknown. More recently, the patient had experienced aggravation of the knee joint symptoms, including swelling, pain, limited extension, and difficulty in walking. However, the patient did not develop fever. Ten days before being admitted to our hospital, the patient had been treated at a local traditional Chinese medicine (TCM) hospital with topical Chinese herbal medicines to promote blood circulation, dispel blood stasis, and reduce swelling; oral TCM medications to unblock meridians; and irradiation therapy (infrared radiation). The patient neither showed fever during treatment nor took antipyretic drugs. The patient’s symptoms progressively worsened despite treatment. Consequently, the patient was transferred to our hospital for further treatment. The patient was previously healthy, with no history of long-term medication use or drug allergies, but had consumed alcohol for over 60 years. The patient reported that he would drink approximately 150 ml of liquor every night with dinner, with an alcohol content of approximately 50% vol. Physical examination upon admission to our hospital indicated a body temperature of 36.7 °C; pulse rate, 62 beats/min; respiration rate, 20 breaths/min; and blood pressure, 151/63 mmHg. The patient’s left knee was swollen, with dark skin colour without erythema or pigmentation, was warm to the touch, and had a positive floating patella test result. The patient could not perform flexion or extension movements of the left knee joint, with flexion deformity and aggravation of pain during movement. Furthermore, the circumference of the left knee was significantly larger than that of the right knee. The dorsalis pedis artery pulsation was normal; the left foot exhibited good peripheral blood circulation, slight swelling in the ankle region, and normal sensation and movement in the toes. The patient’s laboratory test results on admission were: white blood cell (WBC) count, 8.09 × 10⁹/L (normal range: 3.5–9.5 × 10⁹/L); neutrophil percentage, 75.70% (normal range: 40–75%); high-sensitivity C-reactive protein (hsCRP) level, 144.9 mg/L (normal range: 0–10 mg/L); erythrocyte sedimentation rate, 107.90 mm/h (normal range: 0–15 mm/h); alanine aminotransferase level, 177.9 U/L (normal range: 9–50 U/L); aspartate transaminase level, 72.6 U/L (normal range: 15–40 U/L); alkaline phosphatase level, 283.5 U/L (normal range: 45–125 U/L); gamma-glutamyltransferase level, 180.9 U/L (normal range: 10–60 U/L); total bile acid level, 151.9 µmol/L (normal range: 0–13 µmol/L); and direct bilirubin level, 13.40 µmol/L (normal range: 0–6.84 µmol/L). The patient’s inflammatory markers were elevated, suggesting possible bacterial infection, with abnormal liver function, which may have been associated with the use of TCM medications and long-term alcohol consumption. Ultrasound of the lower extremity blood vessels showed calf muscle venous thrombosis, possibly attributable to reduced mobility during the past 10 days owing to knee joint pain. Three-Tesla magnetic resonance imaging plain scans (Fig.
) indicated the presence of osteoarthritis and synovitis in the left knee joint. Cartilage damage was observed in the patella, medial, and lateral condyles, and tibial plateau. Moreover, bone marrow oedema was observed in the medial and lateral condyles and tibial plateau. Effusion was found in the left knee joint cavity and suprapatellar bursa; multiple loose bodies were observed around the knee joint. Clinical symptoms, laboratory test results, and imaging results were used to preliminarily diagnose synovitis of unknown aetiology. However, the possibility of intraarticular infection could not be ruled out. To determine the presence or absence of infectious arthritis, synovial fluid was extracted from the patient’s affected knee on the day of admission for routine bacterial and fungal cultures. Four days after admission, the patient underwent knee arthroscopy for arthroscopic exploration, debridement, and synovectomy. Arthroscopic exploration revealed extensive mass-like soft tissue proliferation with a red cloud-like morphology, obvious inflammation, abnormal abundance of capillaries, and an extremely high tendency to bleed when debridement was performed with a shaver. Of note, the appearance of the soft tissue proliferations differed from those of typical synovitis, villonodular synovitis, and intraarticular infection, and they manifested as unique pathological lesions under arthroscopic view (Fig. ). Soft tissue proliferations were excised intraoperatively for metagenomic next-generation sequencing (mNGS), routine bacterial and fungal cultures and histopathological examination. Obvious bleeding upon slight excision of soft tissue proliferations caused blood flow to the arthroscopic field of view. Consequently, repeated electrocautery using a plasma knife was performed, which considerably increased surgical difficulty, time, and bleeding. Ultimately, infection-induced soft tissue proliferations at the anterior, medial, and lateral sides of the knee joint, suprapatellar bursa, and synovial membrane were partially resected. Four hours postoperatively, the patient developed a fever (38.4 °C) for the first time during the disease, and his highest body temperature reached 39.4 °C. The occurrence of absorption fever owing to significant intraoperative bleeding was highly likely. However, the possibility of surgery-induced spread of infection could not be ruled out. Blood samples were collected for culture. The patient was treated with physical cooling and oral administration of paracetamol/caffeine tablets (0.5 g, once), after which the body temperature decreased below 38 °C. On postoperative night 2, the microbiology laboratory reported the presence of S. moniliformis in the synovial fluid from the knee joint collected upon admission. Gram stain showed elongated Gram-negative rods. The specimens were incubated at 35 °C with 5% CO2 for approximately 72 h; a small amount of bacterial growth was first observed on Columbia blood agar and chocolate agar. In our hospital, since only biochemical phenotypic testing could be performed, the specimens were sent to other laboratories for mass spectrometry identification. The first attempt of mass spectrometry identification of bacteria before surgery failed. Considering the patient’s request and need for making a clear diagnosis as soon as possible, we decided to send the specimens for mNGS testing during surgery. The second attempt mass spectrometry identification was performed at Affiliated Haikou Hospital of Xiangya Medical College. 
The laboratory used matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) for identification and the MALDI Biotyper system (Bruker, Germany). The bacterial isolate was finally identified as S. moniliformis , which was consistent with the mNGS test results. The results of histopathological examination indicated chronic non-specific synovitis. Under the microscope, numerous erythrocytes accompanied by plasma cells and lymphocytes were observed in the tissue specimens (Fig. a and b). Upon history taking, the patient stated that there was a large rat population around his living environment and that he vaguely remembered experiencing a rat bite on the left foot several years ago. Considering the clinical history and microbiological culture results, RBF-induced septic arthritis was diagnosed. In our hospital, antimicrobial susceptibility testing of fastidious organisms is not performed. This bacterium has been widely reported to be sensitive to beta-lactam drugs, particularly penicillin and ceftriaxone . Antimicrobial susceptibility testing is not generally needed if the patient shows improvement on therapy. Following consultation with experts in infectious diseases, laboratory medicine, orthopaedics, and pharmacy, the patient was treated with an anti-infective regimen of ceftriaxone injection (2 g, once daily through intravenous drip). On postoperative day 3, the patient was no longer febrile, and a negative blood culture result was found. The mNGS results for the knee synovial fluid indicated that the pathogen was S. moniliformis (91,155 reads). Epstein–Barr virus (EBV) was also detected in one read. Detection was performed at Guangzhou Darui Medical Testing Laboratory using single-end, 50-base read length (SE50) sequencing on an MGISEQ-200 Sequencer (MGI Tech, China) in accordance with the manufacturer’s instructions. The sequences of all microorganisms detected in the samples were obtained. We blasted the obtained S. moniliformis sequence; the alignment showed 100% identity (50/50 bp) with the reference sequence S. moniliformis DSM 12112 (GenBank accession number CP001779.1). The mNGS reads were also mapped to bin intervals across the S. moniliformis reference genome in the database: a total of 91,155 reads were mapped to S. moniliformis , and the bin interval coverage was 100%. After ceftriaxone treatment for 2 weeks, the patient’s pain and swelling were alleviated, and there was a significant improvement in infection-related markers: WBC count, 4.14 × 10⁹/L (normal range: 3.5–9.5 × 10⁹/L); neutrophil percentage, 65.80% (normal range: 40–75%); hsCRP level, 69.43 mg/L (normal range: 0–10 mg/L). A negative culture result was obtained from the re-culture of the knee synovial fluid, indicating significant control of the infection. Since the patient opted for discharge, sequential therapy was administered at home with amoxicillin (0.5 g every 8 h) for 8 weeks; the patient showed good recovery during the subsequent follow-up.
RBF is an infectious disease with a long history. The disease likely originated in India and has been reported from various countries . Individuals living in poverty are particularly susceptible to RBF , with most cases reported in China originating from rural areas. With the rising popularity of rodents as pets, the proportion of the population with a potential risk of infection may increase. S. moniliformis , a Gram-negative filamentous bacillus, is one of the two types of bacteria that can cause RBF. It is a fastidious bacterium that grows relatively slowly . Growing S. moniliformis in blood cultures is therefore challenging. Furthermore, sodium polyanethol sulfonate (SPS), the anticoagulant component of blood culture bottles, has been reported to inhibit its growth at concentrations as low as 0.0125% . The use of culture media containing SPS components should be avoided as much as possible. If this cannot be avoided, 10 ml of blood should be drawn into blood culture medium containing 0.05% SPS to obtain the best dilution ratio and improve the success rate of culture . Fortunately, the microbiological culture and mNGS results in our patient jointly indicated S. moniliformis infection in the knee synovial fluid. Most cases of RBF-induced arthritis are aseptic; consequently, cases exhibiting a positive bacterial culture in knee synovial fluid are extremely rare . Clinical differentiation between aseptic and septic arthritis is challenging due to the necessity of confirming bacterial infection in the synovial fluid. mNGS, which is not limited by culture conditions and enables testing with high sensitivity and specificity within a short detection time, provides significant advantages for microbial identification. Its disadvantages include high testing costs, susceptibility to interference from background microorganisms, and the need for extensive expertise to interpret test results. For instance, in this case, EBV, a common human virus associated with diseases such as infectious mononucleosis, was detected in the patient’s knee effusion. Since the virus is widespread in the population and the number of reads detected here was low, we considered it to be a latent virus. It is generally recommended to consider mNGS only when conventional culture cannot identify the organism. Specimens can be frozen for subsequent testing if needed. Additionally, 16S rRNA PCR should be considered in lieu of mNGS in most cases wherein bacterial infection is high on the differential diagnosis. During the early stages of infection, RBF can cause systemic symptoms such as fever, chills, headache, and vomiting. Typical symptoms include polyarthralgia and rash, with the knee joints being the most commonly affected and the rash usually occurring on the extremities . In China, most patients with atypical RBF who have joint involvement seek medical attention from orthopaedists. Patients without fever and without a significant increase in WBC and neutrophil counts are highly likely to be misdiagnosed with aseptic synovitis. Owing to the rarity of atypical RBF, many orthopaedists have limited knowledge about the disease, thereby making accurate diagnosis difficult. Compared with RBF complicated by reactive arthritis, RBF-induced septic arthritis usually requires antibiotic therapy with a longer treatment duration . In vitro studies have shown that S.
moniliformis is susceptible to various antibiotics, including penicillin, ceftriaxone, aztreonam, clindamycin, erythromycin, tetracycline, vancomycin, and teicoplanin . Currently, penicillin is the preferred antibiotic for RBF treatment . However, the optimal treatment regimen for patients with septic arthritis remains unclear . The bone and joint penetration ability of ceftriaxone is superior to that of penicillin , and RBF has been successfully treated with ceftriaxone in previously reported cases . Therefore, intravenous ceftriaxone was selected as the initial therapy for managing this patient. At present, there are no susceptibility testing standards for S. moniliformis . When referring to treatment regimens described in previous case reports, differences in antibiotic resistance profiles among the same bacterial species across different countries and regions must be taken into consideration, and caution should be exercised when selecting appropriate medications. Regarding localised infections, timely and radical surgical intervention can reduce the duration of antibiotic treatment and risk of adverse drug reactions. In this case, the patient underwent surgical intervention by an orthopaedist. There are two main surgical approaches currently used in clinical practise, namely, open surgery and arthroscopy-assisted surgery. Arthroscopic debridement and synovectomy were selected for managing our patient, as they offer a short recovery time, mild postoperative pain, and low risk of complications compared with open surgery . Arthroscopy also enabled direct visualisation and documentation of the intraarticular infection. We observed more obvious tissue proliferations than typical synovitis, manifesting as red cloud-like masses. Similar to the findings in mice infected with S. moniliformis , massive proliferation of fibrous connective tissues was observed within the joints, with lesions concentrated in the joint cavity and neighbouring periosteum . To our best knowledge, this is the first report of an atypical case of RBF with knee joint infection in a Chinese patient. In atypical cases, a detailed inquiry about patient information such as disease history, occupation, living environment, and history of contact with rodents, active efforts to obtain microbiological culture results, and appropriate adoption of mNGS technology for pathogen identification in culture-negative cases would help achieve an accurate clinical diagnosis. This is also the first report of arthroscopic observation of the pathological characteristics of intraarticular infection caused by S. moniliformis , which primarily included massive proliferation of fibrous connective tissue and obvious capillary proliferation with a red cloud-like appearance. Therefore, when similar observations are made under arthroscopy in the future, the possibility of S. moniliformis infection should be considered.
Exploring international differences in ovarian cancer treatment: a comparison of clinical practice guidelines and patterns of care

Ovarian cancer is the sixth most common cancer among women and has the highest mortality rate of all gynecological cancers internationally. Previous findings from the International Cancer Benchmarking Partnership (ICBP) demonstrated that, while differences in stage at diagnosis for ovarian cancer partly explained the survival gap, differences in survival also existed within each stage of disease. The ICBP is a global collaboration of clinicians, policymakers, researchers, and cancer data experts, seeking to explain cancer survival differences between high-income countries with comprehensive cancer registry coverage, similar national health system expenditure, and universal access to healthcare. The ICBP SurvMark-2 project recently demonstrated international differences in ovarian cancer survival within age and stage groups, particularly for older women (65–74 years) with advanced disease, where 3-year net survival ranged from 52% (Norway) to 29% (Ireland). Notwithstanding improvements in ovarian cancer outcomes internationally, variations in age- and stage-specific survival suggest differences in treatment may exist. Clinical practice guidelines are designed to ensure that patients receive optimal care, typically based on the best available evidence, and offer a way of exploring treatment differences by comparing recommendations internationally. Patterns of care can also be investigated to explore how they align with guideline recommendations and how they may be influenced by health system-related factors. Most women with ovarian cancer are diagnosed with advanced-stage disease (III–IV), for which optimal treatment is cytoreductive surgery and chemotherapy. Surgical options consist of either primary debulking surgery or neoadjuvant chemotherapy followed by interval debulking, and may involve extensive (‘ultra-radical’) procedures. Despite a lack of consensus regarding primary versus interval debulking, and a lack of prospective evidence supporting extensive/ultra-radical surgery, the goal of surgery remains no residual macroscopic disease, which is associated with improved survival. Systemic therapies play an important role in ovarian cancer treatment, with the use of carboplatin and paclitaxel chemotherapy now well-established. Intra-peritoneal chemotherapy remains controversial due to toxicity and, despite early promise, later trials failed to show improved survival. Radiotherapy may also be used in specific types of ovarian cancer and for palliation of advanced disease. Choice of therapies has further evolved through BRCA-mutation testing and the success of poly (ADP-ribose) polymerase (PARP) inhibitors in clinical trials. Health system-related factors influence the type and quality of treatment patients receive. These include resources needed for surgery, such as sufficient operating theater time and intensive care unit beds, funding for expensive anti-cancer drugs, and the use of national audits to inform change and improve outcomes. Due to the lack of uniform and robust clinical data available to directly compare clinical practices among different countries, we have used guidelines and a validated questionnaire to indirectly explore differences in patterns of care and how these relate to survival for women with ovarian cancer in seven high-income countries.
This study is the first to compare international clinical practice for ovarian cancer treatment.
A document search was performed using PubMed; guideline-specific databases (Guidelines International Network); and online government portals. Search terms included the specific disease name and relevant jurisdiction (e.g., 'ovarian cancer treatment guidelines OR pathway in Canada'). Inclusion criteria were guidelines widely used in routine ovarian cancer treatment, as validated by working group members from each country, and guideline revisions from the past 20 years to highlight the frequency of updates over time. Information was extracted for jurisdiction; organization(s); publication year(s); and treatment modality. A working group of 19 clinicians was formed to validate the guideline comparison and provide additional insight into clinical practice differences. A questionnaire was developed based on a previous survey conducted by Farrell et al investigating changes in surgical practice for ovarian cancer in Australia and New Zealand and extended to include additional questions. The questionnaire consisted of 34 questions divided into four sections: 1. respondent characteristics, 2. surgical practice, 3. systemic therapy, and 4. health system-related factors, and was validated and tested by a clinical working group. Gynecological oncologists/specialist surgeons, medical oncologists, clinical oncologists, and general gynecologists were chosen to receive the questionnaire using existing local networks available to the working group members. Working group members identified distribution lists which included those actively involved in the treatment of ovarian cancer. Country-specific response rates were calculated by dividing the number of responses by the number of questionnaires distributed in each country. Denominators were either database-confirmed or estimated by multiplying the number of centers treating ovarian cancer in the country by the approximate number of specialists in each center. Survey-specific response rates were also calculated using the number of responses from each country. All data were maintained in Microsoft Excel (version 1803). In all tables, figures, and supplementary materials, countries are ordered using the latest 3-year survival figures (2010–2014) from highest (Norway, 57.2%) to lowest (Ireland, 44.8%). Comparisons of 3-year survival by 'distant' stage (2010–2014) were performed using Spearman's rho in R Studio (version 3.5.1). Information gathered from the guidelines and the survey relates to the current time period (~2019) and does not align with the 2010–2014 survival data.
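The analysis was performed in Microsoft Excel and R Studio, as stated above. Purely to make the response-rate arithmetic and the crude Spearman comparison concrete, a minimal illustrative sketch is given below in Python; the counts and survival values are invented placeholders, not the study data, and this is not the authors' analysis code.

```python
# Illustrative sketch only -- not the authors' analysis code.
# All counts and survival values below are hypothetical placeholders.
from scipy.stats import spearmanr

# Country-specific response rate = responses / questionnaires distributed.
# Where the denominator was not database-confirmed, it was estimated as
# (number of treating centers) x (approximate specialists per center).
responses = {"Norway": 12, "Australia": 30}       # hypothetical counts
distributed = {"Norway": 12}                       # database-confirmed denominator (hypothetical)
estimated = {"Australia": 60 * 5}                  # centers x specialists per center (hypothetical)

denominators = {**estimated, **distributed}
response_rates = {c: responses[c] / denominators[c] for c in responses}
print(response_rates)  # e.g. Norway 1.00 (100%), Australia 0.10 (10%)

# Crude comparison of a practice/guideline measure against 3-year survival
# by 'distant' stage across seven countries, using Spearman's rho.
survival = [57.2, 55.0, 53.1, 50.4, 48.9, 46.2, 44.8]   # hypothetical ordering
measure = [5, 4, 4, 3, 3, 2, 1]                          # hypothetical measure
rho, p_value = spearmanr(measure, survival)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```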
Twenty-seven guidelines were identified . Comparisons are presented by treatment modality with additional textual information in . Guideline measures were compared across countries . Most countries had a single organization producing guidelines except the UK (n=6). The UK also had the most documents identified (n=9) and the most recently published guideline (2018). Denmark had the most revisions, updating the same guideline four times since 2003. When crudely compared, correlations between guideline measures and survival for each country were all non-significant . A total of 119 clinicians completed the patterns of care survey. Respondent characteristics are summarized in . Country-specific response rates ranged from 100% (Norway) to 10% (Australia). Survey-specific response rates also varied between countries, ranging from 3% to 27% (for Ireland and Canada, respectively) . Only survey results from sections 1 and 4 are reported here.
Surgery
Guideline recommendations for surgery remained consistent. All guidelines recommend surgical staging, including pelvic and para-aortic lymph node sampling either randomly or by resection of suspicious nodes. Danish guidelines also recommend systematic lymphadenectomy for early-stage disease. All guidelines containing surgical recommendations included the options of primary debulking surgery (unless contra-indicated) and neoadjuvant chemotherapy followed by interval debulking, with the aim of complete cytoreduction, if feasible . These guidelines also considered cytoreductive surgery for 'relapsed'/'recurrent' disease depending on disease-free interval and patient performance status (results not shown). When surveyed, clinicians reported differences in patterns of surgical care. Norwegian clinicians reported the highest rates of primary surgery in patients with advanced epithelial ovarian cancer, whereas those from the UK reported the lowest rates of primary and highest rates of interval surgery . In the same patient population, all Norwegian and Australian respondents either agreed or strongly agreed with ultra-radical surgery, whereas clinicians from Canada and the UK agreed with ultra-radical surgery to a lesser extent, with some respondents either disagreeing or strongly disagreeing with this approach . When crudely compared, willingness to undertake extensive/ultra-radical surgery correlated with 3-year survival by distant stage (r s =0.94, p=0.017). Clinicians across all countries reported 'medical co-morbidities' as a perceived barrier to achieving optimal debulking in patients with advanced disease. Norwegian clinicians were least likely to report data for an 'older patient population'. UK clinicians reported 'a lack of supportive care (intensive care unit beds)' more often than clinicians from other countries and were less likely to report 'non-resectable metastasis outside abdominal cavity'; a barrier frequently reported by clinicians elsewhere .
Systemic/Radiation Therapy
All guidelines recommended six cycles of platinum-based chemotherapy consisting of carboplatin and paclitaxel, most additionally recommending docetaxel, gemcitabine, or liposomal doxorubicin in cases of hypersensitivity and/or allergy to paclitaxel (results not shown). Differences were seen in guideline recommendations for other types of systemic therapy. Canadian, Australian, and Scottish guidelines recommend intra-peritoneal chemotherapy, whereas Danish and other UK guidelines do not recommend it outside of clinical trials. Norwegian and New Zealand guidelines omit guidance on intra-peritoneal chemotherapy. Norwegian, Australian, Danish, Canadian, and Scottish guidelines recommend considering bevacizumab, whereas guidelines from New Zealand and the UK (Wales and Northern Ireland) do not. PARP inhibitors, including olaparib and/or niraparib, were recommended in all countries except New Zealand. Most of these guidelines recommended PARP inhibitors as maintenance treatment for relapsed platinum-sensitive BRCA mutation-positive advanced ovarian cancer. Some more recent guidelines from the UK (2019) and Ontario, Canada (2018) also recommend olaparib in newly diagnosed advanced disease. Differences in radiotherapy were found. Cancer Australia's optimal care pathway states that "some women may benefit from radiation treatment", and British Columbia guidelines recommend radiotherapy on an individual basis for clear cell ovarian cancer. Alberta's guideline indicates that radiation oncologists should consider radiotherapy in the context of palliation for selected cases to improve local control. All other guidelines did not contain radiotherapy recommendations.
Health System-Related Factors
Differences in perceived health system-related barriers to accessing optimal treatment were reported . Norwegian clinicians most commonly reported restrictions in prescribing high-cost medications. Canadian clinicians often reported a 'lack of patient access to clinical trials'. From the UK, a 'lack of hospital staffing' was commonly reported followed by 'delays in treatment'. In New Zealand, clinicians often reported a lack of resources and funding for second-line drugs. Clinicians from Australia, Canada, the UK, New Zealand, and Ireland also reported a 'lack of treatment monitoring'. Danish clinicians most often reported perceiving 'no barriers'.
This study suggests international differences in ovarian cancer treatment. Differences were seen in guideline measures, including the number of documents published and revisions made. Although these did not correlate with survival when crudely compared, further research exploring the complex relationship between international guidelines and outcomes is needed. Despite consistency across guidelines, reported surgical practices varied. While all guidelines recommend primary or interval debulking for patients with advanced disease, clinicians from countries with higher survival (Norway and Australia) reported higher rates of primary debulking. Although complete primary debulking has been associated with higher survival in late-stage patients, a lack of consensus still exists, with an Australian retrospective study finding that an increasing shift towards interval debulking was associated with increased survival. Commentators argue that primary debulking should still be considered the treatment of choice for fit patients with advanced resectable disease, whereas interval debulking is more suitable for patients with poorer performance and nutritional status, who are more likely to develop post-operative morbidity and mortality. Furthermore, most guidelines did not explicitly recommend extensive/'ultra-radical' surgery, and yet clinicians from higher-performing countries were more likely than those from lower-performing countries to agree with 'ultra-radical' surgery. Norwegian clinicians were least likely to perceive age as a barrier to achieving optimal cytoreduction, and Norway demonstrated the highest survival in elderly patients with distant-stage disease. In the UK, where clinicians perceived a lack of supportive care, survival for these patients was lower. These barriers could make clinicians less willing to operate on some patients, instead preferring a palliative option. Patients with advanced ovarian cancer are more likely to have severe co-morbidities and higher mortality, and historically, elderly patients were shown to be less likely to receive comprehensive surgical treatment. A Dutch study recently found that older patients and those with advanced disease were significantly less likely to receive any cancer-directed treatment. Moreover, available resources and operating theater time may influence a surgeon's ability to perform extensive surgery and could impact patient outcomes. Importantly, it is this sub-category of elderly patients with advanced disease where survival is lowest and where significant differences exist. Other factors affecting surgical outcome include centralization of services, patient selection, discussion at multidisciplinary meetings and adequate pre-operative staging. It is noteworthy that the two lowest-performing countries, New Zealand and Ireland, had not implemented centralization in the 2010–2014 time period. Moreover, some higher- and lower-performing countries centralized around the same time, including Norway (1995), the UK (1999), and Denmark (2001), indicating that other factors could be playing a role within centralized services, including access to specialist surgery. Since surgical outcome is a key prognostic factor for women with advanced ovarian cancer, differences in surgical practice may be contributing to survival variations and warrant further investigation within countries. Differences were found in recommended systemic therapies.
Intra-peritoneal chemotherapy remains contentious, with recent trial results failing to show a survival benefit in women with newly diagnosed advanced ovarian cancer. Disparities in its use may also stem from the additional resources it requires and from catheter-related complications, which would take time and specialist training to overcome. The benefits of bevacizumab are debated since it is yet to demonstrate improvements in overall survival. Given international differences in national health services spending, inequalities in access to high-cost drugs like bevacizumab may reflect different levels of available investment. Bevacizumab is not recommended in some of the lowest-performing countries (Wales, Northern Ireland, and New Zealand), as it is not funded. New Zealand's funding decisions for new medicines are taken by Pharmac, the national regulatory body, which declined bevacizumab based on several decision criteria. The Cancer Drugs Fund in England and the Scottish Medicines Consortium in Scotland fund bevacizumab for selected patients. PARP inhibitors have shown survival benefit in patients with a BRCA mutation with relapsed disease and, as of December 2019, are now recommended in all countries. Their recent introduction does not align with the 2010–2014 study period, but PARP inhibitors will probably influence future survival analyses. Different health system-related barriers to providing optimal treatment were perceived by clinicians internationally. For example, restrictions on drug prescribing were reported in Norway. In a qualitative study, Norwegian oncologists recently described distrust in their centralized drug review process, which has led to inequities in drug availability due to the privatization of high-cost medications. In Australia, insufficient hospital staffing was reported as a perceived barrier to providing optimal treatment. This has been reported by Australian health professionals previously, relating to a lack of staff with specialized expertise outside metropolitan centers. Similarly, in New Zealand, where a lack of resources was reported, a previous clinical audit suggested inadequate theater space could be impacting patient waiting times and outcomes. In Canada, a perceived lack of patient access to clinical trials was reported. This may correspond with previous findings describing the barriers faced by Canadian physicians when participating in clinical research. Moreover, clinicians from countries with higher survival, such as Denmark and Norway, were more likely to report having no barriers to providing optimal treatment, whereas clinicians from countries with lower survival often reported a lack of treatment monitoring (via national and/or local audit) as a perceived barrier. One example of a current national auditing system is the Danish Gynecological Cancer Database, which has collected data on all women with ovarian cancer treated at Danish hospitals since 2005. Another example is the introduction of Scotland's quality performance indicators. National auditing for ovarian cancer has been recommended as a method of investigating treatment disparities and informing quality improvements. Given challenges in comparing ovarian cancer treatment rates internationally, this study supports existing calls for improved data collection at local and national levels. This is particularly pertinent in countries with lower survival, where national audits are not routinely conducted, and highlights a key area of improvement for policy and practice.
We note some limitations in this study. We acknowledge low survey response rates from some countries, which may not reflect true patterns of care. Comparatively, in a review of international surveys for patterns of surgery in advanced ovarian cancer, response rates ranged from 30% to 81%. Discrepancies between country- and survey-specific response rates are also noted. Countries with higher denominators often had lower response rates, yet also received a larger number of responses. This has the potential to bias conclusions drawn from the survey about national practice patterns. Questionnaires were distributed to either membership societies or specialist hospital departments. Questionnaires distributed via society mailing lists were sent to a wider demographic of clinicians, some of whom may not have responded. Therefore some country-specific response rates may appear disproportionately lower than others. We were not able to determine the demographic characteristics of non-responders and were not able to exclude responder bias. The varying responsibilities of specialists internationally may hinder the interpretation of survey results. Gynecological oncologists in certain countries cover both surgery and medical oncology (Norway and Canada), but in others provide specialist surgical care only (Denmark and the UK). The questionnaire did not ask clinicians what proportion of their practice is spent treating ovarian cancer, and some respondents may be treating fewer patients with ovarian cancer than others, potentially affecting the results. Information was not collected on the willingness of clinicians not to operate on patients. Rates of patients receiving no treatment were also not accounted for. An existing study has shown that a large proportion of patients with ovarian cancer receive no treatment at all, often because patients choose not to have treatment. This group are likely to be older with more co-morbidities, have poorer outcomes, and could partly explain international survival variations. The guideline recommendations and survey results relate to the current time-period (~2019), which must be considered when comparing these findings with ICBP SurvMark-2 results (2010–2014). Despite consistency across guidelines, surgical practice varied internationally, particularly in rates of primary versus interval debulking, views towards extensive/ultra-radical surgery, and perceived barriers to achieving optimal cytoreduction. These differences are probably due to a combination of patient, clinician, and health system-related factors. Given the importance of surgical outcome in survival for patients with advanced ovarian cancer, differences in surgical practice could be a key driver of international disparities. Differences in recommendations for systemic/radiation therapies were apparent and may reflect inequalities in levels of investment available to health systems to fund expensive drugs. In an effort to internationally benchmark ovarian cancer treatment, we indicate certain characteristics relating to countries with higher stage-specific survival including higher reported rates of primary debulking; willingness to undertake ultra-radical procedures; greater access to high-cost drugs; and auditing. Treatment differences noted between countries warrant further investigation at local levels to determine their severity and potential impact on patient outcomes, particularly for older women with advanced disease, and in countries with lower stage-specific survival.
Mastering your fellowship: Part 1, 2024 | fb0b9f6b-6d7e-4150-acf5-dda278945b5e | 10839228 | Family Medicine[mh] | This section in the South African Family Practice journal is aimed at helping registrars prepare for the Fellowship of the College of Family Physicians (South Africa) (FCFP [SA]) Final Part A examination and will provide examples of the question formats encountered in the written examination: multiple choice question (MCQ) in the form of single best answer (SBA – Type A) and/or extended matching question (EMQ – Type R); short answer question (SAQ), questions based on the critical reading of a journal article (CRJ: evidence-based medicine) and an example of an objectively structured clinical examination (OSCE) question. Each of these question types is presented based on the College of Family Physicians blueprint and the key learning outcomes (LO) of the FCFP (SA) programme. The MCQs are based on the 10 clinical domains of family medicine, the SAQs are aligned with the five national unit standards and the critical reading section will include evidence-based medicine and primary care research methods. This edition is based on unit standard one (effectively manage themselves, their team and their practice, in any sector, with visionary leadership and self-awareness, to ensure the provision of high-quality, evidence-based care), unit standard two (evaluate and manage patients with both undifferentiated and more specific problems cost-effectively according to the bio-psycho-social approach), unit standard four (facilitate the learning of others regarding the discipline of family medicine, primary health care and other health-related matters) and unit standard five (conduct all aspects of healthcare in an ethical and professional manner). The clinical domain covered in this edition is infectious diseases. We suggest you attempt to answer the questions (by yourself or with peers or supervisors) before finding the model answers online: http://www.safpj.co.za/ . Please visit the Colleges of Medicine website for guidelines on the Fellowship examination: https://www.cmsa.co.za/view_exam.aspx?QualificationID=9 . We are keen to hear about how this series assists registrars and their supervisors in preparing for the FCFP (SA) examination. Please email us ( [email protected] ) with your feedback and suggestions.
Theme: HIV and syphilis testing in pregnancy
Options:
a. No testing needed
b. Rapid syphilis test
c. Rapid HIV test
d. Dual HIV and syphilis rapid test
e. Rapid HIV and rapid plasma reagent (RPR) tests
f. ELISA test for HIV and RPR test
g. Rapid HIV test and fluorescent treponemal antibody absorption (FTA-ABS) test
For each patient scenario below, match the most appropriate test(s) from the options above. Each option may be used once, more than once or not used.
Scenarios:
1. A 25-year-old Para 1 Gravida 2 (P1G2) presents for her third antenatal visit at 30 weeks gestation. Her booking visit at 20 weeks revealed that she had no medical (past or present) problems, and her rapid HIV test and syphilis test were negative. She is asymptomatic, and her vital signs and examination are normal at this visit.
2. A 30-year-old P2G3 presents at 26 weeks gestation. Her booking visit at 18 weeks was unremarkable, and the midwife noted that she was treated for syphilis in her previous pregnancy. She is HIV-negative, had no medical problems and is currently asymptomatic. Her vital signs and examination at this visit are normal.
Short answer: Scenario 1: d; Scenario 2: e
Discussion: The recently published 2023 Guideline for Vertical Transmission Prevention of Communicable Infections by the National Department of Health has updated the HIV and syphilis testing guidelines during pregnancy, given the adverse clinical outcomes associated with both these conditions. Testing for HIV-negative pregnant women is now recommended at four-weekly intervals based on the Basic Antenatal Care Plus (BANC+) visits. Testing occurs at the booking visit and then at 20, 26, 30, 34 and 38 weeks. To improve the efficiency of this new recommendation, a few new point-of-care tests have been introduced. These include the rapid syphilis test, a specific treponemal test that will remain positive for life in a previously infected individual. The dual test incorporates the traditionally used HIV rapid test and the rapid syphilis test. If no positive history of syphilis is provided, treatment can be initiated for syphilis based on the rapid test, and a confirmatory non-specific RPR test needs to be done (see ). If the facility has no rapid syphilis test, the recommendation is to draw blood for an RPR test during the four-weekly visits. outlines the treatment algorithm for patients who test positive for syphilis with the rapid test. The patient may be asymptomatic in primary, secondary or tertiary syphilis, so point-of-care testing provides a valuable adjunct to the clinical decision-making process. describes testing related to primary, secondary and tertiary syphilis. Back to our scenarios: In scenario 1, the patient requires routine testing for HIV and syphilis and has no history of previous syphilis, so the dual test is most appropriate and cost-effective, as a single point-of-care test will test for both conditions. In scenario 2, the patient gives a history of previous syphilis, so the point-of-care rapid tests are unreliable; the RPR test must be used for screening and the titre measured. The RPR is sent to the laboratory, and turnaround time varies based on context. It is important to note that a confirmatory RPR test must accompany a positive rapid test, but the patient must be treated with the first dose of penicillin. Benzathine penicillin is the most effective treatment for syphilis, and three doses are needed in pregnant women.
Further reading
South African National Department of Health.
Guideline for vertical transmission prevention of communicable infections. Pretoria: South African National Department of Health; 2023.
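Purely as an illustrative exercise (not clinical software, and not part of the guideline itself), the screening logic discussed above can be summarised in a short sketch; the function name and return strings are invented for illustration only.

```python
# Toy sketch of the antenatal syphilis screening logic described above.
# Illustrative only -- not clinical decision software; names are invented.
from typing import Optional


def syphilis_screening_advice(previous_syphilis: bool,
                              rapid_syphilis_positive: Optional[bool] = None) -> str:
    """Return a plain-text summary of the next screening/treatment step."""
    if previous_syphilis:
        # A treponemal rapid test stays positive for life after past infection,
        # so screening relies on the non-specific RPR test and its titre.
        return "Send blood for RPR (with titre); rapid/dual tests are unreliable."
    if rapid_syphilis_positive is None:
        return "Do the dual HIV/syphilis rapid test (or rapid syphilis test)."
    if rapid_syphilis_positive:
        # Treat with the first dose and send the confirmatory RPR.
        return ("Give the first dose of benzathine penicillin and send a "
                "confirmatory RPR; three doses are needed in pregnancy.")
    return "Rapid test negative: repeat screening at the next four-weekly visit."


# Scenario 1: no prior syphilis, routine visit -> dual rapid test (option d).
print(syphilis_screening_advice(previous_syphilis=False))
# Scenario 2: treated for syphilis before -> rapid HIV test plus RPR (option e).
print(syphilis_screening_advice(previous_syphilis=True))
```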
This question was previously used in an FCFP (SA) written paper. You are the district family physician in Mopane District in Limpopo province of South Africa, where malaria is endemic. A newly employed medical officer (MO) phones you about a 19-year-old male high school student with a fever and impaired consciousness. Other systems are unremarkable. The MO suspects malaria, as a coronavirus disease 2019 (COVID-19) test and a lumbar puncture are negative. The patient is admitted, and the MO asks for further assistance with the assessment and management of the patient. You advise him to do a malaria rapid diagnostic test, which is positive for malaria, and the patient is started on treatment for severe malaria.
1. You realise that junior doctors in the district need more knowledge of malaria. How would you apply the steps of planning and implementing a teaching session on managing severe malaria by doctors at the district hospital level? (8 marks)
2. Provide two learning outcomes (LOs) in the correct format for your teaching session. (2 marks)
3. At the end of the session, you decide to assess the doctors' ability to manage patients presenting acutely with complicated malaria. List four assessment methods that allow the learner to demonstrate – knows, knows how, shows how, does. (4 marks)
4. What three adult learning principles would you apply, and explain in more detail how you would do this? (6 marks)
Total: 20 marks
Suggested answers (the answers should show some application to the scenario)
1. You realise that junior doctors in the district need more knowledge of malaria. How would you apply the steps of planning and implementing a teaching session on managing severe malaria by doctors at the district hospital level? (8 marks)
Step A. Why: (decide the need and objectives) (2 marks) ■ Confirm that the topic of malaria is important to address for district health services. ■ Clarify the learning needs of the doctors and gaps in knowledge.
Step B. What: (LOs and content) (2 marks) ■ Define LOs for the teaching session, including knowledge, skills and attitudes. At the end of this session, the learner is expected to classify malaria into uncomplicated and complicated (knowledge) and list the management strategies appropriate to the district hospital level. The learner must outline their knowledge of the drugs and when to use them. ■ Define the content and resources needed – for example, the National Institute for Communicable Diseases of South Africa (NICD) national guidelines.
Step C. How: (teaching methods and logistics) (2 marks) ■ Plan the teaching method to be used, for example, a PowerPoint lecture followed by small group discussion using case scenarios as part of the hospital's continuing professional development (CPD) programme. ■ Plan logistics: date, venue, equipment needed, CPD points and invitations.
Step D. So what: (evaluation) (2 marks) ■ Obtain feedback from participants – simple questionnaire. ■ Reflect on feedback to revise future teaching sessions.
2. Provide two learning outcomes in the correct format for your teaching session. (2 marks)
An LO should specify what the doctor can do at the end of the teaching session. The LO could be based on knowledge, skills or attitudes. One mark each for the LO and the description of what should be known or done, for example, a maximum of two examples for 2 marks in total: Knowledge: At the end of this session, you should be able to list the differences between uncomplicated and complicated malaria.
(1 mark) Skill: At the end of this session, you should be able to demonstrate how to use a rapid malaria test correctly or some aspect of inpatient care and monitoring. (1 mark) Attitude: At the end of this session, you should be able to counsel uncomplicated malaria patients on correct drug use and safety netting for when to return to the clinic or hospital. (1 mark)
3. At the end of the session, you decide to assess the doctors' ability to manage patients presenting acutely with complicated malaria. List four assessment methods that allow the learner to demonstrate – knows, knows how, shows how, does. (4 marks)
Knows – written, oral or computer-based test on knowledge of complicated malaria at the factual recall level. (1 mark)
Knows how – written, oral or computer-based tests on applying knowledge to a patient case vignette. (1 mark)
Shows how – demonstrates skill in a simulated setting, for example, a simulation scenario of a patient with complicated malaria. (1 mark)
Does – observed consultations in the emergency room or workplace-based assessment (WPBA), chart review, audit of practice. (1 mark)
4. What three adult learning principles would you apply and explain in more detail how you would do this? (6 marks)
(Any 3 points from the list below, one mark for the principle and one mark for how it will be applied)
■ Provide a safe and supportive environment – personal freedom and individuality should be honoured, the approach should be non-judgemental, and trust should be established.
■ Provide an environment that promotes intellectual freedom – learners are allowed to experiment and be creative, with different learning styles.
■ Provide an environment that treats learners as peers – learners are respected as intelligent and experienced adults, identify gaps and choose the best learning activities to address these.
■ Encourage self-directed learning – learners need to take responsibility for their needs and become actively involved in their learning, as interaction, experimentation and dialogue help cement learnt facts and theory.
■ Challenge people beyond their current level of ability – need to establish objectives and learning needs.
■ Learners should be actively involved in learning – adopt styles other than didactic lecturing.
■ Provide regular feedback to learners by summarising, identifying future needs, facilitating reflection and helping to develop action plans; these actions ensure constant improvement in a learner-centred way.
Further reading
South African National Department of Health. National guidelines for the treatment of malaria, South Africa [homepage on the Internet]. Pretoria: National Department of Health; 2019 [cited 2023 Jul 18]. Available from: https://knowledgehub.health.gov.za/elibrary/national-guidelines-treatment-malaria-2019
Brits H. How to plan and implement a teaching activity. In: Mash B, Brits H, Naidoo M, Ras T, editors. SA family practice manual. Cape Town: Van Schaik, 2023; p. 735–742.
Mehay R, editor. The essential handbook for GP training and education. London: CRC Press, 2013; p. 114–117.
Read the accompanying article carefully and then answer the following questions. As far as possible, use your own words. Do not copy out chunks from the article. Be guided by the allocation of marks concerning the length of your responses.
Hoque M, Hoque ME, Van Hal G, Buckus S. Prevalence, incidence and seroconversion of HIV and syphilis infections among pregnant women of South Africa. S Afr J Infect Dis. 2021;36(1):a296. https://doi.org/10.4102/sajid.v36i1.296
Total: 30 marks
1. Critically appraise the authors' choice of a retrospective cohort study for addressing the research question. (4 marks)
2. Critically appraise the sampling strategy. (4 marks)
3. Describe aspects related to the ethical considerations for conducting this study. (4 marks)
4. Critically appraise the choice of the statistical tests considered for this study. (5 marks)
5. Did the study manage to address a focused research question? Discuss. (3 marks)
6. Critically analyse the limitations of the study. (4 marks)
7. Use a structured approach (e.g., relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks)
Suggested answers:
1. Critically appraise the authors' choice of a retrospective cohort study for addressing the research question. (4 marks)
A cohort study is a quantitative study that aims to determine a condition's natural history and incidence. It uses a longitudinal design to analyse the progression of the disease. In addition, it can calculate the hazard ratio, incidence rate, relative risk and cumulative incidence. Although it is not always possible to establish causality in a study, cohort studies can provide valuable information on the link between various risk factors and outcomes. This typically involves comparing outcomes in subjects who were exposed to a factor of interest with outcomes in those who were not exposed. Unlike cross-sectional studies, cohort studies are not used to determine prevalence. Instead, they are used to study causes and incidence. In this study, the retrospective cohort study design represented a time-efficient approach to address the study objectives, and the data were readily available from existing patient records in the clinic register.
2. Critically appraise the sampling strategy. (4 marks)
The researchers chose a peri-urban primary healthcare setup that provides first-level care to predominantly Zulu-speaking black African populations. Members of the research team were based at this facility. No sampling was done, as the researchers collected data from the antenatal clinic register of all pregnant women who attended booking visits between January 2018 and December 2018. No rationale for this time interval was provided. One calendar year is a convenient choice but may not reflect the trends over a longer period. It may also be useful to reflect on how basic antenatal care (BANC) protocols may have varied over this period. Some would question whether a single year of register data is better regarded as a cross-sectional study. shows how the data of the 1503 study subjects were organised during the data analysis. It is important to note that these data only reflect the total number of pregnant women who received all or aspects of their antenatal care experience at this facility during this period and exclude the unbooked patients or patients who received antenatal care elsewhere in the health system (private vs public, district health services vs specialist level care).
As such, this may not reflect the true prevalence or incidence of the conditions of interest at the community level. The researchers did not specify how they addressed potential biases, such as those arising from variation in the frequency of antenatal visits (some women attending more often than the standard antenatal care package) and from late booking in the antenatal period.
3. Describe aspects related to the ethical considerations for conducting this study. (4 marks)
Conducting research is a process that involves making ethical decisions. These principles help guide the design and execution of studies. Researchers must follow a code of conduct when gathering information from individuals. Human research aims to understand real-world phenomena, explore effective treatments, investigate behaviours and explore diverse ways of improving people's lives. When it comes to conducting studies, ethical considerations play a significant role. These considerations protect the rights of research participants, enhance research validity and maintain scientific and academic integrity. The ethical considerations relevant to this study consist of the following aspects:
Ethics committee review and site access approval: The Umgungundlovu Health Ethics Review Board approved the study protocol. The Kwadabeka Community Health Centre's management provided written permission to allow the use of the antenatal clinic register.
Informed consent process: Informed consent was waived because there was no direct contact with participants. No new data were generated, as the researchers used secondary data from the register containing data collected as part of routine care.
Protection of personal information: The researchers maintained strict privacy and confidentiality during the study.
Prevention of third-party injury: There was no statement about other forms of harm, including psychological, social and physical harm, as the researchers accessed the patients' identity and diagnosis via the clinic register. However, this retrospective study design did not involve any experimental interventions, and patient data were collected as part of routine clinical care.
4. Critically appraise the choice of the statistical tests considered for this study. (5 marks)
The statistical analysis of a cohort study assesses the association between multiple exposures and outcomes over time and builds prognostic or prediction models. In this study, the authors used the following statistical tests:
■ Pearson chi-square (χ²) tests and p-values
■ Logistic regression
■ Adjusted odds ratios (ORs) with corresponding 95% confidence intervals (95% CIs) and p-values
Categorical variables were presented as proportions and frequencies. The differences in the proportions of HIV and syphilis among the different obstetric and demographic elements at the booking visit were then examined using the Pearson chi-square test and p-values. The study used a step-by-step approach to analyse the variables to determine the risk factors for HIV and syphilis incidence and prevalence during pregnancy. The results were then presented with adjusted ORs. A step-by-step binary logistic regression procedure was then performed to identify the factors that could influence the prevalence and incidence of both HIV and syphilis. The regression results were presented with adjusted ORs, 95% confidence intervals and p-values. p-values under 0.05 were regarded as significant. In cohort studies, model building is crucial.
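Purely as an illustration of the kind of binary logistic regression with adjusted odds ratios and 95% confidence intervals described in this answer (and not the authors' actual code or data), a minimal sketch might look as follows; the variable names and values are hypothetical.

```python
# Illustrative sketch only: adjusted odds ratios from a binary logistic
# regression, as described in the answer above. Data below are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hiv_positive": rng.integers(0, 2, n),    # outcome (hypothetical)
    "age_under_20": rng.integers(0, 2, n),    # candidate risk factor (hypothetical)
    "parity_2_plus": rng.integers(0, 2, n),   # candidate risk factor (hypothetical)
})

# Fit the logistic regression of the outcome on the candidate risk factors.
model = smf.logit("hiv_positive ~ age_under_20 + parity_2_plus", data=df).fit(disp=False)

# Exponentiate the coefficients to obtain adjusted odds ratios and 95% CIs.
adjusted_or = np.exp(model.params)
ci = np.exp(model.conf_int())
summary = pd.DataFrame({"aOR": adjusted_or, "2.5%": ci[0], "97.5%": ci[1],
                        "p-value": model.pvalues})
print(summary.round(3))
```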
Researchers may need to develop explanatory models focused on identifying factors that have a significant relationship with an outcome. On the other hand, in predictive models, the goal is to predict an individual’s likelihood of experiencing a future occurrence or a possible diagnosis. The current study did not use any model building for identifying risk factors for HIV and syphilis seroconversion during pregnancy. 5. Did the study manage to address a focused research question? Discuss. (3 marks) The study aimed to help health officials plan how best to manage and control syphilis and HIV infections among pregnant women. It sought to estimate these conditions’ prevalence and risk factors during antenatal care. The research question was focused as it described the population of interest (pregnant women attending antenatal care at the peri-urban primary health centre) and the condition or phenomenon of interest (prevalence, incidence and seroconversion of HIV and syphilis during pregnancy) in a particular community or area (Kwadabeka Community Health Centre of Durban, South Africa). The study determined the prevalence, seroconversion and incidence of HIV and syphilis. Furthermore, associated risk factors for syphilis were identified, especially maternal age, parity and HIV status. 6. Critically analyse the limitations of the study. (4 marks) The retrospective cohort study is limited because of the nature of its research design. The following limitations should be considered: ■ Susceptible to loss to follow-up compared with cross-sectional studies. ■ Confounding variables are the major problem in analysing the data. ■ Susceptible to information bias and recall bias. ■ Less control over variables. The study lacks sufficient evidence to establish the risk of HIV and syphilis seroconversion in pregnant women who visited the antenatal care (ANC) clinic. There were limited study variables for risk factors. It also did not consider the factors that affect the new infection, such as the availability of resources and the socio-economic status of the women. The study might have underestimated the prevalence of these conditions because of the lack of access to the clinic and the cost of travel. The authors have no control over the variables and information bias. The data were collected retrospectively from clinical records. The rates might have underestimated or overestimated the prevalence of syphilis and HIV as many women were missing because of inaccessibility to the ANC clinic. Loss to follow-up (syphilis follow-up rate was 77%) and confounding factors during data analysis directly influence the associated risk between HIV and syphilis seroconversion. 7. Use a structured approach (e.g., relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks) The READER format may be used to answer this question: Relevance to family medicine and primary care? Education – does it challenge existing knowledge or thinking? Applicability – are the results applicable to my practice? Discrimination – is the study scientifically valid enough? Evaluation – given the aforementioned, how would I score or evaluate the usefulness of this study for my practice? Reaction – what will I do with the study findings? The answer may be a subjective response but should show a reflection on the possible changes within the student’s practice within the South African public healthcare system. 
It is acceptable for the student to suggest how their practice might change within other scenarios after graduation (e.g., private general practice). The reflection on whether all essential outcomes were considered depends on the reader’s perspective (is there other information you would have liked to see?) . A model answer could be written from the perspective of the family physician employed in the South African district health system: R: This study is relevant to the African primary care context, as HIV and syphilis are common health concerns during pregnancy. A better understanding of the risk factors of HIV and syphilis seroconversion during antenatal care in primary care clinics will help primary care teams, and policymakers plan appropriate interventions. E: Among the pregnant women attending ANC in South Africa, the prevalence of syphilis and HIV varies depending on the province; by monitoring the prevalence and incidence of syphilis and HIV among pregnant women, as well as naming risk factors, the author explores the effective interventions to prevent seroconversion during pregnancy. There is inadequate knowledge about the transmission of these diseases among pregnant women in a Midwife-run Obstetric Unit (MOU). This study aims to inform health managers about strategies to control and treat syphilis and HIV infections among pregnant women. A: For this study, it would be possible to generalise its findings to a similar South African public sector antenatal clinic. D: On discrimination, there is a fair congruity between the research methodology, data collection methods and data analysis. There was a clear lack of inclusion and exclusion criteria for selecting study participants. (It would have been helpful to state the methodological position). Furthermore, the researchers did not describe the comparative cohort group with different characteristics exposure, although the clinical event of interest (HIV and syphilis) was well explained. E: The study’s findings may be relevant when supplying antenatal care services. The study findings did point to the high prevalence of HIV and syphilis among pregnant women. Furthermore, the study highlighted the role of primary care providers in the prevention of seroconversion of HIV and syphilis in a high-risk category (pregnancy or antenatal period), as well as the need for counselling and testing, antiretroviral therapy (ART) and health education to change the behaviour on prevention of HIV and syphilis among pregnant women in particular and the general population at large. It may be helpful to remind oneself of the nature of the study design and its limitations and that the study setting-related behavioural factors influence HIV and syphilis seroconversions. R: The study’s findings may help manage HIV and syphilis during antenatal care in primary health care settings. The study explained the higher rates of HIV and syphilis infection among pregnant women that lead to adverse perinatal outcomes. Effective interventions, such as testing and counselling, should be implemented in primary care to minimise the impact of these infections on pregnancy outcomes. Further reading Pather M. Evidence-based family medicine. In: Mash B, editor. Handbook of family medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 430–453. MacAuley D. READER: An acronym to aid critical reading by general practitioners. Br J Gen Pract. 1994;44(379):83–85. The Critical Appraisals Skills Programme (CASP). 2023. CASP checklists [homepage on the Internet]. 
[cited 2023 Feb 04]. Available from: https://casp-uk.net/casp-tools-checklists/ Goodyear-Smith F, Mash B, editor. How to do primary care research. Boca Raton, FL: CRC Press, Taylor and Francis Group; 2019.
A cohort study is a quantitative study that aims to determine a condition’s natural history and incidence. It uses a longitudinal design to analyse the progression of the disease. In addition, it can calculate the hazard ratio, incidence rate, relative risk and cumulative incidence. Although it is not always possible to establish causality in a study, cohort studies can provide valuable information on the link between various risk factors and outcomes. This process involves assessing the effects of exposure on a group of subjects who were not exposed to a certain factor. Unlike cross-sectional studies, cohort studies are not used to determine prevalence. Instead, they are used to study causes and incidence. In this study, the retrospective cohort study design represented a time-efficient approach to address the study objectives, and data are readily available from existing patient records in the clinic register.
The researchers chose a peri-urban primary healthcare setup that provides first-level care to predominantly Zulu-speaking black African populations. Members of the research team were based at this facility. No sampling was done, as the researchers collected data from the antenatal clinic register of all pregnant women who attended booking visits between January 2018 and December 2018. No rationale for this time interval was provided. One calendar year is a convenient choice but may not reflect the trends over a longer period. It may also be useful to reflect on how basic antenatal care (BANC) protocols may have varied over this period. Some would question whether 1 year falls into the category of a cross-sectional study. shows how the data of the 1503 study subjects were organised during the data analysis. It is important to note that this data only reflects the total number of pregnant women who received all or aspects of their antenatal care experience at this facility during this period and excludes the unbooked patients or patients who received antenatal care elsewhere in the health system (private vs public, district health services vs specialist level care). As such this may not reflect the true prevalence or incidence of the conditions of interest at the community level. The researchers did not specify how they addressed any potential biases, which may have been because of the frequency of antenatal visits, including more frequent visits compared to the norm of the antenatal care package and late booking in the antenatal period.
Conducting research is a process that involves making ethical decisions. These principles help guide the design and execution of studies. Researchers must follow a code of conduct when gathering information from individuals. Human research aims to understand real-world phenomena, explore effective treatments, investigate behaviours and explore diverse ways of improving people’s lives. When it comes to conducting studies, ethical considerations play a significant role. These considerations protect the rights of research participants, enhance research validity and keep scientific or academic integrity. The ethical consideration relevant to this study consists of the following aspects: Ethics committee review and site access approval: The Umgungundlovu Health Ethics Review Board approved the study protocol. The Kwadabeka Community Health Centre’s management provided written permission to allow the use of the antenatal clinic register. Informed consent process: The informed consent was waived because there was no direct contact with participants. No new data were generated as the researchers used secondary data from the register containing data collected as part of routine care. Protection of personal information: The researchers maintained strict privacy and confidentiality during the study. Prevention of third-party injury: There was no statement about all other forms of harm, including psychological, social and physical as the researchers accessed the patients’ identity and diagnosis via the clinic register. However, this retrospective study design did not involve any experimental interventions and patient data were collected as part of routine clinical care.
The statistical analysis of a cohort study assesses the association between multiple exposures and outcomes over time and builds prognostic or prediction models. In this study, the authors use the following statistical tests: Pearson chi-square (χ 2 ) and p -values Regression Adjusted odds ratio (OR) with corresponding 95% confidence intervals (95% CI) and p -values Categorical variables were presented as proportions and frequencies. The differences in the proportions of HIV and syphilis among the different obstetric and demographic elements at the booking visit were then examined using the Pearson chi-square and p -values. The study used a step-by-step approach to analyse the variables to determine the risk factors for HIV and syphilis incidence and prevalence during pregnancy. The results were then presented with an adjusted OR. A step-by-step binary logistic regression procedure was then performed to identify the factors that could influence the prevalence and incidence of both HIV and syphilis. The regression results were presented with an adjusted OR of 95% confidence intervals and p -values. The p -values under 0.05 were regarded as significant. In cohort studies, model building is crucial. Researchers may need to develop explanatory models focused on identifying factors that have a significant relationship with an outcome. On the other hand, in predictive models, the goal is to predict an individual’s likelihood of experiencing a future occurrence or a possible diagnosis. The current study did not use any model building for identifying risk factors for HIV and syphilis seroconversion during pregnancy.
The study aimed to help health officials plan how best to manage and control syphilis and HIV infections among pregnant women. It sought to estimate these conditions’ prevalence and risk factors during antenatal care. The research question was focused as it described the population of interest (pregnant women attending antenatal care at the peri-urban primary health centre) and the condition or phenomenon of interest (prevalence, incidence and seroconversion of HIV and syphilis during pregnancy) in a particular community or area (Kwadabeka Community Health Centre of Durban, South Africa). The study determined the prevalence, seroconversion and incidence of HIV and syphilis. Furthermore, associated risk factors for syphilis were identified, especially maternal age, parity and HIV status.
The retrospective cohort study is limited because of the nature of its research design. The following limitations should be considered: ■ Susceptible to loss to follow-up compared with cross-sectional studies. ■ Confounding variables are the major problem in analysing the data. ■ Susceptible to information bias and recall bias. ■ Less control over variables. The study lacks sufficient evidence to establish the risk of HIV and syphilis seroconversion in pregnant women who visited the antenatal care (ANC) clinic. There were limited study variables for risk factors. It also did not consider the factors that affect the new infection, such as the availability of resources and the socio-economic status of the women. The study might have underestimated the prevalence of these conditions because of the lack of access to the clinic and the cost of travel. The authors have no control over the variables and information bias. The data were collected retrospectively from clinical records. The rates might have underestimated or overestimated the prevalence of syphilis and HIV as many women were missing because of inaccessibility to the ANC clinic. Loss to follow-up (syphilis follow-up rate was 77%) and confounding factors during data analysis directly influence the associated risk between HIV and syphilis seroconversion.
The READER format may be used to answer this question: Relevance to family medicine and primary care? Education – does it challenge existing knowledge or thinking? Applicability – are the results applicable to my practice? Discrimination – is the study scientifically valid enough? Evaluation – given the aforementioned, how would I score or evaluate the usefulness of this study for my practice? Reaction – what will I do with the study findings? The answer may be a subjective response but should show a reflection on the possible changes within the student's practice within the South African public healthcare system. It is acceptable for the student to suggest how their practice might change within other scenarios after graduation (e.g., private general practice). The reflection on whether all essential outcomes were considered depends on the reader's perspective (is there other information you would have liked to see?). A model answer could be written from the perspective of the family physician employed in the South African district health system: R: This study is relevant to the African primary care context, as HIV and syphilis are common health concerns during pregnancy. A better understanding of the risk factors of HIV and syphilis seroconversion during antenatal care in primary care clinics will help primary care teams and policymakers plan appropriate interventions. E: Among the pregnant women attending ANC in South Africa, the prevalence of syphilis and HIV varies depending on the province; by monitoring the prevalence and incidence of syphilis and HIV among pregnant women, as well as naming risk factors, the authors explore effective interventions to prevent seroconversion during pregnancy. There is inadequate knowledge about the transmission of these diseases among pregnant women in a Midwife-run Obstetric Unit (MOU). This study aims to inform health managers about strategies to control and treat syphilis and HIV infections among pregnant women. A: For this study, it would be possible to generalise its findings to a similar South African public sector antenatal clinic. D: On discrimination, there is a fair congruity between the research methodology, data collection methods and data analysis. There was a clear lack of inclusion and exclusion criteria for selecting study participants. (It would have been helpful to state the methodological position). Furthermore, the researchers did not describe a comparative cohort group with different exposure characteristics, although the clinical event of interest (HIV and syphilis) was well explained. E: The study's findings may be relevant when providing antenatal care services. The study findings did point to the high prevalence of HIV and syphilis among pregnant women. Furthermore, the study highlighted the role of primary care providers in the prevention of seroconversion of HIV and syphilis in a high-risk category (pregnancy or antenatal period), as well as the need for counselling and testing, antiretroviral therapy (ART) and health education to change the behaviour on prevention of HIV and syphilis among pregnant women in particular and the general population at large. It may be helpful to remind oneself of the nature of the study design and its limitations and that the study setting-related behavioural factors influence HIV and syphilis seroconversions. R: The study's findings may help manage HIV and syphilis during antenatal care in primary health care settings.
The study explained the higher rates of HIV and syphilis infection among pregnant women that lead to adverse perinatal outcomes. Effective interventions, such as testing and counselling, should be implemented in primary care to minimise the impact of these infections on pregnancy outcomes. Further reading Pather M. Evidence-based family medicine. In: Mash B, editor. Handbook of family medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 430–453. MacAuley D. READER: An acronym to aid critical reading by general practitioners. Br J Gen Pract. 1994;44(379):83–85. The Critical Appraisals Skills Programme (CASP). 2023. CASP checklists [homepage on the Internet]. [cited 2023 Feb 04]. Available from: https://casp-uk.net/casp-tools-checklists/ Goodyear-Smith F, Mash B, editor. How to do primary care research. Boca Raton, FL: CRC Press, Taylor and Francis Group; 2019.
The objective of the station: This station tests the candidate’s ability to consult with an HIV-positive patient requesting malaria prophylaxis and yellow fever vaccination for travel purposes. Type of station: Integrated consultation. Role player: Simulated patient: adult male or female. Instructions to the candidate: You are the family physician working at the consultant clinic of a large district hospital, and the nurse has asked you to consult with this 32-year-old patient. Your task: Please consult with this patient. You do not need to examine this patient. All examination findings will be provided on request. Instructions for the examiner: This is an integrated consultation station in which the candidate has 15 min. Familiarise yourself with the assessor guidelines that detail the required responses expected from the candidate. No marks are allocated. In the mark sheet , tick off one of the three responses for each competency listed. Make sure you are clear on what the criteria are for judging a candidate’s competence in each area. Provide the following information to the candidate when requested: examination findings. Please switch off your cell phone. Please do not prompt the student. Please ensure that the station remains tidy and is reset between candidates. The aim is to establish that the candidate has an effective and safe approach to counselling an HIV-positive patient seeking malaria prophylaxis and yellow fever vaccination before visiting a malaria and yellow fever endemic area. A working definition of competent performance: the candidate effectively completes the task within the allotted time, in a manner that maintains patient safety, even though the execution may not be efficient and well structured: ■ Not competent: patient safety is compromised (including ethical-legally) or the task is not completed. ■ Competent : the task is completed safely and effectively. ■ Good : in addition to displaying competence, the task is completed efficiently and in an empathic, patient-centred manner (acknowledges patient’s ideas, beliefs, expectations, concerns/fears). Guidance for examiners regarding Establishes and maintains a good clinician–patient relationship: The competent candidate is respectful and engages with the patient in a dignified manner. (Ascertains reason for the consultation and makes the patient feel comfortable while ensuring the ground for confidentiality is set) The good candidate is empathic, compassionate and collaborative, facilitating patient participation in key areas of the consultation. (Maintains this throughout the consultation) Gathering information The competent candidate gathers sufficient information to establish a clinical assessment ( Detailed relevant history including current and past medical conditions including HIV status, CD4 count, viral load, medications and risk factors upon which a decision may be made ). The good candidate additionally has a structured and holistic approach ( explores the patient’s agenda concerning individual and contextual issues ). Clinical reasoning The competent candidate identifies the reason for the consultation ( malaria prophylaxis and need for yellow fever vaccination ) and acknowledges the relevant challenges and dilemmas for this patient ( risks, contraindications ). The good candidate has a structured approach to addressing the patient’s agenda ( individual and contextual ) and considers other travel medicine options ( typhoid, diarrhoeal disease, hepatitis) . 
Explaining and planning The competent candidate uses clear language to explain to the patient and uses strategies to ensure patient understanding. For vaccination purposes, persons with asymptomatic HIV infection and CD4+ cell counts of 200/µL to 500/µL are considered to have limited immune deficits and are generally candidates for immunisation. HIV-positive persons with CD4+ cell counts less than 200/µL or a history of an AIDS-defining illness should not receive live-attenuated viral or bacterial vaccines because of the risk of serious systemic disease and suboptimal response to vaccination. This applies to yellow fever vaccination . HIV-positive adults are more prone to acquiring malaria and are at an increased risk of severe malaria and death. Malaria can worsen HIV disease progression. Therefore, the prevention of malaria is even more important in these individuals . Further, drugs used for malaria prophylaxis may interact with antiretroviral drugs and there is a general lack of data on the safety and efficacy of antimalarial regimens in patients taking antiretroviral therapy . The good candidate additionally ensures that the patient is actively involved in decision-making, paying particular attention to knowledge-sharing and empowerment, given the dilemma faced by the patient. Travellers with severe immune compromise, including those with symptomatic HIV infection and AIDS, should be strongly discouraged from travelling to destinations that present an actual risk for yellow fever. If travelling to an area at risk of yellow fever is unavoidable, these travellers should be carefully instructed in methods to avoid mosquito bites and be provided with a vaccination medical waiver . The exemption letter, signed by the physician, simply states ‘Yellow fever vaccine for “NAME” is medically contraindicated because of the following condition: [ age, pregnancy, immunocompromised status ] ’. However, international health regulations do not allow an exemption from yellow fever vaccination for travel to a country that has a vaccination requirement for entry, even for medical reasons. Thus, travellers should be warned that some countries may not accept vaccination waiver documents. If the waiver is rejected, the option of deportation might be preferable to receipt of the vaccine at the destination. For countries requiring entry vaccination, travellers must have proof that the vaccine was administered at least 10 days before entry .
The competent candidate proposes appropriate intervention. Counselling about avoiding mosquito bites (e.g., bed netting, insect repellents, permethrin-impregnated clothing). They should also be prescribed appropriate drugs for malaria prophylaxis . Daily doxycycline. The advantages of doxycycline include low cost and preventive effects against diarrhoea, leptospirosis and Rickettsia species infections. Doxycycline’s disadvantages include the need to take it daily, associated photosensitivity, the potential for gastrointestinal upset and the need to take it one day before and four weeks after exposure, or Daily atovaquone-proguanil. Advantages of atovaquone-proguanil include its safety and the need to take it only one day before and seven days after exposure; disadvantages include higher cost, the potential for headache, gastrointestinal upset, insomnia and the need to take it daily and with food . Yellow fever vaccination is not recommended because of the patient’s CD4 count . Typhoid vaccine parenteral and not the live oral vaccine – 5 years since the previous vaccine The good candidate additionally discusses the risks of travel given the patient’s low CD4 count and offers a letter of waiver that is subject to visiting countries’ protocols. The patient is counselled regarding individual and contextual issues . Indicates that Mefloquine is no longer available in South Africa Role player instructions Adult male/female. 36-year-old patient. Opening statement: ‘Doctor, I need to travel to the Democratic Republic of Congo, and I want medication for malaria and a yellow fever vaccination. Can you please help me …’. Open responses: Freely tell the doctor … I previously travelled to Zimbabwe in 2019 and I got vaccinated for typhoid and hepatitis. I also got COVID-19 vaccinations. I just need malaria prevention medication and the yellow fever vaccine. Closed responses: Only tell the doctor if she or he brings this up: You have no complaints except for a minor sore throat. You have been diagnosed with diabetes after contracting COVID-19 in 2021. You think your glucose is controlled with Metformin 850 mg three times a day. You tested positive for HIV 2 years ago and are on Tenofovir, Lamivudine and Dolutegravir (TLD). You have no other chronic conditions and no allergies. You never had any surgery. You do not smoke cigarettes and do not consume alcohol. You are a refugee living in South Africa since 2010 and have received permanent residency. You work as an Uber driver having completed over 10 000 trips. You have a South African wife who is also HIV positive and you have two children aged 4 years and 7 years. They will not travel with you to the DRC. Ideas, concerns and expectations: You are concerned that malaria medication will affect your ARV medication. You have heard from the media that malaria is very dangerous and kills you. Worried about the well-being of your wife if anything happens to you. You need to travel urgently to the DRC to attend your father’s funeral; you will not be able to live with yourself if you do not go as your father was responsible for your upbringing, protection and success in life. You graduated as a teacher in the DRC and had to flee because of the civil war in your country. You also believe that his soul/spirit will not rest if you do not perform the last rites. You expect the doctor to provide sound advice and treatment to allow you to attend the funeral. You also want to know if you should take a break from your ARV medication while travelling. 
Examination findings: Body mass index – 19 kg/m 2 . Blood pressure – 118/72 mmHg, heart rate: 86 beats/minute. Haemoglobin – 11.5 gm/dL; HbA1c – 8.1%. Random blood glucose (HGT) – 5.9 mmol/L. Urinalysis – No abnormalities. CD4 count 190/µL (6 months ago); viral load result pending. Ear, nose and throat (ENT) – Normal except for mild oral thrush. Cardio-respiratory systems – No abnormalities. Abdomen – No abnormalities. Neuro – No abnormalities. Further reading South African National Department of Health. National guidelines for the prevention of malaria, South Africa [homepage on the Internet]. Pretoria: National Department of Health; 2018 [cited 2023 Jul 18]. Available from: https://www.nicd.ac.za/wp-content/uploads/2019/03/National-Guidelines-for-prevention-of-Malaria_updated-08012019-1.pdf Centers for Disease Control and Prevention (CDC). CDC yellow book 2024: Health information for international travel [homepage on the Internet]. [cited 2023 Jul 18]. Available from: https://wwwnc.cdc.gov/travel/page/yellowbook-home Smith DS. Travel medicine and vaccines for HIV-infected travellers. Top Antivir Med. 2012;20(3):111–115.
Application of the random forest algorithm to predict skilled birth attendance and identify determinants among reproductive-age women in 27 Sub-Saharan African countries: machine learning analysis
Many studies have identified variables such as educational status, accessibility of antenatal care visits, maternal age, previous pregnancy complications, the decision on place of delivery, parity, husband's education, wealth index, access to health care, sex of the provider, area of residence, maternal occupation, costs of services, transportation availability, and distance to health facilities as significant predictors of skilled birth attendance. The Millennium Development Goal (MDG) on maternal health aimed to reduce maternal mortality by three-quarters between 1990 and 2015. The proportion of births attended by skilled health personnel is the primary healthcare indicator for monitoring progress towards this goal, which was largely achieved in Western countries. So far, considerable improvements in the health status of many populations have been reported in developing countries. Nonetheless, the MDG targets for reducing maternal and newborn mortality remain unmet in many SSA nations. The findings could inform evidence-based public health decisions to enhance skilled birth attendance, thereby reducing maternal and newborn mortality, since providing equitable maternal and child health services to improve health outcomes throughout the life course is a key priority of universal health coverage. Several studies have utilized classical statistical methods to identify the determinants influencing skilled birth attendance among reproductive-age women. Classical data analysis and associated factor identification have been accomplished through statistical models. Statistical modeling is cautious about uncertainty and requires close attention to confidence intervals and hypothesis tests. Machine learning algorithms, by contrast, are typically designed to make accurate predictions by learning from data rather than relying on prior assumptions. This approach enables algorithms to reveal hidden knowledge and patterns that may not be obvious based on prior assumptions. Machine learning models also offer the capability to interact seamlessly with digital systems, enabling organizations to leverage insights from research to address practical problems and real-world challenges. By bridging the gap between theoretical research and practical applications, ML models drive innovations and advancements in the healthcare industry. Therefore, this study aimed to address the evidence gap on skilled birth attendance by developing a predictive model and identifying the most important determinants among reproductive-age women in sub-Saharan African countries using machine learning algorithms.
Study period and setting
The study used data from recent surveys conducted in 27 SSA countries between 2016 and 2024 G.C. These countries were Angola, Benin, Burkina Faso, Burundi, Cameroon, Côte d'Ivoire, Ethiopia, Gabon, Ghana, Gambia, Guinea, Kenya, Liberia, Lesotho, Madagascar, Malawi, Mali, Mauritania, Mozambique, Nigeria, Rwanda, Senegal, Sierra Leone, South Africa, Tanzania, Uganda, and Zambia. For this analysis, the study focused on reproductive-age women by combining individual records from each country and identifying those who had at least one live birth. A total weighted sample of 198,707 reproductive-age women who had at least one live birth was included in this study (Fig. ). The data used in the study are publicly available and can be accessed at https://dhsprogram.com/data/available-datasets.cfm .
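To illustrate this pooling step, the sketch below appends two small in-memory stand-ins for country-level individual-recode files, keeps women with at least one live birth, and derives a sample weight. The column names (v005, births) and the scaling by 1,000,000 follow common DHS conventions but are assumptions here, since the authors report performing this step in SPSS and Excel.

```python
# Minimal sketch of appending country-level records and applying sample weights.
# In practice each country's file would be read from disk, e.g. pd.read_csv(...).
import pandas as pd

angola = pd.DataFrame({"v005": [1_250_000, 800_000], "births": [2, 0]})
benin = pd.DataFrame({"v005": [950_000, 1_100_000], "births": [1, 3]})

pooled = pd.concat([angola, benin], ignore_index=True)
pooled = pooled[pooled["births"] >= 1]            # women with at least one live birth
pooled["weight"] = pooled["v005"] / 1_000_000     # DHS weights are commonly stored scaled by 1e6
print(pooled)
```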
Dependent variable
The outcome of interest was skilled birth attendance. In this study, a birth was considered to have skilled birth attendance if it was attended by skilled health personnel (doctor, nurse, midwife, or auxiliary midwife); other persons included traditional birth attendants, traditional health volunteers, community/village health volunteers, and neighbors/friends/relatives. The outcome was dichotomized and coded based on the assistance at delivery (skilled birth attendant = 1, other persons = 0).
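A minimal sketch of this dichotomization is shown below; the category labels are illustrative rather than the exact DHS response codes.

```python
# Recode the birth-attendant variable into a binary outcome (1 = skilled, 0 = other).
import pandas as pd

attendant = pd.Series(["doctor", "midwife", "traditional birth attendant",
                       "relative/friend", "nurse", "auxiliary midwife"])
skilled = {"doctor", "nurse", "midwife", "auxiliary midwife"}
sba = attendant.isin(skilled).astype(int)
print(pd.concat([attendant, sba], axis=1, keys=["attendant", "sba"]))
```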
Independent variables
The independent variables in this study were chosen based on a comprehensive review of existing literature on factors affecting skilled birth attendance. These variables include place of residence, educational attainment, internet usage, wealth index, place of delivery, marital status, husband's occupation, current employment status, permission to access healthcare facilities, financial capability for treatment, distance to healthcare facilities, reluctance to seek treatment alone, and media exposure.
Data management and analysis
The data were appended and weighted using SPSS version 27 and Microsoft Excel 2019. Python 3.12 was used for further analysis. Several Python libraries were used in this study to support various stages of the analysis. Pandas and NumPy were used to manipulate data and compute numerical results. Matplotlib and Seaborn were used for visualization. The Scikit-Learn package was used for accessing and developing machine learning models, encompassing tasks such as splitting data and evaluating model performance. For model development and feature selection, the Random Forest classifier was selected as the primary algorithm. The classifier was selected for its ability to handle complex, high-dimensional datasets often encountered in healthcare settings. Unlike simpler linear models, Random Forest can capture complex relationships and intricate interactions between predictors without requiring strict parametric assumptions. By leveraging an ensemble of decision trees trained on random subsets of data, Random Forest reduces overfitting and enhances generalization, ensuring more stable and reliable predictions. Its built-in feature importance mechanism also provides valuable insights into the most influential variables, aiding interpretability and decision-making. Compared to other ensemble methods like Gradient Boosting or XGBoost, Random Forest is particularly advantageous when the primary goal is variance reduction rather than maximizing predictive accuracy on potentially overfit training data. Its bagging approach aggregates predictions from multiple trees, which is especially effective in mitigating the impact of outliers or noise (Fig. ).
The random forest classifier
Random Forest is an ensemble learning method that combines multiple decision trees to improve classification performance by aggregating the predictions of many trees. This technique reduces overfitting and variance compared to a single decision tree. It uses bagging (Bootstrap Aggregating), where each tree is trained on a random subset of the data, selected with replacement. The trees are further diversified by considering only a random subset of features at each split, which helps reduce correlation between trees. The n_estimators parameter controls the number of trees in the forest, with more trees generally improving performance but increasing computational time. The max_depth parameter limits the depth of each tree; deeper trees capture more complex patterns but may overfit. The max_features parameter determines the number of features considered for splitting at each node, with a common value being 'sqrt' (the square root of the total number of features). The criterion parameter measures the quality of the split, where 'gini' and 'entropy' are commonly used, with Gini impurity being slightly faster. The min_samples_split parameter specifies the minimum number of samples required to split a node; higher values help prevent overfitting but may reduce model complexity. Similarly, min_samples_leaf sets the minimum number of samples required to be at a leaf node, starting with 1 and adjusted based on the data. Random Forests are also robust to overfitting due to the averaging of multiple trees and can handle missing data using surrogate splits. Additionally, Random Forest has a built-in feature selection mechanism that identifies the most influential variables, providing valuable insights into their impact on the model's predictions.
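The short sketch below illustrates such a classifier and its built-in feature importances on toy data with scikit-learn; the parameter values here are illustrative defaults rather than the tuned values reported later.

```python
# Toy illustration of a Random Forest and its impurity-based feature importances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X, y)

print(np.round(rf.feature_importances_, 3))  # importances sum to 1 across features
```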
Data refinement and feature processing
Missing values were handled using K-Nearest Neighbor (KNN) imputation with k = 5 to preserve data integrity. When multiple covariates were missing simultaneously, we applied a multivariate KNN imputation approach, using information from multiple related variables to predict the missing values. This method ensured that the imputed data maintained consistency with the existing patterns in the dataset (Fig. ). Due to the imbalance in the outcome variable, the Synthetic Minority Oversampling Technique (SMOTE) with a sampling ratio of 0.23 was applied to generate synthetic samples and improve model performance. One-hot encoding was used to transform categorical variables into numerical representations, ensuring compatibility with machine learning algorithms. Additionally, normalization and scaling (Min-Max scaling between 0 and 1) were performed to standardize feature distributions, enhancing model convergence and evaluation. To further optimize the dataset, Recursive Feature Elimination (RFE) was applied to select the top 13 most important features out of the available predictors, reducing dimensionality and improving both model efficiency and interpretability. These preprocessing steps collectively enhanced the model's robustness and predictive accuracy.
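The sketch below strings these preprocessing steps together on toy data (one-hot encoding, KNN imputation with k = 5, Min-Max scaling, SMOTE, and RFE with a Random Forest estimator). The ordering shown is one reasonable reading of the text, SMOTE is taken from the imbalanced-learn package, and the toy feature names and the number of features retained are illustrative.

```python
# Illustrative preprocessing chain: one-hot encoding, KNN imputation (k = 5),
# Min-Max scaling, SMOTE oversampling and Recursive Feature Elimination.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "residence": rng.choice(["urban", "rural"], 2000),
    "education": rng.choice(["none", "primary", "secondary", "higher"], 2000),
    "distance_problem": rng.choice([0.0, 1.0, np.nan], 2000, p=[0.5, 0.4, 0.1]),
})
y = (rng.random(2000) < 0.10).astype(int)  # imbalanced toy outcome

X = pd.get_dummies(df, columns=["residence", "education"], dtype=float)          # one-hot encoding
X = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(X), columns=X.columns)  # KNN imputation
X[:] = MinMaxScaler().fit_transform(X)                                           # scale to [0, 1]

# Oversample the minority class (the ratio mirrors the value reported in the text)
X_res, y_res = SMOTE(sampling_strategy=0.23, random_state=0).fit_resample(X, y)

# RFE; the study retained the top 13 predictors, here 5 toy features are kept
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0), n_features_to_select=5)
rfe.fit(X_res, y_res)
print(list(X.columns[rfe.support_]))
```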
Data segmentation
The study used 10-fold cross-validation to ensure robust model evaluation and performance assessment. The dataset was divided into training and testing sets. Ten-fold cross-validation was used to assess and validate the model's performance by dividing the data into 10 folds. The model was trained and tested 10 times, with each test set being a separate fold and the remaining folds serving as the training set. This method reduces overfitting and yields a more reliable estimate of the model's generalization performance.
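A minimal sketch of this splitting scheme with scikit-learn's StratifiedKFold is shown below, using synthetic data in place of the study dataset.

```python
# Sketch of 10-fold (stratified) cross-validation: each fold is used once as the
# test set while the remaining nine folds form the training set.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.3, 0.7], random_state=0)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(skf.split(X, y), start=1):
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}")
```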
Model fitting and optimization
The random forest classifier was trained on the preprocessed and balanced dataset. Cross-validation ensured that each model was validated on different subsets of the training data, allowing for an accurate evaluation of performance. We applied Random Forest and optimized the following hyperparameters: n_estimators = 300, max_depth = 30, max_features = 'sqrt', criterion = 'gini', min_samples_split = 10, min_samples_leaf = 5, and bootstrap = True. These values were selected to strike a balance between model performance and computational efficiency. The chosen settings helped reduce overfitting, improved generalization, and ensured that the model was both robust and effective in capturing complex patterns. Predictions for the classifier were generated on the test set, and a custom threshold of 0.5 was applied to the predicted probabilities to classify the target variable.
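The following sketch fits a Random Forest with the hyperparameter values reported above and applies the 0.5 threshold to the predicted probabilities; the data and the train/test split are synthetic and purely illustrative.

```python
# Fit a Random Forest with the reported hyperparameters and threshold its probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.3, 0.7], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=300, max_depth=30, max_features="sqrt", criterion="gini",
    min_samples_split=10, min_samples_leaf=5, bootstrap=True,
    random_state=0, n_jobs=-1,
)
rf.fit(X_train, y_train)

proba = rf.predict_proba(X_test)[:, 1]   # probability of the positive class
y_pred = (proba >= 0.5).astype(int)      # custom classification threshold of 0.5
```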
Performance metrics
Model performance was evaluated using discrimination metrics, including accuracy, precision, recall, F1 score, and the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), to assess the model's capacity to distinguish between positive and negative instances.
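A minimal sketch of these metrics with scikit-learn is shown below, using small illustrative arrays in place of the study's held-out predictions.

```python
# Discrimination metrics named above, computed on small illustrative arrays.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
proba = np.array([0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.95, 0.55, 0.6, 0.85])
y_pred = (proba >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", round(roc_auc_score(y_true, proba), 3))
```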
Model explanation and feature selection
The link between the predictors and the outcome variable was assessed using the SHapley Additive exPlanations (SHAP) feature significance approach. The SHAP method was used to assess the impact of each feature on model predictions. SHAP analysis uses a game theory framework to offer a global or local interpretation and explanation for any machine learning model's prediction. SHAP was chosen because it provides clear and interpretable insights into how each feature contributes to model decisions, which is crucial in healthcare applications where interpretability is important.
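The sketch below shows a typical SHAP workflow for a tree ensemble (TreeExplainer followed by a summary plot) on toy data; it illustrates the general approach rather than the authors' exact code, and the shape of the returned SHAP values depends on the shap version installed.

```python
# Typical SHAP workflow for a Random Forest: TreeExplainer plus a summary plot.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)
# Older shap versions return one array per class, newer ones a single 3-D array;
# take the positive-class slice either way.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv_pos, X)
```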
Socio-demographic characteristics of study participants
In this study, we utilized a total of 198,707 individuals to examine various demographic and socio-economic characteristics. The study found that 71% of women received assistance from an SBA during their last childbirth. In terms of residence, 66.6% (132,409) live in rural areas, while 33.4% (66,298) reside in urban areas. Regarding education, 36.4% (72,210) have no formal education, 32.8% (65,222) have attended primary school, 26.5% (52,624) completed secondary education, and 4.3% (8,651) achieved higher education. Mobile phone ownership is relatively high, with 52.7% (104,725) owning a mobile phone, while 47.3% (93,982) do not. Only 17.5% (34,629) of participants reported using the internet, while 82.5% (164,078) had no internet access. Wealth distribution shows a majority in the poorer categories: 24.8% (49,205) are in the poorest, 21.5% (42,760) in the poorer, and 20.2% (40,073) in the middle. The richer group comprises 17.9% (35,511), while the richest group makes up 15.7% (31,158). Regarding maternal healthcare, 73.1% (145,403) deliver in healthcare facilities, while 26.9% (53,304) deliver at home. Media exposure is reported by 63.8% (126,828) of the population, while 36.2% (71,879) have no media exposure (Fig. and Fig. ).
Machine learning analysis
The Random Forest classifier was employed to predict skilled birth attendance, utilizing its ensemble-based architecture to achieve robust and accurate predictions. This model constructs multiple decision trees during training and aggregates their outputs through majority voting, enabling it to capture complex, non-linear interactions among features. The performance of the model is reflected in its evaluation metrics, with an accuracy of 92%, precision of 93%, recall of 96%, and an F1 score of 94%. The high precision underscores its capacity to minimize false positives, while the strong recall highlights its effectiveness in correctly identifying skilled birth attendance cases (Fig. ). The line graph in the figure presents the performance metrics across the 10 folds of the cross-validation process for predicting skilled birth attendance among reproductive-age women. It highlights that Recall (red line) consistently achieves the highest values, ranging between approximately 96% and 96.5% across all folds. F1 Score (purple line) follows closely, maintaining stable performance slightly below Recall. Precision (green line) demonstrates moderate variability, generally scoring around 93%, while Accuracy (blue line) has the lowest and slightly fluctuating values, hovering near 92%. This comparison emphasizes the robustness of the model's recall and balanced performance across the other metrics during cross-validation. The high Recall value is particularly important in this study as it ensures that the model effectively identifies a majority of cases where skilled birth attendance utilization occurs, which is critical for designing interventions aimed at improving maternal and newborn health outcomes (Fig. ). The Receiver Operating Characteristic (ROC) curve further illustrates the model's performance, with an Area Under the Curve (AUC) of 92%, demonstrating excellent discriminatory power. The Random Forest model significantly outperforms random guessing, as indicated by the ROC curve's deviation from the diagonal baseline. This strong performance is complemented by the model's ability to explain feature importance, as highlighted by the SHAP analysis, making it an effective tool for understanding the determinants of skilled birth attendance. Overall, the Random Forest classifier proves to be a reliable and insightful method for addressing maternal health challenges and guiding data-driven interventions aimed at improving access to skilled birth care (Fig. ).
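For illustration, the sketch below plots an ROC curve against the chance diagonal with matplotlib, using synthetic scores rather than the study's predictions.

```python
# Plot an ROC curve against the chance diagonal (synthetic scores for illustration).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
scores = np.clip(y_true * 0.3 + rng.normal(0.5, 0.2, 500), 0, 1)  # mildly informative scores

fpr, tpr, _ = roc_curve(y_true, scores)
plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_true, scores):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--", label="Chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```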
Determinants of skilled birth attendance
In this study, we employed SHAP analysis with the random forest classifier. This methodology enabled us to identify the significant predictors of skilled birth attendance and improved the interpretability of their influence. The SHAP summary plot highlights the contributions of various factors to the likelihood of skilled birth attendance. Among the predictors, the place of delivery emerges as the most significant determinant. Institutional deliveries strongly increase the probability of skilled birth attendance, while home deliveries decrease it. Similarly, place of residence plays a critical role, with urban settings being associated with higher access to skilled birth attendance compared to rural areas. Socioeconomic factors, such as education level and wealth index, exhibit a strong positive influence, indicating that higher education and wealth levels are associated with increased access to skilled care during childbirth. Conversely, lower levels of these factors negatively impact this likelihood. Access to information through internet usage and media exposure further enhances the chances of skilled birth attendance, suggesting that improved communication and awareness play pivotal roles. Barriers to healthcare access, such as getting permission to visit a health facility, obtaining money for treatment, and not wanting to go alone, were found to negatively influence skilled birth attendance. In addition, distance to health facilities is another critical barrier, with longer distances reducing the likelihood of accessing skilled care. Ownership of a mobile phone and current employment status exhibit mixed but generally positive effects, likely reflecting the indirect benefits of economic empowerment and connectivity on healthcare access. Husband's occupation shows a modest influence. Marital status, although a relatively minor factor, still provides slight variations in its impact on skilled birth attendance. In summary, the analysis underscores that reducing systemic barriers, enhancing socioeconomic conditions, and improving access to healthcare facilities and information are crucial for promoting skilled birth attendance, especially in rural and underserved communities (Fig. ).
Using the recent DHS dataset, this study aimed to predict skilled birth attendance and identify its determinants in the Sub-Saharan African region. The Random Forest classifier demonstrated strong predictive performance in identifying skilled birth attendance, achieving an accuracy of 92%, precision of 93%, recall of 96%, F1 score of 94%, and an AUC of 92%. The high recall value, which ensures that most cases of skilled birth attendance are correctly identified, is particularly critical for designing maternal health interventions. These findings align with previous studies that have utilized machine learning models to predict maternal healthcare utilization. For instance, a study conducted in Ethiopia that employed a machine learning approach to predict skilled birth attendance reported a Random Forest classifier accuracy of 85% and an AUC of 95%. Our study surpasses these results, likely due to a larger dataset (198,707 women across 27 SSA countries) and enhanced feature selection using recursive feature elimination. The higher performance in our model may be attributed to advanced data preprocessing techniques, such as SMOTE for class imbalance, KNN imputation for missing values, and Recursive Feature Elimination (RFE) for feature selection. Moreover, our model's SHAP analysis provided deeper insights into key determinants of skilled birth attendance. Our results highlight that facility delivery, urban residence, maternal education, wealth index, and media exposure are the most influential factors. However, our model additionally emphasizes the role of internet usage and healthcare access barriers (e.g., distance to health facilities, financial constraints, and permission issues), which are often overlooked in traditional analyses. The study found that 71% of women received assistance from an SBA during their last childbirth. This finding is concordant with studies from Togo, which reported 66.7%, and East African countries, 67.1%. Nonetheless, this finding was much higher than in studies conducted in Cambodia (19.8%), Bangladesh (35.9%), Ethiopia (28.6%), Nigeria (13%), and Nepal (48%). This disparity might be related to recent enhancements in healthcare accessibility, particularly for skilled delivery services, which can be largely credited to intensified community mobilization efforts. Besides, the inauguration of the Health Development Army (HDA) initiative has shown progress: the HDA has increased interest and trust in these healthcare offerings by engaging and educating communities about the benefits of skilled delivery services. On the other hand, this study reported lower SBA coverage than a study from Namibia (80.3%). That high SBA prevalence could be due to effective government policies, improved healthcare infrastructure, educational campaigns, and community mobilization efforts, which together make skilled delivery services more accessible and trusted, resulting in higher utilization rates nationwide. In our SHAP analysis, the utilization of SBA during childbirth is significantly determined by various predictors. These include the place of delivery, place of residence, mother's educational status, women's wealth index, media exposure, use of internet access or having a mobile phone, distance from home to a health facility with SBA, husband's participation, and marital status. Our SHAP analysis also underscored that women's geographical location is a highly significant predictor of obtaining SBA at birth.
The SHAP summary plot indicated that living in a rural area had a stronger negative influence on accessing SBA than living in an urban region. This finding was supported by studies conducted in Bangladesh , Ghana , South Sudan , Namibia , and Cameroon . This similarity can be explained by differences in accessibility and infrastructure between urban and rural areas. Women in urban areas benefit from closer proximity to healthcare facilities providing skilled delivery services and more readily available transportation options. In contrast, rural areas often lack adequate transportation services, road infrastructure, and telecommunication, hindering free ambulance services. Besides, urban residents often have a higher economic status and better infrastructure, education, and health facilities, contributing to inequalities in delivery by skilled attendants . Moreover, rural women are more affected by traditional practices that conflict with modern healthcare. Our SHAP analysis demonstrated that women’s educational status was a significant predictor of obtaining SBA at birth. The SHAP summary plot indicated that mothers with higher education levels were more likely to access SBA than their less-educated counterparts. This finding is in agreement with various studies conducted in Ethiopia , Nigeria , Nepal , Bangladesh , and Vietnam . This similarity could be due to women with higher education levels possessing better health literacy, greater access to healthcare information, and improved economic opportunities, allowing them to afford healthcare services and transportation. They also have more autonomy in healthcare decisions and a stronger awareness of their rights, which empowers them to seek skilled care. In addition, education helps challenge cultural barriers, further promoting the utilization of SBA . Our SHAP analysis also observed that women’s wealth index significantly influences the likelihood of accessing SBA. The summary plot indicated that mothers in higher wealth index categories were more likely to obtain SBA than their counterparts. This finding is in line with different studies from South Sudan , Uganda , Kenya , Togo , Ghana and Bangladesh . This highlights the impact of mothers’ financial constraints on healthcare utilization, which endangers maternal well-being and further affects the postnatal care of mothers and children. Although maternal health services in SSA, including ANC and delivery services, are free, residual costs such as travel and supplies can discourage poor women from obtaining these services . In support of this claim, a mixed-methods study conducted in other regions of Ethiopia identified travel costs as a major barrier to using SBA . Our SHAP analysis also highlighted that women without mass media exposure were less likely to obtain SBA. The summary plot showed that women with internet access, television, or cell phones were more likely to utilize SBA than women without these resources, consistent with studies conducted in Ghana , Malawi , Cameroon , and Ethiopia . Furthermore, a study conducted in Guinea found that women who watched television at least once a week were more likely to utilize SBA . This similarity is possibly because mothers with media exposure, cell phones, and internet access become knowledgeable about the major obstetric danger signs during delivery, which is a key determining factor in seeking SBA at birth.
Hence, mothers may feel anxious about potential obstetric complications after giving birth. To alleviate this anxiety, they prefer the assistance of SBA during childbirth. This is further supported by a study conducted in SSA countries, which suggested that mass media can disseminate information while fostering interpersonal communication that can facilitate behavioral change . Our SHAP analysis showed that husbands’ participation promoted the utilization of SBA for women. Studies have shown that husbands involved in birth preparedness and complication readiness are more likely to support their wives in accessing skilled care during childbirth. For example, studies conducted in Ethiopia , Nigeria , and Myanmar found that husbands’ involvement was significantly associated with the utilization of SBA services. This study also found that women accompanied by men were more likely to access SBA, consistent with a study conducted in Nigeria . Rather than going alone, women often prefer the presence of their husbands when seeking SBA services, because their partner provides emotional support during labor, can advocate for proper care, helps cover costs, and participates in decision-making about medical interventions. Moreover, cultural norms in some societies expect husbands to accompany their wives during childbirth. Thus, these aspects together create a more positive childbirth experience and promote the utilization of SBA . Our SHAP analysis also indicated a strong association between the distance of women’s residences from health facilities and their likelihood of accessing SBA. For example, women who had to travel more than a two-hour walk to reach a healthcare facility were less likely to obtain SBA than those within an hour’s walk. This finding agreed with various studies from Nepal , Kenya , Ethiopia , and Pakistan . This resemblance might be because mothers who live far from health facilities often prefer to give birth at home without SBA assistance and only seek medical help if labor at home fails, since walking is the primary mode of transportation in the study area. Despite the availability of motorcycles, they are unsuitable for laboring women. Consequently, women walk or are carried on beds by others (i.e. cultural ambulance), which prolongs the journey to health facilities. Our SHAP analysis showed that women’s marital status was a comparatively modest predictor of SBA use. The summary plot indicated that married women were more likely to utilize SBA than unmarried women. This finding was consistent with studies in Nigeria, Malawi and Ethiopia . A possible justification is that married women have greater financial resources and support, allowing them to afford the costs associated with skilled maternal health care. With their partners’ support, they are more likely to access and utilize these essential services, ensuring better outcomes for both mother and child . Strengths and limitations This study utilized a large weighted dataset of 198,707 women from 27 Sub-Saharan African countries, increasing the generalizability of its findings on skilled birth attendance. By leveraging advanced machine learning techniques, it detected complex, non-linear relationships often overlooked by traditional statistical methods, while SHAP analysis was employed to evaluate the relative importance of each predictor, providing actionable insights.
Moreover, the study bridges the research-practice gap by emphasizing practical and effective solutions for improving real-world health care services. Nonetheless, reliance on self-reported data may introduce response bias, and the cross-sectional study design limits causal inference. Variations in survey years may influence the significance of skilled birth attendance predictors in specific countries, and the absence of localized factors could limit the applicability of the findings to certain demographic groups. Conclusion and recommendation This study underscores the power of machine learning, particularly the Random Forest classifier, in predicting skilled birth attendance and uncovering key determinants across 27 Sub-Saharan African countries. With an AUC-ROC of 92%, recall of 96%, accuracy of 92%, precision of 93%, and an F1 score of 93%, the model demonstrates exceptional predictive capability in analyzing complex maternal health data. Critical determinants of skilled birth attendance include facility delivery, maternal education, wealth index, urban residence, internet access, media exposure, and healthcare accessibility. Women from urban, wealthier, and more educated backgrounds had higher probabilities of receiving skilled birth attendance, while geographical, financial, and decision-making barriers significantly hindered access. To enhance skilled birth attendance rates and reduce maternal mortality, targeted interventions should focus on expanding healthcare infrastructure in rural areas, strengthening maternal education programs, eliminating financial constraints, leveraging mass media for awareness, fostering male involvement in maternal health decisions, improving transportation and healthcare accessibility, and integrating machine learning models for data-driven public health strategies. By implementing these evidence-based solutions, policymakers and healthcare providers can drive meaningful improvements in maternal and neonatal health outcomes across Sub-Saharan Africa.
|
Cross Sectional Survey of Antenatal Educators’ Views About Current Antenatal Education Provision | e1be4086-176f-4459-a792-b47dff159c43 | 11358166 | Patient Education as Topic[mh] | Antenatal education (ANE) is mandated by the National Institute for Health and Care Excellence (NICE) and is seen as an important part of care to prepare people for childbirth and the immediate postnatal period. It is currently offered to women/birthing people (referred to as women henceforth) and their partners by the National Health Service (NHS) (Gokce Isbir et al., , ; Guidance, ; Kacperczyk-Bartnik et al., ). In the United Kingdom (UK), whilst the content is not well defined, NICE guidelines recommend a focus on childbirth, breastfeeding and immediate postnatal support. Antenatal education is one way in which women and their birth partners can be prepared for birth (Gokce Isbir et al., , ; Kacperczyk-Bartnik et al., ). The “Ready for Child program” found that women who attended antenatal classes reported a more positive birth experience (Maimburg et al., ). This is particularly important as one in three women experience some form of birth trauma (Alcorn et al., ); factors which may contribute to this include postpartum haemorrhage, emergency caesarean and admission to the neonatal intensive care unit. Good quality birth preparation may be key in managing the mental health risks of traumatic delivery, by preparing women for different eventualities of childbirth and enabling them to develop coping strategies to manage pain and deal with changes during the course of their delivery (Alcorn et al., ). Traumatic childbirth experiences have been associated with postpartum mental health problems, including depression and post-traumatic stress disorder (PTSD) (Ayers & Pickering, ; Taheri et al., ). Poor mental health during the postnatal period has significant consequences including attachment disorders and a reduction in breastfeeding rates (Beck & Watson, ; Nilsson et al., ). Antenatal education aims to increase birth preparedness through a two-pronged approach. Firstly, it improves overall understanding of childbirth and the likelihood of requiring intervention (Shub et al., ). Secondly, it aims to equip individuals with coping strategies, both pharmacological (i.e. pain relief) and non-pharmacological, to cope with pain and emotional distress during birth and labour (Green et al., ). There is a wide variety of non-pharmacological coping strategies that can be used by women to promote coping with childbirth, including but not limited to TENS, aromatherapy, acupuncture, massage, hypnosis, labour support, reflexology and labour positions. Supporting women to use these in labour is recommended as part of care within the NICE guidelines, and trials provide evidence for their use (Kimber et al., ; Liu et al., ). However, there is a lack of high-quality research about both which specific coping strategies women find most effective in labour and what opportunities women have to learn and develop these skills (Beverley Griggs, ; Brixval et al., ; Levett et al., ; Prasertcharoensuk & Thinkhamrop, ). Approximately 65% of women in the UK are offered NHS antenatal education classes and 23% of primiparous women attend non-NHS classes (Beverley Griggs, ). There is currently no available guidance about the content or quantity of antenatal education that should be delivered by the NHS to patients, or on how classes should be delivered (Guidance, ).
Detailed information about the content of current antenatal education provisions and the variation between providers is not available (Svensson et al., ). Antenatal education is delivered by a variety of educators. In the NHS it is often delivered by community midwives and physiotherapists. However, in the private sector there is a greater variety of antenatal educators; some are clinically trained, some are trained by private organisations [e.g. National Childbirth Trust (NCT)] and others provide specific education around skills (e.g. yoga, hypnobirthing). What many of these educators have in common is that they arrange, deliver and advocate for antenatal education. Collectively, this group of people has a wealth of experience and an excellent overview of antenatal education provision in the UK. This study aims to understand the extent to which educators perceive that current antenatal education supports women to be prepared for childbirth, and to identify how they believe the quality of this education can be improved to support women in developing coping strategies.
A cross-sectional, UK-wide, online survey was conducted between October 2019 and May 2020 to describe antenatal educators’ perspectives regarding current antenatal care . Each topic covered had both multiple-choice and open-ended questions. We asked questions about: respondent demographic characteristics and job role; antenatal educator perspectives on the adequacy of current antenatal education provisions and the impact of antenatal education on birth preparedness; feedback that antenatal educators have received from women about current NHS antenatal education classes; accessibility of teaching on coping strategies and antenatal educator views about these; and antenatal educators’ views on the ideal structure and content of antenatal education classes. Survey data were collected and managed using REDCap electronic data capture tools hosted at the University of Bristol (Harris et al., , ). Table shows the topics discussed and themes elicited from these. Antenatal educators were purposively sampled. For purposes of anonymity, the organisation of respondents was not recorded; however, organisations targeted included NCT educators, hypnobirthing practitioners and NHS midwives. Antenatal educators were identified via an internet search to discover groups of interest and influential individuals, for example heads of community midwifery services, and had to be currently delivering antenatal education either within the NHS or through private practice. Organisations were screened to ensure they were operating in the UK, and were contacted by email with a link to the survey. Antenatal educators were given the option to complete the survey online or through a telephone interview; however, the majority of information was collected via the online survey. Sampling was conducted for diversity across antenatal educator groups in terms of the type of deliverer or type of ANE delivered. Respondents’ data were used in the analysis if they had completed at least one survey item other than their demographic characteristics. Data from the multiple-choice questions were subjected to quantitative analysis. Percentages were calculated using the count as the numerator and the total number of participants who answered that question as the denominator, such that the denominator varied according to question. Qualitative data derived from the open-ended questions were exported into NVivo for analysis (QSR International ). Thematic data analysis involved an iterative process of reading and re-reading questionnaire responses whilst open coding for words and phrases, followed by assigning words and phrases into clusters and then further assigning them to super-ordinate themes (Attride-Stirling, ). As a validity check, two further authors independently read the questionnaire responses and identified no additional themes. The themes mentioned by the greatest number of antenatal educators were prioritised in the analysis. Patient representatives were involved in the study steering committee and inputted into the initial study design.
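As a concrete illustration of the percentage calculation described in the methods above (count as the numerator, number of respondents who answered that particular question as the denominator), a minimal sketch in Python/pandas might look as follows. The column names and response labels are hypothetical and are not the actual survey items.

```python
# Hedged sketch: per-question percentages where the denominator is the number of
# respondents who answered that particular question (missing answers excluded).
# Column names and response labels are hypothetical, not the actual survey items.
import pandas as pd

responses = pd.DataFrame({
    "nhs_classes_prepare_for_birth": ["no", "yes", "no", None, "no"],
    "heard_feedback_from_women":     ["yes", "yes", None, "yes", "no"],
})

for question in responses.columns:
    answered = responses[question].dropna()        # denominator varies by question
    percentages = 100 * answered.value_counts() / len(answered)
    print(question, percentages.round(1).to_dict())
```

Dropping missing answers per question, as above, is what makes the denominator vary from item to item rather than being fixed at the total number of survey respondents.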
Survey invitations were sent to 478 antenatal educators; both individuals and organisations. Ninety-nine participants responded, a response rate of 20.7%, with 94 complete and five partially complete responses. There was representation of educators across England, Scotland and Wales. Twenty-five of the survey respondents were qualified midwives and 21 were independent antenatal practitioners. Adequacy of Current Antenatal Education Provision to Prepare Women for Labour and Birth 62% of antenatal educators stated that they did not believe NHS antenatal classes prepared women for labour and birth. In the free-text responses to this question the most commonly mentioned themes elicited were: barriers to accessibility of classes; limited class content; and lack of information tailored to individuals. Barriers to Attending Classes Many antenatal educators described barriers to accessibility of classes as a reason why NHS antenatal education classes do not adequately prepare women for labour and birth. Barriers to accessibility included lack of availability of classes, midweek and daytime classes not being accessible for women with childcare or work constraints, only primiparous women being invited to attend, and language barriers whereby classes are only offered in English. Specific examples from antenatal educators included “women are not advised to sign up for the antenatal course until they are 30 weeks pregnant, but by this time the antenatal course is already full” [antenatal educator]. One midwife commented that “having worked in the NHS I do not believe there is the time, nor the resources to adequately prepare women, especially when there are language barriers”. Limited Class Content Many antenatal educators described NHS antenatal education classes as having limited breadth and depth of information with significant emphasis on medical intervention. One private antenatal education teacher described NHS classes as being “biased towards a medical model of birth.” Time constraints were cited as a contributing factor to limited class content. An NHS midwife commented that “the duration of classes presents a barrier for meaningful discussion.” Lack of Individualised Information Many antenatal educators believed there was a lack of individualised, parent-centred care in antenatal education classes meaning that parents were not aware of all of their options in order to make an informed decision: “Very few women realise the choices they have around the care they are given before, during, or after birth.” Additionally, one antenatal educator described NHS antenatal education classes as a “tick-box exercise” whereby information given is standardised and another educator commented “more individualised person-centred support is needed”. Feedback Received from Women About Current NHS Antenatal Education Classes Seventy-nine antenatal educators (80%) had heard feedback from women about their experiences of NHS antenatal classes. 24% of these commented that women had fed back they didn’t feel adequately prepared for birth. Commonly mentioned themes in feedback received were: the teaching style and quality; and class resources including timings and class size. Teaching Style and Quality Several antenatal educators said that women felt classes had inadequately prepared them for birth. Many antenatal educators had received feedback about the teaching style in classes. 
Women felt they would prefer a more interactive and engaging teaching style as opposed to a lecture-based class and that classes needed to be smaller to help with engagement. One private antenatal educator received feedback that “the style was lecture-like rather than exploring what people thought—but this is understandable given the lack of time available.” Additionally, women felt they had few practical techniques to help with birthing: “Women usually say there is a lot of information given and classes do not provide practical techniques.” Additional feedback alluded to the variability in the quality of teaching between midwives. For example, one piece of feedback commented: “they (midwives) often receive little to no training in teaching or facilitating adult learning. I hear often from parents that the delivery of the information could be greatly improved.” However, other feedback suggested that women valued being taught by midwives: “it was good to hear from the midwives’ experience and opinion.” Class Resources Including Timings and Class Size Several antenatal educators received feedback that classes had too many people and were too short. One antenatal educator commented that due to the “lack of time available” it was difficult to facilitate discussions and individualise teaching. Another piece of feedback a midwife received was “the sessions were rushed and crowded. That the experience was very much like school, you sat and listened” and another commented “most (women) say the classes were too big, they were too short to get any detailed information and only covered normal birth. [as opposed to medical interventions such as instrumental delivery or caesarean sections]”. Teaching Coping Strategies to Support Labour and Birth A total of 94 antenatal educators responded to the questions addressing the use of antenatal education to equip women with coping strategies for labour and birth. 55% of these antenatal educators believed the opportunity for women to learn about coping strategies for labour and birth varied between location and educators. 35% of antenatal educators believed women did not have adequate opportunity to develop these skills. In the free text response to this question barriers to developing coping strategies were acknowledged with themes relating to: affordability of private provisions; and time constraints. Affordability of Private Provisions as a Barrier to Developing Coping Strategies The most frequently mentioned barrier to developing coping strategies was affordability of private classes which was mentioned by 32% of stakeholders. Many antenatal educators commented that private birth preparation classes addressed a greater variety of coping strategies than NHS classes but were not always affordable. For example, some antenatal educators, both private and NHS, felt that little time was spent on developing non-pharmacological coping strategies in NHS classes. One hypnobirthing instructor commented that “where women can afford private classes, they have the opportunity to develop coping strategies. 
Where they cannot afford private classes (the majority of women) they are unable to get the information that they need.” Time Constraints as a Barrier to Developing Coping Strategies Time constraints were cited as another barrier to developing strategies with one private midwife saying “I don’t think there is enough time in NHS classes to do this, as there is so much to cover in such a short timeframe.” How Best to Support Women to Develop Coping Strategies Antenatal educators were asked what they thought was the best method to support women to develop coping strategies in the antenatal period. Themes discussed were increased practice of coping strategies throughout the antenatal period; and the roles of health care providers in enabling parents to develop these strategies. Antenatal educators suggested approaches to facilitate increased practice of coping strategies included introducing them earlier in pregnancy and increasing the frequency of practicing coping strategies both in a home environment and healthcare setting. Antenatal educators also mentioned that the actions of healthcare providers played an important part in preparing women to use coping strategies during labour and birth for example, the non-biased presentation of available coping strategies regardless of the attitudes of educators towards the coping strategy. One private midwife said “women have to be given all of the tools in the toolbox and then they have to learn which ones are their favourite and become familiar with them. They need to know and be educated about them all.” Supporting the Utilisation of Coping Strategies How Healthcare Professionals Can Support Women to Use Coping Strategies During Labour and Birth Antenatal educators described the need for: a parent-centred, individualised approach; Continuity of Care (COC) and knowledge of healthcare professionals about a wide range of coping strategies. A parent-centred approach, whereby healthcare professionals are aware of a woman’s birth plan and preferences was the most commonly mentioned theme. This encompasses where women want to labour, whether they want to have a vaginal delivery or caesarean section and their analgesia preferences. COC was cited as a useful strategy to help with parent-centred care with one hypnobirthing teacher suggesting COC is important so that “professionals can understand a woman’s preferences”. Many antenatal educators felt that the healthcare professional having a wide range of knowledge surrounding non-pharmacological and pharmacological coping strategies is also important. The Most Useful Strategies to Support Coping in Labour When asked which coping strategies antenatal educators thought women find most useful, physical movement was perceived as the most helpful and yoga least helpful (Fig. ). We provided the opportunity to suggest additional coping strategies. Antenatal educators suggested: self-care strategies (rest, nutrition, and hydration), aromatherapy and distraction therapies. Birth Partners and Coping Strategies Forty-seven antenatal educators (50%) believed that birth partners do not have the opportunity to learn about coping strategies to support women and 59 antenatal educators (63%) believed that there is not opportunity for birth partners to develop coping mechanisms to support themselves. Structure and Content of Antenatal Education Classes Antenatal educators were asked how many hours of antenatal education are needed and realistic within the NHS budget to enable birth preparation. 
19% of antenatal educators thought that up to 4 h of education was required, 33% thought that up to 5 h was needed and 29% suggested that more than 5 h were required. With regard to topics that are essential to include in NHS or private classes, there was variation in what antenatal educators thought were priority topics. Positions for labour and choice of birth location were seen as the most important topics to cover in NHS classes. Meanwhile, hypnobirthing techniques, breathing techniques and a focus on awareness of choice with regard to birth plans or preferences were seen as the most important topics to include in private classes (Fig. ). Antenatal educators suggested that the most appropriate healthcare professionals for women to hear from during classes were community midwives, and the least appropriate were consultant obstetricians (Fig. ).
Antenatal educators have highlighted that the current NHS antenatal education provision is inadequate for preparing women for labour and birth. They described practical barriers (e.g. timing, availability, language barriers) and barriers to quality (e.g. limited time, lack of individualised, parent-centred care, poor delivery/teaching preparation). Their assessment is that coping strategies are not taught widely throughout NHS antenatal classes but are often taught by private providers, which creates inequality in provision. Despite these concerns, antenatal educators believed that different topics should be covered in NHS and private classes. Our study found practical barriers to the accessibility of NHS antenatal classes. Greater focus on planning the classes to suit the needs of individuals and their partners is needed to improve this. This may include class times outside of the working day, online classes, widening access to multiparous women for whom classes are sometimes not available, and making classes accessible for people who do not speak English. In addition, antenatal educators believed that there are inequalities in access to antenatal education provision (both NHS and non-NHS). This was also seen in an antenatal education care review which found that women in the most and least deprived quintiles and Black, Asian and Minority Ethnic (BAME) communities were less likely than other women to have been invited to attend classes (Beverley Griggs, ). Morton and Simkin report that “health maintenance for all not just for the richest” is an important part of respectful maternity care (Morton & Simkin, ). This is particularly emphasised by the fact that women who are least likely to access private classes (young people, BAME groups, lower socioeconomic classes) are also the people who are at greatest risk of morbidity and mortality (Beverley Griggs, ), and so are perhaps the people whom the NHS needs to target the most when considering antenatal preparation for birth. One suggestion could be the training of educators from the same communities as women who are least likely to attend, so that antenatal education can be delivered in the native language of people attending the class; this would also mean that the educator could have better knowledge and cultural understanding of the challenges that may affect these specific groups. In addition to the discrepancy in the quality of information delivered between NHS and private providers, and the lack of adequate opportunity to develop coping strategies, antenatal educators were also concerned that NHS ANE covers a narrower range of topics and is more medically focussed. This is likely secondary to the barrier of lack of time; for example, for a woman to be taught hypnobirthing and to have the opportunity to practice it requires more time. Access to a narrower range of topics may lead to a woman, especially one who can only afford to access NHS classes, being less aware of her options during labour and feeling less involved in the decision-making process. A lack of involvement in decision making during labour has been shown to contribute to negative birth experiences including poorer postnatal mental health outcomes (Elmir et al., ; Olde et al., ; Thomson & Downe, ). A standardised curriculum may improve the accessibility of a wider range of topics in a more time-efficient way.
While it is acknowledged that there is not currently funded time to deliver additional content, it may be possible to reduce this inequality by highlighting basic resources to women within the standardised curriculum. We found that antenatal educators believe NHS class sizes to be too large, creating a barrier to meaningful discussion. Other research has also shown that large class groups create an obstacle to a participatory educational approach (O’Sullivan et al., ). To improve how classes are planned and structured, further research should involve speaking to pregnant and recently post-partum individuals who have recent lived experience of attending antenatal classes, to find out their specific needs and allow them to voice suggestions for the shape of educational opportunities within their journey. As such, a standardised curriculum can be devised which prioritises the content that pregnant women find most important, in a time-efficient way that is useful for the patient. Results from our study suggest that there is variation in class facilitation skills between educators, which may further drive inequalities in antenatal education provision across the UK. This was similarly found in an observational study by Cutajar and Cyna, which identified variability between midwives in the content and time taken for information delivery in antenatal classes (Cutajar & Cyna, ). Within the public sector, there is no specific training in this area and community midwives may be required to deliver education as part of their job role, whether they are accomplished class facilitators or not. This may be improved by having specific midwives trained to deliver education rather than it being the role of every midwife by default; additionally, educators should deliver education on topics about which they are confident. For example, labour ward midwives or consultant obstetricians may be better suited than community midwives to deliver education around medical interventions such as the use of forceps. Strengths and Limitations To the authors’ knowledge, no other studies have examined the perspectives of antenatal educators on antenatal education in the UK. Strengths of our study included a mixed-methods approach, which allowed us to explore further the reasons for participants’ responses. Additionally, we received responses from both NHS and non-NHS antenatal educators who deliver a broad variety of antenatal classes, with good representation from across England, Scotland and Wales. A wide range of professionals are involved in the delivery of ANE, and we believed it was important that all of their voices were heard. The paper also covers what is included in ANE; it does not, however, consider the practicalities of how the hospital is prepared to support the woman in her choices during labour and birth. We acknowledge that the support from different professionals may vary; however, within the NHS there is much support from the midwifery community for using a wide range of coping strategies (Merriel et al., ). Additionally, there are other topics which may be important to cover in ANE that are not currently included in the broad guidelines for antenatal education provided by NICE. These could include respectful maternity care and addressing issues such as intimate partner violence; further research to design a standardised curriculum may consider including these topics. The response rate of 21% to our study was not high.
However, this is in keeping with response rates for other online surveys, and even with a higher response rate, we feel there would have remained an element of self-selection whereby our research is more likely to reflect those who are more motivated to be involved with childbirth education (Hendra & Hill, ; Iversen et al., ; Khazaal et al., ). The study only takes in the views of antenatal educators, not people attending classes. We accessed both NHS and private providers because we believed it was important to hear about provision in both sectors.
Antenatal educators believe current antenatal education provision in the UK does not adequately prepare women for labour and birth. There is an inequality in the level of information, quality of delivery and accessibility of ANE between NHS and private providers. To reduce this healthcare inequality, a standardised minimum curriculum for NHS classes is needed and training on delivery of this education for midwives needs to be enhanced so that high-quality education is available to everyone. Future research should investigate the views of pregnant women and their partners about current antenatal education provisions in the UK to establish their needs and define a high-quality, evidence-based NHS curriculum.
Functional Identification and Genetic Analysis of O-Antigen Gene Clusters of Food-Borne Pathogen Yersinia enterocolitica
The main chromosomal virulence genes include the following: ail, encoding the protein which plays a key role during attachment and invasion processes, and which also confers serum resistance; inv, whose product is required in the early phase of infection; myfA, encoding a fibrillar subunit of Myf, an important factor at the beginning of infection; and yst, encoding an enterotoxin. In addition, several genes located within a virulence plasmid, pYV, are directly involved in the pathogenicity of Y. enterocolitica, including yadA, encoding Yersinia adhesin, and the yop virulon genes, encoding a type III secretion system (T3SS), namely, Ysc-Yops, translocator YopB/D, control element YopN, and effector YopE/H/M/O/P/T. Normally, high-virulence strains of biotype 1B and low-virulence strains of biotypes 2–5 carry the chromosome-encoded virulence markers ail, inv, and ystA, in addition to pYV, for their full virulence expression. In contrast, nonvirulent strains belonging to biotype 1A lack pYV-encoded virulence factors, but mainly possess ystB, another type of yst gene. Many studies have shown that Y. enterocolitica strains of different biotypes exhibit disparate pathogenic properties and virulence profiles. However, the link between serotypes and virulence gene distribution in Y. enterocolitica has not yet been reported. During routine epidemiological surveillance, a Y. enterocolitica strain, numbered WL-21, was isolated from the stool sample of a chicken by the Shandong Centre for Disease Control and Prevention. A further agglutination test using antisera (provided by the Chinese Center for Disease Control and Prevention) showed that WL-21 was serotype O:10, and a biotyping test according to a previous study showed that this isolate belonged to biotype 1A. Because O:10 is an uncommon serotype, neither its O-AGC nor its virulence pattern had previously been characterized. The first objective of the present study was to characterize the O-AGC of WL-21 and certify its role in O-antigen synthesis. Secondly, through comprehensive in silico analysis of the O-AGCs, we produced a blueprint that may contribute to elucidating the origin and evolution of O-AGC in Y. enterocolitica. Finally, we sought to recognize any interactions between O-antigen and other virulence factors in the pathogenicity of Y. enterocolitica.
Bacterial Strains, Plasmids, and Growth Conditions Details of WL-21 and its derivatives, plasmids, and primers which were used in this study are given in . All strains used for sequencing and gene manipulation were cultured in Luria–Bertani (LB) medium at 30°C. When necessary, the cultures were supplemented with chloramphenicol (25 μg/ml). LB medium (Cat. no. R20214) and chloramphenicol (Cat. no. S17022) were purchased from YuanYe Bio-Technology Co., Ltd. (China). The DNA extraction kit (Cat. no. DP302-02) was purchased from Tiangen Biotech Co., Ltd. (China). High-Fidelity DNA Polymerase (Cat. no. M0530S), restriction enzymes (Cat. nos. R3142V and R3104V) and T4 DNA ligase (Cat. no. M0202V) were purchased from New England Biolab (USA). Sucrose (Cat. no. A610498), L-arabinose (Cat. no. A610071), and reagents for LPS preparation and sodium dodecyl sulphate–polyacrylamide gel electrophoresis (SDS-PAGE) were purchased from Sangon Co., Ltd. (China). Primers were synthesized by Genewiz (China). Genome Sequencing and Annotation The genomic DNA of Y. enterocolitica strain WL-21 was extracted from 5 ml of the overnight bacterial culture. Then, the genomic DNA was sheared, polished, and prepared using the Nextera XT DNA library prep kit (Illumina, USA). Genomic libraries containing 500-bp paired-end inserts were constructed, and sequencing was then performed using Solexa sequencing technologies (Illumina) to produce approximately 100-fold coverage. The obtained reads were assembled using the de novo genome assembly program Velvet to generate a multi-contig draft genome . For gene prediction and annotation, Prokka (v1.12) was used , and for additional annotation, the assembled sequences were searched against GenBank non-redundant (NR), UniProt, and Pfam database using BLAST (v2.5.0+) . The TMHMM v2.0 analysis program ( http://www.cbs.dtu.dk/services/TMHMM-2.0/ ) was used to identify potential transmembrane domains within the protein sequences. The sequence data were submitted to GenBank under accession number PP132002. Meanwhile, the genome sequence of a strain IP2222 (O:36) was downloaded from GenBank (GCA_000285015.1) and annotated using the above procedures. Construction of Plasmids and Strains Gene deletion in the WL-21 chromosome was performed according to a two-step homologous recombination with pRE112 containing the sacB counter-selectable marker, as described previously . The upstream and downstream fragments of the wzm gene were amplified from the WL-21 genome using primer pairs upF/upR and downF/downR. Next, the two fragments were combined with each other using primers upF and downR, followed by fusion with the linearized pRE112 to yield the suicide plasmid pRE112-updown, which was introduced into the S-17 strain by electroporation, generating the donor strain S-17-pRE112-updown. For conjugation, S-17-pRE112-updown and the recipient strain WL-21 were grown at 30°C with shaking overnight. Then, the cultures were diluted as a ratio of 1:100 into fresh LB medium and incubated at 30°C with shaking until the optical density at 600 nm (OD 600 ) reached approximately 0.6. Next, the donor and the recipient strains were mixed at a ratio of 3:1 (v/v), and the mixture was resuspended in 100 μl of LB medium and was spotted on a LB plate with incubation at 30°C for 48h. After conjugation, the cells were collected and transferred to chloramphenicol containing LB plate to screen for clones via a single crossover event. 
The growing clones were then transferred into fresh LB medium and incubated at 30°C overnight to induce the second homologous recombination. The overnight culture was diluted and spread on LB plates containing 10% sucrose and grown at 30°C for 24 h. Finally, the resultant clones were transferred onto LB plates and LB plates with chloramphenicol simultaneously, and clones sensitive to chloramphenicol were selected and confirmed by PCR and sequencing. For the complementation test, the wzm gene was amplified from the WL-21 genome using primers wzm-cF and wzm-cR. The PCR fragment was digested with restriction enzymes KpnI and HindIII, and the digested fragment was cloned into pBAD33, which was also treated with the same enzymes, resulting in plasmid pBAD33-wzm. Expression of the cloned wzm gene is under the control of the P BAD promoter, which could be activated by arabinose. Then, pBAD33-wzm was introduced into WL-21Δ wzm by electroporation, generating the complementary strain WL-21Δ wzm :: wzm . LPS Extraction and SDS-PAGE Analysis Strains were grown overnight at 22°C with shaking, and cultures were diluted into 20 ml of fresh LB broth at a ratio of 1:100 and incubated at 22°C to mid-log phase at a final OD600 = 0.8. To induce wzm expression under the control of pBAD33, L-arabinose (0.5 mg/ml) was added to cultures at the OD 600 = 0.4, and the cultures were incubated continuously to mid-log phase at the OD 600 = 0.8. Then, LPS used for SDS-PAGE analysis was prepared using the hot aqueous-phenol method, as previously described . The extracted LPSs were separated using 12%SDS-PAGE at 50 V for 30 min and 100 V for 2 h; subsequently, they were visualized by silver staining using the Fast Silver Stain Kit (Cat. no. P0017S; Beyotime, China), according to the manufacturer’s protocol. The gel image was captured using a GS900 calibrated densitometer (BioRad Laboratories, USA). Analysis of Putative O-AGCs and Virulence Marker Profiles Genomes of those isolates with the serotypes assigned to by the other submitters were downloaded from the GenBank database. Putative O-AGCs of serotypes were characterized using the in-house Bacterial Surface Polysaccharide Gene Database. In general, query genome sequences in GenBank format were searched against the database using BLASTp with the cutoff %coverage > 60 and %identity > 30. An O-AGC candidate could be defined using the following criteria: (1) the smallest number of successive genes is six; (2) the number of successive genes annotated “No hits” is no more than three; and (3) there must be glycosyl transferase gene(s), in addition to wzm / wzt or at least one of wzx and wzy genes. Schematic diagram of genes was generated using ChiPlot . The downloaded genome sequences, along with that of WL-21, were also compared with the selected virulence genes, including ail , inv , myfA , ystA/B , yadA and yop virulon , ymoA , yplA , fliA and flh/C/D , fyuA and ybtP/Q , and ysa genes , which have been identified previously. Nucleotide sequences were compared using BLASTn, and genes with >90% coverage match and >85% identity match were classified as present. Isolates were clustered according to their virulence profiles, and the pattern was visualized by ChiPlot .
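A minimal sketch of the screening logic described in this subsection is given below, assuming BLAST hits have already been parsed into simple per-gene records; it is not the authors' actual pipeline or database. The field names, the example locus, and the reading of criterion (2) as a limit on consecutive "No hits" genes are assumptions made only for illustration.

```python
# Illustrative re-implementation (not the authors' pipeline) of the stated
# screening criteria, assuming BLAST hits were parsed into per-gene records.

def passes_blastp_cutoff(hit):
    # O-AGC annotation cutoffs stated above: coverage > 60% and identity > 30%.
    return hit["coverage"] > 60 and hit["identity"] > 30

def is_oagc_candidate(genes):
    """Apply the three stated criteria to an ordered run of successive genes.

    Each gene is a dict such as {"name": "wzt", "coverage": 85, "identity": 47};
    genes without an acceptable database match are treated as "No hits".
    """
    names = [g["name"] if passes_blastp_cutoff(g) else "No hits" for g in genes]

    if len(names) < 6:                      # (1) at least six successive genes
        return False

    longest_run = run = 0                   # (2) read here as: no more than three
    for name in names:                      #     consecutive "No hits" genes
        run = run + 1 if name == "No hits" else 0
        longest_run = max(longest_run, run)
    if longest_run > 3:
        return False

    has_gt = any("glycosyl transferase" in name for name in names)
    has_abc_transporter = "wzm" in names and "wzt" in names
    has_wzx_or_wzy = "wzx" in names or "wzy" in names
    return has_gt and (has_abc_transporter or has_wzx_or_wzy)   # (3)

def virulence_gene_present(best_hit):
    # Virulence screening cutoffs stated above: >90% coverage and >85% identity.
    return best_hit["coverage"] > 90 and best_hit["identity"] > 85

example_locus = [
    {"name": "manC", "coverage": 95, "identity": 70},
    {"name": "manB", "coverage": 93, "identity": 68},
    {"name": "wzm", "coverage": 90, "identity": 55},
    {"name": "wzt", "coverage": 91, "identity": 52},
    {"name": "glycosyl transferase", "coverage": 80, "identity": 40},
    {"name": "gmd", "coverage": 20, "identity": 15},  # fails the cutoff -> "No hits"
]
print("O-AGC candidate:", is_oagc_candidate(example_locus))
print("virulence gene present:", virulence_gene_present({"coverage": 98, "identity": 96}))
```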
The Putative O-AGC of WL-21 Correlates Well with the O-Antigen Structure of Y. enterocolitica O:10 The putative O-AGC of WL-21 is 16,137 bp in length, and consists of 11 open reading frames (orfs). These orfs were located between the dcuC and galU-galF genes, and the transcribed direction of orf2 to 11 was from dcuC to galU, with the exception of orf1 just downstream of dcuC, in line with the evidence for O:3 and very similar to that for O:9. Details of proposed functions of the 11 orfs were summarized in . Orf1, a TerC family protein (Pfam03741), was assigned to the function of transporter in O:3 and O:8. TerC protein may be implicated in resistance to tellurium; however, no involvement of this protein in O-antigen synthesis has yet been reported. Orf2, 3, 8, and 9 were annotated as mannose-1-phosphate guanylyltransferase (ManC), phosphoglucomutase (ManB), GDP-mannose 4,6-dehydratase (Gmd), and GDP-6-deoxy-D-talose 4-dehydrogenase (Rmd), respectively. These four enzymes, along with mannose-6-phosphate isomerase (ManA), whose coding gene is always outside the O-AGC, are responsible for the synthesis of GDP-D-rhamnose (GDP-D-Rha p), the nucleotide precursor of D-Rha p. This is consistent with the existence of D-Rha p, which is the only sugar component in the backbone of the O:10 antigen. Orf4 and 5 were proposed as the ABC transporter permease (Wzm) and the ABC transporter ATP-binding protein (Wzt), respectively, suggesting that it is very likely that the O-antigen of WL-21 was generated via the ABC transporter dependent pathway. orf6, just downstream of wzt, was annotated as a methyltransferase gene. This is very common in O-AGCs containing wzm/wzt genes, and its function has been identified as adding a methyl group to the non-reducing terminus of the O-antigen, thus halting polymerization and regulating chain length. Finally, Orf7, 10, and 11 were all recognized as glycosyl transferases. These enzymes could presumably be involved in the sequential transfer of GDP-D-Rha p to D-Rha p polymers, forming the backbone, and the transfer of L-xylulose (L-Xul f) to D-Rha p, forming the side-chain linkage L-Xul f-(β2→2)-D-Rha p. However, the exact functions of these glycosyl transferases should be confirmed biochemically in the future. We also observed the hemH-gsk locus in the WL-21 genome, within which the genetic organization was identical to the outer core gene clusters of O:3 and O:9. This finding also indicated that the genetic elements between dcuC and galU-galF were the candidates for WL-21 O-antigen synthesis genes. Deletion and Complementation Test Confirmed the Functionality of WL-21 O-AGC To confirm the role of the dcuC-galU-galF locus in O-antigen synthesis, a wzm knockout strain, WL-21Δwzm, was constructed. As can clearly be seen in the LPS profile, the WL-21 wild-type strain exhibited a complete LPS, while WL-21Δwzm only generated an O-antigen-depleted LPS. Within the ABC transporter dependent pathway, the translocation of O-antigen is mediated by the integral membrane protein (Wzm)/hydrophilic protein complex (Wzt). Therefore, WL-21Δwzm lost the ability to produce the Wzm/Wzt complex for O-antigen translocation, and thus only the lipid A-core band was generated without O-antigen. Moreover, the complete LPS profile could be restored in the wzm-complemented strain, WL-21Δwzm::wzm. However, we noticed that the chain length of the O-antigen increased slightly in WL-21Δwzm::wzm compared to that in the wild-type strain.
There is no clear-cut explanation, as polysaccharide structural studies always tend to focus on repeating units, rather than chain-length regulation. A study demonstrated that in Escherichia coli O9a, a prototype for the biosynthesis of O-antigens by the ABC transporter dependent pathway, the occurrence of distinct chain-length of O-antigens was dependent on the relative concentration of two enzymes, the glycosyl transferase WbdA and the bifunctional kinase–methyl transferase WbdD . As a methyltransferase UbiG and three glycosyl transferases were also annotated in WL21, we propose that there might be similar chain-length regulating mechanism in Y. enterocolitica . Together, these results still proved that the dcuC-galU-galF locus is the O-AGC of WL21, and that the WL21 O-antigen is synthesized by the ABC transporter dependent pathway. The Putative O-AGCs of Most Y. enterocolitica Are Generally Divided into Two Genetic Patterns A total of 137 Y. enterocolitica isolates from GenBank were assigned to certain serotypes by the submitters, the majority of which are prevalent serotypes with their O-AGCs being reported before this study: 41 for serotype O:3, eight for serotype O:5, 17 for serotype O:5,27, ten for serotype O:8, and 31 for serotype O:9. The remainder are isolates with uncharacterized O-AGCs, including isolates of serotypes O:1, O:2, O:4, O:6, O:7, O:13, O:19, O:21, and O:36. We next derived the putative O-AGCs from their genomes. Our analysis indicated that (1) isolates with the same serotype possess identical genetic organization within the putative O-AGC; and (2) most serotypes exhibit unique O-AGC profiles with the exception of O:1, O:2, O:7 and O:19. Consequently, we selected one isolate within each serotype as a type strain for further analysis and found the genetic loci of these O-AGCs exhibited two patterns: within the hemH-gsk locus or somewhere else . In addition, the O-AGCs shared by O:1 and O:2 were similar to that of O:3, and their O-AGCs were located between or adjacent to insertion sequence (IS), as was also the case in O:5 and O:5,27. Another common feature of the above-mentioned serotypes, along with O:9 and O:10 (strain WL-21 in this study), is that the O-AGCs were all located outside the hemH-gsk locus, and that their O-antigens seem to be synthesized by the ABC transporter dependent pathway. The putative O-AGCs of serotypes O:4, O:7/O:19, O:13, and O:21 were all mapped between hemH and gsk , as was also the case with O:8, and genes in each serotype were transcribed from hemH to gsk . However, in the case of O:36, the putative O-AGC genes were located between hemH and gsk , with the exception of four genes which were upstream of hemH and transcribed in the opposite direction. Another probably shared characteristic of these serotypes was that they produce their O-antigens via the Wzx/Wzy dependent pathway, except in the case of O:7/O:19. In particular, all strains of the latter pattern had a DDH sugar synthesis gene set which was present at the 5’ end within their O-AGCs . All these features resembled those found in Y. pseudotuberculosis . This evidence implied that the O-AGCs of Y. enterocolitica serotypes with hemH-gsk patterns, and those of Y. pseudotuberculosis , very likely originated from a common ancestor, followed by separate evolutionary events under the pressure of different niches, whereas the O-AGCs with the wzm/wzt gene set underwent a distinct evolutionary history. 
Intriguingly, strains of O:7/O:19 possessed wzm/wzt and wzx/wzy simultaneously within their hemH-gsk loci ; this has not been previously reported among the O-AGCs of Y. enterocolitica and other Yersinia species. In a Vibrio cholera strain, the co-location of O-AGC and capsular synthesis genes containing both wzm/wzt and wzx/wzy was revealed, suggesting co-evolution of new O- and capsular-antigens . However, no evidence of capsule has been reported in Y. enterocolitica . Therefore, the elucidation of the O-antigen processing mechanism in O:7/O:19 may arouse more interests in our next study. Another piece of evidence which attracted our attention was that none of the serotype O:6 isolates possessed standard O-AGC; only the outer core gene cluster within the hemH-gsk locus was discovered. In a few species, for example, in Vibrio parahaemolyticus , the O-antigen is deficient in the LPS molecule and the antigenic scheme is based on the variation of core oligosaccharide . Thus, we propose that the antigenic property of O:6 may be determined by its core oligosaccharide, instead of O-antigen. This hypothesis in Y. enterocolitica O:6 could be addressed by fully elucidating the LPS structure. Strains of Serotypes are Grouped according to Their Virulence Profiles A total of 138 genomes representing 15 serotypes were investigated for virulence gene screening . Among these virulence genes, inv , ymoA , yplA , filA , flhC , and filD were distributed in all or almost all isolates; no differences among serotypes were therefore exhibited. According to the pattern of other virulence genes, the 15 serotypes could be clearly divided into three groups: Group 1, consisting of serotypes O:1, O:2, O:3, O:5,27 and O:9; Group 2, mainly composed of serotypes O:5, O:6,30, and a few other minor serotypes including O:10 of this study; and Group 3, with serotype O:8 as its main member, along with O:4, O:21, one strain of O13, and one strain of O7 . In Group 1, the largest group, all isolates were characterized by the presence of ail and ystA , and the absence of myfA , yadA , ystB , fyuA , ybtP , ybtQ , and yas genes. Clearly, isolates of Group 1 could be further divided into two subgroups: 1a and 1b. Except for serotype O:2 strains which were located in Group 1b, other strains assigned to serotypes O:1, O:3, O:5,27 and O:9 were distributed into two subgroups. Obviously, the two subgroups were differentiated only by the presence/absence of the yop virulon. The yop virulon is the core of Yersinia pathogenicity machinery and is located within the Pyv ; unfortunately, it may be lost during prolonged storage, frequent passaging, or temperatures higher than 37°C , which, we propose, may lead to the absence of yop virulon in Group 1a. myfA , encoding a factor important for the beginning of infection, was originally found in bioserotype 4/O:3 strains isolated from clinical cases of yersiniosis, as well as some biotype 1A strains from patients with diarrhea ; however, no myfA was found in the O:3 strains in this study. Within Group 2, all isolates possessed ystB instead of ystA , and most isolates possess myfA and yadA , the two genes absent in all isolates of Group 1. While ystB , instead of ystA , is known as a classical virulence marker of biotype 1A strain , the strong implication of ystB in serotypes O:5 and O:6, the main members of Group 2, has not been recorded previously. Another feature of Group 2 is that none of the isolates held ail or yop virulons. 
In contrast to the situation in subgroup 1a, the absence of yop virulon in Group 2 was unlikely to be attributed to the loss of PYv, since none of the Group 2 isolates were yop virulon positive. In terms of clinical manifestation, O:5/O:6 strains may partially or wholly lose the ability to cause the formation of necrotic abscesses; this is mainly mediated by the functionality of Pyv . The main characteristics of Group 3 were that fyuA , ybtP , ybtQ , and yas genes were only present in the relevant isolates of this group, and thus the virulence factors were most abundant. fyuA , ybtP and ybtQ are the most representative genes of a pathogenicity island termed the high-pathogenicity island (HPI), and most of the genes of HPI are involved in the biosynthesis, transport, and regulation of yersiniabactin . In addition, yas genes located within a Ysa pathogenicity island (Ysa-PI) encoding another T3SS play an important role in the colonization of gastrointestinal tissues during the earliest stage of infection . HPI and Ysa-PI have been reported to be present only in Y. enterocolitica biotype 1B, the highly virulent biotype . Our genome-wide analysis also showed that these two PIs were only present in isolates of Group 3 (three of these were assigned to biotype 1B), suggesting that strains of Group 3 must have unique pathogenic characteristics and mechanisms. Herein, the in silico O-AGC characterization of non-prevalent serotypes provides the genetic basis for the design of novel geno-serotyping targets, and for the development of novel assays for subtyping and epidemiological surveillance. The revealing of different patterns of O-AGC locations indicates that there must be distinct evolutionary route among Y. enterocolitica strains, the mechanism of which needs further investigation. The O-antigen is a key virulence determinant, and its role in the pathogenesis of Y. enterocolitica has also been identified . In particular, the relationship between certain serotypes and specific virulence factors has been investigated in several bacteria species, for instance, in enterohemorrhagic E. coli O157:H7 . Moreover, the regulation role of O-antigen in pathogenic promotion and environmental tolerance has also been confirmed . Here, we revealed a strong association between O-serotypes and the profile of virulence genes in Y. enterocolitica as a whole; this finding, we believe, will advance research into the role of O-antigens and other virulence factors, including their synergistic effect in the pathogenicity of this important food-borne pathogen.
Artificial Intelligence in the Era of Precision Oncological Imaging
Technology that mimics human intelligence to solve human problems is the core of what is collectively called AI. Developed as a branch of computer science, present-day AI is a broad field of knowledge that welcomes contributions from different disciplines. While AI is still far from realizing its full potential, it has already shown outstanding results in a variety of fields, notably including the research and clinical activity of radiology departments. However, an almost habitual presence of AI in radiology coexists with an often-superficial knowledge of its inner workings and a degree of confusion about AI terminology. Terms such as AI, machine learning (ML), deep learning (DL), and neural networks are often used interchangeably, despite having substantial differences. In the classical paradigm of computer science, a machine (ie, a computer) performs on an input the function for which it is programmed, to obtain an output. The problem is that it is not possible to translate the extremely complex cognitive process that underlies the work of an experienced radiologist into programming code. This challenge can be addressed through an ML approach in which the model, like the human brain, can learn from its mistakes. ML algorithms attempt to approximate a required function by recognizing meaningful patterns in data. Most ML approaches used in medical imaging require some degree of human intervention to be trained and are called "supervised algorithms." Supervised algorithms are trained on sample data sets containing typical examples of inputs and corresponding outputs. In the simplest cases, the training dataset is labeled by human experts based on manually chosen characteristics of interest. For example, the training dataset may consist of both native chest computed tomography (CT) scans and examinations in which lung nodules are highlighted and classified as benign or malignant. However, training datasets can also contain a mix of labeled and unlabeled images. In this case, the algorithm would quantitatively assess the voxels constituting lung nodules and decide what features make them appear benign or malignant. Finally, in advanced AI applications, the training dataset can consist of unlabeled data that the system will reclassify and organize based on common characteristics to try to predict subsequent inputs. This type of unsupervised learning generally makes use of so-called DL algorithms. From an operational point of view, ML tools are built using artificial neural networks (ANN), which have proved particularly well suited to medical imaging. ANN are inspired by the human brain and consist of several layers of interconnected "nodes" or "cells." The outermost layer is an input layer for initial data, while the innermost layer of the algorithm is the output layer. The cells in different levels are connected and activate one another as the information passes downstream and gets analyzed. The output of each cell depends on its input values, each multiplied by a "weight" value and then summed. If the output of any individual node is above a specified threshold value, it becomes activated and sends data to the next layer of the network. Otherwise, no data is passed along. Thanks to their unique learning features and sensitive calibration range, ANN algorithms are powerful tools for analyzing large amounts of imaging data, progressively shaping the weights of their connections to reduce the uncertainty of the approximation as they are exposed to more data samples.
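To make these mechanics concrete, the minimal sketch below (illustrative only, not taken from any clinical AI product) shows how a small feed-forward network computes its output: each node multiplies all of its inputs by learned weights, sums them with a bias term, and passes the result through an activation function before handing it to the next layer. The layer sizes, random weights, and input features are arbitrary placeholders.

```python
import numpy as np

def relu(x):
    # A node passes data along only when its weighted input sum exceeds 0.
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes the final score into a 0-1 range, read here as a lesion "score".
    return 1.0 / (1.0 + np.exp(-x))

class DenseLayer:
    """One fully connected layer: each node multiplies every input by a weight,
    sums the products with a bias term, and applies an activation function."""
    def __init__(self, n_inputs, n_nodes, activation, rng):
        self.w = rng.normal(0.0, 0.1, size=(n_inputs, n_nodes))
        self.b = np.zeros(n_nodes)
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ self.w + self.b)

rng = np.random.default_rng(0)
# Toy network: 64 input features, two hidden layers, one output node.
network = [
    DenseLayer(64, 32, relu, rng),
    DenseLayer(32, 16, relu, rng),
    DenseLayer(16, 1, sigmoid, rng),
]

x = rng.random(64)            # stand-in for quantitative features of one lesion
for layer in network:         # information flows downstream, layer by layer
    x = layer.forward(x)
print(f"untrained output score: {float(x[0]):.3f}")
```

During training, the weights and biases above would be adjusted iteratively so that outputs for labeled examples move closer to the expected labels, which is the "learning from mistakes" described in the text.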
The number of middle layers depends on the algorithm's function and complexity, and an increased number of layers gives rise to so-called DL algorithms. DL uses ANN to discover intricate structures in large data sets with different and increasing levels of abstraction. , DL models are well-suited for medical image analysis, allowing the detection of hidden patterns and uncover insightful outcomes, sometimes beyond what human experts can provide. DL algorithms contribute to the fast development of radiomics, which has recently emerged as a state-of-the-art science in the field of individualized medicine. First defined in 2012 as “high throughput extraction of quantitative imaging features with the intent of creating mineable databases from radiological images,” radiomics represents a new approach to medical imaging analysis and allows to further bridge the gap between raw image data and clinical and biological endpoints. Radiomics is based on the assumption that biomedical images contain disease-specific information that is imperceptible to the human eye. , Although radiomics is not necessarily AI-based, the advances in ML and DL algorithms have greatly facilitated the research and application of radiomics models. Thanks to AI methods and advanced mathematical analysis, radiomics models quantitatively assess large-scale extracted imaging data to identify imaging biomarkers that go beyond simple qualitative evaluation. In oncological imaging, radiomics features are related to tumor size, shape, intensity, and relationships between voxels and texture. These features collectively provide the so-called radiomics signature of the tumor. Radiogenomics is a field closely related to and drawing from radiomics and is based on the hypothesis that extracted quantitative imaging data are a phenotypic manifestation of the mechanisms that occur at the genomic, transcriptomic, or proteomic levels. Radiogenomics combines large volumes of quantitative data extracted from medical images with individual genomic phenotypes to assess the genomic profile of tumors. This allows the creation of prediction models used to stratify patients, guide therapeutic strategies, and evaluate clinical outcomes. Imaging data can be further combined with clinical and laboratory information and other personalized patient variables to improve the precision of diagnostic imaging, predict outcomes, and identify optimal management.
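As a deliberately simplified illustration of how radiomics reduces a segmented tumor to mineable quantitative descriptors, the toy sketch below computes a handful of first-order (intensity) features from voxels inside a lesion mask. The synthetic volume, bin count, and feature list are assumptions made only for this example; real radiomics pipelines rely on dedicated, standardized toolkits and extract many additional feature classes (shape, texture, wavelet), but the principle is the same.

```python
import numpy as np

def first_order_features(image, mask):
    """Toy first-order radiomics features from voxels inside a segmented lesion.

    image : 3-D array of voxel intensities (e.g., CT attenuation values)
    mask  : boolean array of the same shape marking the segmented lesion
    """
    voxels = image[mask]
    counts, _ = np.histogram(voxels, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "volume_voxels": int(mask.sum()),
        "mean_intensity": float(voxels.mean()),
        "std_intensity": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3),
        "entropy_bits": float(-(p * np.log2(p)).sum()),  # intensity heterogeneity
    }

# Synthetic example: a noisy 3-D volume containing a brighter spherical "lesion".
rng = np.random.default_rng(1)
volume = rng.normal(40.0, 10.0, size=(64, 64, 64))
zz, yy, xx = np.indices(volume.shape)
lesion_mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
volume[lesion_mask] += 35.0

print(first_order_features(volume, lesion_mask))
```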
Identifying patients at risk of developing malignancy and referring them to personalized screening programs is one of the major challenges in modern oncology. AI algorithms allow the derivation of clinically important predictors from generic and often weakly correlated imaging features. When combined with clinical data, this information facilitates the identification of patients that may be at risk of developing malignant lesions. Moreover, AI has the potential to increase the accuracy of radiological assessment of tumor aggressiveness and differentiation between benign and malignant lesions, allowing for more precise patient management. AI-based prediction models have been developed for a variety of imaging techniques and a wide range of malignancies, including lung, colorectal, thyroid, breast, and prostatic cancers. Breast cancer has traditionally attracted major interest for AI-based risk prediction models. Breast cancer remains the leading cause of female cancer mortality, with survival rates in developing countries being as low as 50% due to late detection. A personalized, accurate risk scoring system would identify patients at high risk of developing breast tumors and in need of strict imaging monitoring. The International Breast Cancer Intervention Study (IBIS) model, or Tyrer–Cuzick (TC) model, is a scoring system guiding breast cancer screening and prevention. It accounts for age, genotype, family history of breast cancer, age at menarche and first birth, menopausal status, atypical hyperplasia, lobular carcinoma in situ, height, and body mass index. Despite its widespread use, the IBIS/TC model demonstrated limited accuracy in some high-risk patient populations. Integration of DL-identified high-risk imaging features can refine the accuracy of the IBIS/TC model and overcome some of its limitations. Breast density is a mammographic feature closely related to the risk of breast cancer and is integral to the correct reporting of mammographic examinations. It can be successfully assessed by AI algorithms with an excellent agreement and high intraclass correlation coefficient between the AI software and expert readers. A hybrid DL model evaluated by Yala et al included mammographic breast density, age, weight, height, menarche age, menopausal status, detailed family history of breast and ovarian cancer, breast cancer gene (BRCA) mutation status, history of atypical hyperplasia, and history of lobular carcinoma in situ. The IBIS/TC and hybrid DL models showed an area under the curve (AUC) of 0.62 (95% confidence interval [CI]: 0.57-0.66), and 0.70 (95% CI: 0.66-0.75), respectively. The hybrid model placed 31% of patients in the top risk category, compared with 18% identified by the IBIS/TC model, and was able to identify the features associated with long-term risk beyond early detection of the disease. Another aspect related to risk stratification is the assessment of incidentally discovered benign lesions and the identification of those that are more likely to develop malignancy in the future. This is particularly relevant when dealing with lung nodules, as radiologists routinely evaluate hundreds of lung nodules to assess their size, location, margins, and evolution. This information is then subjectively interpreted with the help of guidelines and in the context of the clinical characteristics and history of the patient, to stratify the risk and customize therapeutic and monitoring protocols. AI can appreciably facilitate this challenging and time-consuming task.
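Many of the risk models discussed in this section share a common pattern: an image-derived score is combined with conventional clinical risk factors, and the gain in discrimination is measured by the AUC. The sketch below illustrates that idea in a deliberately simplified, fully synthetic form; the variables, coefficients, and outcomes are assumptions made for demonstration, and the published hybrid model referenced above is a deep network trained on real mammograms, not a logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000

# Synthetic cohort: clinical risk factors plus an image-derived DL score.
age = rng.normal(55, 8, n)
density = rng.integers(1, 5, n)            # breast density categories coded 1-4
family_history = rng.integers(0, 2, n)
dl_score = rng.normal(0, 1, n)             # stand-in for a CNN output per mammogram

# Simulated outcome loosely driven by all four variables (arbitrary coefficients).
logit = -5 + 0.03 * (age - 55) + 0.4 * density + 0.7 * family_history + 0.9 * dl_score
cancer = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, density, family_history, dl_score])
X_tr, X_te, y_tr, y_te = train_test_split(X, cancer, test_size=0.3, random_state=0)

clinical_only = LogisticRegression(max_iter=1000).fit(X_tr[:, :3], y_tr)
hybrid = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

auc_clinical = roc_auc_score(y_te, clinical_only.predict_proba(X_te[:, :3])[:, 1])
auc_hybrid = roc_auc_score(y_te, hybrid.predict_proba(X_te)[:, 1])
print("clinical-only AUC:", round(auc_clinical, 3))
print("hybrid (clinical + DL score) AUC:", round(auc_hybrid, 3))
```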
Baldwin et al assessed the performance of an AI-based lung cancer prediction convolutional neural network (LCP-CNN) compared with the multivariate Brock model, which estimates the risk of malignancy for CT-detected pulmonary nodules. LCP-CNN score demonstrated an improved AUC compared with Brock model (89.6%, 95% CI: 87.6-91.5 and 86.8%, 95% CI: 84.3-89.1, respectively) and allowed to identify a larger proportion of benign nodules with a reduced false-negative rate. Integration of LCP-CNN into the assessment of lung nodules detected on chest CT scans could potentially reduce diagnostic time delays. Another CNN-based model integrating imaging features with clinical data and biomarkers achieved 94% sensitivity and 91% specificity for the differentiation of benign and malignant pulmonary nodules on CT imaging. The results highlight the potential of AI to reduce the need for follow-up scans in low-scoring benign nodules, while accelerating the investigation and treatment of high-scoring malignant nodules and reducing the costs of follow-up examinations. In another study, the use of an auxiliary AI-based tool allowed to improve readers’ sensitivity and specificity in the classification of malignancy risk of indeterminate pulmonary nodules on chest CT. Moreover, it improved interobserver agreement for management recommendations of these indeterminate lung nodules. In another approach, a DL signature was developed for N2 lymph node involvement prediction and prognosis stratification in clinical stage I nonsmall cell lung cancer (NSCLC). A multicenter study performed to test its clinical utility demonstrated an AUC of 0.82, 0.81, and 0.81 in an internal test set, external test cohort, and prospective test cohort, respectively. Moreover, higher DL scores were associated with more activation of tumor proliferation pathways and were predictive of poorer survival rates. AI models have also been used for risk stratification in other types of lesions. An ANN-based DL algorithm allowed reliable differentiation of malignant versus nonmalignant breast nodules first assessed with breast ultrasound and initially reported as breast imaging reporting data systems (BI-RADS) 3 and 4. Hamm et al developed ANN for automated characterization of liver nodules on magnetic resonance imaging (MRI), with a 92% sensitivity and 98% specificity in the training set and higher sensitivity and specificity compared to radiologists in the test dataset. An ML model allowed to distinguish benign from malignant cystic renal lesions on CT with an AUC of 0.96 and a benefit in the clinical decision algorithm over management guidelines based on Bosniak classification. Similarly, an AI-based DL model correctly predicted the majority of benign and malignant pancreatic cystic lesions and outperformed Fukuoka guidelines. In the risk stratification of endometrial cancer, several features identified on T2-weighted MRI were selected to build a predictive ML model, which demonstrated an accuracy of 71% and 72% in the training and test datasets, respectively. Interestingly, the study by Hsu et al identified that the body composition analysis by an AI algorithm can predict mortality in cancer, highlighting a significant correlation between sarcopenia and mortality in pancreatic cancer patients.
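As a schematic of how risk models such as those above combine an imaging-derived score with clinical covariates, the sketch below fits a logistic model on simulated data and reports its AUC. All variables and coefficients are hypothetical; this is not a reimplementation of the Brock, IBIS/TC, or any published hybrid model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Simulated predictors: a DL-derived image score plus clinical covariates
image_score = rng.normal(size=n)
age = rng.normal(60, 10, size=n)
smoking_pack_years = rng.gamma(2.0, 10.0, size=n)
family_history = rng.integers(0, 2, size=n)
# Simulated binary outcome loosely driven by the predictors
logit = (1.0 * image_score + 0.03 * (age - 60)
         + 0.02 * smoking_pack_years + 0.4 * family_history - 1.5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([image_score, age, smoking_pack_years, family_history])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC of the hybrid imaging + clinical model: {auc:.2f}")
```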
Screening for clinically occult malignant tumors represents one of the most important goals of oncological imaging and allows timely treatment of tumors that would otherwise pass unnoticed. Radiological screening programs evaluate hundreds of patients at a time, creating a considerable amount of imaging data to review. AI applications can be used to manage the workload and to reduce observational oversights and false-negative readings. Improved detection of cancer through screening is thereby a significant area of interest in oncological imaging AI applications. CAD tools are AI systems commonly employed in radiological screening. CAD programs are pattern recognition software that assists radiologists in identifying potential anomalies in radiology examinations. Although all CAD systems are somehow AI-based, their design can employ a wide range of architectures of varying complexity and depth. Relatively simple early CADs were designed to reflect radiologists’ perspectives and searched for the findings normally assessed by human readers. For example, in breast cancer screening, CAD assessed mammograms for the presence of microcalcifications, structural distortions, and masses. Advanced present-day CAD systems make extensive use of DL models to extract information that is not immediately accessible to human operators with promising results and potential for improved accuracy compared with radiologists or clinicians. Whereas earlier CAD programs were characterized by high sensitivity and low specificity, newer techniques with improved specificity could significantly facilitate cancer screening. However, radiologists’ experience and judgment remain central to defining the factual relevance of the result provided by CAD systems and deciding further steps. In clinical routine, CAD systems can be used in a variety of ways. When CAD is used as a “first reader,” a primary CAD assessment is followed by an evaluation by a radiologist that only reviews the anomalies identified by CAD. Alternatively, a screening test can be initially assessed by a radiologist, followed by a secondary CAD review. Problematic areas identified by CAD software are then re-evaluated before the conclusion is formed. Finally, in a concurrent examination, a radiologist routinely assesses the images as the CAD marks remain visible. Breast cancer screening represents one of the most successful examples of the long-standing implementation of CAD programs into clinical practice. Early forms of breast screening CAD systems, developed in the 1980s, were limited by poor specificity and low diagnostic accuracy resulting in numerous false positives, unnecessary biopsies, and soaring costs. Integration of DL into recent CAD systems led to significant improvements in specificity. The accuracy of DL-based CADs for the detection of breast cancer on mammography is comparable to that of radiologists, although the latter tend to be slightly more specific at the expense of less sensitivity. A single CAD reading can be equivalent to a double reading by 2 radiologists as required by standardized guidelines. CAD-based screening also allows early detection of pulmonary nodules, defined as circumscribed round-shaped parenchymal lesions of <3 cm in diameter. The first CADs for automatic detection of lung nodules on chest CT appeared in the early 2000s, although the lack of specificity impeded their widespread clinical use, similar to other early CAD programs. 
Thanks to the availability of large databases of chest CT scans and the integration of DL techniques into CAD architectures, modern systems demonstrate higher specificity and decreased false-positive rates. CAD systems based on DL algorithms detect more pulmonary nodules on CT scans compared to double reading by radiologists and allow the detected nodules to be classified based on selected features. Another DL-based CAD algorithm outperformed thoracic radiologists for the detection of malignant pulmonary nodules on chest x-rays and improved radiologists' performance when used as a second reader. CT colonography, or virtual colonoscopy, is a screening technique for the early identification of colonic polyps before they progress to colorectal cancer. Although CAD can improve polyp detection rates, several structures can result in false-positive readings, including haustral folds, coarse mucosa, diverticula, rectal tubes, extracolonic findings, and lipomas. The experience of the reader is thereby essential for the appropriate interpretation and reporting. Several CAD algorithms improved the detection of prostatic malignancies on MRI, including difficult-to-assess areas such as the central part of the gland or the transition zone. Other possible applications of AI in cancer screening include the detection of metastatic lesions in patients with known primary cancers. Although conceptually this task is similar to detecting primary tumors, the results are currently characterized by low specificity. In the assessment of patients with melanoma, none of the CAD-detected pulmonary nodules proved to be malignant or clinically significant at follow-up.
Radiogenomics, an integration of "radiomics" and "genomics" notions through AI technology, is currently emerging as the state-of-the-art field of precision medicine in oncology. The identification, through advances in genomic technology, of an array of different genotypes and deregulated pathways involved in the pathogenesis of cancers has led to a paradigm change in how cancer is seen, classified, and managed, with a shift toward a truly personalized approach to each case. Meticulous molecular characterization of malignant tumors is thus one of the mainstays of customized approaches in oncology. However, large-scale genome-based characterization of cancer is not yet routinely adopted due to its invasiveness, technical complexity, high costs, and timing limitations. Radiogenomics allows a noninvasive and comprehensive characterization of tumor gene expression patterns through the imaging phenotype. Distinct portions of tumors and metastases from the same tumor may be characterized by different molecular characteristics, which might change over time. As it is not possible to sample every portion of each tumor and metastatic lesion at multiple time points, the characterization of malignancies by biopsy suffers from significant limitations. Unlike biopsy, radiogenomics analyzes the 3-dimensional tumor landscape in its complexity, does not depend on the heterogeneity of a bioptic sample, and can be used as a virtual biopsy tool. Moreover, radiogenomics enables the noninvasive assessment of multiple lesions at different time points. While most existing studies focus on the analysis of primary tumors, radiogenomics can potentially be applied to the analysis of metastatic lesions. Evaluation of the tumor genomic signature through radiogenomics can further improve our understanding of the natural history of the disease through quick, reproducible, and inexpensive assessment, leading to improved prediction of patient prognosis, individualized therapeutic approaches, optimized enrollment for targeted therapies, and better assessment of treatment responses. Most existing radiogenomic, AI-based models deal with mutations of methylguanine methyltransferase (MGMT), isocitrate dehydrogenase (IDH) 1/2, BRCA1/2, Luminal A/B subtypes, estrogen receptor (ER), progesterone receptor (PR), epidermal growth factor receptor (EGFR), Ki-67, and human epidermal growth factor receptor 2 (HER2), due to data availability. In particular, several studies demonstrated the validity of radiogenomic features for the identification of genetic alterations in patients with pulmonary adenocarcinoma. Radiogenomic models could distinguish EGFR-mutated and EGFR-wildtype pulmonary adenocarcinomas, as well as differentiate EGFR-positive and Kirsten rat sarcoma virus (KRAS)-positive cases. Addition of radiomics data to a clinical prediction model significantly improved the prediction of EGFR status in pulmonary adenocarcinoma ( P = .03). EGFR mutation status in NSCLC can also be predicted with quantitative radiomics biomarkers from pretreatment CT scans. In another study, anaplastic lymphoma kinase (ALK), receptor tyrosine kinase-1 (ROS-1), and rearranged during transfection (RET) fusion-positive pulmonary adenocarcinomas could be identified through a prediction model combining clinical data with CT and positron emission tomography (PET) characteristics. EGFR mutations could also be inferred from routine MRI in patients with glioblastoma, based on perfusion patterns in perilesional edema.
Moreover, neuro-oncology boasts a variety of ML models developed for a more comprehensive characterization of gliomas. Zhang et al created an algorithm to differentiate these malignancies into low and high grade, with an overall accuracy ranging between 94% and 96%. An ANN model based on the texture analysis of T2-weighted, fluid-attenuated inversion recovery (FLAIR), and T1-weighted postcontrast MRI identified O-6-methylguanine-DNA methyltransferase promoter methylation status in patients newly diagnosed with glioblastoma with an accuracy of 87.7%, allowing prediction of the improved response to chemotherapy among patients with this epigenetic change. Another model allowed the inference of a variety of genetic mutations in gliomas through the analysis of multiparametric precontrast and postcontrast MRI, including O-6-methylguanine-DNA methyltransferase promoter methylation status, IDH1 mutation, and 1p/19q codeletion, with an accuracy of 83% to 94%. Codeletion of chromosomes 1p and 19q can also be quantitatively assessed from the MRI texture of T2-weighted images with high sensitivity and specificity. Promising radiogenomic models in breast imaging allow differentiation of the molecular subtypes of breast cancer based on MRI dynamic contrast enhancement imaging. Genetic pathways of breast tumors were associated with several MRI features, including tumor size, blurred tumor margin, and irregular tumor shape. Messenger ribonucleic acid (mRNA) expression in breast tumors showed significant association with tumor size and enhancement texture on MRI. The biological behavior of breast cancer, in particular the expression of HER2 and other receptors, can also be predicted on ultrasound imaging. In prostate imaging, radiomics evaluation of the prostatic tumor profile on MRI allows reliable prediction of the Gleason grade as defined by pathology. A study conducted among patients with colorectal cancer identified a radiogenomic signature that can reliably predict the microsatellite instability status of the tumors and stratify patients into low-risk and high-risk groups. Another radiomic signature identified KRAS/neuroblastoma RAS viral oncogene homolog (NRAS)/B-Raf proto-oncogene (BRAF) mutations in colorectal cancer, which reduce the response to the monoclonal antibodies cetuximab and panitumumab.
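To illustrate the general form of such radiogenomic classifiers, the sketch below cross-validates a simple model that predicts a binary mutation status from a radiomics feature matrix. The data are simulated and the model is not any of the published ones cited here; it only shows the typical feature-matrix-to-AUC workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
n_patients, n_features = 120, 30
X = rng.normal(size=(n_patients, n_features))        # simulated radiomics features
# Simulated mutation label weakly driven by two of the features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc_scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc_scores.mean():.2f} +/- {auc_scores.std():.2f}")
```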
Tumor segmentation serves several clinical and research purposes in oncological imaging. Segmentation is used to determine the volume of tumors, their morphology and relationships with the surrounding organs and tissues and is crucial for the imaging-based planning of surgery or radiotherapy. Segmentation also plays an important role in the assessment of treatment response. From an operational point of view, tumor segmentation requires the division of an image into multiple parts that are homogeneous with respect to one or more characteristics or features, such as colors, grayscale, spatial textures, or geometric shapes. Before the advent of computer-assisted segmentation tools, contours were manually traced on each slice of imaging scans. These 2-dimensional segmentations were then put together to create a 3-dimensional reconstruction of the lesion within the acquisition volume. Present-day AI segmentation systems greatly shorten the analysis times and improve the reproducibility and inter-reader variability of segmentations, especially when compared with inexperienced operators. First machine-assisted segmentation tools were supervised algorithms based on line and edge detection, which traced image gradients along object boundaries. Modern segmentation tools are predominantly based on a large variety of DL models. While most image segmentation techniques use one imaging modality at one specific time point, their performance and applicability can be improved by combining images from several sources (multispectral segmentation) or integrating images over time (dynamic or temporal segmentation). Multimodal images can be used to improve the segmentation accuracy by accounting for the advantages and disadvantages of individual imaging modalities. For example, CT provides a detailed definition of bone structures but low soft-tissue contrast, whereas MRIs are characterized by high soft-tissue contrast but lower spatial resolution. Combined multimodality images would facilitate segmentation and provide additional imaging information. However, multimodal images need to be accurately co-registered to be consistent and are not always available for segmentation. Several DL segmentation tools have been developed for use in oncological radiology and are currently clinically available for lesion characterization, treatment planning, and follow up. Neuro-oncological imaging is one of the leading fields for the application of AI segmentation systems with remarkable potential for workflow and clinical impact. For some types of brain tumors, such as low-grade gliomas, surgical resection is currently the first therapeutic option. Considering the diffuse nature of neural networks at the basis of cognitive functions, the choice of resection margins can dramatically affect the brain function and the patient's quality of life. The development of AI-based systems for the segmentation of brain tumors allows to individually optimize the so-called ”onco-functional balance” and propose tailored resection margins. AI can also be used in other phases of personalized anatomical–functional planning and intraoperative strategy. Radiotherapy planning is another important field of application of AI in oncological imaging. Prostate radiotherapy is a well-established curative procedure that moves toward the use of MRI for targeting of adaptive radiotherapy processes. The lack of clear prostate boundaries, tissue heterogeneity, and wide interindividual variety of prostate morphology hinder MRI radiotherapy planning. 
Among different ML methods proposed for the automated segmentation of prostate tumors, CNNs demonstrate the most promising results. Recent studies demonstrated that CNN-based segmentation systems successfully detect prostatic abnormalities and reliably segment the gland and its subzones for subsequent precision radiotherapy. Comelli et al proposed a DL MRI prostate segmentation model that could be efficiently applied for prostate delineation even with small training datasets, with potential benefit for personalized patient management. CT is customarily used before radiotherapy to calculate the absorbed dose through the assessment of the density of irradiated tissues. However, in everyday clinical practice, most patients receive both MRI and CT scanning as part of their radiotherapy workup, and radiotherapy may be moving toward the sole acquisition of MRI with AI-based generation of synthetic CT images. DL can be used to generate synthetic CT images from T1-weighted MRI sequences without any significant difference in dose distribution compared to standard CT imaging.
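Automated segmentations such as those discussed above are usually benchmarked against manually traced reference contours with overlap metrics. The short sketch below computes the Dice similarity coefficient between two binary masks; the masks are made up, and none of the cited segmentation tools is being reproduced.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denominator = pred.sum() + ref.sum()
    return 2.0 * intersection / denominator if denominator else 1.0

# Toy example: an automated mask shifted by two voxels relative to the reference
ref_mask = np.zeros((128, 128), dtype=bool)
ref_mask[40:80, 40:80] = True
auto_mask = np.zeros_like(ref_mask)
auto_mask[42:82, 42:82] = True
print(f"Dice = {dice_coefficient(auto_mask, ref_mask):.3f}")
```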
Response to treatment in solid tumors is a key element of oncological imaging. Spatial and temporal heterogeneity and complexity of tumor responses to treatment represent an ongoing challenge for oncological radiologists. , The current standardized response assessment metrics, such as tumor size changes in response evaluation criteria in solid tumors (RECIST) criteria, do not reliably predict the underlying biological response. For example, an initial increase in tumor size, called pseudoprogression, is commonly seen in immunotherapy and is a sign of favorable response to treatment. Conversely, an initial decrease in tumor size, known as pseudoresponse, may be associated with increased tumor aggressiveness as is observed with some anti-angiogenesis agents. AI is a valuable ally for radiologists in determining more accurate methods of treatment response assessment. The prognostic value of AI models has been demonstrated for a variety of oncological fields, including breast, lung, brain, prostate, and head and neck tumors. A recent systematic review and meta-analysis by Chen et al concluded that radiomics has the potential to noninvasively predict the response and outcome of immunotherapy in patients with NSCLC. Another model integrating DL radiomics features with circulating tumor cell count could predict the recurrence of patients with early-stage NSCLC treated with stereotactic body radiation therapy. Although AI-based radiomic approaches have not yet been implemented as a decision-making tool in the clinical setting, additional external, and clinical validations can facilitate personalized treatment for patients with NSCLC. Recent advances in ML algorithms were used for the development of multimodality models for accurate predictions of the survival of individuals with breast cancer. – Ha et al reported an 88% accuracy of DL-CNN in predicting the response of breast cancer to neoadjuvant chemotherapy based on pretreatment MRI. The delayed contrast enhancement on MRI of invasive HER2 + breast tumors could identify molecular cancer subtypes with better response to HER2 + targeted therapy. A radiomic model predicted the pathological response to neoadjuvant chemotherapy in patients with locally advanced rectal cancer based on MRI. The performance of the model further improved when combined with standard clinical evaluation. In hepatocellular carcinoma, AI can provide great benefits in patients’ management by predicting the response to a variety of treatments, including transarterial chemoembolization. , Immunotherapy is one of the most promising tools in oncological treatment. However, despite its remarkable success rate, immunotherapy is still curbed by high costs and toxicities, while its clinical benefit is limited to a specific subset of patients. AI algorithms with integrated imaging biomarkers allow us to predict the response to immunotherapy, as well as identify early responders in order to optimize its cost-effectiveness and clinical impact. For example, radiomic signature inferred from pretreatment and posttreatment CT scans of patients with NSCLC correlated to the density of tumor-infiltrating lymphocytes and the expression of programed cell death (PD) ligand-1 and identified early responders to immune checkpoint inhibitor therapy. In a large multicenter study, a complex radiomic marker of CD8-cell infiltration predicted response to PD-1 and PD ligand-1 inhibitors. 
Another radiomic marker based on precontrast and postcontrast CT scans and clinical data was able to predict response in patients with NSCLC undergoing anti-PD-1 immunotherapy, with an AUC of up to 0.78. A CT radiomic biomarker could predict response to immunochemotherapy among patients with renal cell carcinoma. Moreover, as described above, AI-based models can distinguish pseudoprogression from the response to immunotherapy. A model combining a radiomics signature on PET/CT, tumor volume, and blood markers successfully predicted pseudoprogression in metastatic melanoma treated with immune checkpoint inhibition. Finally, the use of AI and radiomics contributes to and empowers further research into cancer immunity in a bid to better understand the interplay of different genomic and molecular processes at the basis of the tumoral response to immunotherapy. The prediction of disease relapse is also crucial for appropriate treatment planning, and AI can provide benefits in this field. Prognostic models integrate genomic profiles and clinical information to stratify the risk of relapse for the choice of the most appropriate therapeutic strategy, in accordance with the principles of individualization in cancer treatment. Mantle cell lymphoma is an unusual lymphoid malignancy with a poor prognosis and short durations of treatment response. Although most patients present with aggressive disease, there is also an indolent subtype characterized by the translocation between chromosomes 11 and 14 (q13; q32) with overexpression of cyclin D1. The heterogeneity of mantle cell lymphoma and its outcomes necessitates precise prognosis prediction. A pretreatment CT-based AI model was able to predict relapse in mantle cell lymphoma patients with an accuracy of 70%.
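For orientation, the size-based RECIST criteria mentioned earlier in this section reduce to a simple computation on the sum of target-lesion diameters. The sketch below applies the standard RECIST 1.1 thresholds in a deliberately simplified form: new lesions and non-target disease, which also drive response categories, are ignored.

```python
def recist_target_lesion_response(baseline_sum: float,
                                  nadir_sum: float,
                                  current_sum: float) -> str:
    """Simplified RECIST 1.1 response from target-lesion diameter sums (mm)."""
    if current_sum == 0:
        return "CR"  # complete response: disappearance of all target lesions
    if current_sum >= 1.2 * nadir_sum and (current_sum - nadir_sum) >= 5:
        return "PD"  # progression: >=20% and >=5 mm increase over the nadir
    if current_sum <= 0.7 * baseline_sum:
        return "PR"  # partial response: >=30% decrease from baseline
    return "SD"      # stable disease

# Example: baseline 100 mm, best (nadir) 60 mm, current 65 mm
print(recist_target_lesion_response(100, 60, 65))  # -> "PR"
```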
To date, AI algorithms in oncological radiology have primarily been applied to manage common and time-consuming problems, such as breast and lung cancer screening. However, as described in this review, fast-paced research and development of AI algorithms in oncological imaging has led to rapid upscaling of their impact and an increasing focus on ultra-specialized and high-precision tasks guiding medical decisions and improving patients' therapies. Nevertheless, several important issues must be addressed before these tools can be fully and successfully integrated into clinical practice. Rigorous standards and high transparency in the development, training, testing, and validation of AI models are all essential prerequisites to make AI results reliable, explainable, and interpretable. Large volumes of high-quality, representative, and well-curated data are needed for the development of robust AI algorithms, as data is considered more critical than hardware and software in the success of AI applications. Although increasing demand for diagnostic imaging examinations produces an exponential buildup of imaging data, it often lacks appropriate quality verification and association with laboratory and clinical parameters and patients' outcomes. Patient privacy and informed consent are also important ethical and legal predicaments that require concrete legal steps. Expertise and training are needed to correctly label and segment the imaging data used for the validation of AI algorithms. As a result, small-size imaging datasets are often used, reducing the impact of the results and limiting their applicability. Moreover, data used in the development of AI protocols can be affected by biases related to the clinical, social, and even geographical scenarios in which they were gathered. The reproducibility and generalizability of AI models remain a major obstacle, as performance in new datasets is often lower than in the training data. Reproducibility of AI results is further complicated by the heterogeneity of acquisition protocols and the multitude of steps needed for the correct identification and processing of imaging features. The difficulty in prospectively collecting unbiased, good-quality, and sufficiently large data highlights the essential role of large data sets created through multicenter and multi-institutional collaborations for training and rigorous validation of the algorithms. Controlled prospective studies are needed to enable the shift from research to clinical routine. This is particularly important for ML and DL algorithms that operate as "black box" models, where automated decision making cannot be directly assessed or validated by human operators. Moreover, the interest in unsupervised learning on unlabeled data is constantly rising. Increased algorithm transparency and explainability are needed before the large-scale integration of these models into clinical practice can be possible. Interdisciplinarity should always be ensured when dealing with AI in healthcare, as it affects research results and their clinical value. The exchange of knowledge and skills between experts in different fields markedly impacts methodology, provides robustness to results, and facilitates their translation into everyday clinical practice. Another important aspect that will determine the wide and routine diffusion of AI in the future is its perception and acceptance by both radiologists and patients.
Healthcare specialists' knowledge of AI has a significant impact on their willingness to learn and apply this technology in their job. A survey of 1041 residents and radiologists highlighted that limited knowledge of AI was associated with fear of replacement, whereas intermediate to advanced levels of knowledge were linked with a positive attitude toward AI. Therefore, dedicated training in the AI field may improve its clinical acceptance and use. Patient education and engagement are also essential for the success of AI in clinical practice. Surveys of patients' perception of AI highlighted a generally positive attitude toward using AI-based systems, particularly in a supportive role. However, concerns about cyber-security, accuracy, and the lack of empathy and face-to-face interaction have also been raised. The need to provide AI explanations to ensure patients' trust and acceptance is a crucial point.
AI is becoming increasingly integrated into the oncological radiology workflow, and this tendency will likely continue in the future, leading to major improvements in patients' management and quality of life. A wide variety of routine imaging tasks can be outsourced and automated thanks to AI, including disease detection and quantification and lesion segmentation. Moreover, the use of AI radiogenomics in oncological imaging is undergoing exponential growth, contributing to the personalization and fine-tuning of oncological treatments and approaches. In the next few years, machine learning and neural network models will become a significant aid in every aspect of oncology, enabling sophisticated analysis of oncological patients and detailed disease characterization. AI technologies in oncological imaging have to overcome several important obstacles before they can be widely used in routine clinical practice. One of the main challenges lies in the effective organization and preprocessing of multi-institutional cohorts of large-scale data needed to obtain clinically reliable algorithms. Ultimately, robust AI-powered multidimensional disease profiling through imaging, clinical, and molecular data in patients with cancer will allow clinical strategies to be improved and will further bridge the gap to truly personalized medicine.
PhyloFunc: phylogeny-informed functional distance as a new ecological metric for metaproteomic data analysis | 679f0fee-9f6d-4fdc-a3fb-464bf531899d | 11817178 | Biochemistry[mh] | The human body is inhabited by trillions of microorganisms that collectively shape the functionality of our complex internal ecosystems, primarily through protein activity . The integration of taxonomic composition, functional activity, and ecological processes offers valuable insights into the dynamic responses of the microbiome . Metaproteomics stands out among omics approaches by directly measuring protein expression, providing unparalleled insights into the functional activities of microbial communities . Beta-diversity is a metric to measure the degree of dissimilarity between ecological communities , and it has been applied to metaproteomics datasets by assessing the variation in abundances of function or taxonomic composition inferred by protein biomass . Microbiome functional beta-diversity refers to the variation in functional gene/protein patterns between microbial communities across different environments or conditions . It has been used to gain valuable insights into the patterns and variations across different metaproteomes . Beta-diversity metrics are typically calculated without incorporating phylogenetic information . Commonly used beta-diversity measures, such as Bray–Curtis dissimilarity , Jaccard distance , and Euclidean distance, rely solely on the abundance or presence/absence of taxa or functions within communities. These metrics are useful for comparing compositional features across samples, but do not account for the phylogenetic relationships or evolutionary history of microbial taxa within the communities. In contrast, the UniFrac distance was specifically developed for microbiome composition data, considering both the abundance of taxa (or their presence/absence in the case of unweighted UniFrac), and their phylogenetic relatedness. This makes UniFrac more biologically meaningful in reflecting microbial community differences compared to methods that rely solely on taxa abundances . However, the UniFrac distance is measured by taxonomic presence/abundance and does not incorporate any functional information of the microbiome. Functional compensatory effect between phylogenetically related species describes a dynamic process where closely related species adjust their functional roles to maintain overall ecosystem functionality. Therefore, the inclusion of functional information can provide a more ecologically relevant perspective compared to relying solely on species abundances. More recently, computational algorithms phylogenetic robust principal-component analysis and Phylogenetic Organization of Metagenomic Signals have been developed to integrate phylogenetic information with metagenomic functional profiles . Despite these advancements, these diversity metrics are derived without considering whether these genes are expressed or not. In other words, these distance metrics reflect the beta-diversity of a microbiome sample set based on genomic contents rather than the actual expressed functions. Here, we developed a novel computational pipeline termed Phylo genetically informed Func tional (PhyloFunc) distance to address the above issue by integrating evolutionary relationships with functional attributes to generate functional dissimilarity distances between metaproteomes. We applied PhyloFunc distance to a toy dataset and two real metaproteomic case datasets to evaluate its performance. 
The results demonstrate that PhyloFunc can group metaproteomes exhibiting functional compensatory behavior between phylogenetically related taxa more closely. Additionally, this approach proved more sensitive to specific environmental responses that were undetectable using other beta-diversity metrics. Finally, we developed a Python package of PhyloFunc to implement and streamline the calculation of the PhyloFunc distance algorithm. In addition to supporting custom phylogenetic trees, the package includes an embedded UHGG tree, enabling users to bypass tree input when their protein group IDs are based on the UHGG database.

The algorithm of PhyloFunc distance

Consider a microbiome sample set as a metacommunity of a total of $S$ species. Metaproteomics analysis can be performed on each of the samples, and a phylogenetic tree of the $S$ species in the metacommunity can be obtained using data from metagenomics, 16S rRNA gene sequencing, or by subsequently retrieving the 16S rRNA gene sequences from databases after inferring taxonomy from metaproteomics data. A phylogeny-informed taxon-function dataset can therefore be summarized (Fig. A). Subsequently, the PhyloFunc distance can be computed as the sum, over all phylogenetic tree nodes, of the between-sample functional distances, each weighted by taxonomic abundance and by the branch length of the tree. We define the PhyloFunc distance $\mathrm{PiF}_{ab}$ between two microbiome samples $a$ and $b$ as follows:

$$\mathrm{PiF}_{ab}=\sum_{i=1}^{N} l_i \, d_{i(ab)} \, p_{ia} \, p_{ib} \qquad (1)$$

where $N$ is the total number of nodes of the phylogenetic tree ($N \geq S$), $l_i$ is the branch length between node $i$ and its "parent," and $p_{ia}$ and $p_{ib}$ represent the relative taxonomic abundances of samples $a$ and $b$ at node $i$. $d_{i(ab)}$ is the metaproteomic functional distance of node $i$ between samples $a$ and $b$, measured by the weighted Jaccard distance between the proteomic contents of taxon $i$:

$$d_{i(ab)}=1-\frac{\sum_{j}^{\phi}\min(F_{ja},F_{jb})}{\sum_{j}^{\phi}\max(F_{ja},F_{jb})} \qquad (2)$$

where $\phi$ denotes the total number of functions and $F_{ja}$ and $F_{jb}$ represent the normalized functional abundances of the $j$th function in samples $a$ and $b$, respectively. A more detailed explanation of the calculation process of the PhyloFunc distance is provided in the "Methods" section, as well as in a step-by-step demonstration in Supplementary Figs. S1 and S2. We argue that PhyloFunc is a highly informative metric incorporating hierarchical information of taxon-specific functionality and phylogeny of functions. This contrasts with taxon-only, function-only, or taxon-function table-based metrics, each of which overlooks important relationships between features. First, we demonstrate the strength of PhyloFunc in accounting for the evolutionary relatedness of functions in a synthetic toy dataset. This toy dataset is comprised of three samples, each containing six proteins.
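To make the computation concrete, the following minimal sketch applies Eqs. (1) and (2) to a hard-coded tree with three leaves and one internal node. It assumes that the taxonomic abundance and functional profile of an internal node are obtained by summing over its descendant leaves; the numbers are illustrative, and this sketch is not the published phylofunc package.

```python
import numpy as np

def weighted_jaccard_distance(f_a, f_b):
    """Eq. (2): weighted Jaccard distance between two functional profiles."""
    denom = np.maximum(f_a, f_b).sum()
    if denom == 0:
        return 0.0
    return 1.0 - np.minimum(f_a, f_b).sum() / denom

def phylofunc_distance(nodes, sample_a, sample_b):
    """Eq. (1): sum over tree nodes of branch length x functional distance
    x relative taxonomic abundances of both samples at that node."""
    total = 0.0
    for node in nodes:
        d_i = weighted_jaccard_distance(sample_a["func"][node["name"]],
                                        sample_b["func"][node["name"]])
        total += (node["branch_length"] * d_i *
                  sample_a["abund"][node["name"]] * sample_b["abund"][node["name"]])
    return total

# Hypothetical tree: leaves T1-T3, internal node N2 = {T1, T2}; the root is
# omitted because it has no parent branch.
nodes = [
    {"name": "T1", "branch_length": 0.1},
    {"name": "T2", "branch_length": 0.1},
    {"name": "T3", "branch_length": 0.6},
    {"name": "N2", "branch_length": 0.4},
]

def make_sample(leaf_funcs):
    """Build per-node functional profiles and relative taxonomic abundances."""
    funcs = dict(leaf_funcs)
    funcs["N2"] = leaf_funcs["T1"] + leaf_funcs["T2"]         # aggregate internal node
    totals = {n: f.sum() for n, f in funcs.items()}
    grand_total = sum(totals[t] for t in ("T1", "T2", "T3"))  # total leaf biomass
    abund = {n: totals[n] / grand_total for n in funcs}
    funcs = {n: (f / f.sum() if f.sum() else f) for n, f in funcs.items()}
    return {"func": funcs, "abund": abund}

# Two functions per taxon; sample B swaps the functional roles of T1 and T2,
# mimicking functional compensation between closely related taxa.
sample_a = make_sample({"T1": np.array([3.0, 0.0]),
                        "T2": np.array([0.0, 3.0]),
                        "T3": np.array([2.0, 2.0])})
sample_b = make_sample({"T1": np.array([0.0, 3.0]),
                        "T2": np.array([3.0, 0.0]),
                        "T3": np.array([2.0, 2.0])})

print(f"PhyloFunc distance = {phylofunc_distance(nodes, sample_a, sample_b):.3f}")
```

In this toy example the internal node N2 contributes nothing to the distance because the swapped functions of T1 and T2 cancel out at the aggregated level, which is exactly the compensatory behavior the metric is designed to down-weight.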
These proteins are "annotated" to three different taxa and two different functions (Fig. B, , ). The phylogenetic tree specific to the dataset indicates that taxa T1 and T2 are more closely related. First, if we only consider the taxonomic or functional abundances of the metaproteomes, we can sum up protein abundances to obtain taxon-only or function-only tables as in Fig. B. Another common approach involves calculating protein-level functional distances, where proteins are represented as taxon-specific features (Fig. C). Finally, we introduce the phylogeny-informed taxon-function dataset as would be required for PhyloFunc, as shown in Fig. D. In the toy dataset, by considering only one dimension (either taxonomic or functional abundances), we can design an extreme scenario where the combined profiles of all three samples are identical. Naturally, in such a case, the distances between the samples calculated from taxon-only and function-only datasets are zero (Fig. E), indicating that taxon-only and function-only data may not capture variability among the samples under certain circumstances. Next, based on the taxon-function dataset, we observed that distances between sample pairs were consistently identical (Fig. F). In other words, samples are uniformly different from each other when assuming that each protein is equally significant and functions independently. However, as we complemented the dataset with a phylogenetic tree which contains five nodes (including three leaves and two internal nodes) and simulated the weights of branches N2T1 and N2T2 as smaller than N1T3 (i.e., the genetic dissimilarity between T1 and T2 is less than that between T1 or T2 and T3), phylogeny-informed taxon-function data were integrated, enabling the computation of the PhyloFunc distance (details illustrated in Supplementary Figs. S1 and S2). The distance between samples S1 and S2 became smaller, while the distance between samples S1 (or S2) and S3 became larger (Fig. G). Functional compensation occurs when taxonomically related species undergo functional alterations that allow them to maintain ecosystem processes despite changes in the species' own functionality. This demonstrates that PhyloFunc sensitively reflects such a mechanism, as functional compensation was intentionally designed to occur between S1 and S2 in the toy dataset.

Proof of principle of PhyloFunc using a synthetic mouse gut microbiome dataset

We next demonstrate that the result presented with our synthetic toy dataset holds true in real-world microbiomes, by analyzing a metaproteomic dataset of mouse gut microbiomes. These mice were inoculated with a synthetic consortium consisting of 14 or 15 gut bacterial strains (differentiated by the absence or presence of Bacteroides cellulosilyticus) and subjected to diets containing different types of dietary fibers. This metaproteomic dataset involved mice allocated to two distinct dietary groups: one fed with HiSF-LoFV (upper tertile of saturated fat content and lower tertile of fruit and vegetable consumption) and the other fed with food supplemented with pea fiber (PEFi). The metaproteomic samples collected on the 19th day of feeding, which exhibited the greatest variation between the samples according to Patnode et al., were chosen for our evaluation of the PhyloFunc method. We performed the database search using MetaLab 2.3 based on the author-provided dataset. The full-length 16S rRNA sequences of the 15 strains were used to construct a phylogenetic tree (Supplementary Fig.
S3), and functional annotation was performed against the eggNOG 6.0 database . Subsequently, we generated the phylogenetic-tree informed taxon-function table of this specific dataset (see the “ ” for details). We compared the performance of PhyloFunc with three abundance-based distance metrics (Euclidean distance, Bray–Curtis dissimilarity, and Binary Jaccard distance) that use taxon-function data tables as input, i.e., the three methods cannot be informed by phylogenetic information. After computing the three conventional distance metrics and PhyloFunc distance across all samples, the PCoA method was used to visualize and reduce the dimensions of the metrics to show the functional beta-diversity between different samples and groups (Fig. A). For all metrics, we observed clear separations between two diet groups, i.e., the HiSF-LoFV group (represented by brown points) and the PEFi group (purple points). Samples were also distinguished by 14-member communities (circle points) and 15-member communities (triangle points). The HiSF-LoFV group showed the contrast between 14 and 15 species communities by all 4 distance metrics. However, the contrast within the PEFi group was much smaller in the PhyloFunc PCoA result, whereas it appeared more pronounced in PCoA plots of the other three metrics. To explore the underlying ecological origination of this phenomenon, we first aggregated each function across the seven Bacteroides and Phocaeicola species (i.e., Bacteroides caccae , B. cellulosilyticus , Bacteroides finegoldii , Bacteroides ovatus , Bacteroides thetaiotaomicron , Parabacteroides massiliensis , Phocaeicola vulgatus ) to form a Bacteroides supergroup while maintaining the functional profiles of the other eight taxa unchanged. This outcome is reflected in the PCoA plots shown in Fig. B, and we observed that the result corresponding to PhyloFunc was similar to that of the original dataset, whereas PCoA results from the other three distance measures display reduced distances between the two types of communities fed with PEFi. This indicates that PhyloFunc distance effectively recognizes the functional compensatory effect of Bacteroides , whereas other distance measures may magnify the impact of functional differences between Bacteroides on ecosystem functionality. Finally, to further validate this observation, we calculated these four distances separately for Bacteroides -specific data (Fig. C) and data excluding Bacteroides species (Fig. D) before implementing the PCoA analyses. The results showed that PCoA plots based on Bacteroides (Fig. C) closely resemble those obtained from the original dataset, maintaining a distinct separation between the two PEFi communities across three conventional methods. However, when all Bacteroides data were excluded (Fig. D), the three conventional PCoA plots exhibited clustering outcomes similar to PhyloFunc distances calculated from grouped Bacteroides functions. This indicates that when features were considered independent in this dataset, Bacteroides played a predominant role in shaping the PCoA outcome. In contrast, PhyloFunc demonstrates its capability for hierarchical management of functional alterations among taxonomically related species by weighing functional dissimilarities between these taxa with smaller branch lengths. Since PhyloFunc is derived from the original UniFrac concept but replaces taxonomic intensity differences with metaproteomic functional distances at the nodes, we further compared it with UniFrac (Supplementary Fig. S4). 
The results showed that UniFrac failed to achieve clear separations in PCoA, further demonstrating that PhyloFunc provides superior resolution by integrating functional dimensions. This highlights its advantage in capturing microbial community dynamics more effectively.

PhyloFunc exhibits sensitivity to in vitro human gut microbiome drug responses

To further demonstrate the effectiveness of the PhyloFunc distance, we applied our PhyloFunc metric to a more complex multidimensional dataset from a live human gut microbiome exposed in vitro to different drug treatments. The experiments were performed using the RapidAIM assay. In this experiment, a human gut microbiome sample was subjected to five different drugs: azathioprine (AZ), ciprofloxacin (CP), diclofenac (DC), paracetamol (PR), and nizatidine (NZ). These drugs were administered at three distinct concentrations: low (100 μM), medium (500 μM), and high (biologically relevant drug concentrations as reported by Li et al., 2020), and three technical replicates were performed for each treatment. We reanalyzed the dataset using a database generated by metagenomic sequencing of the microbiome's baseline sample and performed metagenomic tree construction and taxon-function table preprocessing (see the "Methods" section). Taxonomic and functional annotations resulted in a taxon-function table containing 973 OGs and 99 genera. The phylogenetic tree constructed using a maximum likelihood method comprises 228 nodes (including 115 leaf nodes), along with the calculated weights for 228 branches (Supplementary Fig. S5). After calculating the PhyloFunc distance and the other three distances based on the preprocessed data, hierarchical clustering and PCoA were both implemented for each drug to compare the functional analysis ability of the metrics (Fig. A, , , Supplementary Figs. S6, S7, S8, S9). For all samples, hierarchical clustering results based on the four distance metrics effectively reflected the impact of drugs on the diversity of the gut microbiome. The PhyloFunc method showed consistency with other metrics and could effectively group samples corresponding to each drug (Fig. A). This was particularly evident for drugs CP, DC, and NZ, which had marked effects on microbiome functional profiles. This consistency and effectiveness in grouping further proved the method's validity and robustness. Furthermore, we observed that samples stimulated with a high concentration of PR (PR.H), which did not show clustered responses at the taxon-function level with the other three distance-based methods, were effectively clustered by the PhyloFunc method. For a more detailed comparison, we subdivided the data for each drug and compared the effects of each drug group to the control group (NC) on microbial diversity. The results for PR are shown in Fig. B, , , while those for the other drugs are presented in the supplementary materials (Supplementary Figs. S6, S7, S8, S9). For the high concentration of PR, the PhyloFunc distance method grouped together the microbiome samples that showed only weak responses to PR relative to the control group (NC). Meanwhile, the PCoA results indicated that the PhyloFunc distance method can distinguish different concentrations of the PR drug from the NC in Fig. C. However, it was evident from the PCoA results that there were larger overlapping regions between the PR and the NC samples when using the other methods (Fig. C). We examined the statistical significance of the clustering by comparing distances between replicates to distances between groups (Fig. D).
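As a generic illustration of the downstream analysis used throughout this section, the sketch below builds a conventional Bray–Curtis distance matrix from a made-up taxon-function table and then runs hierarchical clustering and PCoA on it; a PhyloFunc distance matrix can be substituted at the same step. The sample identifiers and data are hypothetical, and this is not the exact pipeline used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa

# Hypothetical taxon-function table: rows = samples, columns = taxon-specific functions
ids = ["NC_1", "NC_2", "NC_3", "PR.H_1", "PR.H_2", "PR.H_3"]
rng = np.random.default_rng(6)
table = rng.random((len(ids), 50))

# Conventional metric: Bray-Curtis dissimilarity between samples
condensed = pdist(table, metric="braycurtis")
dist = squareform(condensed)

# Hierarchical clustering (average linkage) on the same distances
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

# Principal coordinates analysis (PCoA) of the distance matrix
ordination = pcoa(DistanceMatrix(dist, ids))
print(labels)
print(ordination.samples[["PC1", "PC2"]])
```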
It can be observed that for the high concentration of PR, the between-group PhyloFunc distance is significantly higher than the between-replicate PhyloFunc distance, which indicates that a drug response has been detected. The other three metrics showed no statistical significance in this comparison. For the same set of comparisons performed on the other four drugs (Supplementary Figs. S6, S7, S8, S9), it becomes evident that the PhyloFunc method achieves superior or equivalent levels of significance in detecting drug responses. We further performed PERMANOVA tests using the human gut microbiota datasets, analyzed the effects of different compounds separately, and assessed the differential separation of groups across varying concentrations (Supplementary Tables S1, S2, S3, S4, S5). Results showed that within the compound groups PR, NZ, DC, and CP, PhyloFunc demonstrated the lowest p-value among all four metrics, or at least a p-value equivalent to one other metric in one case. However, for AZ, the Binary Jaccard distance showed the lowest p-value. Even in this case, PhyloFunc still demonstrated PERMANOVA significance, along with the highest R2 and F values across all four distance measurements. Despite the overall merits of PhyloFunc over other metrics shown in this dataset, we argue that its strength does not lie in achieving the greatest discrimination among groups compared to other metrics. Instead, it stands out in integrating phylogenetic and functional information to provide deeper ecological insights, which can sometimes manifest as sample discrimination, as demonstrated in this case.

PhyloFunc shows higher predictive power in analyzing microbial responses

To further evaluate the predictive power of PhyloFunc in comparison to conventional distance metrics, we employed classification algorithms (KNN, MLP, SVM) to construct machine learning models. These models were built based on the four different distance metrics to predict the identity of drugs. Due to the limited sample size, leave-one-out cross-validation was applied to evaluate the accuracy of the different models and to compare their performance, resulting in the accuracy comparison depicted in Fig. . For each classification algorithm, we fine-tuned the parameters as detailed in Supplementary Table 6. All three classification algorithms showed that PhyloFunc resulted in higher, or at least equivalent, predicted accuracy compared to models based on the other three distance metrics.

Streamlined functional distance calculation with the PhyloFunc package

To enhance the broad applicability of PhyloFunc, we have developed a user-friendly Python package of PhyloFunc ( https://pypi.org/project/phylofunc/ ), which includes two primary functions: PhyloFunc_distance for calculating the distance between a pair of samples and PhyloFunc_matrix for computing a distance matrix across multiple samples. The package offers flexibility with two input options for phylogenetic trees. First, users can provide custom phylogenetic trees in Newick format, constructed from their sample-specific metagenomics or 16S rRNA gene amplicon sequencing data, enabling broader applicability for various research contexts (as we have demonstrated using the synthetic mouse gut microbiome dataset and the in vitro human gut microbiome datasets, Fig. & Fig. ).
Second, it incorporates an embedded phylogenetic tree (bac120_iqtree_v2.0.1.nwk) from the UHGG database as the default input, enabling users to bypass sequencing metagenomic data when their metaproteomic search relies on the UHGG database. We compared the results between UHGG-based and metagenomics-based trees and show highly reproducible results (Supplementary Fig. S10 versus Fig. A). To further support users, we have provided a step-by-step tutorial on GitHub ( https://github.com/lumanottawa/PhyloFunc/tree/main/1_PhyloFunc_package_tutorial ), which includes detailed instructions, example input file formats, and implementation guidelines. This comprehensive package and its accompanying resources are designed to remove barriers to computational analysis with PhyloFunc, enabling researchers, including those without a bioinformatics background, to easily integrate it into their metaproteomics and microbiome studies.
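Based on the description above, a typical call into the package might look like the sketch below. The function names (PhyloFunc_distance, PhyloFunc_matrix) are taken from the text, but the argument names, file paths, and table layout shown here are assumptions for illustration only; the GitHub tutorial linked above documents the actual signatures and input formats.

```python
# Assumed usage sketch of the phylofunc package; argument names and file formats
# are hypothetical -- consult the package tutorial for the exact interface.
from phylofunc import PhyloFunc_distance, PhyloFunc_matrix

# PhyloFunc distance between a single pair of samples, using a custom Newick tree
# built from the study's own metagenomic or 16S rRNA data (hypothetical paths).
d_ab = PhyloFunc_distance("my_tree.nwk", "taxon_function_table.csv",
                          sample_a="Sample_A", sample_b="Sample_B")

# Pairwise distance matrix across all samples; per the description above, the tree
# argument can be omitted to fall back on the embedded UHGG tree
# (bac120_iqtree_v2.0.1.nwk) when protein groups use UHGG identifiers.
dist_matrix = PhyloFunc_matrix("taxon_function_table.csv")
print(dist_matrix)
```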
The metaproteomic samples collected on the 19th day of feeding, which exhibited the greatest variation between the samples according to Patnode et al., were chosen for our evaluation of the PhyloFunc method. We performed the database search using MetaLab 2.3 based on the author-provided database. The full-length 16S rRNA sequences of the 15 strains were used to construct a phylogenetic tree (Supplementary Fig. S3), and functional annotation was performed against the eggNOG 6.0 database. Subsequently, we generated the phylogenetic-tree-informed taxon-function table of this specific dataset (see the " " for details). We compared the performance of PhyloFunc with three abundance-based distance metrics (Euclidean distance, Bray–Curtis dissimilarity, and binary Jaccard distance) that use taxon-function data tables as input, i.e., the three methods cannot be informed by phylogenetic information. After computing the three conventional distance metrics and the PhyloFunc distance across all samples, the PCoA method was used to visualize and reduce the dimensions of the metrics to show the functional beta-diversity between different samples and groups (Fig. A). For all metrics, we observed clear separations between the two diet groups, i.e., the HiSF-LoFV group (represented by brown points) and the PEFi group (purple points). Samples were also distinguished by 14-member communities (circle points) and 15-member communities (triangle points). The HiSF-LoFV group showed a contrast between the 14- and 15-species communities with all four distance metrics. However, the contrast within the PEFi group was much smaller in the PhyloFunc PCoA result, whereas it appeared more pronounced in the PCoA plots of the other three metrics. To explore the underlying ecological origin of this phenomenon, we first aggregated each function across the seven Bacteroides and Phocaeicola species (i.e., Bacteroides caccae, B. cellulosilyticus, Bacteroides finegoldii, Bacteroides ovatus, Bacteroides thetaiotaomicron, Parabacteroides massiliensis, Phocaeicola vulgatus) to form a Bacteroides supergroup while maintaining the functional profiles of the other eight taxa unchanged. This outcome is reflected in the PCoA plots shown in Fig. B, and we observed that the result corresponding to PhyloFunc was similar to that of the original dataset, whereas the PCoA results from the other three distance measures displayed reduced distances between the two types of communities fed with PEFi. This indicates that the PhyloFunc distance effectively recognizes the functional compensatory effect of Bacteroides, whereas other distance measures may magnify the impact of functional differences between Bacteroides on ecosystem functionality. Finally, to further validate this observation, we calculated these four distances separately for Bacteroides-specific data (Fig. C) and data excluding Bacteroides species (Fig. D) before implementing the PCoA analyses. The results showed that the PCoA plots based on Bacteroides (Fig. C) closely resemble those obtained from the original dataset, maintaining a distinct separation between the two PEFi communities across the three conventional methods. However, when all Bacteroides data were excluded (Fig. D), the three conventional PCoA plots exhibited clustering outcomes similar to the PhyloFunc distances calculated from grouped Bacteroides functions. This indicates that when features were considered independent in this dataset, Bacteroides played a predominant role in shaping the PCoA outcome.
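For readers who want to reproduce this kind of ordination outside of R, the PCoA step applied to any of the four distance matrices can be scripted directly. The snippet below is a generic sketch of classical PCoA (Gower double-centering plus eigendecomposition), not the authors' ade4-based workflow, and the input matrix holds illustrative values only.

```python
import numpy as np

def pcoa(dist_matrix, n_axes=2):
    """Classical principal coordinates analysis on a square, symmetric distance matrix."""
    d = np.asarray(dist_matrix, dtype=float)
    n = d.shape[0]
    # Gower double-centering of the squared distances
    a = -0.5 * d ** 2
    centering = np.eye(n) - np.ones((n, n)) / n
    b = centering @ a @ centering
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1]                 # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pos = np.clip(eigvals[:n_axes], 0, None)          # ignore small negative eigenvalues
    coords = eigvecs[:, :n_axes] * np.sqrt(pos)
    explained = pos / eigvals[eigvals > 0].sum()
    return coords, explained

# Example with a toy 4-sample PhyloFunc-style distance matrix (illustrative values)
D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])
coords, explained = pcoa(D)
print(coords, explained)
```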
In contrast, PhyloFunc demonstrates its capability for hierarchical management of functional alterations among taxonomically related species by weighting functional dissimilarities between these taxa by their smaller branch lengths. Since PhyloFunc is derived from the original UniFrac concept but replaces taxonomic intensity differences with metaproteomic functional distances at the nodes, we further compared it with UniFrac (Supplementary Fig. S4). The results showed that UniFrac failed to achieve clear separations in PCoA, further demonstrating that PhyloFunc provides superior resolution by integrating functional dimensions. This highlights its advantage in capturing microbial community dynamics more effectively. To further demonstrate the effectiveness of the PhyloFunc distance, we applied our PhyloFunc metric to a more complex multidimensional dataset from a live human gut microbiome exposed in vitro to different drug treatments. The experiments were performed using the RapidAIM assay. In this experiment, a human gut microbiome sample was subjected to five different drugs: azathioprine (AZ), ciprofloxacin (CP), diclofenac (DC), paracetamol (PR), and nizatidine (NZ). These drugs were administered at three distinct concentrations: low (100 μM), medium (500 μM), and high (biologically relevant drug concentrations as reported by Li et al., 2020), and three technical replicates were performed for each treatment. We reanalyzed the dataset using a database generated by metagenomic sequencing of the microbiome's baseline sample and performed metagenomic tree construction and taxon-function table preprocessing (see the " "). Taxonomic and functional annotations resulted in a taxon-function table containing 973 OGs and 99 genera. The phylogenetic tree constructed by a maximum likelihood method comprises 228 nodes (including 115 leaf nodes), along with the calculated weights for 228 branches (Supplementary Fig. S5). After calculating the PhyloFunc distance and the other three distances based on the preprocessed data, hierarchical clustering and PCoA were both implemented for each drug to compare the functional analysis ability (Fig. A, , , Supplementary Figs. S6, S7, S8, S9). For all samples, hierarchical clustering results based on the four distance metrics effectively reflected the impact of drugs on the diversity of the gut microbiome. The PhyloFunc method showed consistency with the other metrics and effectively grouped samples corresponding to each drug (Fig. A). This was particularly evident for drugs CP, DC, and NZ, which had marked effects on microbiome functional profiles. This consistency and effectiveness in grouping further proved the method's validity and robustness. Furthermore, we observed that samples stimulated with a high concentration of PR (PR.H), which did not show clustered responses at the taxon-function level with the other three distance-based methods, were effectively clustered by the PhyloFunc method. For a more detailed comparison, we subdivided the data for each drug and compared the effects of each drug group to the control group (NC) on microbial diversity. The results for PR are shown in Fig. B, , , while the results for the other drugs are presented in the supplementary materials (Supplementary Figs. S6, S7, S8, S9). For the high concentration of PR, the PhyloFunc distance method grouped the microbiome samples that showed weak responses to the PR drug compared with the control group (NC).
Meanwhile, the PCoA results indicated that the PhyloFunc distance method could distinguish different concentrations of the PR drug from the NC (Fig. C). However, it was evident from the PCoA results that there were larger overlapping regions between the PR and the NC samples when using the other methods (Fig. C). We examined the statistical significance of the clustering by comparing distances between replicates to distances between groups (Fig. D). It can be observed that for the high concentration of the PR drug, the between-group PhyloFunc distance was significantly higher than the between-replicate PhyloFunc distance, indicating that a drug response had been detected. The other three metrics showed no statistically significant difference in this comparison. For the same set of comparisons performed on the other four drugs presented in Supplementary Figs. S6, S7, S8, S9, it becomes evident that the PhyloFunc method achieves superior or equivalent levels of significance in detecting drug responses. We further performed PERMANOVA tests using the human gut microbiota datasets, analyzed the effects of different compounds separately, and assessed the differential separation of groups across varying concentrations (Supplementary Tables S1, S2, S3, S4, S5). Results showed that within the compound groups PR, NZ, DC, and CP, PhyloFunc demonstrated the lowest p-value among all four metrics, or was at least equivalent to one other metric in one case. However, for AZ, the binary Jaccard distance showed the lowest p-value. Even in this case, PhyloFunc still demonstrated PERMANOVA significance, along with the highest R² and F-values across all four distance measurements. Despite the overall merits of PhyloFunc over other metrics shown in this dataset, we argue that its strength does not lie in achieving the greatest discrimination among groups compared to other metrics. Instead, it stands out in integrating phylogenetic and functional information to provide deeper ecological insights, which can sometimes manifest as sample discrimination, as demonstrated in this case. To further evaluate the predictive power of PhyloFunc in comparison to conventional distance metrics, we employed classification algorithms (KNN, MLP, SVM) to construct machine learning models. These models were built based on the four different distance metrics to predict the identity of drugs. Due to the limited sample size, leave-one-out cross-validation was applied to evaluate the accuracy of the different models and to compare their performance, resulting in the accuracy comparison depicted in Fig. . For each classification algorithm, we fine-tuned the parameters as detailed in Supplementary Table 6. All three classification algorithms showed that PhyloFunc resulted in higher, if not equivalent, predicted accuracy compared with models based on the other three distance metrics.
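As a sketch of this evaluation strategy (not the authors' exact script), a leave-one-out k-nearest-neighbour classifier can be run directly on a precomputed distance matrix; hierarchical clustering can likewise be driven by the same matrix via scipy's linkage functions. The matrix and label vector below are placeholders, not study data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut

def loo_accuracy_from_distances(dist_matrix, labels, n_neighbors=1):
    """Leave-one-out accuracy of a KNN classifier fed with a precomputed distance matrix."""
    D = np.asarray(dist_matrix)
    y = np.asarray(labels)
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(D):
        knn = KNeighborsClassifier(n_neighbors=n_neighbors, metric="precomputed")
        # rows = query samples, columns = training samples
        knn.fit(D[np.ix_(train_idx, train_idx)], y[train_idx])
        pred = knn.predict(D[np.ix_(test_idx, train_idx)])
        correct += int(pred[0] == y[test_idx][0])
    return correct / len(y)

# Placeholder: 6 samples from 2 hypothetical treatment groups
rng = np.random.default_rng(0)
X = rng.random((6, 4))
D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)   # any symmetric distance matrix
labels = ["drug_A", "drug_A", "drug_A", "drug_B", "drug_B", "drug_B"]
print(loo_accuracy_from_distances(D, labels))
```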
To enhance the broad applicability of PhyloFunc, we have developed a user-friendly Python package, PhyloFunc ( https://pypi.org/project/phylofunc/ ), which includes two primary functions: PhyloFunc_distance for calculating the distance between a pair of samples and PhyloFunc_matrix for computing a distance matrix across multiple samples. The package offers flexibility with two input options for phylogenetic trees. First, users can provide custom phylogenetic trees in Newick format, constructed from their sample-specific metagenomics or 16S rRNA gene amplicon sequencing data, enabling broader applicability for various research contexts (as we have demonstrated using the synthetic mouse gut microbiome dataset and the in vitro human gut microbiome datasets, Fig. & Fig. ). Second, it incorporates an embedded phylogenetic tree (bac120_iqtree_v2.0.1.nwk) from the UHGG database as the default input, enabling users to bypass sequencing metagenomic data when their metaproteomic search relies on the UHGG database. We compared the results between UHGG-based and metagenomics-based trees and show highly reproducible results (Supplementary Fig. S10 versus Fig. A). To further support users, we have provided a step-by-step tutorial on GitHub ( https://github.com/lumanottawa/PhyloFunc/tree/main/1_PhyloFunc_package_tutorial ), which includes detailed instructions, example input file formats, and implementation guidelines. This comprehensive package and its accompanying resources are designed to remove barriers to computational analysis with PhyloFunc, enabling researchers, including those without a bioinformatics background, to easily integrate it into their metaproteomics and microbiome studies. Metaproteomics is an informative approach to studying the functionality of the human gut microbiome and its implications in human health and disease. Evaluation of beta-diversity is often one of the initial steps in metaproteomics data exploration. However, there has been a lack of a measurement tool that effectively captures the ecology-centric variations in metaproteomics data. The beta-diversity of gut metaproteome samples is influenced not only by the abundance of taxa and taxon-specific functional compositions but also by the phylogenetic relatedness between taxa. Therefore, including phylogenetic information with protein group taxonomic and functional annotations can better empower researchers to explore both the functional and ecological dynamics of microbial communities, offering insights that are easily overlooked when solely considering taxonomic and functional abundances. Here, we proposed a novel beta-diversity metric, PhyloFunc, which provides a comprehensive perspective to better detect functional responses to drugs by incorporating phylogenetic information to inform functional distances. Through a simulated dataset, we illustrated the calculation process and interpretation of the PhyloFunc distance method. This simple toy dataset makes it possible for readers to follow the calculations and understand the hierarchy of the PhyloFunc algorithm more effectively. It hierarchically incorporates the functional abundance of proteins, taxonomic abundance, and the phylogenetic relationship between taxa. As demonstrated by the proof-of-concept toy dataset, as well as a real-world dataset, we report that the PhyloFunc distance can account for the functional compensatory effect among taxonomically related species and offers a more ecologically relevant measurement of functional diversity compared to the three established distance methods tested. Functional compensation can mitigate the impact of species loss or functional changes on the overall ecosystem function, thereby helping maintain ecosystem functions. Research has shown that functional compensation among closely related species harboring functional redundancy is a key mechanism in sustaining ecosystem functions in response to environmental stimuli.
Our PhyloFunc metric is built on such a mechanism, leveraging the functional roles of related taxa to provide a more ecologically relevant measure of beta-diversity. Furthermore, we tested PhyloFunc using a dataset of in vitro drug responses of a human gut microbiome. We first showed that for drugs exhibiting strong effects, the PhyloFunc distance showed agreement with the other distance metrics. Interestingly, we further observed that for drugs exerting milder effects, the PhyloFunc method can detect new responses and achieve better classification evaluation results than the other tested distance measures, providing deeper insights into drug-microbiome interactions. This result suggests PhyloFunc's potential for clinical applications. By offering deeper insights into how various drugs affect the functional ecology of the human gut microbiome, PhyloFunc could be useful in developing personalized medicine approaches, optimizing drug therapies, and understanding the microbial basis of drug efficacy and side effects. Apart from drug-microbiome interactions, the PhyloFunc metric has significant potential across an even broader range of applications. These applications extend to any area where evaluation of microbial ecology responses is required, including but not limited to personalized nutrition, prebiotics/probiotics development, disease diagnostics, etc. In this work, we introduce a novel metric, PhyloFunc, and provide its method of computation. The PhyloFunc metric integrates phylogenetic information with taxonomic and functional data to better capture beta-diversity in gut metaproteomes, offering sensitive insights into microbial ecology responses in health and disease applications. To streamline the calculation of PhyloFunc distances, we developed the Python package PhyloFunc, which automates the process of calculating functional distances between sample pairs and generates comprehensive distance matrices for multiple samples. This enables efficient assessment of metaproteomic functional beta-diversity across datasets.
Data preparation
Metagenomics data processing and taxonomic and phylogenomic analysis
Total genomic DNA from a human stool sample was extracted using the FastDNA™ SPIN Kit with the FastPrep-24™ instrument (MP Biomedicals, Santa Ana, CA, USA). Sequencing libraries were constructed with the Illumina TruSeq DNA Sample Prep kit v3 (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. Paired-end (100-bp) sequencing was performed with the Illumina NovaSeq 6000 at the Génome Québec Innovation Centre of McGill University (Montreal, Canada). The raw reads were quality-filtered to remove adapter and low-quality sequences using fastp v0.12.4 (fastp -q 15 -u 40). The reads were then mapped to the human (hg38; RefSeq: GCF_000001405.39) and phiX reference genomes, and the matches were removed with the Kraken2 v.2.0.9 package. Metagenome assembly of the quality-filtered nonhuman reads was performed with MEGAHIT v1.2.9 using the --presets meta-large --min-contig-len 1000 parameters. For metagenomic binning, the single_easy_bin command of SemiBin v1.5.1 was used. The resulting bins were then assessed for contamination and completeness with DAS Tool v1.1.4, retaining only high-quality bins or metagenome-assembled genomes (MAGs) with < 50% completeness. The assembled contigs were then annotated using the PROkaryotic Dynamic programming Gene-finding ALgorithm (Prodigal) v2.6.3 to predict open reading frames (ORFs).
The contigs were translated into amino acid sequences using the anonymous gene prediction mode (prodigal -p meta) and default parameters. The final 115 MAGs were taxonomically classified using GTDB-Tk v2.1.0 with the r207_v2 release. For the phylogenomic analysis, a maximum-likelihood (ML) tree was constructed de novo using the protein sequence alignment produced by GTDB-Tk. First, the aligned sequences were trimmed using trimAl v1.4.rev15 with the heuristic "-automated1" method, and the ML tree was constructed using the IQ-TREE multicore version 2.2.0.3 COVID-edition with 1000 bootstrap replicates and visualized and annotated using the Interactive Tree Of Life (iTOL) web tool. Lastly, the protein-coding sequences of the MAGs were compiled into a single FASTA file and used as the metagenome-inferred protein database for the metaproteomic search.
16S rRNA data processing
The full-length 16S rRNA sequences of the 15 bacterial strains which consistently colonize animals (Supplementary Table 7) were used to construct a phylogenetic tree, using the Maximum Likelihood method in MEGA v11 with 1000 bootstrap replicates and default parameters.
Metaproteomes database search and taxonomic and functional annotations
Metaproteomic database searches of the mouse gut microbiome data obtained from Patnode et al. (2019) were performed using MetaLab 2.3 based on the author-provided database of the manuscript (patnodeCommunity_Mmus_allDiets_plus_contams_FR.fasta) with default parameters. Briefly, search parameters included a PSM FDR of 0.01, protein FDR of 0.01, and site FDR of 0.01. Minimum peptide length was set to 7. Modifications considered in protein quantification included N-terminal acetylation and methionine oxidation. The analysis also utilized matching between runs with a time window of 1 min. For taxonomic annotation, we used the protein names from the fasta file headers of the author-provided database to infer the taxonomic origins of the proteins. The metaproteomic database search of the RapidAIM-cultured human gut microbiome was performed using MaxQuant 1.6.17.0 with the sample-set-specific metagenomic database (protein-coding sequences of the MAGs), and the match-between-runs option was enabled for label-free quantification with the same default parameters as for the mouse gut microbiome dataset stated above. Taxonomic annotation of the synthetic mouse gut microbiota and human gut microbiome datasets was performed in two consecutive steps.
Root-level orthologous groups (OGs) from the top-1 annotation were used for further analysis, resulting in a seed ortholog annotation coverage rate of 99.80% ± 0.03% (mean ± SD, N = 3 datasets) and an OG annotation coverage rate of 99.50% ± 0.11% (mean ± SD, N = 3 datasets). Furthermore, an additional metaproteomic database search of the same human microbiome was conducted using the UHGG database with MetaLab-MAG 1.0.7, and quantitative analysis was performed with PANDA v1.2.7. The UHGG database includes a phylogenetic tree that is directly compatible with, and can be directly accessed by, the PhyloFunc package. Moreover, for microbiomes analyzed using the UHGG database, genome IDs (corresponding to tree nodes) can be directly inferred from genome-specific protein IDs. Functional annotation was also performed using eggNOG 6.0.
Data preprocessing
From data preparation, we obtained three different data files for each metaproteomic dataset, i.e., a protein group table with abundance information, a taxonomic table, and a functional table with annotation information. First, we filtered out any protein group with the "REV_" indicator in the protein group table, removed contaminant proteins, and included intensities of microbial protein groups based on label-free quantification (LFQ). Based on the taxonomic and functional annotations described above, we aggregated protein abundances by grouping them according to the same functional OG IDs within the same taxonomic lineage. Subsequently, all of the taxa in the taxon-specific functional table were renamed to align with the names of all leaf nodes in the tree file. Simultaneously, the tree was traversed by a recursive method to assign names to all internal nodes to create a branch table. This table included each branch's information such as precedent, consequent, the number of child nodes, and branch length. For calculating PhyloFunc distances, branch length values were extracted from the branch table, corresponding to the rows in the "consequent" column whose values matched the taxon names in the taxon-function table. For the case of the sum of functions across Bacteroides species in the mouse gut microbiome dataset, we utilized a single Bacteroides node instead of the subtree encompassing all of the Bacteroides species. The branch length value for the Bacteroides node was 0.04, which was the length of the branch connecting this subtree. To this end, we gathered the phylogeny-informed taxon-function dataset, which comprised two components: the taxon-function table and the branches table (similar to the illustration in Fig. D).
The calculation process of the PhyloFunc distance and other traditional distances
Both the construction of the phylogenetic tree and the computation of the PhyloFunc distance were implemented through programming in Python. To illustrate the calculation process of the PhyloFunc-based distance most clearly, we employed a simulated dataset as a demonstration (Supplementary Figs. S1 and S2). Briefly, based on the taxon-function data obtained by the preprocessing methods, the relative abundance of each function within each taxon and the relative abundance of each taxon were calculated from taxon-specific protein biomass contributions. Secondly, the relative functional abundances were weighted by their corresponding relative taxonomic abundances and then expanded to represent all nodes up to the root of the phylogeny by summing up each node to get the expanded table.
Similarly, the taxonomic table was converted into an expanded table by summing up all nodes in the phylogenetic tree. Thirdly, functional distances between each sample pair were calculated according to Eq. (2). Finally, each functional distance was weighted by branch length and the relative protein abundances in the sample pair, and PhyloFunc distances between samples were then calculated according to Eq. (1). The other methods of Bray–Curtis dissimilarity, binary Jaccard distance, and Euclidean distance were calculated using the R package "vegan". For binary Jaccard distance, we considered nonzero numbers as 1 in binary and used the parameter "binary" to calculate distances between sample pairs.
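For users who prefer not to re-implement these steps, the released phylofunc package wraps the same computation in PhyloFunc_distance and PhyloFunc_matrix. The call below is only an illustrative sketch: the argument names and file formats are assumptions based on the description above, not the documented interface, so the GitHub tutorial should be consulted for the exact usage.

```python
# Hypothetical usage sketch of the phylofunc package (argument names are assumed,
# not taken from the package documentation).
from phylofunc import PhyloFunc_distance, PhyloFunc_matrix

# A Newick tree built from sample-specific metagenomics/16S data, or the embedded
# UHGG tree (bac120_iqtree_v2.0.1.nwk) when searches were done against UHGG.
tree_file = "bac120_iqtree_v2.0.1.nwk"              # placeholder path
taxon_function_table = "taxon_function_table.csv"   # placeholder path

# Distance between one pair of samples
d_ab = PhyloFunc_distance(tree_file, taxon_function_table, "sample_a", "sample_b")

# Full distance matrix across all samples in the table
dist_matrix = PhyloFunc_matrix(tree_file, taxon_function_table)
print(d_ab)
```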
Evaluation and visualization
Different methods including PCoA, statistical tests, hierarchical clustering, classification algorithms, and PERMANOVA tests were applied to evaluate the performance of the different distances. Details of the evaluation can be found in the figure legends and main text. The PCoA analysis was performed with the R function dudi.pco in the package ade4. PCoA plots were visualized using the R package ggplot2, with the aspect ratio standardized to 1:1 to ensure a consistent comparison. In the PCoA plots of the human gut microbiome dataset, the three replicated points for each group were connected with straight lines and displayed as triangles. Box plots were also visualized using the R package ggplot2. PERMANOVA was performed using the R function adonis2. Hierarchical clustering was performed using the R function hclust, and hierarchical clustering plots were visualized using the R package stats. Based on the normalized PhyloFunc and the other three distance metrics, we selected three standard algorithms (KNN, MLP, and SVM) to construct classification models and employed a leave-one-out cross-validation approach for splitting training and test sets. The distance matrix was used as the input sample data, with the names of the five drugs as classification labels. In each iteration, one sample from the distance dataset was designated as the test set, while the remaining samples were used as the training set to build the classification model. The performance of each model was evaluated by comparing its prediction for the test sample with the true drug classification. Accuracy was calculated as the proportion of correctly classified samples across all iterations. We used the grid search method to obtain the optimal parameters for each classification algorithm and distance method. The primary optimal parameters are presented in Supplementary Table 6, and the corresponding high-accuracy evaluation results are illustrated in Fig. B. The classification models were implemented in Python 3.11, and the Python packages Pandas, NumPy, and sklearn were used. The grid search method for the selection of optimal parameters was implemented with the Python package sklearn.model_selection.
Supplementary Material 1.
Figure S1. The calculation process of PhyloFunc distance, part 1.
Figure S2. The calculation process of PhyloFunc distance, part 2.
Figure S3. The phylogenetic tree of the mouse gut microbiome case dataset.
Figure S4. Comparison of four UniFrac distance metrics applied to the mouse gut microbiome dataset.
Figure S5. The phylogenetic tree of the human gut microbiome case dataset.
Figure S6. Comparison of different distances for the human gut microbiome by PCoA and statistical analysis between Azathioprine (AZ) and control group (NC).
Figure S7. Comparison of different distances for the human gut microbiome by PCoA and statistical analysis between Ciprofloxacin (CP) and control group (NC).
Figure S8. Comparison of different distances for the human gut microbiome by PCoA and statistical analysis between Diclofenac (DC) and control group (NC).
Figure S9. Comparison of different distances for the human gut microbiome by PCoA and statistical analysis between Nizatidine (NZ) and control group (NC).
Figure S10. Comparison of different distance metrics on the same human gut microbiome dataset searched against the UHGG database.
Supplementary tables:
Table S1. PERMANOVA results between Paracetamol (PR) and control group (NC).
Table S2. PERMANOVA results between Nizatidine (NZ) and control group (NC).
Table S3. PERMANOVA results between Diclofenac (DC) and control group (NC).
Table S4. PERMANOVA results between Ciprofloxacin (CP) and control group (NC).
Table S5. PERMANOVA results between Azathioprine (AZ) and control group (NC).
Table S6. List of primary optimal parameters for classification.
Table S7. List of microbial species used to construct the phylogenetic tree.
Artificial intelligence chatbots as sources of patient education material for cataract surgery: ChatGPT-4 versus Google Bard | 5f0ac3dc-a3b6-4b40-bc3c-978e0970ff12 | 11487885 | Patient Education as Topic[mh] | Artificial intelligence (AI) has the potential to transform ophthalmology in a number of ways; disease-based deep learning algorithms are now being used to assist in the diagnosis and evaluation of a range of ophthalmic conditions. Patients undergoing cataract surgery will be increasingly exposed to AI-generated patient education materials, as these supersede traditional sources.
To the best of our knowledge, this is the first time that cataract surgery patient education material generated by Chat Generative Pre-trained Transformer (ChatGPT-4) and Google Bard, its primary competitor, have been compared side by side. ChatGPT-4 fared better overall, scoring higher on understandability metrics and fidelity to the prompt engineering instruction. Patients with different backgrounds and degrees of health literacy are likely to comprehend cataract surgery patient education material provided by ChatGPT-4 more readily than those generated by Google Bard.
Both ChatGPT-4 and Google Bard exhibited a good baseline safety profile for generating responses to cataract surgery frequently asked questions. However, rigorous validations should still be carried out before prematurely deploying large language models into day-to-day clinical practice, due to the possibility of incorrect information.
Artificial intelligence (AI) has advanced significantly since its inception in 1956. With an uptake of 100 million users within the first 2 months of its launch in November 2022, the large language model (LLM) ChatGPT (Chat Generative Pre-trained Transformer) (Open AI, San Francisco, California, USA) set off seismic waves around the world. This was shortly followed by the release of the Google Bard chatbot (Alphabet, Mountain View, California, USA) in March 2023. These generative AI LLMs were were initially pretrained in an unsupervised manner on massive text corpora, including books, articles and other online sources, totaling billions of words. This was followed by some model optimisation for different downstream tasks, a process referred to as ‘few-shot learning’. With the use of statistical word prediction (informed by context and prior words), this architecture enables the processing and transformation of the entire input and context into meaningful human-like text. These general-purpose LLMs are continually proving themselves to be powerful tools for their language generation capabilities, and due to their remarkable adaptability and practicality, have permeated all fields including healthcare. They are continuously evolving, both through their own inherent natural language processing and also through software updates, with OpenAI unveiling its latest version, ChatGPT-4, in March 2023. In comparison to its predecessor, ChatGPT-4 is ‘more reliable, creative and able to handle many more nuanced instructions’, while also being able to perform better in academic and specialised fields. In fact, ChatGPT-4 was able to outperform both ChatGPT-3.5 and other LLMs specifically fine-tuned on medical knowledge (Pathways Language Model and Large Language Model Meta AI 2) in US Medical Licensing Exam and Fellowship of the Royal College of Ophthalmologists Part 2 mock examinations, highlighting its potential as a valuable tool for clinical support and medical education. Google Bard has also undergone numerous software upgrades, with the Gemini Pro LLM being the latest addition in December 2023. AI has the capacity to profoundly transform the field of ophthalmology in numerous ways. Disease-based deep learning algorithms in ophthalmology are already being used to aid diagnosis and assessment of retinal diseases, glaucoma, cataract, corneal diseases and many others. However, AI’s utility in ophthalmology goes beyond simply aiding diagnosis or assessment. It also possesses the ability to transform the manner in which patients receive information and knowledge about their condition or recommended procedure/s. With more than 20 million procedures performed annually worldwide, cataract surgery is one of the most common surgeries. LLM chatbots are being increasingly used by patients and the general public as an alternative source of patient information to printed patient leaflets. It remains unclear whether these are reliable resources in the context of cataract surgery. This study aims to conduct the initial direct comparison of patient education material on cataract surgery produced by ChatGPT (version GPT-4) and its primary competitor, Google Bard. By examining the understandability and actionability of these new information sources, we aim to provide additional reassurance and confidence to healthcare providers as well as patients when using LLM-based patient information.
Ninety-eight frequently asked questions (FAQs) about cataract surgery in English were compiled for this cross-sectional study, conducted in November 2023. These FAQs were drawn from the following five reliable online sources of patient information: the Moorfields Eye Hospital cataract surgery leaflet, the Royal College of Ophthalmologists and the Royal National Institute of Blind People patient information leaflet 'Understanding Cataracts', the UK National Health Service patient information webpage on cataracts, the National Eye Institute patient information webpage on cataracts, and 'Patient' (UK registered trademark) patient information webpage on cataracts. illustrates the question curation flow chart, with 39 excluded duplicates and 20 augmented questions to ensure that the information is clear and comprehensive. A total of 59 remaining questions were divided into three domains: condition (n=15), cataract surgery and preparation for surgery (n=21) and recovery after cataract surgery (n=23). They were then used as input prompts for ChatGPT-4 and Google Bard on 15 November 2023 and 16 November 2023. The statement 'please provide patient education material to the following question at a fifth-grade reading level' was used for 'prompt engineering'. This was followed by one of the 59 questions, each of which was entered into the ChatGPT-4 and Google Bard user interfaces. Examples of this process are provided in . The decision to use a fifth-grade reading level was based on the premise that patient education material should be prepared at a reading level appropriate for sixth grade or lower in order to ensure maximum comprehension and conformity. Prior to inputting each new question, the 'New Chat' feature was used in ChatGPT-4 and Google Bard. This was done in a private browser with a cleared cache to avoid any data leakage or utilisation of previous question prompts and responses, the purpose being to simulate real-world patient enquiries. For each question prompt, the initial ChatGPT-4 and Google Bard output responses were used. To determine the readability of the generated responses and evaluate conformity to the prompt engineering instruction for both ChatGPT-4 and Google Bard, the Flesch-Kincaid Grade Level was calculated using a Flesch-Kincaid calculator. The Flesch-Kincaid Grade Level indicates the level of education required to comprehend a specific text. Since accessory preceding or concluding sentences in responses such as 'Here's some patient education material about conditions that can cause symptoms similar to cataracts, written at a fifth-grade reading level', and 'This material is designed to be easy to understand and engaging for someone at a fifth-grade reading level', affect the Flesch-Kincaid difficulty and were not present in every statement, they were removed for standardisation. Four ophthalmologists (two registrars/residents and two consultants) independently graded the responses for each of the 59 questions using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) Auto-Scoring Form, to obtain understandability and actionability scores of the patient education material responses. The PEMAT-P is a validated instrument that uses 26 binary questions to assess and compare the understandability and actionability of patient education materials. Higher scores, expressed as percentages, indicate superior performance.
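The Flesch-Kincaid Grade Level is a simple function of sentence length and syllable density, so the readability scoring step can be reproduced with an off-the-shelf library. The snippet below is a generic illustration using the open-source textstat package, not the specific calculator used in the study, and the sample response text is invented.

```python
# Illustrative readability check using the textstat package
# (not the calculator used in the study).
import textstat

response = (
    "Cataract surgery is a quick operation. The doctor removes the cloudy "
    "lens in your eye and puts in a clear new one. Most people go home the same day."
)

# Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
grade = textstat.flesch_kincaid_grade(response)
print(f"Flesch-Kincaid Grade Level: {grade:.1f}")

# A simple conformity check against the fifth-grade prompt-engineering target
print("Within target" if grade <= 6 else "Above recommended reading level")
```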
Within this tool, understandability is characterised as the ability of individuals from different backgrounds and with differing levels of health literacy to comprehend and articulate essential messages. Actionability, on the other hand, refers to their capacity to determine actionable steps based on the provided information. Blinding was not feasible in this study, as the PEMAT-P assesses the visual layout as part of its scoring system. Therefore, in order to assess how each chatbot formatted its answers visually, a screenshot of each response was required to preserve the visual layout as presented. Considering these limitations, we believe that this methodology is well suited to its intended purpose. The relevance and accuracy of each chatbot response were also assessed as part of the 'understandability' domain, with a specific binary question, 'the material does not include information or content that distracts from its purpose'. As a secondary measure, these ophthalmologists also evaluated the generated responses for any inaccurate or hazardous information. All data were analysed using IBM SPSS Statistics for Windows, V.24.0 (IBM; released 2016). The normality of the data was assessed using the Shapiro-Wilk test. Inter-rater reliability for PEMAT-P scoring was assessed using an intraclass correlation coefficient, specifically a two-way mixed-effects, absolute-agreement, multiple-raters model. Non-parametric related samples were analysed using a Wilcoxon signed-rank test. A p<0.05 was deemed statistically significant.
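The authors ran these tests in SPSS; for readers scripting an equivalent workflow, the same normality check and paired non-parametric comparison are available in scipy (and a two-way mixed-effects ICC in, for example, the pingouin package). The paired score vectors below are placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Placeholder paired scores: e.g., per-question Flesch-Kincaid Grade Levels
chatgpt4 = np.array([5.1, 6.0, 4.8, 5.7, 6.3, 5.2, 4.9, 6.1])
bard     = np.array([7.9, 8.4, 7.2, 8.8, 9.1, 7.6, 8.0, 8.5])

# Normality of the paired differences (Shapiro-Wilk)
w_stat, p_norm = stats.shapiro(chatgpt4 - bard)
print(f"Shapiro-Wilk p = {p_norm:.3f}")

# Non-parametric comparison of related samples (Wilcoxon signed-rank test)
res = stats.wilcoxon(chatgpt4, bard)
print(f"Wilcoxon statistic = {res.statistic:.1f}, p = {res.pvalue:.4f}")

# Inter-rater reliability could be estimated with a two-way mixed-effects ICC,
# e.g. pingouin.intraclass_corr(data=long_df, targets="response",
#                               raters="grader", ratings="score")
```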
Patients and public involvement
Patients or the public were not involved in this study.
Flesch-Kincaid Grade Levels
ChatGPT-4 Flesch-Kincaid Grade Levels ranged from 2.80 to 8.70, while those for Google Bard ranged from 5.00 to 12.90. Google Bard had a significantly higher mean overall Flesch-Kincaid Grade Level (8.02) compared with ChatGPT-4 (5.75) (z=−6.15, p<0.001). This was also noted in all three domains as shown in .
PEMAT-P understandability and actionability scores
PEMAT-P understandability scores ranged from 53% to 94%, while actionability scores ranged from 0% to 75%. displays the individual mean PEMAT-P understandability and actionability scores of ChatGPT-4- and Google Bard-generated responses. ChatGPT-4 had significantly higher overall PEMAT-P understandability scores in comparison to Google Bard (z=−4.4, p<0.001), while there was no statistically significant difference in overall actionability scores (z=−0.95, p=0.344). presents a comparison of ChatGPT-4 and Google Bard PEMAT-P understandability and actionability mean scores per domain.
To the best of our knowledge, this is the first head-to-head comparative cross-sectional study evaluating the performance of ChatGPT-4 and Google Bard in generating cataract surgery patient education material. ChatGPT-4-generated responses had significantly better PEMAT-P understandability scores in comparison to Google Bard, particularly in areas related to ‘cataract surgery and preparation for surgery’ and ‘recovery after cataract surgery’, with comparable results for the ‘condition’ domain. With regard to PEMAT-P actionability scores, no statistically significant difference was found between ChatGPT-4- and Google Bard-generated responses, both overall and in each individual domain. These findings indicate that patients from different backgrounds and with differing levels of health knowledge are more likely to understand ChatGPT-4-generated patient education material on cataract surgery than Google Bard-generated material. This is in spite of the PEMAT-P tool considering the inclusion and quality of any visual aids in the patient education material, with Google Bard regularly including images while ChatGPT-4 did not. However, when it comes to patients being able to identify what they can do based on this material, no difference was found between the two LLMs. Of note, a concern raised over the past few months in the comparison of these two LLMs is the September 2021 knowledge cut-off date that earlier ChatGPT models had. However, since its integration into Bing in February 2023, ChatGPT is now able to browse the internet in real time to provide up-to-date information, similar to Google Bard. Therefore, in their current forms, both LLMs are able to provide updated and contextually relevant information which reflects real-world updates and developments in cataract surgery. A potential issue with the integration of LLMs into clinical practice is the accuracy and safety of information. LLMs are highly dependent on their training data, and since they were trained using a variety of resources, including unverified internet-based content, inaccuracies could arise if the training data are incorrect, leading to patient harm. As mentioned above, the relevance and accuracy of each chatbot response were assessed as part of the ‘understandability’ PEMAT-P domain. A separate subjective screen conducted by each of the four graders during scoring did not identify any dangerous information (defined as any incorrect or inaccurate information that could lead to patient harm). This provides good confirmation of the safety record of both ChatGPT-4 and Google Bard in generating responses to cataract surgery FAQs. However, rigorous validations should still be carried out before prematurely deploying LLMs into day-to-day clinical practice, due to the possibility of incorrect information. Another way that chatbots can generate misleading or harmful information is through a phenomenon known as ‘AI hallucination’: responses that sound confident despite being nonsensical or unfaithful to the training data. This differs from the above, where the incorrect information provided is consistent with the training data. Reported types of hallucinations include factual errors or inaccuracies, logical fallacies and confabulations (adding irrelevant details on top of a correct answer). However, due to the lack of unified, established terminology, there is extensive inconsistency in the definition and use of the term ‘hallucination’.
Due to these inconsistencies, we did not set out to measure the frequency of this phenomenon. However, in one of its output responses, Google Bard included an image of cervical dilation instead of pupillary dilation, even though the text was discussing pupillary dilation. This can be considered a hallucination. To minimise these ‘hallucinations’, ‘prompt engineering’ was used in this study, as advised by the UK National Cyber Security Centre. ‘Prompt engineering’ enables optimisation and fine-tuning of LLMs for specific functions through the creation of effective inputs. In our study, the preceding phrase ‘please provide patient education material to the following question at a fifth-grade reading level’ was designed to promote inclusivity of patients with poorer literacy levels, by stimulating the generation of patient education material at the recommended fifth-grade reading level. ChatGPT-4 showed higher fidelity to this ‘prompt engineering’ instruction in comparison to Google Bard, with a significantly better mean Flesch-Kincaid Grade Level of 5.75. This was also seen across all three question domains. Of note, although both chatbots used medical jargon only sparingly, with clear explanations where it was used, Google Bard was noted to present longer and more detailed answers, which could have influenced this score. ‘Prompt engineering’ is a useful tool that would enable healthcare providers to use LLMs effectively and safely, allowing them to craft the tone, format and delivery of AI-generated patient education material and to minimise hallucinations. Although LLMs have the potential to bring about significant changes, it is important to approach them with great caution, particularly in the vital context of patient care. An issue with LLM-generated material is the inability to fact-check presented information, as responses are not accompanied by references. Biased responses may, therefore, be inadvertently generated due to biased training data. AI chatbots also lack the ability to assume responsibility or adhere to ethical or moral limitations, hence restricting their current functionality to that of ‘assistive tools’. Our study has a number of limitations. First, while the PEMAT-P scoring system has undergone extensive testing and is validated, it cannot ensure that material that scores highly will be effective with a given patient population, as the PEMAT-P contents might not fully reflect patients’ perspectives. In our study, the patient education material was graded by ophthalmologists with good knowledge and understanding of cataract surgery. It is possible that the way they graded the material differs from how patients would grade it. Furthermore, since ‘prompt engineering’ for patient education material to be at a fifth-grade reading level was used in this study, the results might differ from patient search results on LLM chatbots in the real world. The next step would ideally be a follow-up study testing AI-generated patient education material on cataract surgery in the real world, with patient participation. Another limitation is that PEMAT-P scoring can vary depending on the reviewer’s interpretation. We attempted to minimise this subjectivity in our study through the use of four independent reviewers, with results showing statistically significant fair to excellent inter-rater reliability scores.
As mentioned previously, blinding was not feasible in this study, as the PEMAT-P assesses the visual layout as part of its scoring system. Considering these limitations, we believe that this methodology is well suited to its intended purpose. Finally, the understandability and actionability of the educational materials were measured at one distinct time point. Therefore, longitudinal comparative studies should be conducted to determine improvements, especially since LLM technology is constantly evolving, as evidenced by the imminent arrival of ChatGPT-5. This study provides a strong proof of concept for future deployment of AI in ophthalmology and offers valuable guidance to both patients and healthcare providers in selecting between the two main AI chatbots as sources of educational content on cataract surgery. ChatGPT-4 outperformed Google Bard in terms of overall PEMAT-P understandability ratings and adherence to the prompt engineering instruction. No statistically significant difference was found in the PEMAT-P actionability scores, and no dangerous information was identified for either LLM chatbot. As mentioned above, the next step would ideally be a follow-up study testing AI-generated patient education material on cataract surgery in the real world, with patient participation, alongside longitudinal comparative studies. In particular, it will be important to measure the impact of LLMs on patient education initiatives and to understand their impact on the clinical pathway even before medical consultation. For example, future work should assess whether LLMs improve patient satisfaction or rather cause more preoperative anxiety, and whether they reduce patient visits or consultation time. Studies assessing the accuracy and consistency, along with the hallucination generation rate, of AI-generated patient education material should also be conducted. Furthermore, it is crucial to assess cataract surgery patient education material produced in different languages to measure the worldwide impact of this LLM technology.
Norms for Clinical Use of CXM, a Real-Time Marker of Height Velocity
Study design
The goals of this investigation were to validate CXM as a marker for HV at time of measurement, to describe modifications that optimize the CXM assay, and to establish reference ranges for CXM values in healthy, normally growing infants and children with no known risk factors for impaired growth. The optimized CXM assay was used to reanalyze samples from Shriners Hospitals for Children (SHC) and Oregon Health & Science University (OHSU) in Portland, Oregon, used for the previous CXM study , as well as additional samples from Nemours Children’s Specialty Care, Jacksonville, Florida.
Study procedures
Acquisition of serum, plasma, and dried blood spot (DBS) samples collected from SHC and OHSU clinics in Portland, Oregon, including institutional review board approval, was described previously . The collection of serum and plasma samples from Nemours Children’s Specialty Care, Jacksonville, Florida, was described by Olney et al . The study was approved by the Nemours Florida Institutional Review Board, including the subsequent research use of the samples. Forty-two individuals were recruited from SHC, 220 from Nemours Children’s Specialty Care, and 40 from OHSU. Of those enrolled, 190 participants had single appointments when plasma, serum, or DBS and biometric measurements were collected. Ninety-four participants were evaluated twice approximately 6 months apart when plasma and serum samples were collected. Eighteen individuals were sampled up to 3 times at time points of 0, 6, and 12 months. Two male samples yielded HV of greater than 20 cm per year. They were excluded from HV analysis as outliers because their extreme rate of growth was assumed to be due to measurement error. The characteristics of the participants involved in the study are summarized in ; detailed information is provided in Supplemental Table 1 . Blood samples were collected from 10 nongrowing adults from the SHC site for control purposes. Additionally, blood samples from 34 individuals followed in growth clinics were used for the plasma-serum-DBS comparisons only. Heights were measured on standing, wall-mounted stadiometers (Perspective Enterprises at SHC and Holtain Ltd at Nemours) and calibrated daily by a standard 100-cm rod. At both centers, measurements were completed in a clinical setting in the Pediatric Endocrine and Diabetes clinics by medical assistants specifically trained in accurate measurement techniques. Height and weight measurements were recorded at the time of sampling for each patient. For participants with serial height measurements at least 6 months apart, annualized HV was calculated using the change in height measurements. As described in the original publications, the blood sampling protocols differed at the sites . Samples were taken at the beginning of the study interval for the Nemours samples and at the end of the interval for the SHC and OHSU samples. Almost all samples were taken in the morning for the Nemours participants; many were collected in the afternoon for the SHC and OHSU participants. Sample sizes for tests of CXM marker to HV associations were not determined by a priori power analyses because this was an observational study using convenience samples. Plasma and serum samples were processed in vacutainers (Becton Dickinson No. 368036 and No.
367983, respectively), aliquoted into microcentrifuge tubes, and stored immediately at –20°C or –80°C.
Analytical procedures
The original CXM assay protocol was described previously . It is abbreviated here except for modifications relevant to its optimization.
Serum, plasma dilution, and assay procedure
All serum and plasma samples were thawed and diluted in sample diluent at 1:200 for individuals younger than 18 years or 1:20 for participants older than 18 years. All diluted samples were assayed in duplicate using the same batch lot of plates, calibrators, and quality controls (QC). The calibrator consisted of recombinant noncollagenous domain 1 of human type X collagen purchased from BioMatik.
Dried blood spot sampling
DBS samples were obtained by finger sticks and spotting onto Whatman 903 Protein Saver Cards. DBS cards were then dried for 1 to 4 hours at room temperature, placed in resealable bags containing desiccant packets, and stored at –20°C until assayed. All samples included in this study were assayed in a blinded fashion in duplicate. Information pertaining to these samples can be found in Supplemental Table 1 . All data were graphed using GraphPad Prism software, version 7.03.
Dried blood spot elution and assay procedure
One 3.1-mm punch was taken per pediatric DBS spot and eluted with 250 µL of sample diluent in the well of a sealed polypropylene microplate. The plate was incubated overnight at 4°C on ice to reduce condensation. The elution plate was then placed on a shaker at 450 rpm for 10 minutes at room temperature. Each sample (100 µL) was then measured in duplicate and the CXM concentration determined from a serially diluted rNC1 calibrator curve using a 4-parameter logistic nonlinear regression model fit in BioTek Gen5 software (R² > 0.95 was acceptable). DBS quality controls created in the initial data collection were run with each DBS assay plate and data plotted in . Each result was multiplied by its associated dilution (the calculated dilution factor assumes 1.67 µL plasma per spot assayed) to obtain its equivalent nanogram per milliliter (ng/mL) concentration.
Optimization of collagen X biomarker assay
rNC1 purchased from BioMatik was reconstituted, its actual concentration determined using amino acid analysis and a Qubit 2.0 fluorometer protein analyzer, then diluted to a stock concentration of 700 ng/mL for use in calibrator and quality control–spiked sample preparations. Prior to assay optimization, calibration curves for the CXM assay were prepared by serially diluting 800 pg/mL rNC1 calibrator into sample diluent immediately before running an assay. It was discovered that the rNC1 calibrator can be difficult to dilute with low levels of variance unless vigorously vortexed; therefore, a prediluted set of calibrators was created for assay optimization. A total of 200 mL of each calibrator (800, 400, 200, 100, 50, 25, 12.5, and 0 pg/mL) was created by diluting the rNC1 stock in sample diluent from 700 ng/mL stock preparations. A total of 675 µL of each level was aliquoted into 1.1-mL strip tubes and stored at –20°C for use in future assays. Serum and plasma quality control samples were created by diluting freshly thawed serum or plasma 1:200, aliquoting, and storing in a similar fashion to the calibrators. Before performing a CXM assay, a set of each calibrator and controls was thawed at room temperature, vortexed vigorously for 1 minute, then centrifuged at 1000 rpm for 1 minute to remove droplets that may have adhered to the caps while vortexing.
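The calibrator curves described above were fitted with a 4-parameter logistic model in BioTek Gen5; as a hedged sketch of the same idea, the drc package in R can fit the curve and back-calculate an unknown, with the 1:200 serum dilution applied at the end. The absorbance values and the unknown below are invented for illustration.

```r
# 4-parameter logistic (4PL) calibration fit and back-calculation (illustrative values).
library(drc)

calib <- data.frame(
  conc = c(0, 12.5, 25, 50, 100, 200, 400, 800),              # rNC1 calibrators, pg/mL
  od   = c(0.05, 0.09, 0.14, 0.24, 0.42, 0.74, 1.21, 1.78)    # example absorbances
)

fit <- drm(od ~ conc, data = calib, fct = LL.4())              # 4PL model
summary(fit)

# Interpolate an unknown well from its absorbance, then apply the sample dilution
unknown_od <- 0.55
est <- ED(fit, unknown_od, type = "absolute", display = FALSE) # pg/mL in the well
est[1] * 200 / 1000                                            # ng/mL in serum at a 1:200 dilution
```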
An 8-well multichannel pipette was then used to reverse-pipette 100 µL of each calibrator or quality control set into a CXM 96-well enzyme-linked immunosorbent assay (ELISA) plate in duplicate. A 675-µL set of strip tubes contains enough sample to run three 96-well CXM ELISA plates. This batch production of both calibrators and controls limited the amount of variance that occurs through serial dilution of one calibrator and repeat dilution of each serum or plasma control over time. This lot of calibrators and controls will be used to verify and validate calibrator and QC preparations in the future. The data from our previous publication relied on calibration curves from serially diluted stocks of rNC1 for each run, potentially increasing interassay variability. Two types of QC samples were prepared for assessing the interassay and intra-assay variance as well as validating each CXM assay run. rNC1 controls were created by spiking rNC1 stock into sample diluent in a method similar to the calibrator preparation at concentration levels between the calibrators, namely 700 pg/mL (HQC 700), 250 pg/mL (MQC 250), and 10 pg/mL (LQC 10). Serum and plasma QCs were created by diluting human serum and plasma samples of children with sufficient quantity for bulk dilution 1:200 and aliquoting into strip tubes.
Data analysis
CXM vs age data tables for girls and boys (Supplemental Table 1) were entered into the R software package loaded with the Generalized Additive Model for Location, Scale and Shape (GAMLSS) v5.1-4 statistical package . Subsets of the data were analyzed by sex, and the cutoff for age was set at 21 years. Within the GAMLSS package, the LMS method with the ST3 distribution (selected for lowest global deviance) was used to generate centile curves for the girls' data, whereas BCCGo was used for the boys'. Centile curves were plotted with the normal data superimposed over the curves in Prism version 8.3.0 for Windows (GraphPad Software; www.graphpad.com ). Simple linear regression models, correlations, and the Kruskal-Wallis test for group differences with Dunn post hoc tests were computed in Prism version 8.3.0 and Stata 14 (StataCorp; www.stata.com ). The difference in correlation coefficients was tested using the Fisher Z transformation .
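A minimal sketch of the centile fitting described above with the gamlss package in R; girls is a hypothetical data frame with columns age (years) and cxm (ng/mL), and only the girls' ST3 fit is shown (the boys' fit would swap in BCCGo).

```r
# LMS-style centile curves for CXM vs age using gamlss (hypothetical data frame 'girls').
library(gamlss)

fit_girls <- gamlss(cxm ~ pb(age),
                    sigma.formula = ~ pb(age),
                    nu.formula    = ~ pb(age),
                    family = ST3,     # distribution used for the girls' data in the text
                    data   = girls)

# Plot the 3rd, 10th, 50th, 90th and 97th centiles against age
centiles(fit_girls, xvar = girls$age, cent = c(3, 10, 50, 90, 97))
```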
We performed a variety of technical validation tests after assay optimization to verify that the assay dynamics met accepted standards. CXM calibrators and controls as described in the assay optimization section were run on each plate and were used to generate data for this study. shows the control data generated from 31 96-well ELISA plates run in this study. Average interassay coefficient of variation percentage for serum and plasma controls ranged from 4.8% to 6.3% with a similar variance for HQC 700 (700 pg/mL) and MQC 250 (250 pg/mL). The lowest QC level, LQC 10 (10 pg/mL) exhibited the highest variation of 10.9%, which was expected because this control is very close to the previously determined lower limit of quantification of 5.4 pg/mL. Overall, these data showed a low level of interassay and intra-assay variance both for spiked-rNC1 and diluted serum and plasma samples. The average intraassay variation for all serum, plasma, and DBS data was 3%, 4% and 4%, respectively. The cross-sectional CXM data along with percentile curves (97%, 90%, 50%, 10%, 3%) for each sex were plotted in a scatterplot by age and CXM concentration in and . Percentile curves were calculated using the LMS method within the R software package , which uses the mean (M) and coefficient of variation (S) to summarize the CXM vs age data into a smooth (L) curve . The well-established HV percentile curves published by Kelly and colleagues were superimposed on CXM data in and in green. Similar to established norms, the pubertal growth spurt identified by CXM values was approximately 2 to 3 years earlier for girls compared with boys. The ages of peak growth velocity of our study participants were close to those previously reported for girls and boys by Kelly et al . As shown in and , CXM values decrease in a linear fashion during the transition from the peak HV through the postpubertal growth cessation to nongrowing adults. CXM levels were all less than 1 ng/mL in the 10 participants older than 20 years involved in this study. shows the relationship between CXM and HV values in 110 participants. Compared with our original report, the number of data points is substantially greater (118 vs 44) and the CXM-to-velocity ratio is modestly higher. The slope and corresponding correlation of the HV/CXM lines of best fit for girls is 3.5 CXM ng/mL per cm/year HV ( r =0.82) compared with 3.19 CXM ng/mL per cm/year HV for boys ( r = 0.78; ). The Fisher Z transformation test found the difference in correlation coefficients was not statistically significant ( z = 0.6, P = .54). Because the difference in slopes was trivial, the results were combined in . Tanner staging was performed on 199 individuals who were at least age 6 years at Nemours and previously described . Of the 199, 76 had data from 2 visits, and to maximize sample size the assumption of independence was relaxed and both observations included, yielding a sample of 275 observations . The relationship between CXM levels and Tanner stage is shown in . Both for boys and girls, there was an overall difference in CXM by Tanner stage ( P < .001). For girls, the levels peaked at breast Tanner stage III and are statistically different from girls at all other Tanner stages ( P < .05 for all Dunn post hoc pairwise comparisons). For the boys, levels were not statistically different between Tanner stages I through IV and all are higher than for boys at Tanner stage V ( P < .001). 
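The comparison of the girls' and boys' correlation coefficients above uses the Fisher Z transformation; the helper below reproduces that test in base R. The per-sex sample sizes are placeholders, since they are not restated here.

```r
# Fisher Z comparison of two independent correlation coefficients.
compare_r <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1); z2 <- atanh(r2)            # Fisher Z transform of each r
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))     # standard error of the difference
  z  <- (z1 - z2) / se
  c(z = z, p = 2 * pnorm(-abs(z)))            # two-sided p value
}

# Girls r = 0.82 vs boys r = 0.78; sample sizes here are illustrative only
compare_r(r1 = 0.82, n1 = 60, r2 = 0.78, n2 = 58)
```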
New samples included paired serum and plasma, allowing our original comparison of plasma vs serum concentrations of CXM to be expanded and the relationship between the biomarker and the blood component used for assay to be better defined. shows the similarity of measurements for plasma and serum samples drawn at the same time, indicating that for practical purposes plasma and serum can be used interchangeably for measuring CXM. The majority of samples assayed in this study contained both serum and plasma drawn at the same time for comparison of which the serum value was used to generate normal data in all of the figures. However, only plasma was available for 23 participants and therefore the plasma value was used for analysis. Strong correlations similar to those previously described were observed when retested DBS results were plotted against the combined new and retested plasma and serum biomarker results; therefore, serum and plasma sample data were considered equivalent for this analysis ( and ). It is important to note that although the rNC1-spiked controls do not contain serum or plasma, they exhibit coefficient of variation percentages similar to diluted serum and plasma controls, providing evidence that nonspecific binding of reagents or serum effects are minimal in this assay.
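A minimal sketch of the paired serum-versus-plasma agreement check in R, assuming a hypothetical data frame paired with one serum and one plasma CXM value (ng/mL) per blood draw.

```r
# Agreement between matched serum and plasma CXM values (hypothetical data frame 'paired').
with(paired, cor.test(serum, plasma))          # strength of the association
fit_sp <- lm(plasma ~ serum, data = paired)    # slope and intercept relative to the identity line
coef(fit_sp)
```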
We recently identified CXM as a potential biomarker for length/HV from cross-sectional and longitudinal analyses of a relatively small number of healthy children, 83 and 14, respectively . Since our original publication, we have optimized the CXM assay, retested our original samples, and added many additional participants, bringing the number of children analyzed to 302 for cross-sectional studies and 116 for longitudinal studies. Fourteen children in the longitudinal study had 2 separate HVs calculated from 3 clinic visits and were therefore used twice to generate 130 data points for . Our new results confirm and strengthen our original findings and support the conclusion that CXM is a real-time biomarker for HV. Our expanded data set allows for a more complete definition of CXM as a HV marker and its relationship to conventional HV norms derived from stadiometer measurements. For example, conversion of our cross-sectional data to age-specific LMS percentile curves confirms the remarkably similar pattern of CXM plots to conventional HV norms reported by Kelly et al . The greater range of CXM values compared with conventional norms presumably reflects the smaller number of data points for CXM so that outliers have a greater impact on curves. This effect is illustrated by the 3 girls with very high CXM values in the 9- to 12-year age range, which shifts the curves upward in . Analysis of our expanded longitudinal growth velocity data found a strong correlation of CXM with conventional HV, Pearson r = 0.80 ( P < .001) both for girls and boys. Considering that HV is the integrated product of hundreds of genetic and environmental factors, we believe identifying a marker whose level in girls and boys is 69%/59% explained by HV is remarkable. No previously reported biomarker has shown this level of correlation with HV in normally growing children . The reference ranges generated from LMS analysis and plotted in and are based on a much smaller number of children than are the established norms reported by Kelly and colleagues . We view them as low-resolution representations of CXM variation for normally growing healthy children that are expected to evolve as more studies are conducted and definitive norms are established. We consider them an essential step in the development of CXM as a clinical tool to assess HV. We predict the greatest potential clinical use of CXM will be to monitor changes in HV over time. In contrast to conventional HV determination by stadiometer, which requires taking measurements a minimum of 6 months apart, CXM describes HV in real time. This property makes it potentially ideal for monitoring disease progression or response to growth-stimulating therapies in individual and well-defined cohorts of children in treatment studies. A case in point is the clinical trial recently reported by Savarirayan et al in which children with achondroplasia were treated with daily injections of C-type natriuretic peptide . The reported CXM concentrations measured with our assay mirrored dose-dependent increases in conventionally determined HV, the established standard typically required by regulatory agencies, such as the US Food and Drug Administration. In contrast to the potential utility of CXM in longitudinal studies, the value of single CXM measurements may be limited by the modest variability in the CXM/HV correlation. 
This variability likely reflects the relatively small number of individuals analyzed to date compared with the large number of children whose growth data were analyzed to establish conventional HV norms . Another possible factor is the time of sampling, as we previously described a diurnal variation for CXM with values higher in the morning than in the afternoon . Although most of the Nemours samples, which represent most of the samples, were collected in the morning, many of the SHC and OHSU samples were obtained in the afternoon. So diurnal variation may have contributed to the observed variability. We predict that CXM/HV correlation will improve as more samples are analyzed and sampling protocols become more rigorous. If so, one-time CXM measurements may have potential as a real-time screening tool for children whose growth velocity is outside the normal range. Currently, we propose that CXM data reported here be used as working reference ranges for CXM to assess growth in the pediatric population. There are several technical issues of this study that deserve comment. The first involves how growth velocity is determined and defined. CXM values reflect instantaneous HV at that time of measurement. In contrast, conventional HV values reflect average growth rate during the time period between measurements. The 2 methods are difficult to compare. A useful analogy is measuring blood glucose and glycated hemoglobin in diabetic patients. If one wanted to equate CXM and conventional HV measurements and timing, CXM could be assessed daily for the duration of the measurement period and a value calculated that best correlates with HV from the area under the curve. But this approach is not practical, so we make the assumptions that CXM changes little over a given time span and that a single determination is proportional to the area under the curve, and we use it as a proxy. An exception to this way of thinking may arise during the prepubertal and pubertal years, when sudden changes in HV may occur leading to discrepancy between the 2 methods for tracking HV. Given this possibility, we recommend CXM testing at 2- to 3-month intervals during this period if detecting and/or monitoring the pubertal growth spurt is clinically relevant. Similarly, we suggest that CXM be measured at shorter intervals if it is being used to monitor responses to growth-stimulating therapy. Overall, we stress that CXM measurement and conventional HV determination should be viewed as complementary means to assess HV. CXM values varied by Tanner stage both for girls and boys in whom staging had been performed (see ). The girls peaked at breast Tanner stage III as expected because this coincides with the timing of the pubertal growth spurt. However, we did not find this expected association with boys at Tanner stage IV. We attribute this result to skewing of the data by a small group of Tanner stage IV boys with low CXM levels. We anticipate that boys’ CXM values will end up peaking at genitalia Tanner stage IV as more studies are conducted along these lines. In our previous publication, the relationship between plasma and serum CXM concentrations suggested that plasma had slightly higher levels potentially due to lost CXM in the removed clot from the blood . Testing an additional 217 matched plasma and serum samples to the 115 matched samples from our previous study allowed us to more accurately define the relationship between serum and plasma blood components (see ). 
Our new data suggest that plasma and serum have equivalent CXM values and therefore either serum or plasma can be used for CXM concentration determinations. Despite this result, we still suggest that if a study is to be performed it use only serum or plasma for the sake of consistency. When comparing reassayed DBS samples to the reassayed plasma and serum from our previous study, the relationship between each DBS result and its matched plasma or serum counterpart result matched very closely, with similar variance . Notably, the values of these rerun samples did not differ significantly from the data generated for our previous publication after being stored at –20°C for more than 2 years since they were previously assayed and more than 5 years since the original sample collection. Similarly, the reassayed values for CXM in plasma and serum compared closely with those obtained for our original publication after being stored for 4 years. Samples assayed from Nemours were collected 8 to 12 years prior and stored at –80°C since collection. The CXM concentration of these samples generated comparable results to those in the same age ranges of our previous study, suggesting that the CXM biomarker is stable for at least this amount of time stored at –20°C and below. Significant amounts of serum and plasma from both of these studies are currently stored at –20°C, so it will be possible to reassay these samples in the future to confirm the stability of this biomarker in samples frozen for prolonged periods. The fact that this biomarker is stable once serum/plasma or DBS is stored at –20°C and below for multiple years may mean that archived samples collected and stored below –20°C for other studies completed long ago may be assayed for the CXM biomarker. The correlation of CXM measured in DBS to CXM measured in blood, especially serum, is not as strong as it is for CXM measured in serum vs plasma. This is not surprising given the additional steps involved in obtaining and processing DBS samples compared to analyzing serum or plasma samples. Importantly, we have adopted more stringent sampling and processing protocols to minimize DBS variability, which we expect to improve the correlation of DBS to blood CXM measurements going forward. In summary, we have expanded our observations regarding CXM and HV. We define CXM concentration percentile curves using LMS analysis for normally growing, healthy children from birth to age 20 years. Compared with our initial publication, the substantial increase in sample number allows for more accurate definition and interpretation of CXM levels and percentiles for individuals of a given age and sex. These data provide working reference ranges that can be used to assess individuals with normal and potentially abnormal skeletal growth. We expect CXM to one day become a valuable tool for estimating growth velocity in the clinical setting.
Plant interaction traits determine the biomass of arbuscular mycorrhizal fungi and bacteria in soil
Species interactions are important for maintaining biodiversity, productivity, and resilience of ecosystems (McCann, ; Ratzke et al., ; Tylianakis et al., ). Many species depend on mutualistic relationships for crucial processes such as pollination, dispersal, resource acquisition, or stress alleviation (Allesina & Tang, ; Bascompte et al., ), such that these interactions comprise a key component of a species' niche (Carscadden et al., ). A species' interaction niche, the degree to which it interacts with members of another trophic guild, such as its mutualistic partners, is often described as a continuum between specialism and generalism and has important ecological implications for both guilds (Poisot et al., ). For example, generalist pollinators tend to positively affect plant production (Maldonado et al., ), while specialists can enhance coexistence by reducing competition (Bastolla et al., ). In nature, the presence of a range of interaction niches contributes to biodiversity and community stability (Dehling et al., ; Poisot et al., ). Despite this importance, the definition of a specialist and generalist is not always straightforward (Poisot et al., ; Rohr et al., ). For example, the term specialist is often applied to members of one guild that interact with few partners but also to those selectively interacting with phylogenetically related partners (Bascompte, ; Montesinos‐Navarro et al., ). By contrast, a generalist is commonly defined either as a species with many or diverse interactions. These differences in the definitions of specialism and generalism are problematic because they lead to the pooling of species interaction traits that may vary in their effects on the community. Additionally, mutualistic interactions can be predicted through phylogenetic relationships (Rezende et al., ) or by species traits (Eklöf et al., ; Vázquez et al., ), and generality can be conserved across a species' range (Emer et al., ), yet interactions (particularly those of generalists) can be determined by random encounter probability (related to species' abundances; Vázquez et al., ) and shaped by the local environment (Tylianakis et al., ). Thus, it remains unclear to what extent the local environment shapes species interaction generalism. Resolving the various facets of interaction traits of mutualistic species would improve understanding of the assembly and maintenance of ecological communities. Possibly the oldest mutualism among eukaryotes is that between plants and arbuscular mycorrhizal fungi (AMF), which occurs in more than three‐quarters of vascular plant species and most terrestrial ecosystems (Brundrett & Tedersoo, ). Arbuscular mycorrhizal (AM) plants allocate on average 6% of photosynthetic carbon (C) to obligately biotrophic soil fungi of the subphylum Glomeromycotina (Hawkins et al., ). In exchange, plants receive multiple benefits from AMF, including improved water and nutrient acquisition (Vogelsang et al., ) and pathogen and stress resistance (Begum et al., ; Lutz et al., ). Consequently, AM plants are significant sinks for atmospheric carbon dioxide (Parihar et al., ). While AMF abundance is highly correlated to soil C sequestration in field studies (Wilson et al., ), it is less clear how AMF diversity, largely mediated by plant hosts, influences C allocation to the soil microbial community.
Enhanced understanding of plant interaction traits for AMF may provide insight into how host species affect the diversity and production of soil ecosystems (Bennett & Groten, ). Compared with other mutualisms, plant–AMF interactions are not well understood, partly due to the many stochastic, abiotic, and biotic filters that affect community assembly (HilleRisLambers et al., ; Vályi et al., ). For example, root AMF communities vary based on the available AMF pool (Šmilauer et al., ), which is context‐dependent (Šmilauer et al., ; Tylianakis et al., ) and influenced by soil properties (Gerz et al., ). Locally, host‐specific AMF assemblages suggest that host identity plays a key role in determining AMF composition and biomass (Leff et al., ; Veresoglou & Rillig, ). Increasing evidence points to the role of host traits in plant–AMF niche partitioning. AMF colonization rates correlate with AM plant root traits (Bergmann et al., ), shaping plant interaction niches for AMF. For instance, grasses tend to host more AMF taxa than forbs in grasslands and may also differ in AMF colonization rates and composition (Sepp et al., ; Šmilauer et al., ). AM hosts may adopt various strategies in selecting the number and taxonomic composition of their mutualists because AMF vary in root colonization patterns and nutrient transfer abilities (Horsch et al., ; Lendenmann et al., ). Generalist hosts may benefit from the complementary effects of multiple AMF (Jansa et al., ; Koide, ), but these benefits come with trade‐offs, such as higher carbon costs, especially when cheaters are present (Bever et al., ; Kiers & Denison, ). In some environments, forming specialized interactions with a few beneficial AMF may be advantageous (Werner & Kiers, ). Distinct plant interaction niches for AMF can influence ecosystem C cycling both directly by altering AMF communities and indirectly through AMF‐mediated effects on soil bacterial communities. Up to 40% of photosynthetic C is lost from plant roots as fatty acids (FAs), carbohydrates, and other metabolites, fueling the growth of the AMF mycelium and a complex, yet specific, community of rhizosphere bacteria (Jiang et al., ; Marschner & Baumann, ). Arbuscular mycorrhizal fungi also produce metabolites that alter the bacterial composition of their hyphospheres (Huang et al., ) and nutrient availability in soil (Zhang et al., ). Together, these processes create plant–soil feedback that shapes future plant community assembly (Crawford et al., ), ultimately influencing ecosystem carbon cycling on larger spatial and temporal scales. While plant interaction niches play a crucial role in structuring soil communities, detailed knowledge of their effects on C allocation to AMF and bacterial communities remains sparse. Here, we characterized plant interaction niches with AMF, which we define using a range of diversity metrics to encompass the various facets of specialism/generalism. We sought to understand how these interaction traits affect AMF and bacterial biomass in rhizosphere soil. Firstly, we generated different biotic and abiotic filters on AMF community assembly by growing eight plant species under two experimental conditions. We test the hypothesis (Hyp 1 ) that plant species' interaction roles as AMF generalists or specialists are stable to these changes, comparing the multidimensional plant interaction niches under different experimental conditions by Procrustes analyses. Secondly, we sought to learn how plant interaction niches affect AMF biomass in rhizosphere soil. 
We expected interaction generalist hosts to be capable of greater C allocation to AMF due to their enhanced nutrient supply resulting from complementarity effects of their AMF communities (Jansa et al., ; Koide, ). In turn, we expected that higher rhizosphere AMF biomass would lead to a greater root‐encounter probability and a greater proportion of the root system being colonized, increasing interaction generalism. We therefore test the hypothesis (Hyp 2 ) that host interaction generalism is positively associated with AMF biomass in rhizosphere soils. We quantified the abundance of the neutral lipid fatty acid (NLFA) 16:1ω5 as a proxy for AMF biomass and modeled its response to plant interaction generalism, while accounting for plant phylogeny, root, and shoot biomass in a Bayesian framework. Finally, we explore the effect of host interaction generalism with AMF on bacterial biomass in the rhizosphere. While plant and AMF species may have differential effects on bacterial communities (Scheublin et al., ; Söderberg et al., ), we expected a positive relationship between soil bacterial biomass and plant interaction generalism due to complementary effects of many AMF on bacterial species. We therefore test the hypothesis (Hyp 3 ) that soil bacterial biomass would increase in response to plants' interaction traits associated with host generalism for AMF. We estimate bacterial biomass using phospholipid fatty acid (PLFA) analysis of bacterial biomarkers and model the effect of plant interaction traits, accounting for plant phylogeny, root, and shoot biomass in a Bayesian framework. Our study reveals how plant interaction traits affect the productivity of soil ecosystems, contributing to the understanding of how changes in biodiversity affect ecosystem C cycling. Glasshouse experiments We conducted two glasshouse experiments, differing only in soil substrate and abiotic conditions, to characterize plant interaction niches for AMF and test hypothesis 1, enabling us to assess whether interaction niches are sensitive to soil and environmental conditions. Experiment 2 was used to determine whether interaction traits affect AMF and bacterial biomass in rhizosphere soil. We selected eight co‐occurring plant species from the pasture site where field soil was collected. In each experiment, five replicates per plant species were grown in mesocosms, with each mesocosm consisting of a single plant seedling in a potting mix containing field‐collected soil as the AMF inoculum source. Three plant‐free mesocosms were used as controls. To create different filters on AMF community assembly, we collected field soil in different seasons and altered the potting mix composition for experiments 1 and 2. All mesocosms were maintained in a glasshouse for 16 weeks. At harvest, we collected the aboveground plant biomass and roots to determine dry weight. We sampled rhizosphere soil and randomly subsampled from the roots for later lipid extraction from both substrates and collected a small random subsample from the roots for later DNA extraction. For experiment 1, only root samples for DNA analysis were collected as described. Details on the study site, glasshouse experiment, and harvest can be found in Appendix : Section . Characterizing the AMF community To identify AMF in plant root samples, we extracted DNA and amplified the internal transcribed spacer 2 (ITS2) region of the eukaryotic ribosomal DNA by polymerase chain reaction (PCR) using primers ITS3 and ITS4 (Tedersoo et al., ). 
The PCR products were sequenced on the Illumina MiSeq platform. The resulting amplicon sequence variants (ASVs) were assigned a fungal taxonomy using the UNITE 8.2 (2020) database (Nilsson et al., ). We filtered the data to contain only sequences assigned to the subphylum Glomeromycotina, analyzing each ASV as a proxy for AMF species (Fu et al., ). To test whether the soil AMF community significantly differed between the soils collected in different seasons, we applied a permutational analysis of variance (PERMANOVA). See Appendix : Section for details of molecular, bioinformatics, and sampling completeness steps. Defining the plant– AMF interaction niche Interaction partner diversity has multiple components, and each can be measured in different ways (Morris et al., ). To comprehensively describe plant interaction niches for AMF, we calculated eight diversity metrics based on AMF sequences from plant roots, encompassing different numeric and phylogenetic components of of α‐, β‐, and γ‐diversity. These included mean AMF richness and Shannon's diversity index per plant species (numeric α‐diversities), along with the average mean phylogenetic distance (MPD) for all replicates per species (phylogenetic α‐diversity). We also calculated the proportion of core AMF species (those present in ≥60% of replicates) per species (β[core]) and the number of compositional units of AMF per species (β[CU]) to represent numeric β‐diversity. The mean UniFrac distance per species was used as a measure of phylogenetic β‐diversity. Finally, by pooling replicates per species, we calculated the total number of unique AMF (numeric γ‐diversity) and total MPD (phylogenetic γ‐diversity). Details of interaction niche metrics are in Appendix : Section . Stability of plant interaction niche for AMF To examine if plant interaction niches for AMF were stable under different environmental conditions, we compared plant–AMF interaction niches in experiments 1 and 2. For each plant species, we created a table of the absolute values of the diversity metrics in each experiment and applied symmetric Procrustes analysis (Peres‐Neto & Jackson, ) in vegan (Oksanen et al., ) to test the correlation of the plants' interaction niches in the two experiments. A permutational test using the maximal number of permutations and the function protest was used to assess the statistical significance of each correlation. To visualize plant–interaction niches for AMF in the two experiments, diversity metrics were scaled to vary between 0 and 1, from most specialist to most generalist plant species and plotted as stacked radar plots. Quantifying microbial biomass To test whether the widths of plant interaction niches (“specialism/generalism”) affected C allocation to the soil microbial community, AMF and bacterial biomass were quantified using neutral lipid and phospholipid fatty acid (NLFA & PLFA) analysis. The NLFA 16:1ω5 is strongly correlated with AMF structures in roots and soil (Sharma & Buyer, ) and serves as a reliable proxy for C allocation to AMF, as AMF cannot synthesize FAs and depend on the host for FA C14:0 (Luginbuehl et al., ). Soil bacterial biomass was estimated using 31 bacterial PLFA biomarkers (Appendix : Table ). Lipids were extracted from lyophilised rhizosphere soil and root samples in experiment 2 following Lewe et al. with modifications described in Appendix : Section . Soil bacterial biomass and AMF biomass in soil and roots are reported as mean ± standard deviation. 
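As a hedged illustration of the diversity components and the Procrustes comparison described in the Methods, the sketch below uses vegan and picante in R; asv (a samples-by-ASV abundance matrix restricted to Glomeromycotina), tree (the matching phylogeny) and the per-experiment niche matrices are assumed object names, not study data.

```r
# Per-sample AMF diversity components from an ASV table and phylogeny (assumed objects).
library(vegan)
library(picante)

richness <- specnumber(asv)                    # numeric alpha diversity: ASV richness
shannon  <- diversity(asv, index = "shannon")  # numeric alpha diversity: Shannon index
mpd_obs  <- mpd(asv, cophenetic(tree))         # phylogenetic alpha diversity: MPD
unif     <- unifrac(asv, tree)                 # phylogenetic beta diversity: UniFrac distances

# Procrustes test of niche similarity between experiments, given matrices of the
# eight diversity metrics per plant species (niche_exp1, niche_exp2; assumed objects)
protest(niche_exp1, niche_exp2, permutations = 9999)
```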
Significant differences among groups were assessed using ANOVA or, when assumptions were violated, a Kruskal–Wallis test.
Effect of host interaction niches on C allocation to AMF and bacteria
To test whether host interaction traits for AMF influenced C allocation to AMF and bacteria in soils, we used Bayesian linear mixed modeling. Since diversity metrics are somewhat interdependent, we applied principal components analysis (PCA) to extract uncorrelated linear recombinations of the eight diversity metrics used to characterize plant interaction niches, calculated per replicate. Principal components (PC) 1–3 collectively described 87.6% of the variation in plant niche space for AMF. We therefore modeled AMF and bacterial biomass in soils as a function of PC1, PC2, and PC3. To account for possible effects of plant biomass on C allocation to soil microbes, we included plant root and shoot biomass or their ratio (root:shoot) as covariates in the models. We also included total AMF biomass of the plant's root system as a covariate to account for possible differences in soil AMF biomass due to variation in root AMF biomass. Because plant species were unevenly distributed across three families, we accounted for phylogenetic nonindependence among samples by including a phylogenetic covariance matrix as a random effect. Plant species identity was also included as a random effect to account for plant functional traits not explained by phylogeny or root and shoot biomass. The best-fit model was selected based on convergence and accuracy criteria, including posterior predictive checks and leave-one-out cross-validation (LOO-CV), from all possible variable configurations (Vehtari et al., ). For the best-fit model, we tested our hypotheses that plant interaction generalism with AMF increases soil AMF biomass (Hyp2) and soil bacterial biomass (Hyp3) by computing evidence ratios (i.e., the ratio of the posterior probability of each hypothesis against its alternative) for the model parameters associated with plant interaction generalism (PC1, PC2, and PC3). Using the same approach, we also tested whether specific diversity metrics were a better fit and selected the metric explaining the most variance along each PC axis. AMF and bacterial biomass were then modeled as functions of those diversities. All Bayesian models were fitted in brms (Bürkner, ). Details can be found in Appendix : Section .
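A minimal sketch of the Bayesian mixed model and evidence-ratio test described above, using brms; the data frame dat, the phylogenetic covariance matrix A, and the column names are assumptions, and the sampler settings are illustrative rather than those used in the study.

```r
# Soil AMF biomass modelled against interaction-niche PCA axes with phylogenetic
# and species-level random effects (assumed objects: dat, A; column phylo holds
# species names matching the rows of A).
library(brms)

fit <- brm(
  soil_amf_nlfa ~ PC1 + PC2 + PC3 + root_biomass + shoot_biomass + root_amf_nlfa +
    (1 | gr(phylo, cov = A)) + (1 | species),
  data   = dat,
  data2  = list(A = A),
  family = gaussian(),
  chains = 4, cores = 4, iter = 4000
)

loo(fit)                    # leave-one-out cross-validation for model comparison
hypothesis(fit, "PC1 > 0")  # evidence ratio for a positive effect of the first niche axis
```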
We sampled rhizosphere soil and randomly subsampled from the roots for later lipid extraction from both substrates and collected a small random subsample from the roots for later DNA extraction. For experiment 1, only root samples for DNA analysis were collected as described. Details on the study site, glasshouse experiment, and harvest can be found in Appendix : Section . AMF community To identify AMF in plant root samples, we extracted DNA and amplified the internal transcribed spacer 2 (ITS2) region of the eukaryotic ribosomal DNA by polymerase chain reaction (PCR) using primers ITS3 and ITS4 (Tedersoo et al., ).
Plant species have stable interaction niches for AMF The eight plant species differed in their interaction niches with AMF, as evidenced by large differences in both absolute (Appendix : Tables ) and relative (Figure ) diversity metrics. Also remarkable was the similarity of plant interaction niches under both experimental conditions, despite the two soils having significantly different AMF pools at the start of the experiment (PERMANOVA R 2 = 0.25, F = 2.02, p = 0.019).
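The PERMANOVA just quoted asks whether AMF community composition differs between the two seasonal soil inocula. As an illustration only (the original test was presumably run in R), the sketch below builds Bray–Curtis dissimilarities from fabricated count tables and runs a permutation test, assuming scikit-bio is available; sample sizes, IDs, and season labels are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(7)

# Hypothetical AMF count tables for soils collected in two seasons (6 samples each)
season_a = rng.poisson(lam=20, size=(6, 30))
season_b = rng.poisson(lam=12, size=(6, 30))
counts = np.vstack([season_a, season_b]).astype(float)
rel = counts / counts.sum(axis=1, keepdims=True)

# Bray-Curtis dissimilarities between samples, then a permutational ANOVA on season
dm = DistanceMatrix(squareform(pdist(rel, metric="braycurtis")),
                    ids=[f"s{i}" for i in range(12)])
grouping = ["autumn"] * 6 + ["spring"] * 6
result = permanova(dm, grouping, permutations=999)
print(result)
```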
Niches were similar even though the plants hosted very different taxa and AMF communities in the two experiments (Appendix : Figure ), suggesting considerable stability of interaction niche for AMF in the plant species we studied (Figure ; Table ). For example, in both experiments, the grasses Holcus lanatus and Agrostis capillaris were interaction generalists for AMF, with high numeric α‐diversity, while the Asteraceae members Achillea millefolium and Cichorium intybus were specialists relative to the other plant species tested. In contrast, Lolium arundinaceus was a phylogenetic generalist, characterized by high phylogenetic diversities, while Poa cita had intermediate diversity levels. Permutational Procrustes analysis of the absolute values of the eight numeric and phylogenetic diversities for plant species confirmed that plant interaction niches with AMF were significantly correlated across both experimental conditions (Table ), supporting hypothesis 1 and indicating niche stability. AMF biomass in rhizosphere soil increases with phylogenetic plant interaction generalism All plant species translocated substantial amounts of C into the AMF mycelium. The AMF biomarker NLFA 16:1ω5 was present in all root and rhizosphere soil samples, as well as in small amounts in plant‐free control soils (0.32 ± 0.29 nmol g −1 DW soil). Across rhizosphere soils, the AMF biomarker varied significantly from 5.71 ± 2.56 to 37.61 ± 13.07 nmol g −1 DW soil ( H = 15.8, df = 7, p = 0.027). However, similar amounts of the AMF biomarker in the roots of all species suggested comparable AMF colonization (Appendix : Table ), with values varying from 1.45 ± 0.62 μmol g −1 DW root in C. intybus to 4.81 ± 1.39 μmol g −1 DW root in Poa cita ( F [7, 28] = 2.35, p = 0.051). The PCA of the diversity metrics revealed that numeric measures of α‐ and β‐diversity (richness, Shannon's diversity, β[CU], and β[core]) were strongly associated with PC1, which explained 52.2% of the variation in the plant‐AMF interaction niche space. PC2 explained 23.4% of the variation and corresponded to phylogenetic β‐diversity (UniFrac), as well as numeric and phylogenetic γ‐diversity. PC3, largely influenced by phylogenetic α‐diversity (MPD), accounted for an additional 12.0% of the variation (Appendix : Figure ). Our model for the AMF biomarker NL 16:1ω5 indicated strong evidence that PC3 and plant root biomass had significant positive effects on AMF biomass in rhizosphere soil (Figure ; Appendix : Section ). The final model explained 35% of the variation of the AMF biomass in rhizosphere soil (Bayes R 2 = 35.4 ± 10.5%). The retention of root biomass but not total root AMF biomass as a covariate in the final model suggests that C allocation to AMF is greatest for species with large root systems, regardless of AMF colonization levels. However, the effect of root biomass was more variable than that of PC3. These results provide partial support for hypothesis 2, indicating that while interaction generalist hosts allocate more C to soil AMF, this relationship is primarily driven by the phylogenetic α‐diversity aspect of interaction generalism. Bacterial biomass in rhizosphere soil decreases with numeric plant interaction generalism Rhizosphere bacterial biomass varied significantly among plant species ( H = 15.5, df = 7, p = 0.003) ranging from 14.66 ± 4.39 nmol g −1 DW soil for P. lanceolata to 31.00 ± 6.30 nmol g −1 DW soil for Cichorium intybus (control soil: 14.02 ± 4.52 nmol g −1 DW soil; Appendix : Table ). 
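The H statistics reported here come from Kruskal–Wallis tests comparing biomass across the eight plant species. A minimal Python sketch with invented NLFA concentrations (not the study data) shows the calculation:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(5)

# Hypothetical NLFA 16:1w5 concentrations (nmol per g dry soil), five replicates
# per plant species; species differ in their central tendency.
species_means = [6, 9, 12, 15, 18, 22, 28, 36]
groups = [rng.normal(loc=m, scale=0.3 * m, size=5) for m in species_means]

h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.1f}, df = {len(groups) - 1}, p = {p_value:.3f}")
```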
The best model for bacterial biomass explained 49% of the variance (Bayes R 2 = 49.2 ± 9.0%) and, contrary to our third hypothesis, showed a strong negative effect of PC1 on bacterial biomass. Bacterial biomass in rhizosphere soils was affected by plant shoot and, to a lesser extent, root biomass (Figure , Appendix : Section ).
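The Bayes R 2 values quoted for these models are typically computed per posterior draw as the variance of the fitted values divided by the sum of that variance and the residual variance, then summarized as mean ± SD across draws. A small sketch with simulated draws standing in for the fitted model objects (not the authors' brms output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: posterior fitted values (draws x observations) and residual draws
n_draws, n_obs = 2000, 40
fitted = rng.normal(loc=1.0, scale=0.8, size=(n_draws, n_obs))
residuals = rng.normal(scale=0.9, size=(n_draws, n_obs))

var_fit = fitted.var(axis=1)
var_res = residuals.var(axis=1)
bayes_r2 = var_fit / (var_fit + var_res)   # one R2 value per posterior draw

print(f"Bayes R2 = {bayes_r2.mean():.2f} +/- {bayes_r2.std():.2f}")
```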
We found that pasture plant species exhibit stable interaction niches for AMF, even under varying environmental conditions. Our comprehensive characterization of plant–AMF interaction niches provides novel insight into how plant niche partitioning for interaction partners affects C allocation to soil microbial communities. We show that C allocation to AMF and bacteria is associated with different aspects of the plant interaction niche. Further, we show that interaction generalism had opposite effects on AMF and bacterial biomass in soils. Below, we discuss these results in detail and explore how plant interaction niches for AMF may impact ecosystem C cycling. We found remarkable similarity in plant species' interaction niches in experiments 1 and 2, supporting our first hypothesis, despite that different edaphic conditions and AMF inoculum pools used in the two experiments generated substantial differences in the taxonomic composition of AMF communities of plant species. Although rhizosphere AMF communities respond to edaphic conditions (Davison et al., ) and available AMF pools (Van Geel et al., ), the stability of plant interaction niches suggests that plants exhibit fundamental interaction niches for AMF. This observation is consistent with findings from plant-pollinator networks, where species retain their interaction niches moving from their native to alien ranges (Emer et al., ). However, given that only eight plant species in two experiments were compared, further work is needed to confirm the general stability of plant interaction traits for AMF. Nonetheless, our findings suggest that plant interaction generalism for AMF could serve as a useful functional trait (Funk et al., ) for understanding how interactions with soil organisms drive ecosystem processes. Our multidimensional approach to interaction generalism allowed us to resolve niche partitioning among generalist hosts for AMF partners. We found that generalists partitioned interaction trait space through variation in numeric and phylogenetic AMF diversity, which likely involves distinct trade-offs. Niche partitioning may occur as plants select the most beneficial AMF partners (Werner & Kiers, ) or interact with AMF exhibiting diverse nutrient acquisition strategies (Powell & Rillig, ).
Thus, plant niche partitioning for AMF partners may significantly contribute to maintaining ecosystem functional diversity (Dehling et al., ), enhancing ecosystem resilience to environmental change (Turnbull et al., ). We found some support for our second hypothesis that plant interaction generalism for AMF is positively related to AMF biomass in the rhizosphere. However, the increase in soil AMF biomass was driven by the phylogenetic α‐diversity aspect of host interaction generalism. Phylogenetically diverse AMF communities are linked to higher variability in traits like hyphal growth (Hart & Reader, ) and nutrient acquisition (Horsch et al., ). This may suggest that complementarity among AMF taxa increased C allocation to the rhizosphere. Alternatively, interaction generalists hosting diverse AMF taxa may have been less able to downregulate C flow to less favorable mutualists (Grman, ) making them more susceptible to cheaters (Kiers & Denison, ). Indeed, the significant positive effect of β(CU), which reflects heterogeneity of AMF among replicates of a host species, supports the idea that generalist hosts may have been less selective for beneficial AMF. Root biomass, rather than AMF biomass in roots, was an important covariate. While root traits (e.g., diameter, branching) influence plant interaction niches for AMF (Bergmann et al., ; Ramana et al., ), our results likely reflect that plants with higher root biomass provide more habitat for AMF (Sweeney et al., ). Greater habitat availability can reduce competition, favoring higher AMF diversity (Bergmann et al., ; Mony et al., ). Given the role of AMF in C sequestration into the soil organic C pool (Zhu & Miller, ), the relationship between soil AMF biomass and the phylogenetic diversity aspect of plant interaction generalism highlights the importance of generalist plants in regulating C flux between the atmosphere and biosphere. Contrary to our third hypothesis, we found that interaction generalist plants were associated with lower bacterial biomass in rhizosphere soils. The interactions between plants, AMF, and bacteria in the hyphosphere and rhizosphere are complex, with both plants and AMF releasing compounds that can affect bacterial taxa either positively or negatively (Bharadwaj et al., ; Changey et al., ). Furthermore, AMF and soil bacteria often compete for resources, and AMF can outcompete bacteria in the rhizosphere as AMF hyphae can significantly reduce bacterial access to nutrients (Bukovská et al., ). Indeed, the effect of AMF on bacteria strongly depends upon the nutrient status of the host plant and AMF (Huang et al., ; Lanfranco et al., ). The positive effect of nutrient limitation on plant C allocation to mycorrhizas is well known (Huang et al., ). Under nutrient‐limited conditions, plants hosting large AMF communities may generate strong competitive effects on rhizosphere bacteria. In our study, nutrient limitation was likely, as mesocosms consisted primarily of sand with only small amounts of field soil as inoculum and no mineral nutrient supplementation. Despite ample light, the relatively small plant size at harvest suggests nutrient stress. Root and shoot biomass were significant covariates in our bacterial biomass model, indicating that larger plants were associated with larger bacterial communities. Together, these findings suggest that competition between AMF and bacteria for C limited bacterial biomass in our study. 
We sought a better understanding of plant interaction niches for AMF and their effects on soil microbial biomass. We demonstrate that, despite variation in environmental conditions, plant interaction niches for AMF were stable relative to other plants in their community. This aligns with niche theory and other studies of plant functional traits (Funk et al., ) and interaction traits in other types of networks (Emer et al., ). However, under field conditions, we expect realized plant–AMF interaction niches to be shaped by various filters on community assembly including biotic and stochastic factors like priority effects and plant–soil feedbacks (HilleRisLambers et al., ). Under the nutrient-limited conditions of our experiments, we found that plants with high phylogenetic interaction generalism were associated with higher soil AMF biomass, while high numeric interaction generalism was linked to lower bacterial biomass, suggesting strong AMF-bacterial competition for C in the rhizosphere. These findings align with well-described patterns in community and ecosystem ecology, such as greater fungal-to-bacterial biomass (Wardle et al., ) and plant-mycorrhizal dependence (Huang et al., ) under nutrient limitation. Nonetheless, over 50% of the variance in AMF and bacterial biomass remains unexplained, suggesting that other factors may also play important roles. We propose that plant interaction niches for AMF are a promising new avenue to enhance understanding of how plant traits alter key ecosystem functions, such as C cycling. The authors declare no conflicts of interest. |
Multi-Omics Analysis of the Gut-Brain Axis Elucidates Therapeutic Mechanisms of Guhong Injection in the Treatment of Ischemic Stroke | 826bc201-93a6-45ed-8858-c4bbbe36b420 | 11855775 | Biochemistry[mh] | Ischemic stroke (IS), a prevalent neurological disorder, poses a significant threat to human life. Between 1990 and 2019, there was a 70% increase in the global absolute incidence of stroke and an 85% rise in its prevalence , which is partially attributable to both population growth and aging. Nevertheless, the rising age-standardized incidence of IS among individuals aged 18 to 50 (increasing by 50% in the past decade) has received significant attention . It is mainly caused by thrombosis resulting from cerebral atherosclerosis, resulting in vascular stenosis and occlusion. The formation of a thrombus reduces cerebral perfusion, causing brain ischemia and hypoxia, ultimately leading to necrosis and the softening of brain tissue . The multifaceted pathological mechanisms of IS involve various factors, including aberrant brain tissue metabolism, oxidative stress, free radical production, inflammatory response, and apoptosis . Research on IS has progressed from a single, brain-centric perspective of IS to a more comprehensive “whole body” approach. Increasing evidence substantiates the function of intestinal microbiota in gut–brain axis signaling after stroke . An examination of the cellular and molecular immune mechanisms active in the gut–brain axis inflammatory pathway has revealed that disrupted gut microflora, altered intestinal microenvironments, and chronic diseases can exacerbate the prognosis of IS . Alterations in the gut–brain axis have been associated with the pathogenic mechanisms of various diseases, including neurodevelopmental disorders, neurodegenerative diseases, psychiatric disorders, and cerebrovascular accidents such as stroke . Recent research has shown a correlation between the gut microbiota and IS by means of the gut–brain axis, which influences the pathogenesis of stroke . Consequently, the intestinal microbiome has become a promising therapeutic target for protecting brain function after a stroke. Accumulating evidence indicates that the gut microbiota and short-chain fatty acids (SCFAs) play a pivotal role as key signaling molecules in the interaction between the digestive tract and the central nervous system . Guhong injection (GH) is a compound preparation consisting of safflower ( Carthamus tinctorius L.) extract and acetylglutamine, serving as a multi-target drug therapy that aligns with the combination drug model in modern medicine . It has been authorized by the China Food and Drug Administration for the management of cerebrovascular diseases, including cerebral blood supply deficiency, cerebral thrombosis, cerebral embolism, and convalescent-stage cerebral hemorrhage. GH combines the characteristics of Western medicine and traditional Chinese medicine, exhibiting anticoagulant, antithrombotic, microcirculation-improving, and anti-oxidative stress properties . Brain metabolomics studies have shown that GH can ameliorate metabolic disorders in ischemic stroke rats by regulating the glutamate–glutamine cycle, glycolysis, nucleic acid metabolism, the TCA cycle, and phospholipid metabolism . GH could attenuate myocardial ischemia–reperfusion injury by activating GSTP and suppressing the ASK1-JNK/p38 pathway . The microbiota–gut–brain axis is intricately linked to the physiology and pathology of both the digestive tract and the central nervous system. 
However, the therapeutic potential of GH in IS, possibly mediated by the intestinal microbiota, and the relationships between the gut–brain axis, biomarkers, and target proteins have not been elucidated so far. Our research objective is to elucidate the mechanism of action of GH in treating IS using a middle cerebral artery occlusion (MCAO) rat model. This will be achieved through describing a multi-pronged approach that includes 16S rRNA gene sequencing, metabolomics, network pharmacology, and Western blot (WB). This integrated method aims to demonstrate the impacts of GH on the gut microbiota, biomarkers, SCFAs, and target proteins, thereby revealing that these effects may be achieved through the gut–brain axis. 2.1. TTC Staining and Neurological Deficit The protective effect of GH on IS was assessed through the determination of infarct volume and neurological deficits. TTC staining was performed on brain tissue, where red represented normal tissue and white indicated the ischemic region. Relative to the SHAM group, animals subjected to MCAO exhibited a distinct white area, confirming the successful establishment of the model ( A). The results revealed that pretreatment with GH and NM significantly improved neurological deficits and infarct volume in the MCAO rats ( A−C). The infarct volume in the GH and NM (nimodipine injection) groups was significantly reduced compared with that in the MCAO group, suggesting that GH may have the potential to improve blood circulation. 2.2. Effects of GH on Gut Microbiota of MCAO Rats In this study, high-throughput sequencing produced 2,148,528 optimized sequences, which served to assess the function and composition of gut microbiota. α-Diversity analysis demonstrated that species diversity in the GHB group exceeded that of the MCAO group, as evidenced by the Simpson and Shannon indices. Additionally, species richness in the GHB group, reflected by Chao1 and Ace indices, exceeded that of the MCAO group ( A−D). Principal coordinate analysis (PCoA) was employed to evaluate the impact of GH on the β diversity of gut microbes, where the proximity on the PCoA plot reflected the similarity in species composition ( E). The number of ASV corresponding to each phylum and genus level was recorded. Intestinal microbial composition analysis at the phylum level indicated a predominance of Firmicutes in the SHAM group and Proteobacteria in the MCAO group. The results for the GHB group were comparable to those of the SHAM group at the phylum level. LEfSe analysis was performed at the genus level. Lactobacillus was predominant in both the SHAM and GHB groups, while Escherichia-Shigella was predominant in the MCAO group. Notably, the MCAO group showed an increase in pathogens or conditionally pathogenic bacteria such as Escherichia-Shigella , Enterobacteriaceae , and Enterococcus ; however, GH treatment led to an increased abundance of Lactobacillus and Bacillus , which are considered to be SCFA-producing bacteria ( F−H). 2.3. Effects of GH on SCFAs of MCAO Rats The gut microbiota, along with its metabolites, particularly SCFAs, plays a crucial role in preserving the equilibrium of the intestinal microecosystem and enhancing the integrity of the mucosal barrier. Consequently, a quantitative assessment of SCFAs was performed . In the MCAO group, we observed that the levels of acetic, propionic, isobutyric, butyric, isovaleric, and valeric acids were significantly lower than those in the SHAM group . 
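For reference, the α-diversity indices reported in Section 2.2 (Shannon, Simpson, Chao1) are simple functions of an ASV count vector. The sketch below is an illustrative Python calculation on fabricated counts and is not the authors' sequencing pipeline; the bias-corrected Chao1 form shown is one common variant.

```python
import numpy as np

# Hypothetical ASV read counts for a single fecal sample
counts = np.array([410, 250, 120, 60, 30, 12, 5, 3, 1, 1, 1, 2, 0, 0])
counts = counts[counts > 0]
p = counts / counts.sum()

shannon = -(p * np.log(p)).sum()          # Shannon diversity
simpson = 1.0 - (p ** 2).sum()            # Gini-Simpson diversity
s_obs = len(counts)                       # observed richness

# Chao1 richness estimator from singleton (F1) and doubleton (F2) counts
f1 = (counts == 1).sum()
f2 = (counts == 2).sum()
chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected form

print(f"S_obs={s_obs}, Shannon={shannon:.2f}, Simpson={simpson:.2f}, Chao1={chao1:.1f}")
```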
In the GHB group, the amounts of acetic, propionic, isobutyric, butyric, isovaleric, and valeric acids were notably elevated compared with the SHAM group . 2.4. Metabolomics Analysis 2.4.1. Multivariate Data Analysis LC-MS was utilized to identify potential metabolites and systematically elucidate the therapeutic mechanism of GH in IS. The total ionic chromatograms (TICs) of serum samples, obtained from negative and positive ion modes, are shown in . To evaluate the overall metabolomic changes in IS rats following GH treatment, PCA and PLS-DA models were employed to investigate the trends in metabolic profile alterations among the SHAM, MCAO, GHA, GHB, and NM groups. Our unsupervised PCA of the SHAM, MCAO, GHA, GHB, and NM groups revealed distinct differences in metabolic patterns and natural clustering trends among the groups . The supervised PLS-DA score plot demonstrated significant separation between the groups, indicating successful establishment of the MCAO-induced IS model ( A,B). Through validation testing, carried out via a permutation test, we ascertained that the PLS-DA model was not overfitted ( C,D). To investigate potential biomarkers induced by MCAO and GHB, the raw data of the MCAO group were compared against those of the SHAM and GHB groups using OPLS-DA. The OPLS-DA score plots of serum extracts showed complete separation between the GHB and SHAM groups and the MCAO group, demonstrating Q 2 > 0.5, while the differences in R 2 Y and Q 2 were below 0.3. This indicates a pronounced metabolic disorder caused by MCAO, which could be further improved through GHB treatment . A permutation test was conducted to assess the validity of the OPLS-DA model . Metabolites with VIP ≥ 1 in the S-plot were considered to significantly contribute to the interference of IS . 2.4.2. Identification of Potential Biomarkers Potential biomarkers associated with MCAO rats were identified based on a threshold of p < 0.05 and VIP > 1.0 for differential metabolites. Consequently, a total of 52 metabolites were filtrated in serum samples between the SHAM and MCAO groups . Among these metabolites, 14 were significantly increased, whereas 38 showed a notable reduction in the MCAO group relative to the SHAM group. Notably, administration of GH reversed the variations in 45 metabolites induced by IS (11 downregulated and 34 upregulated). Compared with the MCAO group, the metabolites including PA(8:0/13:0), PC(14:1/20:0), LysoPC(16:1/0:0), PC(16:0/20:2), LysoPC(17:0/0:0), LysoPC(18:1/0:0), LysoPC(P-18:1/0:0), PC(18:2/16:0), LysoPC(18:3/0:0), LysoPC(20:1/0:0), LysoPC(20:5/0:0), 14,15-EET, 19-HETE, L-β-Lysine, and 1-methylhistidine were significantly upregulated in the GH group. Conversely, the metabolites including PC(20:4/18:0), PC(18:3/22:6), PGE2, 4-guanidinebutyric acid, citrulline, creatine, L-dopa, proline, D-sorbitol, mannitol, 2-phenylethanol glucuronide, galactitol, and gentisate aldehyde were downregulated. The Receiver Operating Characteristic (ROC) curve analysis provided insights into the differential metabolites for IS prediction . The results indicated that these metabolites possess strong diagnostic capabilities, with area under the curve (AUC) values ranging from 0.78 to 1.00. Notably, 39 metabolites exhibited excellent diagnostic performance, as evidenced by AUC > 0.90 . These findings suggest that these biomarkers hold potential for the clinical diagnosis of IS. However, further human studies are warranted to elucidate their relationships and validate their clinical utility . 2.4.3. 
Metabolic Pathways of Potential Biomarkers The 52 biomarkers were analyzed for functional enrichment using MetaboAnalyst ( A). It was observed that the differential metabolites mainly affected 38 metabolic pathways. Among these pathways, 10 were significantly enriched ( p < 0.05, impact > 0) , including arachidonic acid (ACA) metabolism; glycerophospholipid metabolism; tyrosine metabolism; tryptophan metabolism; α-linolenic acid metabolism; histidine metabolism; arginine and proline metabolism; lysine degradation; fructose and mannose metabolism; and phenylalanine, tyrosine, and tryptophan biosynthesis. 2.4.4. Effects of GH on Metabolic Pathways of MCAO Rats Comparison between A and B revealed that the intervention with GH in MCAO rats might have aided the treatment of IS by restoring arachidonic acid metabolism, glycerophospholipid metabolism, arginine and proline metabolism, tyrosine metabolism, fructose and mannose metabolism, and phenylalanine, tyrosine, and tryptophan biosynthesis to normal levels. Using the KEGG database, we investigated the metabolic pathways to create a probable metabolic pathway grid ( C). 2.5. PPI and Disease–Pathway–Target–Drug Network Construction The potential targets of GH were identified using TCMSP and Swiss Target Prediction, leading to the discovery of 528 targets. Additionally, 213 targets related to IS were identified through the Drugbank, OMIM, and Genecard databases (score ≥ 10.0). In total, 56 overlapping targets were found between GH and IS, which were then imported into the String database. The result was imported into cytoscape software for the visual analysis of the PPI network, which consisted of 1422 edges and 56 nodes ( A). The targets of ALB, TNF, IL6, IL1B, AKT1, NOS3, MAPK3, ACE, MMP9, and PTGS2 were the top 10. The 56 targets were imported into the Metascape database to acquire KEGG and GO terms. The KEGG analysis demonstrated that the treatment of IS with GH possibly involves apoptosis, the NF-κB signaling pathway, the sphingolipid signaling pathway, and autophagy in animals. Based on these results, a drugs–pathways–targets–diseases network was established ( B). 2.6. Effects of GH on Inflammatory Cytokines and Anti-Oxidative Indices in Serum of MCAO Rats To evaluate the anti-oxidative and anti-inflammatory effects of GH, serum levels of TNF-α, IL-6, IL-1β, SOD, and MDA were analyzed through ELISA. In comparison to the SHAM group, the levels of MDA, TNF-α, IL-1β, and IL-6 were significantly elevated in the MCAO group. The results indicated a reduction in MDA, TNF-α, IL-1β, and IL-6 levels in the GH and NM groups relative to the MCAO group ( A–D). Additionally, GH and NM treatment led to the elevation of SOD levels in IS rats ( E). 2.7. The Effects of GH on Inflammatory Responses in the Brains of MCAO Rats The protein levels of inflammatory response-related biologicals were examined by WB ( A). The results of the WB analysis indicated that compared to the SHAM group, iNOS, p-IκBα, p-p65, and NLRP3 levels were dramatically increased in the MCAO group. Compared with the MCAO group, the NM group exhibited a downregulation of iNOS, p-p65, and p-IκBα levels, whereas no significant difference was observed in NLRP3 expression ( B). 2.8. The Effects of GH on Autophagy in the Brains of MCAO Rats Autophagy is essential for the development and occurrence of IS. To investigate the potential regulatory mechanism of GH on autophagy, WB was conducted to evaluate the effects of GH on the expression of mTOR, p-mTOR, AMPK, and p-AMPK ( C). 
In the MCAO group, the expression of p-AMPK was reduced, while p-mTOR was increased relative to those in the SHAM group. In the MCAO rats treated with GH, the expression of p-mTOR significantly decreased, while that of p-AMPK clearly increased ( D). 2.9. The Effect of GH on Oxidative Stress in the Brains of MCAO Rats The profiles of HO-1, Nrf2, and Keap-1 in brain tissues are depicted in E. In contrast to the SHAM group, the levels of Keap-1 and HO-1 were significantly upregulated, while the Nrf2 content showed a decrease in the MCAO group. Furthermore, compared to the MCAO group, treatment with GH and nimodipine resulted in significant downregulation of Keap-1 expression and a notable increase in the expression levels of HO-1 and Nrf2 ( F). The therapeutic effect found in the nimodipine group was basically the same as that of the high-dose GHB group. 2.10. The Effect of GH on Apoptosis in the Brains of MCAO Rats To examine the effect of GH on apoptosis in IS rats, WB was utilized to measure the levels of apoptosis-related proteins, including Cleaved Caspase-3, Bax, and Bcl-2, in brain tissue ( G). Compared to the SHAM group, the levels of Bax and Cleaved Caspase-3 were elevated, while Bcl-2 was reduced in the MCAO group. After intraperitoneal administration of GH and nimodipine in the IS rats, the concentrations of the pro-apoptotic proteins Bax and Cleaved Caspase-3 notably decreased, while the expression of the anti-apoptotic protein Bcl-2 increased ( H). These findings indicated that the GH treatment groups exhibited a varying degree of regulation in apoptosis. 2.11. The Effect of GH on Blood–Brain Barrier (BBB) Integrity in the Brains of MCAO Rats In order to explore the potential molecular mechanism of GH against BBB disruption in rats with IS, the expression levels of ZO-1, occludin, and claudin-1 in rat brain tissues were assessed using WB ( I). Compared with the SHAM group, the results indicated a decrease in the levels of ZO-1, occludin, and claudin-1 in the MCAO group. However, treatment with GH and nimodipine injection resulted in a notable enhancement in the expression levels of ZO-1, occludin, and claudin-1 ( J). 2.12. The Interplay Between the Gut Microbiota and Serum Metabolites, SCFAs, and Protein Target Expression After GH Therapy Significant variations in the alterations of SCFAs and intestinal microbiota among the SHAM, MCAO, and GHB groups were observed. Spearman correlation analysis was performed to investigate the associations between SCFAs and the gut microbiota within each group. The analysis indicated that 18 bacterial species potentially exert a significant influence on the concentrations of acetic, propionic, isobutyric, butyric, isovaleric, and valeric acids ( p < 0.01) ( A). The gut microbiota most significantly influenced by GH comprised Lactobacillus , Escherichia-Shigella , and Bacillus . Detailed analyses were conducted to elucidate their correlations with SCFAs . Spearman correlation analysis showed significant associations between 16 bacterial species and metabolite levels ( p < 0.01). The most substantial alterations in abundance following GH administration were observed in Lactobacillus , Escherichia-Shigella , and Bacillus . Subsequently, we conducted an analysis to elucidate their correlations with metabolites .
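The correlation analyses described in Section 2.12 reduce to pairwise Spearman tests between genus-level abundances and SCFA or metabolite levels. The short Python sketch below uses invented vectors (not the study data) to show both a single test and the matrix form:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(11)

# Hypothetical per-rat values: relative abundance of Lactobacillus and butyric acid level
lactobacillus = rng.uniform(0.05, 0.60, size=18)
butyrate = 2.0 + 8.0 * lactobacillus + rng.normal(scale=0.8, size=18)

rho, p = spearmanr(lactobacillus, butyrate)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# The same call on two matrices gives a combined genus x variable correlation structure
genera = rng.uniform(size=(18, 3))            # e.g. Lactobacillus, Bacillus, Escherichia-Shigella
scfas = rng.uniform(size=(18, 2))             # e.g. acetic and propionic acid
rho_mat, p_mat = spearmanr(genera, scfas)     # (3 + 2) x (3 + 2) correlation matrix
print(rho_mat.shape)
```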
The observed correlation between the bacteria Lactobacillus , Bacillus , and Shigella and the metabolites phosphatidylcholine (PC), lysophosphatidylcholine (LysoPC), 14,15-EET, L-Dopa, mannitol, and galactitol may suggest a significant role for these microorganisms in the pathophysiology of IS ( B). The significant associations between the abundance of 16 gut microbial species and signaling pathways, including NF-κB, the NLRP3 inflammasome, AMPK, KEAP1-Nrf2, apoptosis, and tight junctions, were identified by Spearman correlation analysis ( C) ( p < 0.01). Furthermore, we performed detailed correlation analyses on the relationship between Lactobacillus , Escherichia-Shigella , and Bacillus and these signaling pathways .
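Looking back at the metabolite screening of Sections 2.4.1–2.4.2, the two selection criteria, VIP ≥ 1 from the (O)PLS-DA model and a high ROC AUC, can be outlined in code. The sketch below fits an ordinary PLS-DA model to simulated intensities and applies the standard VIP formula; it is an approximation of the approach, not the authors' workflow, and all data and group labels are fabricated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Simulated feature table: 20 samples (10 SHAM, 10 MCAO) x 50 metabolite features
y = np.repeat([0, 1], 10)
X = rng.normal(size=(20, 50))
X[y == 1, :5] += 1.5                              # first five features shifted in "MCAO"

pls = PLSRegression(n_components=2).fit(X, y)

# VIP scores: weighted share of the Y-variance explained that each feature carries
T = pls.transform(X)                              # X scores (n_samples x n_components)
W = pls.x_weights_                                # X weights (n_features x n_components)
Q = pls.y_loadings_                               # Y loadings (n_targets x n_components)
ss = np.diag(T.T @ T) * (Q ** 2).ravel()          # Y-variance explained per component
w_norm = W / np.linalg.norm(W, axis=0)
vip = np.sqrt(X.shape[1] * (w_norm ** 2 @ ss) / ss.sum())

candidates = np.where(vip >= 1.0)[0]

# ROC AUC of each VIP-selected feature used as a univariate classifier
aucs = {int(j): roc_auc_score(y, X[:, j]) for j in candidates}
print(sorted(aucs.items(), key=lambda kv: -kv[1])[:5])
```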
The observed correlation between the bacteria Lactobacillus , Bacillus , and Shigella and the metabolites phosphatidylcholine (PC), lysophosphatidylcholine (LysoPC), 14,15-EET, L-Dopa, mannitol, and galactitol may suggest a significant role for these microorganisms in the pathophysiology of IS ( B). The significant associations between the abundance of 16 gut microbial species and signaling pathways, including NF-κB, the NLRP3 inflammasome, AMPK, KEAP1-Nrf2, apoptosis, and tight junctions, were identified by Spearman correlation analysis ( C) ( p < 0.01). Furthermore, we performed detailed correlation analyses on the relationship between Lactobacillus , Escherichia-Shigella , and Bacillus and these signaling pathways . In recent years, numerous studies have highlighted the integrative function of the microbiota–gut–brain axis in IS . In the present study, analysis of fecal samples using 16S rRNA sequencing revealed that MCAO led to an increase in pathogens or conditionally pathogenic bacteria, including Escherichia-Shigella , Enterobacteriaceae , and Enterococcus. GH treatment significantly enhanced the α-diversity of the intestinal microbiota. The composition of the dominant bacteria in the GH group resembled that of the SHAM group at both the phylum and genus levels, characterized by an increase in the populations of Firmicutes and Lactobacillus and a decrease in Proteobacteria and Escherichia-Shigella . Previous studies have indicated that Lactobacillus administration can confer neuroprotection in IS rat models by inhibiting neuronal apoptosis, diminishing the size of cerebral infarction, attenuating oxidative stress, and ameliorating neurobehavioral deficits . Moreover, Lactobacillus has been recognized as the primary bacterial strain responsible for elevating SCFA concentrations . SCFAs can exert local effects on the mucosal layer, contributing to the maintenance of intestinal function and barrier integrity . Studies have shown that the levels of acetic, propionic, and butyric acids are lower in stroke patients and older individuals , and butyric acid mitigates the production of TNF-α and IL-6 induced by LPS . SCFAs have been shown to regulate the gut–brain axis by alleviating epithelial barrier impairment through the facilitation of tight junction formation . Furthermore, they influence the gut–brain axis by governing the differentiation of regulatory Th17 cells, Th1 cells, and T cells . To elucidate the relationship between GH and the intestinal microbiota more comprehensively, we quantified the concentrations of SCFAs in the intestines of rats. Compared to the SHAM group, the MCAO group showed decreased levels of acetic, propionic, isobutyric, butyric, isovaleric, and valeric acids. After GH intervention, an increase in these SCFAs was observed, suggesting that GH treatment could ameliorate the imbalance of SCFAs induced by MCAO and implying that GH could exert a protective effect on MCAO-induced neuroinflammation, probably due to increased SCFAs. Therefore, the prebiotic properties of GH may facilitate the restoration of the intestinal barrier, attenuate inflammatory responses, and provide neuroprotective effects. 
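The Spearman screens described above (genus-level abundance against SCFA concentrations, and against metabolite levels, retaining pairs with p < 0.01) follow a routine correlation-screening pattern. The sketch below illustrates that pattern on simulated data; the genus and SCFA names echo the text, but the code and numbers are illustrative assumptions rather than the study's actual analysis script.

```python
# Illustrative sketch only (assumed workflow, not the authors' script):
# Spearman screen between genus-level abundances and SCFA concentrations,
# keeping pairs that pass the p < 0.01 threshold used in the study.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_rats = 18  # hypothetical number of animals

genera = pd.DataFrame(
    rng.lognormal(size=(n_rats, 3)),
    columns=["Lactobacillus", "Escherichia-Shigella", "Bacillus"],
)
scfas = pd.DataFrame(
    rng.normal(loc=[2.0, 1.0, 0.8], scale=0.3, size=(n_rats, 3)),
    columns=["acetic acid", "propionic acid", "butyric acid"],
)

hits = []
for genus in genera.columns:
    for acid in scfas.columns:
        rho, p = spearmanr(genera[genus], scfas[acid])
        if p < 0.01:  # significance threshold reported in the text
            hits.append({"genus": genus, "SCFA": acid, "rho": rho, "p": p})

print(pd.DataFrame(hits))
```

The same loop applies unchanged when the SCFA table is replaced by plasma metabolite intensities or by read-outs from the signaling pathway panels.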
The results of brain metabolomics have shown that GH can ameliorate metabolic disorders by regulating the TCA cycle, glycolysis, nucleic acid metabolism, phospholipid metabolism, and the glutamate–glutamine cycle in IS rats , but serum metabolomics-based studies of GH’s effect on IS rats are still needed, and the association with gut microbial dysbiosis in IS is also still unclear. In this study, the results of plasma metabolomics indicated that GH could reverse 45 biomarkers and 6 disordered metabolic pathways in IS rats, including ACA metabolism; tyrosine metabolism; fructose and mannose metabolism; arginine and proline metabolism; glycerophospholipid metabolism; and phenylalanine, tyrosine, and tryptophan biosynthesis. Consistent with brain metabolomics results, the metabolic pathway mainly affected by GH in serum metabolomics is energy metabolism. However, we found that GH affected ACA metabolic pathways and related biomarkers in our serum metabolomics analysis. Substances involved in the metabolism of arachidonic acid, including 14,15-EET and PGE2, are essential in acute ischemic syndromes that impact the coronary and cerebrovascular systems . These substances can inhibit neuronal apoptosis, reduce infarct area, and trigger neuroinflammation following cerebral ischemia . The correlation analysis of intestinal microbiota and metabolites indicated significant associations between Lactobacillus , Bacillus , and Shigella and metabolites such as phosphatidylcholine (PC), lysophosphatidylcholine (LysoPC), 14,15-EET, L-Dopa, mannitol, and galactitol, underscoring the significant role of these microorganisms in the pathophysiology of IS. Many studies have shown that severe stroke patients have significantly lower phospholipid levels when compared to mild stroke patients and that nerve injury disrupts membrane phospholipid metabolism . Dysregulation of phospholipid metabolism, accumulation of lipid peroxides, and energy metabolism impairment may lead to neurodegenerative lesions in ischemia and head injury . Other studies suggest that glycerophospholipid metabolism is involved in MCAO/reperfusion . L-dopa, as a precursor of dopamine, plays various roles by binding to postsynaptic dopaminergic receptors in the basal ganglia, affecting neuroplasticity, wakefulness, mood regulation, and motor control . Guidelines for treating cerebral edema in neurocritical care indicate that mannitol is effective for the early management of cerebral edema or elevated intracranial pressure in acute ischemic stroke patients . Research has indicated that glycerophospholipid metabolism is associated with the gut–brain axis through microglia-mediated neuroinflammation . Additionally, sugar metabolism affects the progression of neurodegenerative diseases and is implicated in the gut–brain axis . It is speculated that the effect of GH on Lactobacillus , Bacillus , and Shigella may be crucial for its anti-inflammatory effects in MCAO rats. GH has been demonstrated to enhance the prognosis of IS through the regulation of apoptosis, inflammation, and the PI3K/AKT signaling pathway and to attenuate myocardial ischemia–reperfusion injury by activating GSTP and inhibiting the ASK1-JNK/p38 pathway . In the current investigation, GH was found to modulate pro-inflammatory cytokines such as IL-1β, IL-6, and TNF-α, as well as biomarkers of oxidative stress, and regulate target proteins associated with NF-κB, the NLRP3 inflammasome, AMPK, and apoptosis, consistent with previous findings. 
In addition, three novel findings that differed from previous results were obtained in the current study. First, GH could regulate the KEAP1-Nrf2 signaling pathway by increasing the levels of HO-1 and Nrf2 and decreasing the level of Keap-1, suggesting that GH could ameliorate IS by mitigating oxidative stress. Nrf2, Keap-1 and HO-1 are oxidative stress-responsive proteins; their induction under oxidative stress contributes to genomic protection, anti-oxidant activity, free-radical scavenging and, ultimately, neuroprotection . Second, the results of tight junction signaling pathway-related BBB disruption indicated that GH could increase the expression levels of claudin-1, occludin, and ZO-1. The BBB is a critical component of the neurovascular units, and its disruption significantly increases the risk of mortality in those with early cerebral ischemia. The tight junction (TJ) complex, consisting of transmembrane proteins (occludin, claudin), junction adhesion molecules (JAMs), and cytoplasmic attachment proteins (ZO-1, ZO-2, ZO-3), plays a crucial role in regulating BBB permeability . Thus, these findings indicate that GH treatment may potentially preserve the integrity of the BBB in IS rats. Third, Spearman correlation analysis revealed that p-AMPK, Nrf2, ZO-1, occludin, and claudin-1 exhibited a positive correlation with Lactobacillus and a negative correlation with Shigella . Conversely, p-mTOR, Keap-1, NLRP3, p-IκBα, p-p65, iNOS, and Bax exhibited negative correlations with Lactobacillus and positive correlations with Shigella . Finally, the levels of Cleaved Caspase-3 and Bcl-2 demonstrated distinct correlation patterns that require further investigation. Studies have demonstrated that the amelioration of cerebral ischemia–reperfusion injury and Parkinson’s disease, as well as age-related brain injury reduction, are closely associated with the microbiota–gut–brain axis through the reduction in oxidative stress response, inhibition of the NF-κB signaling pathway, and prevention of colonic tight junction protein degradation . These findings suggest that the impact of GH on oxidative stress and NF-κB signaling pathways may be associated with its regulatory effects on the gut microbiota. Consequently, further research is required to clarify their interrelationships. Therefore, this study demonstrates that GH can modulate the gut microbiota, SCFAs, biomarkers, and signaling pathways . Nonetheless, this investigation has limitations. The study of the impact of GH on the gut microbiota is still novel, and additional studies are needed to clarify how GH affects the gut–brain axis in IS. Changes in the gut microbiome induced by GH can contribute to anti-IS effects in the brain, possibly via humoral and/or other pathways. The specific mechanisms by which GH acts on the gut–brain axis remain to be elucidated. GH can affect the gut microbiome of MCAO rats, but its effect on the human gut microbiome needs further study in order to provide a scientific basis for the rational clinical application of GH and for understanding its mechanism of action. 4.1. Materials Guhong injection was obtained from Tonghua Guhong Pharmaceutical Co., Ltd. (Tonghua, China), in which the concentration of aceglutamide was 27–33 mg/mL, and the concentration of hydroxysafflor yellow A was not less than 0.15 mg/mL.
Methanol (HPLC grade), acetonitrile (HPLC grade), and isopropanol (HPLC grade) were purchased from Merck Millipore (Molsheim, France). Enzyme-linked immunosorbent assay (ELISA) kits for IL-1β, IL-6, and TNF-α, as well as SOD and MDA kits, were acquired from Wuhan Servicebio Technology Co., Ltd. (Wuhan, China). Monofilament for MCAO was acquired from Beijing Jitai Yuancheng Technology Co., Ltd. (Beijing, China). Antibodies against IκBα, p-IκBα, p65, p-p65, Cleaved Caspase-3, Bcl-2, Bax, HO-1, Nrf2, and Keap-1 were obtained from Abcam (Cambridge, UK). Antibodies against AMPK, p-AMPK, mTOR, p-mTOR, claudin-1, occludin, ZO-1, iNOS, NLRP3, and β-actin were acquired from Cell Signaling Biotechnology (Hertfordshire, UK). 4.2. Animals Male Sprague Dawley rats, weighing 200 ± 20 g, were sourced from SPF (Beijing) Biotechnology Co., Ltd. (China; Certificate No.: SCXK (Jing) 2019-0010). The rats were allowed a one-week adaptation period before the start of the experiment. All rats were kept in a standard animal facility with a 12 h light/dark cycle (with temperature at 25 ± 2 °C and humidity at 60 ± 5%). The rats had free access to tap water and rodent chow. All animal experiments adhered to the Regulations of Experimental Animal Administration and the Guide for the Care and Use of Laboratory Animals, as issued by the State Committee of Science and Technology of Jiangxi Province, China. The study procedure received approval from the Animal Research Ethics Committee of Nanchang University (SYXK(Gan)2021-0004), with animal ethics clearance being granted on 22 December 2021. 4.3. Animal Model and Drug Administration In this study, MCAO was induced following the methods initially outlined by Longa et al. . The rats received anesthesia via an intraperitoneal administration of 1% sodium pentobarbital at a dosage of 3 mL/kg. The subcutaneous tissue and muscle were dissected to expose the common carotid artery (CCA), external carotid artery (ECA), and internal carotid artery (ICA). A monofilament (diameter: 0.24 mm) with a spherical tip was advanced into the ICA to occlude the entry point of the middle cerebral artery (MCA). The filament was gently advanced to a depth of about 18–20 mm until slight resistance was felt, and it was left in position for 2 h to induce cerebral ischemia. Subsequently, the filament was slowly retracted to achieve reperfusion. Rats in the sham-operated group underwent identical surgical exposure procedures, but the filament was not inserted. The rats were randomly assigned to five distinct cohorts: the sham-operated cohort (SHAM group, n = 6), MCAO model cohort (MCAO group, n = 6), GHA cohort ( n = 6), GHB cohort ( n = 6), and positive control drug nimodipine injection cohort (NM, n = 6). The rats in the GHA and GHB groups were intraperitoneally injected with 1 mL/kg and 4 mL/kg Guhong injection, respectively (with 1 mL/kg being equivalent to the clinical dose). The NM group rats received a nimodipine injection intraperitoneally at a dosage of 1 mg/kg. The rats in the MCAO and SHAM groups were administered 4 mL/kg of normal saline via intraperitoneal injection daily. The rats were intraperitoneally injected with GH and normal saline 6 h after reperfusion and received continued administration for 7 d. 4.4. Assessment of Neurological Impairment The neurological deficits in rats 24 h post-cerebral ischemia were assessed using the Zea-Longa method.
The scoring criteria were as follows: 0 score—no neurological deficits observed; 1 point—inability of the front paw to extend straight when lifted vertically; 2 points—leaning or circling towards the opposite side during walking; 3 points—tilting to the opposite side during walking; and 4 points—inability to walk independently or exhibiting a depressed consciousness. 4.5. Sample Collection and TTC Staining On the 7th day after reperfusion, the rats were anesthetized with 1% sodium pentobarbital (3 mL/kg, i.p.). Blood samples were collected and stored at 4 °C for 2 h. Following this, they underwent centrifugation at 4000 rpm for 10 min at 4 °C, and the resulting serum was kept at −80 °C. The brain was excised and kept at −20 °C for 20 min, before being sliced into 6 pieces (1 mm each). The brain slices were subsequently immersed in a 2% 2,3,5-Triphenyltetrazolium chloride (TTC, Sigma-Aldrich) solution and incubated in the dark at 37 °C for 20 min, with the slices being turned every 5 min. After incubation, the staining solution was removed, and the brain slices were rinsed with PBS to terminate the staining process. Normal brain tissue was visually identified as red, whereas the ischemic regions appeared white. 4.6. ELISA The levels of pro-inflammatory cytokines (IL-1β, IL-6, TNF-α) and indicators associated with oxidative stress (SOD and MDA) were quantified using the corresponding reagent kits. 4.7. 16S rRNA Analysis After collection, the rat feces were promptly kept at −80 °C pending DNA extraction. The V3-V4 segment of the bacterial 16S rDNA gene was chosen for PCR amplification (338 F: 5′-ACTCCTACGGGAGGCAGCAG-3′; 806 R: 5′-GGACTACHVGGGTWTCTAAT-3′). The total volume of the PCR amplification reaction was 20 μL, consisting of the following components: 4 μL of FastPfu Buffer, 10 ng of template DNA, 0.8 μL of the forward primer, 2 μL of dNTPs (2.5 mM each), 0.2 μL of Bovine Serum Albumin (BSA), and 0.8 μL of the reverse primer. The remaining volume was supplemented to 20 μL with ddH 2 O. After quantification, the samples underwent mixing, purification, and recovery procedures. Subsequently, sequencing was conducted by employing the Illumina platform and based on the standardized methods. All valid sequences were classified into amplicon sequence variants (ASVs). The community composition was elucidated using a community bar plot, while species differentials were discerned through linear discriminant analysis effect size (LEfSe). 4.8. Quantification of SCFAs The extraction and quantification methods we used for our analysis of SCFAs were refined based on the protocol established by Zhu et al. . Initially, 50 mg of rat feces was homogenized with 500 µL of PBS and subsequently centrifuged at 4 °C to collect the supernatant. To this supernatant, 200 µL of crotonic acid was added as an internal standard, followed by thorough mixing. The sample was subsequently kept at −20 °C overnight and centrifuged again at 4 °C the next day. The final supernatant, passed through a 0.22 µm filter, was analyzed using gas chromatography (GC-2010 Pro Shimadzu, Kyoto Japan) to determine the concentration of SCFAs in the rat intestinal tract. SCFA contents were quantified using acetic, propionic, butyric, and crotonic acid standards. The measurement was performed using a DB-FFAP column (0.32 mm × 30 m ID) with an injection temperature of 250 °C and a volume of 1 μL. Nitrogen acted as the carrier gas, maintaining a column flow rate of 1 mL/min. 
The split ratio was adjusted to 8:1, while the scavenging flow rate was kept at 3 mL/min. 4.9. LC-MS Metabolomics Analysis 4.9.1. Sample Preparation The serum sample was retrieved from the refrigerator and allowed to thaw. Subsequently, 100 μL of serum was added to an EP tube, followed by 400 μL of methanol to precipitate the proteins. The mixture was vortexed to ensure thorough mixing and then centrifuged at a speed of 12,000 rpm for 15 min at a temperature of 4 °C. The supernatant (400 μL) was then placed into a sampling vial for UPLC-Q-Exactive MS/MS analysis. The whole procedure was carried out on ice. 4.9.2. LC-MS Conditions Metabolomics analysis was performed using an ACQUITY UPLC HSS T3 column (100 mm × 2.1 mm i.d., 1.8 µm) integrated with the UHPLC-Q Exactive HF-X system from Thermo Fisher Scientific (USA). The chromatographic separation parameters included a sample injection volume of 3 μL, a flow rate of 0.4 mL/min, and a column temperature set to 40 °C. The mobile phases for the serum samples were composed of solvent A (95% water, 5% acetonitrile, and 0.1% formic acid) and solvent B (a mixture of 47.5% isopropyl alcohol, 47.5% acetonitrile, 5% water, and 0.1% formic acid). The gradient elution was implemented with the following steps: 0–3 min at 10–20% B; 3–4.5 min at 20–35% B; 4.5–5 min at 35–100% B; 5–6.4 min at 100% B; and 6.4–8 min at 0% B. 4.10. Multivariate Statistical Analysis and Data Processing Appropriate statistical analyses, including one-way ANOVA followed by Tukey’s multiple comparison test or Student’s t -test, were performed as required to evaluate the data. Raw data from each group were processed using Progenesis QI for LC-MS data processing. A two-dimensional dataset was subsequently loaded into SIMCA-P 14.1 software to carry out orthogonal partial least squares discriminant analysis (OPLS-DA), partial least squares discriminant analysis (PLS-DA), and principal component analysis (PCA). The OPLS-DA, PLS-DA, and PCA models were evaluated using R 2 X, R 2 Y, and Q 2 Y intercepts. Additionally, the OPLS-DA models were validated through cross-validated residual variance testing (CV-ANOVA). The ions of potential biomarkers were filtered using a p -value of < 0.05 and variable importance in projection (VIP) values of ≥1. 4.11. Network Pharmacological Analysis The chemical ingredients of GH were ascertained from relevant literature and the Traditional Chinese Medicine Systems Pharmacology (TCMSP) database ( https://old.tcmsp-e.com/tcmsp.php , accessed on 9 February 2023). The drug targets were sourced from the TCMSP database and the Swiss Target Prediction Database ( http://swisstargetprediction.ch/ , accessed on 9 February 2023), while the targets associated with IS were retrieved from the OMIM ( http://www.ncbi.nlm.nih.gov/omim , accessed on 9 February 2023) and GeneCards ( https://www.genecards.org/ , accessed on 9 February 2023) databases. The filtered targets of GH and IS were assessed for protein–protein interactions (PPIs) using the STRING database ( https://stringdb.org/ , accessed on 9 February 2023). “Homo sapiens” was selected as the background species, with a confidence score threshold of 0.7, while other parameters were kept at their default settings. Subsequently, a disease–pathway–target–drug network was constructed to investigate the potential mechanisms of GH. The PPI results and the disease–pathway–target–drug data were displayed using Cytoscape 3.7.1 to generate network visualizations. 4.12. 
Western Blot Analysis The hippocampus and cerebral cortex tissues were collected from the rats’ brains. Brain tissues were lysed using RIPA buffer that included phosphorylation and protease inhibitors. The lysate was then subjected to low-temperature grinding twice (30 s each time), carried out using a grinding tube containing 3 steel balls. After grinding, the lysate was chilled on ice for 30 min and then subjected to centrifugation at 4 °C for 15 min at 12,000× g . The supernatant was harvested, and the protein concentration was determined using a BCA protein analysis kit. Equivalent protein sample quantities were applied and separated using 6% to 10% SDS-PAGE. The isolated proteins were then transferred to PVDF membranes, which were subsequently blocked with protein-free rapid blocking buffer for 25 min. The membranes were placed on a shaker and incubated with the primary antibody overnight at 4 °C, followed by a 2 h treatment at room temperature with a peroxidase-labeled secondary antibody. The protein bands were subsequently revealed utilizing an ECL kit.
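The biomarker screening summarised in Section 4.10 and the ROC evaluation reported in the Results reduce to two steps: retain ions with VIP ≥ 1 and p < 0.05, then score each retained metabolite by its area under the ROC curve. A minimal sketch of those steps on simulated intensities is given below; it assumes VIP values are already available from an OPLS-DA fit and is not the study's actual Progenesis QI/SIMCA-P workflow.

```python
# Illustrative sketch only (assumed workflow): filter candidate ions by
# VIP >= 1 and p < 0.05, then rank the survivors by ROC AUC, mirroring the
# biomarker screening and ROC evaluation described in the text.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
groups = np.array([0] * 6 + [1] * 6)  # 0 = SHAM, 1 = MCAO (n = 6 per group)

ions = pd.DataFrame(rng.lognormal(size=(12, 5)),
                    columns=[f"ion_{i}" for i in range(5)])
vip = pd.Series(rng.uniform(0.5, 2.0, size=5), index=ions.columns)  # assumed OPLS-DA output

rows = []
for ion in ions.columns:
    p = ttest_ind(ions.loc[groups == 0, ion], ions.loc[groups == 1, ion]).pvalue
    auc = roc_auc_score(groups, ions[ion])
    rows.append({"ion": ion, "VIP": vip[ion], "p": p, "AUC": max(auc, 1 - auc)})

table = pd.DataFrame(rows)
biomarkers = table[(table["VIP"] >= 1) & (table["p"] < 0.05)]
print(biomarkers.sort_values("AUC", ascending=False))
```

In the study itself, group comparison, VIP extraction, and AUC estimation were carried out in dedicated software; the sketch only makes the filtering logic explicit.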
In the current study, 52 differential metabolites in a rat model of MCAO were identified, and it was shown that GH could reverse 45 biomarkers and restore 6 dysregulated metabolic pathways associated with IS.
Additionally, GH exerted therapeutic effects on IS through the modulation of various signaling pathways, including the NF-κB, NLRP3 inflammasome, AMPK, KEAP1-Nrf2, apoptosis, and tight junction signaling pathways. The 16S rRNA analysis indicated that MCAO increased the abundance of pathogens or conditionally pathogenic bacteria in the intestine, whereas GH treatment increased the α-diversity of the intestinal flora and the abundance of Lactobacillus, subsequently enhancing the production of SCFAs. Spearman correlation analysis indicated significant correlations between 16 bacterial species and metabolite levels, as well as signaling pathways. In summary, GH has been demonstrated to possess potential efficacy in attenuating the inflammatory response, neuronal apoptosis, and BBB damage in IS by modulating multiple gut–brain axis-based pathways.
Non-radiologist-performed abdominal point-of-care ultrasonography in paediatrics — a scoping review | 9b72540e-b095-4b51-87a9-dc0bdab2387b | 8266706 | Pediatrics[mh] | In paediatric medicine, US is a widely used imaging technique because it is noninvasive, safe and fast. Traditionally, US examinations are performed by radiologists and ultrasonographers. However, with the introduction of affordable and portable US systems, US is increasingly used as a bedside tool, or the so-called point-of-care, by non-radiologists. To ensure good medical care for children, a high-quality US examination is of great importance, regardless of the type of physician performing the examination. This quality can be achieved by setting national and international quality standards, and by achieving consensus among US performers on who can do which examination after which level of training. At this point, there is a lack of consensus. This can partly be explained by radiologists, including paediatric radiologists, expressing their fear of losing territory. As the European Society of Radiology (ESR) position paper on US stated, “Turf battles about the use of US continue to grow as more and more specialists are claiming US as part of their everyday’s [sic] work, and the position of radiologists is progressively further undermined” . As a result, non-radiologist point-of-care US has primarily developed outside the sight of radiologists, and consequently many radiologists are not aware of the status of such testing. If radiologists and non-radiologists would be more aware of both the current uses of non-radiologist point-of-care US and the current gaps in literature, this might form a strong scientific basis for a proper consultation between the two. In a first step to address this issue, we conducted a scoping review focusing on abdominal point-of-care US performed by non-radiologists in children. The aim of this review was to gain an overview of uses of abdominal non-radiologist point-of-care US in children. Additionally, we aimed to identify gaps in the evidence, which can form the basis for future research projects to create a firm scientific base for the implementation of non-radiologist point-of-care US in paediatric medicine.
The method for this scoping review was based on the framework outlined by Arksey and O’Malley . The review included the following five key elements: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies; (4) charting the data; and (5) collating, summarising and reporting the results. The research topics we focussed on were: providing an overview of the uses of abdominal non-radiologist point-of-care US, sorted by organ; assessing the quality of examinations and training for abdominal non-radiologist point-of-care US; assessing the patient perspective of abdominal non-radiologist point-of-care US; financial costs of abdominal non-radiologist point-of-care US; and legal consequences following the use of abdominal non-radiologist point-of-care US. The search was conducted with the help of a clinical librarian (J.G.D.) on April 25, 2019, in the Medline, Embase and Web of Science Conference Proceedings databases. The search terms are shown in . The inclusion criteria were original research studies on abdominal non-radiologist point-of-care US in children. We excluded studies not written in English, not published, not from Western countries (i.e. North America, Australia or Europe), studies in which both adults and children were studied but in which the data could not be separated, and studies of which no full text was available. In case the US operator was not specified and no radiologist was involved in the study, we assumed the US operator was a non-radiologist. In all other cases, the study was excluded. The full details of the study selection and data extraction can be found in the previously published review protocol . We focussed only on abdominal non-radiologist point-of-care US because given the broadness of the field of non-radiologist point-of-care US, it was not feasible to perform a scoping review of the whole field (e.g., chest or musculoskeletal US).
The total number of records found from the initial database searches was 7,624. After eliminating 2,532 duplications and subsequently excluding 4,676 records that did not comply with our inclusion criteria based on title and abstract, the number of potentially relevant records was further reduced to 416. Finally, after full-text screening, we included 106 articles: 39 studies and 51 case reports or case series that together gave an overview of the uses of abdominal non-radiologist point-of-care US, 14 on training of non-radiologists, and 1 each on legal consequences following non-radiologist point-of-care US and on patient satisfaction (Fig. ). No studies on the financial costs of non-radiologist point-of-care US were identified. The 106 articles included in this scoping review were published between 1990 and 2019, with 50 (47%) articles published within the last 5 years (Fig. ). Most of the studies were conducted in the United States (83%). Only four articles were published in journals with a focus on imaging, two of which were in a journal dedicated to point-of-care US in any environment or setting . In 11 articles (10%), a radiologist was named as a co-author. Overview of uses of non-radiologist point-of-care ultrasound Of the 39 studies on abdominal non-radiologist point-of-care US, we found 9 studies on the bladder (Table ; ), 10 on the bowel (Table ; ), 4 on the stomach (Table ; ), 1 on the kidney (Table ; ), 4 on fluid status (Table ; ), 9 on non-radiologist point-of-care US for trauma screening (Table ; ) and 1 “other” on umbilical artery line placement (Table 7; ). Next we present these studies per organ. The case reports and series are displayed in Table . Bladder Of the nine studies on non-radiologist point-of-care US of the bladder, six assessed bladder volume, two during suprapubic aspiration; one assessed the degree of dehydration (Table ). Of the studies regarding bladder volume, we identified four randomised controlled trials and two observational studies, mostly aiming to assess the benefit of using non-radiologist point-of-care US to obtain a valid urine sample for urinalysis. Three studies used success rates of catheterisation in infants as the end point and all found an increased success rate when using non-radiologist point-of-care US prior to catheterization . One study used success rate of obtaining a clean-catch urine sample and did not find a difference between the two groups , and one study found that performing an non-radiologist point-of-care US prior to sending a child to the radiology department for a transabdominal pelvic US predicted the patient readiness for the examination and decreased time to pelvic US. The two studies regarding suprapubic aspiration both assessed whether the success rate could be improved. One study was a randomised controlled trial comparing blind suprapubic aspiration to non-radiologist point-of-care US-guided suprapubic aspiration and found a higher success rate in the US-guided group (79% vs. 52%, P =0.04) . The other study demonstrated a success rate of only 53% when using non-radiologist point-of-care US for bladder scan . Last, the results of the last study suggest that non-radiologist point-of-care US for bladder scan could be used to monitor urine production in children suspected of having dehydration . Bowel We identified six studies on non-radiologist point-of-care US for diagnosing appendicitis, two on intussusception, one on constipation and one on bowel motility (Table ). 
Six studies assessed the diagnostic accuracy of non-radiologist point-of-care US in diagnosing appendicitis in children, all with a combination of pathology and clinical follow-up details as reference standard . For detailed analysis of diagnostic accuracy, we refer to a previously published systematic review on this topic . In two of the included studies, performance of non-radiologists was compared to that of radiologists. One of these two studies demonstrated a comparable accuracy between the two raters and a sensitivity of 82% (95% confidence interval [CI]: 64–92) vs. 96% (95% CI: 83–99) and specificity of 97% (95% CI: 85–99) vs. 100% (95% CI: 87–100), respectively . In contrast, the other study demonstrated that non-radiologists reported inconclusive results more often than radiologists (59% compared to 15%, respectively) . Last, one study showed that the use of non-radiologist point-of-care US could decrease the length of hospital stay for children suspected of having appendicitis (length of stay decreased from 288 min (95% CI: 256–319) to 154 min (95% CI: 113–195) . The two studies regarding intussusception assessed the diagnostic accuracy of non-radiologist point-of-care US, using radiology department examinations as a reference standard (either radiology US or any (i.e. CT, US, barium enema) . Sensitivity of non-radiologist point-of-care US ranged from 85% to 100% and specificity from 97% to 100%. Finally, one pilot study showed that non-radiologist point-of-care US can be used to detect return of bowel function in infants with gastroschisis by assessing presence of motility , and one study assessed whether measuring the transrectal diameter can be used to diagnose constipation in children with abdominal pain. The latter study showed a sensitivity of 86% (95% CI: 69–96), and a specificity of 71% (95% CI: 53–85) using the Rome III criteria as a reference standard . Stomach We identified two studies on preoperative gastric content assessment and two on pyloric hypertrophy diagnosis (Table ). The two studies on non-radiologist point-of-care US regarding the assessment of stomach filling status were from the anaesthesiology department and assessed whether non-radiologist point-of-care US could be used to assess whether a patient can be intubated safely. One of these studies used MRI findings as a reference standard and the other used gastroscopy . Both studies demonstrated that gastric content could be assessed with acceptable accuracy (area under the curve for measurements in the right lateral decubital position ranged from 0.73 to 0.92). The other two studies demonstrated that non-radiologist point-of-care US is capable of accurately diagnosing pyloric hypertrophy when using radiology US as reference standard (sensitivity when identifying pylorus: 100% [95% CI: 66–100]; specificity, 100% [95% CI: 92–100]) . There was no difference between measurements obtained by the non-radiologists compared to the radiologists ( P >0.2) . Kidney The one study on kidneys assessed the diagnostic accuracy of non-radiologist point-of-care US in diagnosing hydronephrosis. It found a sensitivity of 77% (95% CI: 58–95%) and a specificity of 97% (95% CI: 95–99%), using radiology US as reference standard (Table ) . Fluid status We identified four studies that assessed the use of non-radiologist point-of-care US in determining fluid status (Table ). All used the inferior vena cava/aorta ratio and compared this ratio to dehydration. 
Dehydration was assessed by weight loss, clinical judgement of dehydration, or bicarbonate level. Reported sensitivity ranged from 39% to 86% and reported specificity ranged from 56% to 100% . Trauma screening We identified nine studies on non-radiologist point-of-care US after trauma (i.e. non-radiologist focused abdominal sonography for trauma [FAST]) (Table ). Four of these studies assessed the diagnostic utility of non-radiologist point-of-care US after trauma using CT, findings during laparoscopy, or clinical outcome as a reference standard. The reported sensitivity ranged 50–100% (95% CI: 36–100) and the specificity ranged 96–100% (95% CI: 80–100) . Five of the identified studies assessed the clinical impact of non-radiologist point-of-care US on management after trauma, either by assessing the impact on number of needed CT scans or by assessing the success rate of nonoperative management (i.e. not needing an intervention) based on the non-radiologist point-of-care US result . Most of these studies demonstrated that, overall, the use of CT decreased when non-radiologist FAST was increasingly used . However, in hemodynamically stable patients, the clinical care (e.g., length of hospital stay and CT usage) did not improve by using non-radiologist FAST . In addition, one study reported that in 5/88 (6%) patients, the non-radiologist FAST exam was negative, whereas the patients had a significant injury (e.g., required blood transfusion) and that in one of these cases the surgeon would have cancelled the CT based on the non-radiologist FAST exam . Other We identified one study on a procedural non-radiologist point-of-care US, regarding umbilical artery catheter placement. This study showed that non-radiologist point-of-care US can reduce the time of line placement from 139 min (standard deviation [SD]: 49 min) to 75 min (SD: 25 min) ( P <0.001) (Table ). Case reports and case series We identified 49 case reports and case series on abdominal non-radiologist point-of-care US in children (Table ). According to these publications, a total of 31 different diagnoses were made with the help of non-radiologist point-of-care US. In all but three publications, the diagnosis was made at the emergency department. Quality and training We identified 16 published articles concerning the training of non-radiologists performing non-radiologist point-of-care US in children (Table ; ). We subdivided these publications into three categories: (1) studies reporting efforts and outcomes of general training strategies for non-radiologist point-of-care US, (2) studies reporting training strategies for a dedicated application of non-radiologist point-of-care US and (3) surveys that reported the state of non-radiologist point-of-care US use and training in paediatric medicine. We describe these findings in the following subsections. Studies reporting efforts and outcomes of general training strategies for non-radiologist point-of-care ultrasound The first is a study from a paediatric critical care department that reported initial efforts, structure, and progress within the division and institution to train and credential physicians . Physicians were trained as follows: they first participated in a 2-day introductory course with didactic lectures and hands-on training sessions. The training consisted of four modules: procedural, haemodynamic, thoracic and abdominal. After the training they were encouraged to perform at least 25 point-of-care US exams per module. 
Images were saved and were reviewed by point-of-care US experts once a week and by a radiologist once a month. Although only one of the 25 trainees completed the whole course, the non-radiologist point-of-care US examinations they performed contributed to the clinical management (i.e. after performing the US the clinical management was changed) and the authors reported a good experience with the reviewing process. Another study designed an online learning platform to train paediatric emergency medicine physicians and reported the performance of the trainees . The learning platform consisted of 100 cases (including short clinical presentation, video, images) per application (e.g., FAST, lung, cardiac) and trainees had to distinguish pathology from normal anatomy. In case of pathology they had to identify the location. After every case they received feedback. On average participants needed to complete 1–45 cases to reach 80% accuracy and 11–290 cases to reach 95% accuracy. The least efficient participants (95th percentile) needed to complete 60–288 cases to reach 80% accuracy and 243–1,040 to reach 95% accuracy. Most participants needed about 2–3 h to achieve the highest performance benchmark. The last study in this category was a publication describing the efforts of a number of experts in the field of paediatric emergency medicine non-radiologist point-of-care US to reach consensus on the core applications to include in point-of-care US training for paediatric emergency medicine physicians using the Delphi method . They concluded that applications of abdominal non-radiologist point-of-care US to include in training of non-radiologists were free peritoneal fluid, abscess incision and drainage, central line placement, intussusception, intrauterine pregnancy, bladder volume, and detection of foreign bodies. According to the experts, applications to exclude from training were abdominal aortic aneurism and ovarian torsion. Studies reporting training strategies for a dedicated application of non-radiologist point-of-care ultrasound Five articles described a training strategy for a dedicated application of non-radiologist point-of-care US. These included teaching paediatric emergency medicine fellows to measure the pyloric channel when hypertrophic pyloric stenosis is suspected , teaching emergency physicians to diagnose hydronephrosis in children with a urinary tract infection , teaching emergency physicians to diagnose ileocolic intussusception , training paediatric trauma surgeons to perform a FAST and teaching emergency physicians to diagnose free abdominal fluid after trauma . For the single-organ non-radiologist point-of-care US examinations (pyloric channel measurement, detecting hydronephrosis and detecting ileocolic intussusception), the training consisted of a short hands-on training (e.g., about five non-radiologist point-of-care US exams) with or without a preceding didactic lecture about US physics and the specific pathology. In these studies trainees were able to detect the specific pathology with acceptable accuracy (sensitivity: 77% [95% CI: 58–95], specificity: 97% (95% CI: 95–99]) at the end of the training . For the multiple-organ non-radiologist point-of-care US examinations (i.e. post-trauma non-radiologist point-of-care US) the training was more extensive. For the detection of free fluid, trainees followed a 1-day training that consisted of didactic lectures, a videotaped session with instruction, real-time images of pathology and a hands-on workshop on healthy volunteers. 
After the training, trainees were able to detect free fluid in trauma patients with a sensitivity of 75% (95% CI: 36–95) and a specificity of 97% (95% CI: 81–100) . For the FAST training, paediatric surgeons were trained for about 16 months: they first followed a technical instruction and hands-on training and then they had to perform at least 30 FAST exams. After this training they had to complete an exam on patients with known ascites. Sensitivity for significant amounts of free fluid was 50%, and specificity was 85%. In addition, surgeons reported they never felt they became experts, and they judged 4–10% of non-radiologist point-of-care US exams as inconclusive . Surveys that reported the state of non-radiologist point-of-care ultrasound use and training in paediatric medicine We identified eight survey studies, published between 2008 and 2018. All aimed to evaluate current state of non-radiologist point-of-care US use and education in a paediatric department (either paediatric emergency medicine, paediatric critical care medicine or neonatal medicine), all studies were performed in North America . From these surveys it becomes clear that the number of paediatric emergency departments using non-radiologist point-of-care US has increased over the last 12 years (from about 57% to 95%). However, all surveys reported a broad variety of training curricula. Reported methods of training were: bedside training, general emergency department training by a non-radiologist point-of-care US experts, following a formal course, a radiology rotation or training in a skills lab. Reported perceived barriers to implement point-of-care US training were mostly lack of training personnel, lack of time, lack of training guidelines, concerns about liability, and resistance from the radiology department. Patient perspectives We identified one study that evaluated the satisfaction with emergency department visits of caregivers of children who received a non-radiologist point-of-care US examination (either for diagnostic or educational purposes) compared to that of children who did not receive a non-radiologist point-of-care US examination (Table ) . Caregivers’ satisfaction was measured with a visual analogue scale. In this study, there was no difference in satisfaction between patients who did and did not receive a non-radiologist point-of-care US examination, and two-thirds of caregivers reported that they felt the examination improved the child’s interaction with the emergency department physician. Financial costs No publication regarding financial costs was identified. Legal consequences We identified one publication concerning legal consequences following the use of non-radiologist point-of-care US (Table ) . This was a retrospective study concerning extent and quality of lawsuits. A search of the United States Westlaw database identified two lawsuits. Both lawsuits concerned the fact that the non-radiologist point-of-care US exam was not performed; in the first case, the placement of a peripherally inserted venous catheter in a child should have been checked with point-of-care US according to the accusers. In the second case, blood was found in the retroperitoneal space and it was claimed that a FAST exam should have been done. In both cases the defendants (i.e. the physicians) were acquitted.
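The diagnostic accuracy figures quoted throughout these results (sensitivity and specificity with 95% confidence intervals) derive from 2×2 tables of the index test against the reference standard. The sketch below shows one common way such estimates are computed; the counts are hypothetical and the Wilson interval is an assumption, since the included studies do not all state which CI method they used.

```python
# Illustrative sketch only: sensitivity and specificity with Wilson 95% CIs
# from a 2x2 table (index test vs. reference standard). The counts below are
# hypothetical and do not come from any of the included studies.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 28, 6   # reference-standard positive: test positive / test negative
tn, fp = 95, 3   # reference-standard negative: test negative / test positive

def rate_with_ci(successes, total):
    low, high = proportion_confint(successes, total, alpha=0.05, method="wilson")
    return successes / total, low, high

sens, s_lo, s_hi = rate_with_ci(tp, tp + fn)
spec, sp_lo, sp_hi = rate_with_ci(tn, tn + fp)
print(f"sensitivity {sens:.0%} (95% CI: {s_lo:.0%}-{s_hi:.0%})")
print(f"specificity {spec:.0%} (95% CI: {sp_lo:.0%}-{sp_hi:.0%})")
```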
Of the 39 studies on abdominal non-radiologist point-of-care US, we found 9 studies on the bladder (Table ; ), 10 on the bowel (Table ; ), 4 on the stomach (Table ; ), 1 on the kidney (Table ; ), 4 on fluid status (Table ; ), 9 on non-radiologist point-of-care US for trauma screening (Table ; ) and 1 “other” on umbilical artery line placement (Table 7; ). Next we present these studies per organ. The case reports and series are displayed in Table .
Of the nine studies on non-radiologist point-of-care US of the bladder, six assessed bladder volume, two assessed its use during suprapubic aspiration and one assessed the degree of dehydration (Table ). Of the studies regarding bladder volume, we identified four randomised controlled trials and two observational studies, mostly aiming to assess the benefit of using non-radiologist point-of-care US to obtain a valid urine sample for urinalysis. Three studies used success rates of catheterisation in infants as the end point and all found an increased success rate when using non-radiologist point-of-care US prior to catheterisation . One study used the success rate of obtaining a clean-catch urine sample and did not find a difference between the two groups , and one study found that performing a non-radiologist point-of-care US prior to sending a child to the radiology department for a transabdominal pelvic US predicted the patient’s readiness for the examination and decreased time to pelvic US. The two studies regarding suprapubic aspiration both assessed whether the success rate could be improved. One study was a randomised controlled trial comparing blind suprapubic aspiration to non-radiologist point-of-care US-guided suprapubic aspiration and found a higher success rate in the US-guided group (79% vs. 52%, P =0.04) . The other study demonstrated a success rate of only 53% when using non-radiologist point-of-care US for bladder scan . Finally, the results of the remaining study suggest that non-radiologist point-of-care US bladder scanning could be used to monitor urine production in children suspected of having dehydration .
We identified six studies on non-radiologist point-of-care US for diagnosing appendicitis, two on intussusception, one on constipation and one on bowel motility (Table ). Six studies assessed the diagnostic accuracy of non-radiologist point-of-care US in diagnosing appendicitis in children, all with a combination of pathology and clinical follow-up details as the reference standard . For detailed analysis of diagnostic accuracy, we refer to a previously published systematic review on this topic . In two of the included studies, performance of non-radiologists was compared to that of radiologists. One of these two studies demonstrated a comparable accuracy between the two raters and a sensitivity of 82% (95% confidence interval [CI]: 64–92) vs. 96% (95% CI: 83–99) and specificity of 97% (95% CI: 85–99) vs. 100% (95% CI: 87–100), respectively . In contrast, the other study demonstrated that non-radiologists reported inconclusive results more often than radiologists (59% compared to 15%, respectively) . Last, one study showed that the use of non-radiologist point-of-care US could decrease the length of hospital stay for children suspected of having appendicitis (length of stay decreased from 288 min [95% CI: 256–319] to 154 min [95% CI: 113–195]) . The two studies regarding intussusception assessed the diagnostic accuracy of non-radiologist point-of-care US, using radiology department examinations as a reference standard (either radiology-performed US or any radiology examination, i.e. CT, US or barium enema) . Sensitivity of non-radiologist point-of-care US ranged from 85% to 100% and specificity from 97% to 100%. Finally, one pilot study showed that non-radiologist point-of-care US can be used to detect return of bowel function in infants with gastroschisis by assessing presence of motility , and one study assessed whether measuring the transrectal diameter can be used to diagnose constipation in children with abdominal pain. The latter study showed a sensitivity of 86% (95% CI: 69–96) and a specificity of 71% (95% CI: 53–85), using the Rome III criteria as a reference standard .
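The diagnostic accuracy figures quoted in this and the following subsections are proportions derived from 2×2 tables against the reference standard. Purely as an illustration of how such point estimates and 95% confidence intervals can be obtained, the sketch below computes them for an invented 2×2 table using the Wilson score interval; the included studies did not necessarily use this interval, and the counts are hypothetical.

```python
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 table: rows = reference standard, columns = point-of-care US result
tp, fn = 23, 5    # disease present: US positive / US negative
tn, fp = 58, 2    # disease absent:  US negative / US positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity {sensitivity:.0%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {specificity:.0%}, 95% CI {wilson_ci(tn, tn + fp)}")
```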
We identified two studies on preoperative gastric content assessment and two on pyloric hypertrophy diagnosis (Table ). The two studies on non-radiologist point-of-care US regarding the assessment of stomach filling status were from the anaesthesiology department and evaluated whether non-radiologist point-of-care US could be used to determine whether a patient can be intubated safely. One of these studies used MRI findings as a reference standard and the other used gastroscopy . Both studies demonstrated that gastric content could be assessed with acceptable accuracy (area under the curve for measurements in the right lateral decubitus position ranged from 0.73 to 0.92). The other two studies demonstrated that non-radiologist point-of-care US is capable of accurately diagnosing pyloric hypertrophy when using radiology US as reference standard (sensitivity when identifying the pylorus: 100% [95% CI: 66–100]; specificity: 100% [95% CI: 92–100]) . There was no difference between measurements obtained by the non-radiologists and those obtained by the radiologists ( P >0.2) .
The one study on kidneys assessed the diagnostic accuracy of non-radiologist point-of-care US in diagnosing hydronephrosis. It found a sensitivity of 77% (95% CI: 58–95%) and a specificity of 97% (95% CI: 95–99%), using radiology US as reference standard (Table ) .
We identified four studies that assessed the use of non-radiologist point-of-care US in determining fluid status (Table ). All used the inferior vena cava/aorta ratio and compared this ratio to dehydration. Dehydration was assessed by weight loss, clinical judgement of dehydration, or bicarbonate level. Reported sensitivity ranged from 39% to 86% and reported specificity ranged from 56% to 100% .
We identified nine studies on non-radiologist point-of-care US after trauma (i.e. non-radiologist focused abdominal sonography for trauma [FAST]) (Table ). Four of these studies assessed the diagnostic utility of non-radiologist point-of-care US after trauma using CT, findings during laparoscopy, or clinical outcome as a reference standard. The reported sensitivity ranged from 50% to 100% (95% CI: 36–100) and the specificity from 96% to 100% (95% CI: 80–100) . Five of the identified studies assessed the clinical impact of non-radiologist point-of-care US on management after trauma, either by assessing the impact on the number of CT scans needed or by assessing the success rate of nonoperative management (i.e. not needing an intervention) based on the non-radiologist point-of-care US result . Most of these studies demonstrated that, overall, the use of CT decreased when non-radiologist FAST was increasingly used . However, in haemodynamically stable patients, the clinical care (e.g., length of hospital stay and CT usage) did not improve with the use of non-radiologist FAST . In addition, one study reported that in 5/88 (6%) patients, the non-radiologist FAST exam was negative even though the patients had a significant injury (e.g., requiring blood transfusion), and that in one of these cases the surgeon would have cancelled the CT based on the non-radiologist FAST exam .
We identified one study on procedural non-radiologist point-of-care US, regarding umbilical artery catheter placement. This study showed that non-radiologist point-of-care US can reduce the time needed for line placement from 139 min (standard deviation [SD]: 49 min) to 75 min (SD: 25 min) ( P <0.001) (Table ).
We identified 49 case reports and case series on abdominal non-radiologist point-of-care US in children (Table ). According to these publications, a total of 31 different diagnoses were made with the help of non-radiologist point-of-care US. In all but three publications, the diagnosis was made at the emergency department.
We identified 16 published articles concerning the training of non-radiologists performing non-radiologist point-of-care US in children (Table ; ). We subdivided these publications into three categories: (1) studies reporting efforts and outcomes of general training strategies for non-radiologist point-of-care US, (2) studies reporting training strategies for a dedicated application of non-radiologist point-of-care US and (3) surveys that reported the state of non-radiologist point-of-care US use and training in paediatric medicine. We describe these findings in the following subsections.
The first is a study from a paediatric critical care department that reported initial efforts, structure, and progress within the division and institution to train and credential physicians . Physicians were trained as follows: they first participated in a 2-day introductory course with didactic lectures and hands-on training sessions. The training consisted of four modules: procedural, haemodynamic, thoracic and abdominal. After the training, they were encouraged to perform at least 25 point-of-care US exams per module. Images were saved and were reviewed by point-of-care US experts once a week and by a radiologist once a month. Although only one of the 25 trainees completed the whole course, the non-radiologist point-of-care US examinations they performed contributed to the clinical management (i.e. after performing the US the clinical management was changed) and the authors reported a good experience with the reviewing process. Another study designed an online learning platform to train paediatric emergency medicine physicians and reported the performance of the trainees . The learning platform consisted of 100 cases (including short clinical presentation, video, images) per application (e.g., FAST, lung, cardiac) and trainees had to distinguish pathology from normal anatomy. In cases of pathology, they had to identify its location. After every case they received feedback. On average, participants needed to complete 1–45 cases to reach 80% accuracy and 11–290 cases to reach 95% accuracy. The least efficient participants (95th percentile) needed to complete 60–288 cases to reach 80% accuracy and 243–1,040 to reach 95% accuracy. Most participants needed about 2–3 h to achieve the highest performance benchmark. The last study in this category was a publication describing the efforts of a number of experts in the field of paediatric emergency medicine non-radiologist point-of-care US to reach consensus on the core applications to include in point-of-care US training for paediatric emergency medicine physicians using the Delphi method . They concluded that applications of abdominal non-radiologist point-of-care US to include in the training of non-radiologists were free peritoneal fluid, abscess incision and drainage, central line placement, intussusception, intrauterine pregnancy, bladder volume, and detection of foreign bodies. According to the experts, applications to exclude from training were abdominal aortic aneurysm and ovarian torsion.
Five articles described a training strategy for a dedicated application of non-radiologist point-of-care US. These included teaching paediatric emergency medicine fellows to measure the pyloric channel when hypertrophic pyloric stenosis is suspected , teaching emergency physicians to diagnose hydronephrosis in children with a urinary tract infection , teaching emergency physicians to diagnose ileocolic intussusception , training paediatric trauma surgeons to perform a FAST and teaching emergency physicians to diagnose free abdominal fluid after trauma . For the single-organ non-radiologist point-of-care US examinations (pyloric channel measurement, detecting hydronephrosis and detecting ileocolic intussusception), the training consisted of a short hands-on training (e.g., about five non-radiologist point-of-care US exams) with or without a preceding didactic lecture about US physics and the specific pathology. In these studies trainees were able to detect the specific pathology with acceptable accuracy (sensitivity: 77% [95% CI: 58–95], specificity: 97% [95% CI: 95–99]) at the end of the training . For the multiple-organ non-radiologist point-of-care US examinations (i.e. post-trauma non-radiologist point-of-care US) the training was more extensive. For the detection of free fluid, trainees followed a 1-day training that consisted of didactic lectures, a videotaped session with instruction, real-time images of pathology and a hands-on workshop on healthy volunteers. After the training, trainees were able to detect free fluid in trauma patients with a sensitivity of 75% (95% CI: 36–95) and a specificity of 97% (95% CI: 81–100) . For the FAST training, paediatric surgeons were trained for about 16 months: they first followed a technical instruction and hands-on training and then they had to perform at least 30 FAST exams. After this training they had to complete an exam on patients with known ascites. Sensitivity for significant amounts of free fluid was 50%, and specificity was 85%. In addition, surgeons reported they never felt they became experts, and they judged 4–10% of non-radiologist point-of-care US exams as inconclusive .
We identified eight survey studies, published between 2008 and 2018. All aimed to evaluate the current state of non-radiologist point-of-care US use and education in a paediatric department (either paediatric emergency medicine, paediatric critical care medicine or neonatal medicine); all studies were performed in North America . From these surveys it becomes clear that the number of paediatric emergency departments using non-radiologist point-of-care US has increased over the last 12 years (from about 57% to 95%). However, all surveys reported a broad variety of training curricula. Reported methods of training were: bedside training, general emergency department training by non-radiologist point-of-care US experts, following a formal course, a radiology rotation or training in a skills lab. Reported perceived barriers to implementing point-of-care US training were mostly lack of training personnel, lack of time, lack of training guidelines, concerns about liability, and resistance from the radiology department.
We identified one study that evaluated the satisfaction with emergency department visits of caregivers of children who received a non-radiologist point-of-care US examination (either for diagnostic or educational purposes) compared to that of caregivers of children who did not receive such an examination (Table ) . Caregivers’ satisfaction was measured with a visual analogue scale. In this study, there was no difference in satisfaction between caregivers of patients who did and did not receive a non-radiologist point-of-care US examination, and two-thirds of caregivers reported that they felt the examination improved the child’s interaction with the emergency department physician.
No publication regarding financial costs was identified.
We identified one publication concerning legal consequences following the use of non-radiologist point-of-care US (Table ) . This was a retrospective study concerning the extent and quality of lawsuits. A search of the United States Westlaw database identified two lawsuits. Both lawsuits concerned a non-radiologist point-of-care US exam that was not performed; in the first case, the plaintiffs argued that the placement of a peripherally inserted venous catheter in a child should have been checked with point-of-care US. In the second case, blood was found in the retroperitoneal space and it was claimed that a FAST exam should have been done. In both cases the defendants (i.e. the physicians) were acquitted.
We conducted this scoping review to gain an overview of current uses of abdominal non-radiologist point-of-care US in children to (1) make radiologists and non-radiologists more aware of its status and (2) prompt both categories of US performers to collaborate with each other. This scoping review demonstrates that non-radiologist point-of-care US is increasingly used and studied in paediatric care for a variety of indications. It also shows that non-radiologist point-of-care US in certain indications can have a positive impact on patient care and outcome, e.g., by reducing the number of CTs needed or the length of hospital stay. This supports the further development of non-radiologist point-of-care US, and it underlines the need for consensus on who can do which examinations. This scoping review also assessed the quality of examinations and training of non-radiologists performing abdominal point-of-care US in children. Regarding the quality, in some settings non-radiologists performed as well as radiologists , but this was certainly not always the case . Moreover, clinically important missed diagnoses have been reported , underlining the need for proper training of non-radiologists. This scoping review makes clear that no standardised training guidelines are available, which is a key issue for the further development of non-radiologist point-of-care US. Based on the included studies, effective training could start with a short introduction lecture, followed by an online training program (e.g., Kwan et al. ), which can be followed at home, and such training could conclude with a non-radiologist point-of-care US rotation in the emergency department, radiology department or both. In the included studies, a basic training of just 1–2 h was found to be sufficient for physicians performing dedicated single-organ point-of-care US exams. We, however, believe that in order to gain more generalizable skills and to ensure a high quality of all operators, a more thorough approach is needed, with paediatric radiologic input. An example of how collaboration between non-radiologists and radiologists could help to maintain quality of the non-radiologist point-of-care US exams is implementing a review process, as Conlon et al. described, where radiologists and non-radiologists come together on a regular basis to discuss cases. There are some important issues to take into consideration before further implementing non-radiologist point-of-care US into daily care. First, very few studies have properly looked at missed diagnoses or incorrect diagnoses. There is a risk that non-radiologist point-of-care US leads to a delayed diagnosis and thereby puts the patient’s wellbeing at risk. The fact that these cases have not led to published lawsuits is not evidence that this is not a problem. Second, no studies exist on the financial costs of readily available point-of-care US, which could lead to an increase in health care costs; hence a proper cost–benefit analysis is warranted. Also, little attention has been paid to patients’ perspectives thus far. In addition, few studies compared the performance of the non-radiologists to that of radiologists. Comparing a non-radiologist to a radiologist after completing a proper training program would give more insight into the quality of US examinations. Last, from the included studies we cannot conclude what the impact on daily clinical practice is, because the studies describe research circumstances.
More research on this topic is needed before implementing changes to point-of-care US usage. The strengths of this scoping review are our thorough search strategy with the help of a clinical librarian and the cooperation of both radiologists and non-radiologists. Our scoping review has some limitations as well. First, we limited our scoping review to abdominal US. This was done to keep a clear focus; however, we suspect that a similar result can be found in other fields where non-radiologist point-of-care US is being used, such as in chest or musculoskeletal US. Second, we limited our scoping review to in-hospital use of non-radiologist point-of-care US in developed countries. Our findings might have been different in low-resource countries, where access to radiology departments can be limited. In such a setting, non-radiologist point-of-care US might well be the only imaging modality available. In addition, we did not perform a quality assessment of the included studies because we aimed to provide a general overview and not to answer a very specific research question through a systematic review. Also, we excluded articles that included both children and adults if the data could not be separated. This might have led to a loss of relevant information.
This scoping review supports the further development of non-radiologist point-of-care US and underlines the need for consensus among US performers on who can do which examination after which level of training. More research on training non-radiologists and on cost–benefit of non-radiologist point-of-care US is needed.
Minimally invasive system to reliably characterize ventricular electrophysiology from living donors | 83e17203-9b80-47a0-97d6-e4f8fc43e6b8 | 7673124 | Physiology[mh] | Management of cardiovascular diseases poses a great challenge for healthcare systems. Improved understanding of the physiology and pathophysiology of the complex and highly heterogeneous human heart requires integrated multi-level analysis able to account for spatial and temporal variability in cardiac behavior – . Although extensive research has been conducted in animals, the relevance to humans remains largely unknown. This can be, to a large extent, attributed to the scarce access to human cardiac tissue samples and the low-throughput research on whole organs. Cardiac tissue slices preserve the complex three-dimensional structure, multicellularity and interactions between cell types in the heart – . Thin tissue slices can benefit from oxygen and metabolic substrate diffusion, allowing its function to be maintained without the need for coronary perfusion , . Organotypic slices have been shown to constitute an affordable model to investigate structure , metabolism and function – of cardiac tissue and its response to pharmacological and toxicological interventions , , , . New systems and protocols have been developed that allow biomimetic culture of cardiac tissue slices for several days – . In humans, however, the availability of cardiac ventricular samples is mostly limited to trabeculae or papillary muscles from intracardiac ablation procedures , , , and to a reduced number of explanted failing hearts or hearts from organ donors not transplanted for technical reasons , , , , , , , . There is a need for a higher throughput system that allows the characterization of ventricular tissue electrophysiology in health and disease, with a representation of inter- and intra-individual variability in the spatial and temporal domains , – . We propose full-thickness transmural core biopsies to obtain small human ventricular samples from living donors, compatible with routine cardiac surgical procedures. A 14 gauge (G) core biopsy needle allows surgeons to perform a biopsy in an easy and safe way, on minimal time, with limited bleeding and without organ damage – . In addition, the collection with this type of biopsy needle is highly reproducible, regarding sample size and morphology, and allows the study of different layers of the ventricular wall. Here, we evidence that transmural core biopsies are suitable for robust and accurate electrophysiological characterization of the ventricular myocardium. In pigs, we compare the electrophysiology of myocardial slices from transmural core biopsies and transmural ventricular blocks , , , , obtained from the same ventricular region. In humans, we characterize electrophysiological signals recorded from slices of transmural core biopsies and papillary muscles of living donors. In all cases, we assess the response to an increase in the pacing frequency and to the administration of the β-adrenergic agonist isoproterenol.
Detailed methods are available in the Supplementary Material.
Porcine left ventricular transmural core biopsies were obtained from 5 pigs after sacrifice by intravenous administration of KCl solution (1 mEq/kg) performed under deep anesthesia with propofol (intravenous administration, up to 6 mg/kg) and inhaled sevoflurane (1.9%). A disposable 14 G tru-cut biopsy needle (Bard Mission 1410MS, Bard) was used to extract transmural core biopsies of approximately 1.2 mm diameter and up to 10 mm long (Fig. ). Transmural tissue blocks (surface area ≈ 5 × 7 mm) were cut with a single edge razor blade from a neighboring zone (Fig. ). All animal experiments complied with the regulations of the local animal welfare committee for the care and use of experimental animals and were approved by local authorities (Ethics Committee on Animal Experimentation, CEAEA, of the University of Zaragoza). All animal procedures conformed to the guidelines from Directive 2010/63/EU of the European Parliament on the protection of animals used for scientific purposes. Human left ventricular transmural core biopsies were collected by experienced cardiothoracic surgeons at Miguel Servet University Hospital. Specimens were obtained from 21 patients undergoing valve replacement surgery or coronary artery bypass grafting. A disposable 14 G tru-cut biopsy needle (Bard Mission 1410MS, Bard) was used to extract one biopsy from every patient during cardiac arrest soon after the patient was placed on cardiopulmonary bypass. Papillary muscles resected during valve replacement from 8 different patients were included for comparison. The clinical characteristics of donors are summarized in Supplementary Table . All patients gave written informed consent before surgery and prior to their inclusion in the study. The study conforms to the principles outlined in the Declaration of Helsinki and was approved by the local Ethics Committee (CEICA, reference number PI17/0023).
Upon collection, porcine and human tissues were immediately submerged in ice-cold pre-oxygenated Tyrode’s solution. Transport time to the laboratory was less than 10 min for human tissues and less than 1 h for porcine tissues. Tissue blocks were directly glued onto the vibratome cutting stage, mounting them epicardium-side down to ensure maximum longitudinal alignment of muscle fibers with the slicing plane (Fig. ). Transmural core biopsies were embedded in 4% low-melting agarose (Roth, Karlsruhe, Germany) and glued onto the vibratome cutting stage with the biopsy in upright position (Fig. ) to be sliced parallel to the epicardial plane. Papillary muscles were sliced with the chordae tendinae aligned with the slicing plane. Tissue blocks, transmural core biopsies and papillary muscles were cut in ice-cold pre-oxygenated Tyrode’s solution at a thickness of 350 µm, employing a high precision vibratome (Leica VT1200S, Leica Microsystems, Germany). After slicing, sections were paraformaldehyde-fixed for histological evaluation or kept at room temperature in pre-oxygenated BDM-Tyrode’s solution for electrophysiological analysis by optical mapping within the following 8 h and staining for viability assessment. See Supplementary Material for a detailed description of the procedures.
Myocardial tissue slices were optically mapped with a MiCAM O5-Ultima CMOS camera (SciMedia, Costa Mesa, CA). Further details are available in the Supplementary Material. For transmembrane potential measurements, slices were incubated at room temperature with the excitation–contraction uncoupler blebbistatin (10 µM, Tocris Bioscience, St. Louis, MO) for 30 min and with the voltage-sensitive dye RH237 (Invitrogen, Carlsbad, CA) for 15 min at a concentration of 7.5 µM in pre-oxygenated Tyrode’s solution. After staining, tissue slices were washed in pre-oxygenated Tyrode’s solution for 30 min to 1 h. Optical measurements were conducted in pre-oxygenated Tyrode’s solution while the slices were placed in a heated chamber at 35 °C equipped with two platinum field-stimulation electrodes. Recordings of 20 s were acquired at pacing frequencies of 1 and 2 Hz after a short period of stimulation to allow slices to adjust to the pacing rate. β-adrenergic stimulation responsiveness was evaluated by application of isoproterenol (100 nM, Sigma Aldrich).
Custom-written software was developed for optical mapping data analysis (MATLAB R2017a, The MathWorks Inc., Natick, MA). See Supplementary Material for a detailed description of the analyzed samples. Optical action potential (AP) signals were high-pass filtered (0.4 Hz cut-off frequency) to remove baseline drift corresponding to frequencies smaller than 0.4 Hz and subsequently filtered by an adaptive spatio-temporal Gaussian filter . AP duration (APD) was calculated by measuring the elapsed time between the activation, defined as the time occurrence of the maximum AP upslope, and the time for 80% repolarization. APD and activation time maps of the myocardial slices were generated for the whole set of pixels . A signal-to-noise ratio (SNR) value was calculated for each of them as the AP amplitude divided by the root mean-square of voltage during the diastolic interval . A threshold on SNR was set and APD and activation maps were presented only for pixels with SNR above it. Relative APD values measured after β-adrenergic stimulation or after increasing the stimulation frequency to 2 Hz were calculated as normalized with respect to those measured at baseline while pacing at 1 Hz.
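To make the per-pixel processing concrete, the following minimal Python sketch applies the APD and SNR definitions given above to a synthetic trace. It is an illustration only, not the authors' MATLAB implementation: the sampling rate and the synthetic action potential shape are assumptions, and the 0.4 Hz high-pass and adaptive spatio-temporal Gaussian filtering steps are omitted for brevity.

```python
import numpy as np

fs = 1000.0                                   # sampling frequency in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)

# Synthetic optical action potential: upstroke at 100 ms, exponential repolarisation,
# plus a small amount of noise (stand-in for a single-pixel fluorescence trace)
v = np.where(t < 0.1, 0.0, np.exp(-(t - 0.1) / 0.15))
v = v + 0.01 * np.random.default_rng(0).standard_normal(t.size)

# Activation time: instant of maximum upstroke velocity (maximum dV/dt)
act_idx = int(np.argmax(np.diff(v)))
act_time = t[act_idx]

# APD at 80% repolarisation: time from activation until the signal falls back
# to 20% of the action potential amplitude above the diastolic level
baseline = np.median(v[t < 0.08])             # diastolic level before the upstroke
amplitude = v.max() - baseline
repol_level = baseline + 0.2 * amplitude
peak_idx = int(np.argmax(v))
repol_idx = peak_idx + int(np.argmax(v[peak_idx:] <= repol_level))
apd80 = t[repol_idx] - act_time

# Signal-to-noise ratio: AP amplitude divided by the RMS of the diastolic segment
diastolic = v[t < 0.08] - baseline
snr = amplitude / np.sqrt(np.mean(diastolic ** 2))

print(f"activation {act_time * 1e3:.0f} ms, APD80 {apd80 * 1e3:.0f} ms, SNR {snr:.1f}")
```

In a full map, this computation would be repeated for every pixel and the results displayed only where the SNR exceeds the chosen threshold, as described above.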
Quantitative data are presented as median [first quartile (Q1)-third quartile (Q3)], or as averaged values for percentages of cases. The notation n/N is used to denote n slices from N tissues (either tissue blocks, papillary muscles, or transmural core biopsies). In the viability evaluation by confocal microscopy imaging, the notation i/n/N denotes i images of different areas from n slices from N tissues. When optical mapping measurements are presented from different pixels across each slice, the notation p/n/N is used to denote p pixels from n slices from N tissues. The effects of β-adrenergic stimulation and of increased pacing frequency on APD were assessed by using the non-parametric Wilcoxon signed rank test for paired samples, as the data were not normally distributed according to Shapiro–Wilk test. To compare normalized APD values between the groups of measurements from tissue blocks/papillary muscles and the group of measurements from transmural core biopsies, the non-parametric Mann–Whitney U-test for unpaired samples was used. A p value < 0.05 was considered as statistically significant.
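As a rough sketch of this statistical workflow, the SciPy fragment below runs the same sequence of tests (Shapiro–Wilk for normality, Wilcoxon signed rank for paired comparisons, Mann–Whitney U for unpaired comparisons) on made-up APD values; the numbers are placeholders rather than data from this study, and the original analyses were not necessarily run in Python.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Paired APD values (ms) from the same slices at baseline and after isoproterenol
apd_baseline = rng.normal(360.0, 30.0, 15)
apd_isoproterenol = apd_baseline * rng.normal(0.75, 0.05, 15)

# Check normality of the paired differences; if rejected, use non-parametric tests
_, p_normality = stats.shapiro(apd_isoproterenol - apd_baseline)

# Paired comparison (effect of the intervention within the same slices)
_, p_paired = stats.wilcoxon(apd_baseline, apd_isoproterenol)

# Unpaired comparison of normalized APD between two tissue types
normalized_biopsy = rng.normal(0.78, 0.05, 12)
normalized_papillary = rng.normal(0.80, 0.06, 10)
_, p_unpaired = stats.mannwhitneyu(normalized_biopsy, normalized_papillary)

print(f"Shapiro-Wilk p = {p_normality:.3f}")
print(f"Wilcoxon signed rank p = {p_paired:.3f}")
print(f"Mann-Whitney U p = {p_unpaired:.3f}")
```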
We first assessed the integrity of vibratome slices by histological analysis. Results are presented in Fig. for porcine transmural tissue blocks (Fig. A,E) and transmural core biopsies (Fig. B,F) as well as for human papillary muscles (Fig. C,G) and transmural core biopsies (Fig. D,H). Preserved myocardial structural integrity was demonstrated in all pig and human cases, with the majority of myocardial fibers being longitudinally aligned, as revealed by hematoxylin/eosin staining (Fig. A–D), and with regular sarcoplasmic cross striations, evidenced at high magnification by Masson’s Trichrome staining (Fig. E–H). Contraction bands or wavy fibers, indicative of tissue damage, were not observed. Elongated fibroblasts in close contact with small capillaries and myocytes could be distinguished together with the presence of blood vessels surrounded by collagenous connective tissue (Fig. B,F). Intact cellular organization and retained tissue architecture were equally observed in slices obtained from transmural core biopsies (Fig. B,D,F,H) or from larger pieces of cardiac tissue (Fig. A,C,E,G) for both pigs and humans. As a side note, an increase in cell and nuclear size, a typical indicator of hypertrophied myocardium, could be observed in the cardiomyocytes of some specimens, such as the one shown in panel 2C, which came from a patient undergoing mitral valve repair. In some slices of tissue specimens, we observed areas with both longitudinal and transverse fibers (Supplementary Figure A). A few slices showed small areas lacking myocardial fibers. Human papillary muscles showed varying degrees of intercalated connective tissue of the chordae tendinae (Supplementary Figure B). Transmural core biopsies of elderly patients or patients with myocardial hypertrophy frequently presented higher levels of interstitial fibrosis. Moreover, in some but not all slices from specific tissue specimens, a region was found to be occupied by part of a coronary artery or a vein and its surrounding connective tissue (Supplementary Figure C).
We next performed TTC staining analysis to assess the optimal preservation of the ventricular tissue specimens after extraction from the heart and subsequent transportation to the laboratory (Fig. A). In all cases, we observed an extended homogeneous deep red staining, correlating with viable tissue. Only minor areas of dead tissue in some edges were noted, as in the lower end of the transmural core biopsy and the papillary muscle of the second and third panels in Fig. A, likely torn during the extraction from the heart. A quantitative outcome of the viability of transmural core biopsies with respect to tissue blocks in pigs as well as of transmural core biopsies with respect to papillary muscles in humans is presented in Fig. B. The median relative viability of transmural core biopsies was above 50% of that in porcine tissue blocks and above 70% of that in human papillary muscles. Absolute absorbance values of transmural core biopsies were not statistically significantly different between biopsies and larger pieces of tissue or between porcine and human biopsies. Additionally, we assessed the effects of vibratome slicing and electrophysiological assessment on tissue viability after 4–8 h of optical mapping recordings. The results from the TTC staining analysis of the tissue slices are presented in Fig. C. In the porcine transmural tissue blocks, slicing and electrophysiological procedures led to minimal damage, with median relative viability of the slices in relation to the intact tissue block being 90% and no statistically significant differences in absolute absorbance values between the slices and the whole tissues. For human papillary muscles, the median relative viability of the slices was reduced to 47%, probably due to their higher inhomogeneity in the distribution and alignment of myocardial tissue fibers and, again, no slicing-induced significant differences in absolute absorbance values. For porcine and human transmural core biopsies, the median relative viability of the slices was 55% and 64% of that for whole transmural core biopsies in pigs and humans, respectively. In both species, the absolute viability of transmural core biopsy slices was significantly different from that of the intact biopsies, indicating some damage caused by vibratome slicing. When comparing the slicing-induced damage in transmural core biopsies versus the slicing-induced damage in larger pieces of tissue, this was found to be significantly larger in pigs but not in humans. Further assessment of vibratome slices’ viability by Dapi/Syto9 staining and confocal microscopy analysis is presented in Fig. . Due to laser penetration limitations, viability results are compared for the first half and second half portions of the outer layers of cardiomyocytes in the tissue slices. In the first most external portion of the slices from pig tissue blocks, human papillary muscles and human transmural core biopsies, the median percentages of viable cells were 35%, 28% and 40%, which increased to 44%, 56% and 58%, respectively, for the second half portion. This increment was statistically significant for human transmural slices. Given the observed increase in viability when going deeper into the tissue slice, viability percentages up to 100% can be expected for inner layers in the central zone of the slices (in accordance with the viability assessment in the whole slices with the TTC enzymatic method described in Fig. C).
It should be noted that the median percentages of viable cells were similar for the slices of all analyzed tissue types, and even slightly higher for transmural core biopsies, thus indicating the feasibility of using biopsy slices for electrophysiological analysis.
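The relative-viability measures reported above reduce to simple ratios of TTC-formazan absorbance readings between a sample preparation and a reference preparation. The short sketch below illustrates that calculation; the function name, the example absorbance values and the 50%-style comparison are illustrative assumptions and are not taken from the study's protocol.

```python
import statistics

def relative_viability(sample_absorbances, reference_absorbances):
    """Median TTC-formazan absorbance of a sample expressed as a
    percentage of the median absorbance of a reference preparation."""
    return 100.0 * statistics.median(sample_absorbances) / statistics.median(reference_absorbances)

# Hypothetical absorbance readings (arbitrary units), for illustration only.
biopsy_slices = [0.42, 0.51, 0.47, 0.39]
tissue_block = [0.80, 0.84, 0.77, 0.92]

print(f"Relative viability: {relative_viability(biopsy_slices, tissue_block):.0f}%")
```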
Viable pig slices for electrophysiological analysis were obtained from all tissue blocks and all transmural core biopsies. For each block or biopsy, more than 15 slices were generally obtained. In the case of the tissue blocks, 100% of the vibratome slices rendered electrophysiological signals, with the corresponding percentage in the transmural core biopsies being 70%. Optical mapping results in slices from tissue blocks and slices from transmural core biopsies corresponding to the same animal and a nearby location are illustrated in Fig. . As can be observed from Fig. A, the APs measured in the biopsy preserved the morphology of those measured in the block. This was consistently noted in all tissue specimens from all investigated animals. In addition, the APDs and activation times measured in the biopsies were within the range of APDs and activation times in the blocks, with reduced spatial variability in the biopsies due to their smaller volumes, as exemplified in the maps presented in Fig. B. The median APD across pixels of all analyzed tissue blocks was 111 ms at 1 Hz pacing frequency; in the biopsy slices, the median APD was 117 ms (Fig. , left panel). The difference in APD between the two types of slices was not statistically significant. Histograms of APD values in the transmural core biopsies and tissue blocks are presented in Fig. C. A high overlap between the two histograms can be observed, with no differences in the APD distributions. Next, the response to an increase in the pacing frequency was evaluated in the porcine transmural core biopsy slices and compared to that in the porcine tissue block slices (Fig. , left panel) using normalized APDs (Fig. , central panel). Although the change in APD was only significant for the tissue blocks, similar relative responses were observed in the two cases: the median normalized APD was 88% in the blocks and 91% in the biopsies, with no statistically significant difference between them. Porcine transmural core biopsy slices and tissue block slices were additionally compared in terms of their response to β-adrenergic stimulation (Fig. , left and right panel). We observed a change in APD when applying 100 nM isoproterenol, although it was only significant in the case of the biopsies. When comparing the relative differences, normalized APDs did not differ between the block slices and the biopsy slices: the median normalized APD was 95% in the blocks and 92% in the biopsies.
Viable human slices for electrophysiological analysis were obtained from all papillary muscles and all transmural core biopsies. For each papilla or biopsy, we generally obtained more than 8 slices. Electrophysiological signals were obtained from 77% of the vibratome slices in the case of the papillary muscles and from 58% in the case of the transmural core biopsies. The results obtained by optically mapping human papillary muscle slices and transmural core biopsy slices are illustrated in Fig. . Examples of APs, as well as APD and activation time maps, are presented in Fig. A for a papillary muscle slice and in Fig. B for a transmural biopsy slice. It should be noted that there is no correspondence between the papillary muscle and the biopsy, as these were collected from different patients and locations. Less spatial variability was observed in the biopsy slices as compared to the papillary muscle slices due to their smaller volume. The median APD values in the biopsies from the left ventricular myocardium were larger than in the papillary muscles: the median APD was 266 ms in the papillary muscles and 360 ms in the biopsies (Fig. C, left panel). An increase in the pacing frequency reduced APD in both papillary muscle slices and biopsy slices, as shown in Fig. C, left and central panel. The median shortening of APD was 19% in the papillary muscles and 22% in the biopsies. Similarly, the β-adrenergic agonist isoproterenol reduced APD in papillary muscle slices and biopsy slices (Fig. C, left and right panel). The median shortening of APD was 22% in the papillary muscles and 27% in the biopsies.
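For readers unfamiliar with how APD and activation-time maps are derived from optical mapping signals, the sketch below shows one common per-pixel approach: activation taken at the steepest upstroke and APD measured at a chosen repolarization level. The 90% repolarization level, the sampling rate and the synthetic trace are assumptions for illustration only and do not reproduce the exact processing pipeline used in this study.

```python
import numpy as np

def apd_and_activation(trace, fs, repol_level=0.9):
    """Return (activation_time_ms, apd_ms) for a single optical action potential.

    trace: 1-D fluorescence signal containing one action potential.
    fs: sampling frequency in Hz.
    repol_level: fraction of repolarization at which APD is measured (0.9 -> APD90).
    """
    trace = np.asarray(trace, dtype=float)
    # Normalize to [0, 1] so amplitude thresholds are comparable across pixels.
    norm = (trace - trace.min()) / (trace.max() - trace.min())
    # Activation time: instant of the steepest upstroke.
    act_idx = int(np.argmax(np.diff(norm)))
    peak_idx = int(np.argmax(norm))
    # Repolarization time: first sample after the peak that falls below
    # (1 - repol_level) of the normalized amplitude.
    below = np.where(norm[peak_idx:] <= (1.0 - repol_level))[0]
    repol_idx = peak_idx + int(below[0]) if below.size else len(norm) - 1
    ms_per_sample = 1000.0 / fs
    return act_idx * ms_per_sample, (repol_idx - act_idx) * ms_per_sample

# Synthetic example trace sampled at 500 Hz (illustrative only).
fs = 500
t = np.arange(0, 0.6, 1 / fs)
trace = np.where(t < 0.05, 0.0, np.exp(-(t - 0.05) / 0.12)) + 0.02 * np.random.rand(t.size)
activation_ms, apd_ms = apd_and_activation(trace, fs)
print(f"activation: {activation_ms:.1f} ms, APD90: {apd_ms:.1f} ms")
```

Applied pixel by pixel to an optical mapping recording, this kind of routine yields the APD and activation-time maps of the type shown in the figures.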
In this study, we present transmural core biopsies as a safe and widely available method to obtain ventricular tissues during routine cardiac surgery for structural and functional characterization. Although cardiac tissue slices have already been shown to represent a suitable model for physiological and pharmacological studies – , , the novelty of our work lies in obtaining such human tissue slices from small myocardial biopsies. Indeed, biopsies respond to increments in the pacing frequency and to β-adrenergic stimulation similarly to larger pieces of ventricular tissue. This safe and simple method thus represents a breakthrough for increasing our understanding of human heart function. Transmural core biopsy slices preserve structural integrity, with predominantly longitudinal alignment of myocardial fibers, regular cross striations and intact tissue structure. In general, we do not observe signs of edema, hypercontracture or myofibril disorganization. Viability of the entire biopsy specimens and vibratome slices, evaluated both by an enzymatic assay and by vital staining imaged with confocal microscopy, is substantially maintained when compared with larger ventricular portions like porcine transmural tissue blocks and human papillary muscles. Only minor injuries are noted on the surface of the biopsy, likely caused by the biopsy needle during the extraction. Upon vibratome cutting, the presence of damaged cells in the tissue slices is mainly restricted to the first outer layers of cells transected during the slicing, which nevertheless retain some viability. Cardiomyocyte viability increases deeper into the tissue slice, in agreement with previous studies on larger ventricular myocardial slices , , , , , . The alignment of the myocardial fibers with the vibratome slicing plane improves viability and restricts tissue damage to the very first cellular layers, as previously reported , , , , , . The fiber alignment step is relevant not only to transmural core biopsies, but generally to any other tissue to be sliced. This can explain why the viability of transmural biopsy slices is higher than that of larger tissue slices like those obtained from papillary muscles, which present pronounced tissue heterogeneity and gross areas of chordae tendineae that hinder fiber alignment with the cutting plane. Other measures to increase biopsy slice viability include preservation of the biopsies in very cold solution during transport and slicing , , , , , and fast positioning of the biopsy into agarose to minimize exposure to room temperature. Importantly, a slice thickness of 350 µm is selected to maximize viability while attaining optimal electrophysiological signals. This value is in line with other studies that have reported thicknesses ranging from 150 to 500 µm , , , , with most studies in humans setting it at 300–400 µm , , , , . Also, a post-cutting recovery time of 30 min is applied, consistent with previously reported times for myocardial slices to allow them to attain steady-state properties , , , , , , . We note that some transmural core biopsies render slices with areas of incompletely aligned fibers, accumulation of fibrosis or presence of blood vessels, which may represent a higher percentage of their total volume as compared to other, more commonly used myocardial slices. However, since a set of tissue slices is obtained from each transmural core biopsy, these limitations can be at least partially overcome by the evaluation of other slices from the same biopsy.
Indeed, the percentages of pig and human biopsies where we can measure electrophysiological signals, despite being lower than the percentages corresponding to pig transmural tissue blocks and human papillary muscles, are still substantial. Optical mapping allows the characterization of action potentials from transmural core biopsy slices with high spatial resolution. The electrophysiological evaluation of myocardial tissue slices is evolving from multi-electrode arrays , , , , , to optical mapping for the analysis of action potentials and/or intracellular calcium transients , , , , , – . Here we show that AP duration and morphology are comparable in slices from transmural core biopsies and tissue blocks from the same pig ventricular regions. Median APD values (117 ms and 111 ms in the biopsy and block slices, respectively) are slightly lower than previously published values in experimental works on multicellular preparations – , which could be explained by the fact that our pig samples come from piglets and young pigs. Indeed, in our APD measurements we observed a trend toward shorter APD in samples from piglets as compared to those from adult pigs. In any case, the results presented in this study confirm that transmural core biopsy slices mimic the electrical behavior of larger pieces of myocardial tissue and thus constitute an alternative model for cardiac electrophysiological research. In humans, results from biopsy slices and papillary muscle slices are not comparable, as they are obtained from different patients and, importantly, different ventricular locations. Some, but not all, of the patients were also on antiarrhythmic therapy. Here, median APD in papillary muscle slices paced at 1 Hz is 266 ms. A previous study has reported average APD90 values of 320 ms and 443 ms in papillary muscles of healthy and failing hearts, respectively, paced at a lower frequency of 0.5 Hz . Both factors, i.e. slower pacing and heart failure-associated remodeling, contribute to APD prolongation. In another study, APD90 values of around 290 ms have been reported for ventricular trabeculae and papillary muscles of undiseased organ donors paced at 1 Hz and APD90 values of around 350 ms for papillary slices from failing hearts . Our results on papillary muscle slices obtained from patients with mitral valve disorders are in line with published experimental ranges. On the other hand, median APD in the human transmural core biopsy slices of this study paced at 1 Hz is 360 ms. This value is in accordance with those reported by other groups for failing and non-failing ventricular tissue preparations. Reported APD values for left ventricular multicellular preparations in the literature range from 275 to 439 ms for non-failing ventricles , , , , – and from 380 to 457 ms for failing ventricles , , , , , , , . The transmural core biopsies of this study are obtained from patients with coronary artery disease, aortic aneurysm and aortic valve disorders, which can explain the wide range of APD values observed, from 275 to 475 ms. The morphology of action potentials recorded in transmural biopsies, as well as the local activation times, are in line with those previously reported by other groups . To the best of our knowledge, this is the first study showing electrophysiological measurements in myocardial slices of the left ventricular free wall from living donors.
Since transmural core biopsies can be routinely collected and several slices can be optically mapped, this study sets the basis for future investigations aimed at characterizing inter- and intra-individual variability in the human left ventricle in a large number of individuals. The electrical properties assessed from biopsy slices are those of native myocardium and, thus, they represent a more advanced model than isolated cardiomyocytes. Transmural core biopsies avoid the loss of extracellular matrix and inter-cellular connections as well as the alterations in ion channels associated with cell isolation procedures, and they provide affordability, ease of use and reproducibility. Transmural biopsies can also be useful for cardiac tissue characterization in animal studies, as they avoid the need to sacrifice the animals and allow longitudinal research. As several myocardial slices can be obtained from each core biopsy, and more than one biopsy can be obtained from the same animal, multiple conditions can be tested in tissues from the same heart and different time points can be evaluated. This allows us to control for inter-individual bias and to reduce the number of animals required for a given experiment, in line with the 3Rs ethical principles in animal testing. There is already a study in the literature where serial left ventricular needle biopsies have been obtained from dogs undergoing chronic experiments by a trans-thoracic approach that avoids thoracotomy . In that study, the authors show that biopsy sampling does not influence any hemodynamic, mechanical or electrocardiographic parameters while allowing molecular assessment of left ventricular tissue. Assessment of the response of biopsy tissue slices to an increase in the pacing frequency provides further confirmation that transmural core biopsies maintain the electrophysiological properties of native myocardium. In pigs, biopsy slices respond to a change in pacing frequency from 1 to 2 Hz with an APD decrease of the same magnitude (median difference below 3%) as the slices obtained from larger tissue blocks of the same ventricular region. In humans, biopsy slices respond to the change in pacing frequency with a more pronounced decrease in APD than that observed in pigs, but in any case of the same magnitude as slices from human papillary muscles (median difference below 4%). The observed APD reduction following an increment in pacing frequency is in agreement with previously reported outcomes for other pig and human multicellular preparations , , , , . The physiological responsiveness of transmural core biopsy slices is additionally substantiated by assessing the response to the administration of the β-adrenergic agonist isoproterenol. Porcine slices from transmural core biopsies and tissue blocks respond equally to β-adrenergic stimulation, with no statistically significant differences in the APD decrease (less than 3% median difference). The same holds for human slices from transmural core biopsies and papillary muscles, which respond more markedly than pig tissues but similarly to one another (less than 6% median difference, no statistically significant differences). The β-adrenergic stimulation-induced APD reduction measured in the pig and human tissue preparations of this study is in line with other studies in the literature , , , , , . All presented results support the suitability of transmural biopsy slices for studies of healthy and diseased human ventricular tissue.
Some limitations and future extensions of this work are as follows. Due to the small size of human ventricular transmural core biopsies, imposed by safety considerations, the percentage of damaged slices from each biopsy specimen is larger than for other multicellular preparations. Nevertheless, since a large number of slices can be obtained from each biopsy and more than one biopsy can be obtained from each donor, it is still possible to perform different types of analysis in a relatively high proportion of donors. We have characterized APD and activation times in mid-myocardial slices from transmural core biopsies from pigs and humans, and we have shown that, despite the reduced size of the biopsies, they still allow evaluation of spatial AP heterogeneities. Future studies could additionally investigate other electrophysiological properties from transmural core biopsies, such as conduction velocity, refractory period or restitution curves, both at baseline and in response to pharmacological treatments. Also, other studies could characterize differences in the electrophysiology of different regions (epicardium, mid-myocardium, endocardium) within the myocardial wall of each transmural core biopsy, which would help to investigate effects on transmural heterogeneity under diseased states, including inherited channelopathies. Moreover, transmural core biopsy slices could be used to understand depolarization abnormalities, like those observed in Brugada syndrome, while accounting for myocardial tissue heterogeneity. In conclusion, we present and validate transmural core biopsies as a novel, safe, and practical procedure to obtain human left ventricular tissue from living donors. Slices from transmural core biopsies preserve the structural and functional properties of larger pieces of myocardial tissue and respond similarly to changes in pacing frequency and β-adrenergic stimulation. This study opens the door to future investigations aimed at characterizing cardiac behavior in healthy and diseased hearts from a large number of individuals.
Supplementary Information.
Preoperative optimization strategy management model applied in gallbladder surgery
General information

This study utilized cluster sampling to select ordinary inpatients and pre-inpatients from June 1, 2022, to December 31, 2023. During this period, the patients were divided into two groups for statistical analysis.

Inclusion criteria: (1) agreement to participate in the study and signing of the informed consent form; (2) alertness; (3) age ≥ 18 years; (4) a diagnosis of cholecystitis, gallstones, or gallbladder polyps with scheduled laparoscopic cholecystectomy.

Exclusion criteria: (1) patients diagnosed with gallbladder cancer either during or after surgery; (2) patients with severe heart, liver, or kidney dysfunction, coagulation disorders, or a history of laparoscopic surgery; (3) patients with complications such as bleeding, or patients referred to a different department due to severe postoperative complications; (4) non-compliant patients unable to comprehend or cooperate with health education; (5) patients with acute cholecystitis; (6) individuals who voluntarily withdrew from the study.

A total of 440 patients were included, consisting of 158 ordinary inpatients and 282 pre-inpatients undergoing gallbladder surgery. The comparison of general information between the two groups is presented in Tables , , , , , and .

Control group

The general inpatient group was admitted following routine procedures for laparoscopic cholecystectomy. After admission, the process included an introduction to the environment, preoperative assessments, preoperative examinations, consultations, and dietary requirements, as well as perioperative components such as postoperative guidance for recovery training and the implementation of therapeutic nursing care.

Intervention group

Composition of the management team: The research team comprises an expert group and a working group. The expert group consists of 10 senior-level professionals, including 1 team leader, 2 deputy team leaders, and 7 team members; their primary responsibilities include discussing and formulating pre-admission processes and plans. The working group includes 1 staff member from the admission and discharge management center, 1 head nurse from the department, 1 key nursing staff member, and 2 key medical staff members. Their main responsibilities are: (1) conducting literature searches and preliminary construction of the management model; (2) organizing and facilitating expert management meetings to discuss and summarize findings; (3) determining execution processes and content; and (4) assigning implementation details and clarifying the division of labor.

Literature search: The search was conducted according to the "6S" evidence model, utilizing a top-down approach. The evidence resource databases included UpToDate, Cochrane Library, JBI Library, BMJ Best Practice, the Agency for Healthcare Research and Quality (AHRQ), Medical Search, the ESPEN website, the ASPEN website, EMbase, Medline, PubMed, SinoMed, Wanfang Database, and CNKI. The Chinese and English keywords were "Hernia/Hepatobiliary," "Pre-hospitalization," "Information," and "Management Mode/Management/Treatment Mode." The search period for each database spanned from January 1, 2017, to January 1, 2022. The analysis revealed that pre-hospitalization does not occupy hospital bed resources, improves treatment efficiency for hospitals, and can optimize medical resources for health insurance.
During the pre-hospitalization period, no bed or nursing fees are incurred. The processes of pre-hospitalization entry assessment, notification, issuance of hospital cards and examinations, pre-hospitalization handling, completion of examinations, and medical record documentation must be streamlined and operate efficiently. In light of the current conditions within our hospital, we established a preliminary management model for the optimization of pre-hospitalization strategies in liver and gallbladder surgery.

Expert conference: Medical department personnel, outpatient and admission center staff, medical experts, and working group members were invited to discuss applicable diseases and workflows related to pre-admission. The Director of Outpatient and Admission Management reported on the recent operations concerning the pre-admission of liver and gallbladder patients, as well as existing challenges in the pre-admission process. The medical department and the operation management department provided feedback and analyzed the interpretation of the pre-admission process. The experts listened to the difficulties and bottlenecks in the operational process, analyzed them individually, discussed common issues, and assigned responsibilities. The revised management model identifies laparoscopic cholecystectomy for gallbladder polyps and gallstones in the hepatobiliary department as the applicable disease. Additionally, it was proposed to simplify and automate the pre-admission process, shorten preoperative preparation time, and ensure close cooperation among the examination departments to minimize the need for patients to travel back and forth. It is recommended that, after the initial pre-admission visit, patients complete all necessary examinations in no more than one additional visit before being admitted.

Completion of information construction: The management model was instituted following the completion of the information platform's construction. The outpatient services finalized the establishment of pre-admission cards, and all pertinent information about pre-admission patients, such as the intended admission date, contact telephone number, and the progress and results of examinations, is received by the inpatient system module.

Determination of pre-hospitalization management content and implementation process: The admission scope covers patients of the hepatobiliary surgery ward who require hospitalization and whose conditions are stable but who cannot be admitted immediately due to a lack of available beds; formal pre-admission examinations are conducted through reserved pre-admission beds. Patients who do not stay in the hospital undergo the necessary examinations and preoperative preparations provided by the hospital.

Pre-admission management requirements:
(1) Pre-admission patients should be admitted within three days; if they are not, the information system issues a prompt. The Admission and Discharge Management Center provides an explanation and recommends either extending the pre-admission time limit or transferring to self-payment discharge, which must be approved by the medical insurance office before implementation .
(2) Following the formal admission of pre-admission patients, the clinical department should schedule surgery within two working days. If there are valid reasons for a delay, the attending physician must document these reasons in the medical records.
(3) Clinical doctors should inform patients of the relevant matters regarding pre-admission after issuing the pre-admission card and provide two copies of the "Informed Consent for Pre-Admission," signed by the physician, along with the relevant examination forms.
(4) During the pre-admission process, staff at the Admission and Discharge Management Center should reiterate the specific procedures and precautions to the patients and collect a signed copy of the "Informed Consent for Pre-Admission" for retention.
(5) When scheduling examinations, efforts should be made to arrange for them to be conducted on the same day whenever possible .
(6) Medical insurance patients should use their social security card for unified billing during the pre-admission period and make advance payments according to their condition, all handled at the inpatient department.
(7) If a medical insurance patient is not formally admitted for personal reasons during the pre-admission period, all expenses incurred during this period will be settled as self-payment, and a self-payment invoice will be issued. In special cases, the Information Department should assist outpatient physicians in completing inpatient refunds and outpatient billing.
(8) Upon the formal admission of pre-admission patients, the ward medical staff should transfer their pre-admission beds to regular beds.
(9) The patient's hospitalization period should commence from the time of formal admission, and hospitalization invoices, medical records, etc., should reflect this time of formal admission .
(10) Managing physicians should document the results of examinations conducted during the patient's pre-admission period in the medical records, and the various examination reports should be included directly in the inpatient records.
(11) If a pre-admission patient exhibits critical values during examinations, these should be treated as outpatient critical values; if specialist inpatient treatment is required, the patient should be promptly transferred to regular hospitalization .
(12) Anesthesia evaluations should be provided in a timely manner in the anesthesia clinic, with multidisciplinary collaborative diagnosis and treatment facilitated if necessary, and vital physiological organs assessed to predict risk factors and evaluate the patient's ability to tolerate surgery.
(13) The Medical Affairs Department, Medical Insurance Office, and other related departments should regularly inspect pre-admission operations and address any violations in accordance with hospital regulations.
(14) Follow-up evaluations of pre-admission patients after discharge, along with continuous quality improvement, should be conducted.

The preoperative optimization strategy management model process is shown in Fig. .

Evaluation criteria

General information: Clinical data were collected from both patient groups, including gender, age, and underlying conditions such as hypertension, diabetes, heart disease, and respiratory diseases. Following discharge, patients were divided into two groups for statistical observation of total hospitalization days, preoperative preparation days, postoperative hospitalization days, and total postoperative costs. A comparison was made between pre-hospitalization patients with one or more underlying conditions (hypertension, diabetes, cardiovascular, and respiratory diseases) and those without such conditions.
The two patient groups were also compared based on age, categorizing them into elderly patients (≥ 65 years) and non-elderly patients (< 65 years). The total number of hospitalization days, preoperative preparation days, postoperative hospitalization days, and total postoperative costs were observed across the age groups.

Statistical methods

The statistical analysis was conducted using SPSS 26.0. Normally distributed continuous data were described as mean and standard deviation and compared between the two groups with the t-test. Non-normally distributed continuous data were described as median and quartiles and compared between the two groups with non-parametric tests. Categorical data were described as frequencies and compared between the two groups with the chi-square test. Binary logistic regression analysis was performed to identify factors associated with pre-hospitalization versus general hospitalization. P < 0.05 was considered statistically significant.
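As a companion to the SPSS workflow described above, the following is a minimal sketch of an equivalent analysis in Python (normality-dependent t-test or Mann–Whitney U test, chi-square test for categorical variables, and binary logistic regression). The file name and column names are hypothetical placeholders that simply mirror the variables described in this study.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# One row per patient, with hypothetical column names:
# 'group' (1 = pre-hospitalization, 0 = general), 'age', 'male',
# 'comorbidity', 'los_days' (length of stay), 'total_cost'.
df = pd.read_csv("gallbladder_patients.csv")  # hypothetical file

pre, general = df[df.group == 1], df[df.group == 0]

# Continuous outcomes: t-test if both groups look normal, otherwise Mann-Whitney U.
for col in ["los_days", "total_cost"]:
    _, p_pre = stats.shapiro(pre[col])
    _, p_gen = stats.shapiro(general[col])
    if p_pre > 0.05 and p_gen > 0.05:
        _, p = stats.ttest_ind(pre[col], general[col])
    else:
        _, p = stats.mannwhitneyu(pre[col], general[col], alternative="two-sided")
    print(col, "p =", round(p, 4))

# Categorical variables: chi-square test on the contingency table.
for col in ["male", "comorbidity"]:
    chi2, p, _, _ = stats.chi2_contingency(pd.crosstab(df.group, df[col]))
    print(col, "p =", round(p, 4))

# Binary logistic regression: factors associated with the hospitalization mode.
X = sm.add_constant(df[["age", "male", "comorbidity"]])
model = sm.Logit(df["group"], X).fit(disp=0)
print(model.summary())
```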
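To make the admission-deadline rule in the pre-admission management requirements above more concrete (requirement 1: the information system issues a prompt when a patient has not been formally admitted within three days), here is a minimal sketch of how such a check might be expressed. The record structure, field names and dates are assumptions for illustration and do not describe the hospital's actual information system.

```python
from dataclasses import dataclass
from datetime import date, timedelta

ADMISSION_DEADLINE = timedelta(days=3)  # per pre-admission requirement (1)

@dataclass
class PreAdmissionRecord:
    patient_id: str
    pre_admission_date: date
    formally_admitted: bool = False

    def needs_prompt(self, today: date) -> bool:
        """True when the patient is still not formally admitted and the
        three-day pre-admission window has elapsed."""
        return (not self.formally_admitted and
                today - self.pre_admission_date > ADMISSION_DEADLINE)

# Illustrative use: flag overdue pre-admission patients for the
# Admission and Discharge Management Center.
records = [
    PreAdmissionRecord("P001", date(2023, 5, 2)),
    PreAdmissionRecord("P002", date(2023, 5, 6), formally_admitted=True),
]
today = date(2023, 5, 8)
overdue = [r.patient_id for r in records if r.needs_prompt(today)]
print("Prompt required for:", overdue)
```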
From June 1, 2022, to December 31, 2023, patients undergoing gallbladder surgery were categorized into a pre-hospitalization group and a regular hospitalization group. The two patient groups were comparable in terms of general condition, with no significant differences observed except for age ( P > 0.05 for the remaining variables). Binary logistic regression analysis was employed to identify the factors influencing pre-hospitalization versus regular hospitalization. However, significant differences were found in length of stay, preoperative preparation time, postoperative recovery time, and total cost between the two groups ( P < 0.05), as presented in Table . The pre-admission patients were further divided into those with underlying conditions and those without, and comparisons were made regarding length of stay, preoperative preparation time, and total cost, revealing statistically significant differences ( P < 0.05), as shown in Table . A comparative analysis was conducted based on age stratification, categorizing patients into two groups: those aged 65 years and older, and those under 65 years. The analysis focused on four indicators: length of hospital stay, preoperative preparation time, postoperative recovery time, and total cost. Among the pre-hospitalized patients, statistical significance was observed in the length of hospitalization, postoperative recovery time, and total cost ( P < 0.05), while preoperative preparation time did not show statistical significance ( P > 0.05), as presented in Table . For patients aged 65 years and older, a comparison of the different hospitalization modes revealed statistically significant differences in all four indicators between the pre-hospitalization mode and the general hospitalization mode ( P < 0.05), as shown in Table . In the cohort of patients under 65 years, comparisons between pre-hospitalized patients and general patients indicated statistical significance in length of hospitalization, postoperative recovery time, and total cost ( P < 0.05), whereas no statistical significance was found in preoperative preparation time ( P > 0.05), as detailed in Table . Furthermore, within the general inpatient group, the age-stratified comparison of all four indicators demonstrated statistical significance ( P < 0.05), as illustrated in Table .
Multidisciplinary collaboration ensures the management of pre-hospitalization patients undergoing gallbladder surgery through process reengineering. Interdisciplinary cooperation is increasingly vital in the medical field, as various studies have demonstrated its positive impact on the perioperative management of surgical patients, the integration of emergency processes, and other factors influencing clinical outcomes . This study, grounded in interdisciplinary collaboration, optimizes the traditional inpatient management model through literature reviews and expert consultations, thereby transforming the existing systems for inpatient management and information processing . The interdisciplinary expert groups involved comprise senior professionals holding director positions in hospital functional departments, including the medical, information, and hepatobiliary sectors, which ensures the practicality and reliability of the process transformation. The information department is capable of initiating inpatient records while ordering examinations and tests, as well as establishing disease package modules. Attending physicians can create templates for examination items, facilitating the completion of examination orders in outpatient clinics and their subsequent display in the inpatient system's pre-admission section. The inpatient center process enables same-day completion of tests and examinations, while the medical department oversees the quality of process execution. Additionally, the medical team has implemented a WeChat butler service, guiding pre-admission patients with smartphones to join online groups, thus facilitating a seamless transition from pre-admission to formal admission and providing uninterrupted services to patients . This management model not only enhances outpatient efficiency but also increases compliance among attending physicians in issuing pre-admission and inpatient records. Pre-hospitalization gallbladder surgery patients show advantages across age groups compared with ordinary inpatient gallbladder surgery patients. Gallbladder surgery patients were categorized by age into two groups: those aged 65 years and older and those younger than 65 years. Pre-hospitalization patients under 65 years undergoing gallbladder surgery experience shorter hospitalization durations, quicker postoperative recovery, and lower total hospitalization costs compared to their older counterparts. Under the same preoperative protocols, the hospitalization efficiency of elderly patients is still adversely affected by underlying health conditions and other factors. The efficiency of hospital admissions for pre-hospitalized patients may be enhanced through a comprehensive assessment of underlying conditions, such as blood glucose and blood pressure, conducted during outpatient visits, along with the implementation of early educational interventions for this patient group. When comparing the different hospitalization processes, pre-hospitalization patients aged 65 years and older exhibit a clear advantage in hospitalization efficiency over standard inpatients of the same age. This study therefore recommends prioritizing pre-hospitalization management processes for gallbladder surgery patients aged 65 and older. For older patients, proactive preoperative management can address such issues before hospitalization, thereby reducing preparation time. In comparisons between pre-hospitalization patients under 65 years and regular inpatients, pre-hospitalization patients demonstrate a distinct advantage in terms of postoperative hospitalization duration, total hospitalization time, and costs.
However, for gallbladder surgery patients under 65 years, the efficiency of preoperative preparation time is not advantageous, likely because the basic examinations for gallbladder surgery can be scheduled on the same day without requiring additional appointments, so preparation time is largely unaffected. Conversely, for patients over 65 years, the reduction in waiting time before surgery is more pronounced. Research on preoperative patients segmented by age remains limited in the domestic literature. This study highlights the benefits of enhancing medical efficiency and reducing patient costs for gallbladder surgery preoperative patients across different age groups.

The Preoperative Management Model for Laparoscopic Cholecystectomy: Practicality and Effectiveness
With the high-quality development of public hospitals, enhancing medical service capacity is the most crucial aspect. In the current context of limited medical resources, improving the efficiency of these resources is essential. Pre-hospitalization, as a supplementary mode to general hospitalization, functions as a virtual bed type that aligns with current medical reform policies. Utilizing laparoscopic cholecystectomy patients as a pilot group, the implementation of the pre-hospital optimization strategy management mode has demonstrated significant benefits in reducing hospitalization duration, postoperative recovery time, and overall costs, consistent with findings from experts such as Cao Lei and Yang Jian. During the pre-hospitalization period, the hospital does not charge for bed and nursing fees. The pre-hospitalization deposit and incurred examination fees are incorporated into the formal hospitalization bill, resulting in a reduced total cost upon discharge. The incidence of bile duct injury ranges from 0.2–1.3%, while the incidence of bleeding complications in laparoscopic gallbladder surgery varies from 0.04–10%. The occurrence of these complications is associated with the surgeon's level of experience, anatomical variants of the bile duct, and individual patient differences. In the patients included in this study, there were no occurrences of bleeding, bile duct injury, or other complications. A multidisciplinary preoperative assessment of patients' surgical tolerance and risk factors is beneficial in reducing the risk of postoperative complications. Furthermore, implementing a comprehensive preoperative assessment process prior to hospitalization contributes to ensuring the safety of surgical procedures for patients. Aside from 26 patients who opted out of pre-hospitalization management for personal reasons, no adverse events concerning medical safety were reported. The pre-hospitalization management of cholecystectomy patients aligns with the objectives of the new medical reform policy, facilitating the integration of medical resources, enhancing departmental management precision, and improving the social benefits of hospitals.

Limitations
The short observation period of this study limits its findings, and the long-term benefits of this management model for patients require further validation. Additionally, focusing on a single disease type could enhance the sample size and reduce selection bias by employing propensity score matching. The inpatient experiences of patients during the statistical period were not further investigated or compared. Furthermore, it remains unclear whether economic factors, including the health insurance system, influence the hospitalization patterns of patients.
These limitations warrant further investigation and research. The pre-hospital optimization strategy management model requires a high level of patient compliance; however, factors such as communication device issues and the need to reserve phones for children have delayed some elderly patients' access to preoperative examination information. This has resulted in inadequate preoperative preparation and the necessity for additional examinations upon admission. Given the associated medical risks and safety concerns, addressing these issues will be crucial for the follow-up pre-hospitalization management model.
The pre-hospitalization optimization strategy management model can significantly reduce the duration of hospital stays and medical costs for gallbladder surgery patients. This model is advocated for all patients, whether aged 65 and over or under 65. Moreover, the establishment of this pre-hospitalization management model, backed by digital support, can create a mutually beneficial situation for both patients and hospitals.
Pharmacoepidemiology and costs of medications dispensed during pregnancy: A retrospective population‐based study | 8ed80e31-194e-4174-ae48-8f575ee9c17e | 10952169 | Pharmacology[mh] | INTRODUCTION Evidence suggests >80% of pregnant women take at least one medication during their pregnancy , , , and this prevalence has increased over time, with the average number of medicines used during a pregnancy increasing from 2.5 in 1976–1978 to 4.2 in 2006–2008. As the average age of women at childbirth rises, the incidence of maternal chronic disease is also rising. , , Some women enter pregnancy with chronic medical conditions that require ongoing or episodic pharmacological treatment (e.g. asthma, epilepsy, depression) and other women develop conditions during pregnancy that may require pharmaceutical intervention (e.g. iron deficiency anaemia, diabetes, pre‐eclampsia). Prior studies reporting on the epidemiology of pharmaceuticals used in pregnant women have largely focused on European cohorts, , , , , , although analyses have also been published on women in the UK, Australia, , North America, , , and Brazil, , , and one multinational study was also published in 2014. Of these 16 studies, only seven (44%) were published in the last decade and only one study reported on the costs associated with the medications prescribed. Healthcare budgets are finite and the resources available to achieve positive health outcomes are limited, therefore even when there are data to support the safety, efficacy and economic efficiency of medication use (known information deficits for pregnant populations , , , , , , ), due consideration must also be given to the opportunity cost (the value foregone as a consequence of a resource not being available for its best alternative use) associated with implementation at scale. Increasing the proportion of Government budgets that are spent on healthcare reduces the proportion available for other societal priorities such as education, housing and transportation. Rising healthcare costs can impact the economy, compromise patient care and financial security, and impact patient access to care. Evidence of the affordability, cost and value of healthcare interventions is essential to inform national health priorities and support the development of clinical practice guidelines. The objective of this study is to give an overview of the pharmacoepidemiology and costs of prescription medications dispensed during pregnancy among a cohort of Australian women, using a rich source of routinely collected health information. More specifically, we aim to describe the prevalence of medication use during pregnancy and identify which medications are the greatest cost‐drivers for total expenditure from the perspective of both women and the Government. METHODS 2.1 Study design and population This population‐level observational study utilises an existing linked administrative dataset, Maternity1000, which contains information on 255 408 women (328 868 pregnancies) in Queensland, Australia, who gave birth between 1 January 2013 and 30 June 2018 and their infants. The data as of June 2018 were the latest data that could be released by government data custodians to the research team. 
For the study described in this paper, routinely collected information on all pregnant women and their infants born (live births and stillbirths of ≥20 weeks’ gestation or ≥400 g) during this timeframe were identified from the Queensland Perinatal Data Collection (PDC) and records for mothers and children were then linked to Pharmaceutical Benefits Scheme (PBS) claims & costs records between 1 September 2011 and 30 June 2018. Only prescriptions that were dispensed under the funding provisions of the Pharmaceutical Benefits Scheme were included. Data were available for all medicines dispensed during the antenatal period (i.e. including under co‐payment dispensings) for this population. Births in all sectors (i.e. public and private) were included in the analysis, with equal access to PBS‐subsidised prescriptions for public and private patients provided they are entitled to Medicare benefits. Patients are eligible for Medicare if they are an Australian or New Zealand Citizen, an Australian permanent resident, have applied for permanent residency, are a temporary resident covered by a ministerial order, or are visiting from a country with a Reciprocal Health Care Agreement. In Australia, medicines listed on the PBS can be dispensed to patients at a Commonwealth Government subsidised price. There were 906 different medicines listed on the PBS as of 30 June 2021. Patients pay a co‐payment towards the cost of each PBS‐subsidised medicine. In 2022, general patients pay up to $42.50 per item dispensed and concession card holders pay up to $6.80 per item. Births with an unknown birth year ( n = 93) were excluded from our analysis. Prescription drug use in pregnancy was defined for this analysis as the dispensing of any PBS‐listed medication (i.e. a medication approved for public subsidy) to a woman after day 30 of pregnancy up until the date of delivery. A diagram showing the variables we used to define a dispensing that occurred during pregnancy within our dataset is shown in Figure . We excluded the first 30 days of pregnancy to avoid misclassification of medications potentially dispensed in the month prior to pregnancy. Neither patients nor the public were involved in the development of the dataset utilised for this study, or in the design of the analyses (see Table for GRIPP2‐SF checklist). We acknowledge the value that such engagement can divulge, but we were unable to integrate this into our study. In addition, we were unable to incorporate a core outcome set, as we primarily focus on dispensing data and associated cost analyses, rather than health outcomes. 2.2 Statistical analysis Using the information contained within the Maternity1000 dataset, a pharmacoepidemiological analysis was conducted on the use of pharmaceuticals dispensed during pregnancy. The unit of analysis for demographic data was one pregnancy; therefore women may appear in the results more than once if they had multiple pregnancies during the timeframe analysed. We defined the first trimester as covering the period from 31 days up to 13 +6 weeks’ gestation; second trimester as 14 +0 weeks to 27 +6 weeks’ gestation; and third trimester as 28 +0 weeks until the date of delivery. This definition is aligned with that used by The American College of Obstetricians and Gynecologists. Statistical analysis was conducted using SAS Version 9.4. 
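As an illustration of the dispensing-window and trimester definitions above, the sketch below shows how a single dispensing record might be classified. The published analysis was carried out in SAS Version 9.4; this Python fragment is only a hedged re-expression of the stated day boundaries, and the function and field names are hypothetical rather than taken from the Maternity1000 dataset.

```python
from datetime import date
from typing import Optional

# Day boundaries implied by the stated definitions (the exact indexing convention
# would follow the dataset): days 1-30 of pregnancy are excluded; the first
# trimester runs to 13 weeks + 6 days, the second to 27 weeks + 6 days, and the
# third from 28 weeks + 0 days until the date of delivery.
FIRST_TRIMESTER_END_DAY = 13 * 7 + 6    # day 97
SECOND_TRIMESTER_END_DAY = 27 * 7 + 6   # day 195

def classify_dispensing(pregnancy_start: date, delivery: date, dispensed: date) -> Optional[str]:
    """Assign a dispensing to a trimester, or return None if it falls outside
    the analysis window (on or before day 30, or after the date of delivery)."""
    day_of_pregnancy = (dispensed - pregnancy_start).days + 1  # day 1 = first day of pregnancy
    if day_of_pregnancy <= 30 or dispensed > delivery:
        return None
    if day_of_pregnancy <= FIRST_TRIMESTER_END_DAY:
        return "first"
    if day_of_pregnancy <= SECOND_TRIMESTER_END_DAY:
        return "second"
    return "third"

# Hypothetical record: a dispensing roughly nine weeks into the pregnancy.
print(classify_dispensing(date(2016, 1, 1), date(2016, 9, 20), date(2016, 3, 5)))  # -> "first"
```

In the actual dataset the day-30 exclusion and trimester cut-offs would be applied to every PBS claim linked to a pregnancy, but the boundary arithmetic is the same.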
We used descriptive statistics to illustrate: the prevalence of pregnant women dispensed ≥1 PBS-listed pharmaceutical throughout: each trimester of pregnancy, and the entire pregnancy; the most frequently dispensed medications (reported as the number of dispensings for a given medication as a percentage of the total number of dispensings for all medications) in terms of: pharmaceutical agents and the World Health Organization's Anatomical Therapeutic Chemical (ATC) Classification System ; which medications represent the greatest cost burden over the study period for: women (through out-of-pocket (OOP) payments) and the Government (PBS subsidy amount). Total medication costs can be calculated by adding patient costs (OOP payments) to Government costs (PBS subsidy amount), therefore we have not explicitly presented or discussed these in our analyses. Normal distributions were assumed for all cost and count data, with 95% CI presented for all mean values. Standard Wald confidence limits for proportions were calculated for 95% CI reported for the prevalence of women dispensed ≥1 PBS-listed medication during pregnancy. We conducted a sub-group analysis to investigate whether there were any distributional effects according to the mother's charging status (i.e. the type of ward accommodation under which the mother elected to be admitted) – colloquially referred to as a public admission (universal healthcare often fully paid by the government) and private admission (combination of fees including private fees paid for by the patient). Within the dataset, date of delivery is only available as the month and year of birth, with the date always listed as the first of the month in an effort to maintain privacy. Sensitivity analyses were conducted to test the robustness of our results to different assumptions regarding date of delivery. We tested two alternate scenarios: the assumed date of birth being the end of the month and the assumed date of birth being the 15th of the month displayed in the dataset. The outcomes tested in this analysis were the most frequently dispensed medications and the medications that were the greatest cost-burden for the Government. Kendall's coefficient of concordance was calculated to determine whether rankings across the three scenarios differed significantly. Where total annual costs are calculated, data are only presented graphically for women who gave birth from 2013 up until the end of 2017 to ensure a consistent balance in the number of pregnancies reported across the years, as a full calendar year of data was not available for 2018. Therefore, the costs of medications presented for the year 2013 refer to the total cost of any PBS items dispensed during pregnancy to a woman who gave birth in 2013 (i.e. not simply the items dispensed during the 2013 calendar year). All cost data have been adjusted for inflation using the Reserve Bank of Australia's Inflation Calculator and are presented in constant prices; 2020/21 Australian Dollars ($1AUD = $0.67USD/£0.55GBP in December 2022).
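For the prevalence estimate, the standard Wald interval mentioned above is a one-line calculation. The sketch below is again only an illustrative Python re-implementation of what was done in SAS; it uses a hypothetical numerator of the same order as the cohort (the exact count of women with at least one dispensing is not reported in the text) and yields an interval of roughly the width reported in the Results.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate with standard Wald 95% confidence limits for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical counts: about 61% of the 328 868 pregnancies in the cohort.
p, lower, upper = wald_ci(successes=201_000, n=328_868)
print(f"{p:.2%} (95% CI {lower:.2%} to {upper:.2%})")  # ~61.12% (60.95% to 61.29%)
```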
RESULTS 3.1 Demographics Demographic characteristics for women in the dataset are presented in Table . More than 98% of pregnancies were singleton pregnancies, and 31% of all pregnancies were first-time pregnancies. Almost 30% of women had a medical condition diagnosed prior to or during pregnancy, with 20% of women entering pregnancy with obesity. In addition, 70% of women experienced a complication during their pregnancy.
Trends in demographic characteristics over time show the mean age of pregnant women, mean body mass index (BMI) at conception and the percentage of pregnant women with a diagnosed medical condition or experiencing a complication during pregnancy have increased from 2013 to 2018 (see Figure ). A modified version of Table is also reported in Table , which incorporates the numbers and percentage of missing data for each variable. 3.2 Prevalence of at least one medication approved for public subsidy being dispensed during pregnancy Across the analysis period (2013–2018), 61% (95% CI 60.96–61.29) of pregnant women were dispensed ≥1 PBS-listed medication during pregnancy (see Table ). Prevalence increased over the timeframe investigated and was marginally higher during the first trimester than during the second or third trimester. 3.3 Medications dispensed in the greatest volume during pregnancy Table shows that metoclopramide (11%), amoxicillin (10%) and cefalexin (9%) were the three most commonly dispensed medications. Antibacterials for systemic use were the most frequently dispensed therapeutic class, making up 26% of all medications dispensed. Psychoanaleptics were the next most common therapeutic class at just over 11% of all dispensings, closely followed by drugs for functional gastrointestinal disorders (11%), which incorporates metoclopramide dispensings. 3.4 Medications that cost the most during pregnancy For women who gave birth in 2017, the Government spent more than $4.32 million (AUD 2020/21) on PBS-listed medications dispensed during pregnancy (see Table ). Total Government cost increased rapidly over the timeframe analysed. Total out-of-pocket expenses for pregnant women also increased over the period analysed, albeit at a slower rate (see Figure for graphical representation of the results). This indicates a trend towards more expensive drugs (i.e. drugs with a total cost above the patient co-payment) being dispensed over time. In terms of individual medications, Table shows that women spent the greatest amount of money on the antiemetic metoclopramide and the anticoagulant/antithrombotic agent enoxaparin sodium, which was also the medication that the Government spent the greatest amount of money on. Medications representing the highest out-of-pocket costs to women were approximately in line with volume of use, with 80% of the top 20 most frequently dispensed medications also being listed in the 20 highest total cost contributors for women's OOP expenditure. Antibacterials for systemic use and psychoanaleptics contributed both the greatest volume and cost to women in terms of therapeutic class (Tables and ). Injectable agents featured heavily as high total cost items for the Government, with eight of the top ten pharmaceutical agents being injectables (see Table ). In addition, insulin preparations accounted for four of the ten highest cost pharmaceuticals to the Government. Accordingly, antidiabetic therapies are the largest cost-contributor to the Government in terms of therapeutic category (see Table ). More than half of total Government expenditure on PBS-listed pharmaceuticals for pregnant women was attributable to only nine pharmaceutical agents (see Table ).
3.5 Medication expenditure during pregnancy for women electing to be admitted as a public versus private patient Table shows that Government expenditure per prescription and per pregnancy (on medication) is increasing at a more rapid rate for women who elect private obstetric care versus those who elect public obstetric care (refer to Figure for graphical representation of results). This is despite a very modest increase in the mean number of PBS-listed prescriptions dispensed per pregnancy for private versus public patients. The average patient out-of-pocket costs per pregnancy are far greater for women whose antenatal care is funded privately (private = $52.85 versus public = $32.92 in 2018), although this is influenced by a larger proportion of concession card holders being cared for publicly (private: general = 93% versus concession = 7%; public: general = 54% versus concession = 46%) and therefore a lower average patient contribution to the cost per dispensing, as shown in Table . 3.6 Sensitivity analyses When the assumed date of delivery was altered to the 15th of the birth month rather than the 1st of the month supplied in the dataset, 18 of the 20 most frequently dispensed pharmaceutical agents during pregnancy in the primary analysis remained in the top 20. The same was true for the 20 medications that contributed the greatest total cost to Government expenditure, as shown in Tables and . Modifying the assumption to the date of delivery being the end of the birth month showed that 17 of the 20 most frequently dispensed pharmaceutical agents remained in the top 20. In terms of total Government expenditure, 18 of the top 20 pharmaceuticals remained in the top 20. Calculation of Kendall's coefficient of concordance revealed significant agreement between the rankings shown in each scenario (frequency of dispensing: W = 0.90, p < 0.0001; total Government expenditure: W = 0.94, p < 0.0001). The sensitivity analyses therefore show the results are robust to reasonable changes in the assumed date of delivery.
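Kendall's coefficient of concordance for the three delivery-date scenarios can be computed directly from the rank sums. The original computation was part of the SAS analysis; the short Python sketch below, with made-up rankings of 20 medications, is only meant to make the statistic concrete.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's W for an (m scenarios x n items) array of ranks, assuming no ties."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - m * (n + 1) / 2) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of the same 20 medications under the three assumed dates
# of delivery (1st, 15th and last day of the birth month); only a few positions differ.
first_of_month = np.arange(1, 21)
mid_month = first_of_month.copy()
mid_month[[0, 1]] = mid_month[[1, 0]]          # two medications swap places
end_of_month = first_of_month.copy()
end_of_month[[5, 6]] = end_of_month[[6, 5]]
ranks = np.vstack([first_of_month, mid_month, end_of_month])
print(round(kendalls_w(ranks), 3))  # a value near 1 indicates strong agreement
```

With three scenarios and twenty ranked medications, W ranges from 0 (no agreement) to 1 (identical rankings), so the reported values of 0.90 and 0.94 suggest the top-20 lists are largely insensitive to the assumed date of delivery.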
DISCUSSION 4.1 Main findings Overall, six in every ten pregnant women were dispensed at least one prescription medication during pregnancy and prevalence increased over time. Government expenditure on medications for pregnant women is rising at a rapid rate in comparison with patient out-of-pocket expenditure, with almost one-third of total Government costs being attributed to only three patented, injectable pharmaceuticals – enoxaparin sodium, ferric carboxymaltose and insulin aspart. The average number of PBS-listed medications dispensed per pregnancy increased from 2.1 to 2.5 between 2013 and 2018, with an incongruent rise observed for Government expenditure on medication per pregnancy, rising from $46 to $84 over the same time. That is, the rate of increase in Government costs was five times the rate of increase in quantity of items dispensed (per woman). 4.2 Interpretation Other studies that have examined prescription medication use during pregnancy have shown similar prevalence rates, with a Norwegian study reporting a 60% prevalence and a Danish study reporting 66%. In terms of therapeutic classes, a 2018 study by Haas et al. reported findings similar to ours, with gastrointestinal or antiemetic agents, antibiotics and analgesics being the most frequently prescribed therapeutic classes. An Australian pharmacovigilance study based on above co-payment dispensing data also reported similar dispensing patterns, with a slightly higher dispensing rate shown for psychoanaleptics. Our study is the first to report on the cost burden of pharmaceuticals and therapeutic classes dispensed during pregnancy. We are only aware of one small study in Brazil that has analysed the costs associated with medication use in 47 pregnant women, reporting only the mean cost of medication per pregnancy (equivalent to $61.83 AUD 2020/21). We hypothesise that the disproportionate rise in Government expenditure compared with women's out-of-pocket expenditure seen in our analysis is influenced by newer, more expensive drugs being prescribed more frequently. Ferric carboxymaltose is an example of this phenomenon, with prescription of this drug increasing approximately five-fold from 2013 to 2017, and total annual expenditure on intravenous iron therapies for women of reproductive age increasing 35-fold over the same timeframe. As the policy landscape changes to accommodate the testing of new and existing medications in pregnant populations, we expect to see newer medications prescribed more frequently, inevitably leading to rises in the mean cost per dispensing in this population. Nonetheless, significant improvements in health outcomes for women and their children are also expected, leading to efficiency gains in the delivery of health care.
Total annual Government expenditure on PBS‐listed medications dispensed during pregnancy increased by more than 50% in real terms from 2013 to 2017. This is twice the increase observed across the entire PBS from 2012/2013 to 2016/2017 (25%). , The rate of increase in Government expenditure was also shown to be higher for women who elect private obstetric care as opposed to publicly funded obstetric care. The results also indicate that women who elect privately funded care (versus public) may be prescribed newer, more expensive drugs at a higher rate, where a larger proportion of the cost of medications are borne by the Government. This could indicate equity issues surrounding access to newer medications during pregnancy for women of lower socio‐economic status and warrants further research. Further studies are also required to determine whether the observed increase in expenditure corresponded to improved health outcomes. Such analyses were outside the scope of this paper. Patient co‐payments for PBS‐listed medications mean that high‐volume medications aren’t always a high cost‐burden to the Government (e.g. metoclopramide). Volume of use, price per dispensing, mode of administration and the availability of generic alternatives all influence the likelihood of a medication being a cost burden to the Government. In addition, multiple other factors influence changes in dispensing patterns and costs over time, including changes in therapeutic guidelines (in particular for diabetes ), alterations to pregnancy safety classifications, the inclusion of new medications/new listings on the PBS and the proportion of concessional patients in the community. Investigation of the influence of each of these factors on the results reported were outside the scope of this paper. Interestingly, ondansetron was shown to be the tenth most frequently dispensed medication despite not being formally approved for use outside of cancer chemotherapy under the funding provisions of the PBS. This may indicate that PBS ‘leakage’ or upward prescribing is a common phenomenon within the antiemetic and antinauseant therapeutic category, whereby medical practitioners prioritise the clinical need of the patient and protection of the doctor–patient relationship over subsidy restrictions. This observation has been previously reported by Colvin et al. who describe the circumstances surrounding these PBS‐funded dispensings in greater detail. 4.3 Limitations The limitations associated with analyses that utilise healthcare databases are well‐known , and are applicable to this study. First, there are difficulties in accurately defining medications that were consumed during pregnancy, as we did not have the exact date of delivery (a privacy protection mechanism). Sensitivity analyses showed our primary results were robust to reasonable variations surrounding the assumed date of delivery. Secondly, we acknowledge that the date of dispensing and the date women take medications are not necessarily the same – particularly for drugs used as episodic treatment. For our analyses, we necessarily assumed that when a medication was dispensed it was also taken on the same date, as there was no way of confirming when (or whether) a medication was actually taken. It is also possible that medications may have been dispensed prior to conception, yet taken during pregnancy. Thirdly, drugs may have been prescribed for indications outside of their ATC‐assigned classification (e.g. 
valproate prescribed for bipolar disorder rather than epilepsy), therefore there may be inaccuracies in the results reported. Fourthly, this study does not report on all medications supplied to women during pregnancy. We did not analyse data relating to: non-PBS funded prescription medications (including medicines dispensed in a hospital inpatient setting, private prescriptions, over-the-counter medications, vitamins or herbal supplements); PBS-listed items dispensed to pregnant women who experienced a miscarriage or termination of pregnancy prior to 20 weeks of gestation; dispensings relating to pregnancies where the PDC-PBS link was not successful ( n = 187); dispensings associated with deliveries where the birth year was unknown ( n = 93); items dispensed during the first 30 days of pregnancy. Consequently, it is likely that our analysis underestimates the true prevalence of medication use during pregnancy, volume of dispensings, and total costs associated with PBS-listed medications. Finally, our analysis has not incorporated an assessment of the incidence of any positive or negative health outcomes linked to consumption of medications; that is, there has been no assessment of the value arising from the medications dispensed. Rather, our analysis serves as a precursor to these types of full economic evaluations, highlighting the therapeutic areas and medications that may require more thorough assessment regarding economic efficiency and cost containment.
CONCLUSION Medication use during pregnancy is common and has rising cost implications for women and rapidly escalating cost implications for the Government. Increases in out-of-pocket expenses and apparent disparities in access to newer medications between public and private patients reveal issues surrounding equity of access to medications within this population, and warrant further research. All authors contributed to conceptualisation of the study. HJ carried out the data analysis for the study under the guidance of EC. HJ drafted the paper, which was edited according to the valuable contributions that all authors made with respect to recommendations for further analysis and interpretation of the data. All authors approved the final version of the article and accept accountability for the integrity of the research. HJ was supported by a Monash Equity Scholarship. EC was supported by a National Health and Medical Research Council (NHMRC) Fellowship. LEG was supported by a Channel 7 Children's Research Foundation Fellowship. EC has received grant funding from Ferring Pharmaceuticals to identify costs associated with adverse birth outcomes of culturally and linguistically diverse women. This funding was not utilised as a part of this study, nor did Ferring Pharmaceuticals play any role in this study.
Completed disclosure of interest forms are available to view online as supporting information. Ethics approval was obtained from the Townsville Hospital and Health Service Human Research Ethics Committee (HREC; HREC/16/QTHS/223) and the Australian Institute of Health and Welfare HREC (EO2017-1-338). In addition, we obtained Public Health Act approval (RD007377) for the study. Supporting information: Appendix S1; Data S1.
Attitudes, experiences, and preferences of ophthalmic professionals regarding routine use of patient-reported outcome measures in clinical practice | a6c2e4fb-f8d7-4047-babb-225db78c88e5 | 7717508 | Ophthalmology[mh] | High quality healthcare has three domains: safety, effectiveness and positive patient experience . Patient-reported outcome measures (PROMs) are now established as key tools for measuring effectiveness. Routine use of PROMs is widely advocated and has been used to assess and improve the quality of healthcare in many countries . In the UK National Health Service (NHS), routine PROM use was initially mandated for four high volume ‘beacon’ surgical procedures in adults with the expectation that improved subjective well-being would reflect high quality care . As yet, routine use of PROMs to measure the quality of ophthalmic health services is not mandated. PROMs are recognised as particularly valuable adjuncts to clinical assessment in chronic conditions, where clinical parameters may change little but the impact on the lives of affected individuals may vary significantly. Thus in the context of chronic conditions, promoting health-related quality of life becomes an important focus of healthcare . Childhood visual impairment is a prime exemplar of a chronic health state, with a profound and dynamic impact on subjective well-being and activities of daily living. The impact on development and participation during childhood are well described, including the risk of delayed social, cognitive and emotional milestones , and limitations to age-appropriate activities . As children grow up, the impact of visual impairment may change, due to a combination of disease progression, change in clinical treatment/intervention and/or the child’s adaptation to the functional limitations imposed by visual impairment. Thus PROMs have significant potential value in clinical practice, affording rich insight into the broader impact of visual impairment upon aspects of daily life that are not captured by clinical assessment. Until recently one key barrier to routine PROMs use in ophthalmology has been a dearth of robust vision PROMs . With a burgeoning vision PROMs industry, it is time to address another gap in the evidence base, the lack of understanding of the barriers and enablers to routine use of vision PROMs from the perspectives of ophthalmic clinicians. We report a novel investigation, using our two child/young person vision PROMs as ‘model’ instruments, of ophthalmic clinicians’ prior experience of, and future training needs for, using PROMs and their views about the barriers and enablers to future implementation in paediatric ophthalmology practice.
This pilot service development and quality improvement study was approved by the National Health Service Research Ethics Committee for University College London Great Ormond Street Institute of Child Health and Great Ormond Street Hospital, London, UK (REC reference: 17/LO/1484). The study followed the tenets of the Declaration of Helsinki. Sample A voluntary sample of clinicians based in the Department of Ophthalmology at Great Ormond Street Hospital, London UK, an internationally leading children’s hospital. All members of the patient-facing multi-professional clinical team, comprising ophthalmologists, orthoptists, optometrists, clinical vision scientists, nurses, and an eye clinic liaison officer, were invited by the leading researcher (AR) to participate in this study. Due to the nature of the study design (i.e. a study of all ‘patient-facing’ staff in a single department), no exclusion criteria were used. Procedure Participants were recruited through verbal invitation during a clinical teaching session at the hospital, which took place in July 2018, attended by 31 clinicians in the Department. The aims of the project were presented alongside the NHS policy framework and context for routine use of PROMs. In order to understand experience and perspectives relevant to paediatric ophthalmology specifically, we used our previously developed instruments capturing the distinct but complementary outcomes of vision-related quality of life (the VQoL_CYP) and functional vision (the FVQ_CYP) , as two exemplar child-vision PROMs (available for download at https://xip.uclb.com/ct/healthcare_tools/ ). An overview of the VQoL_CYP and FVQ_CYP was presented by way of an update, as they were already familiar to most participants since the research programme that developed these PROMs was based at the hospital’s partner institution, the UCL Great Ormond Street Institute of Child Health. Several members of the broader research team in which the VQoL_CYP and FVQ_CYP were developed were also present at the clinical teaching session. Following this briefing session, an online survey was distributed by email to the whole clinical team. The survey was constructed using RedCap software . A combination of closed- and open-ended questions were used that took account of the existing literature outside ophthalmology relating to routine PROM use. The survey elicited prior experience of using PROMs, self-assessed further training/information needs, level of confidence in discussing PROMs with patients, and agreement with well-known benefits and barriers of using PROMs routinely in clinical practice (closed-ended questions are presented in Tables and , see for the full survey). A brief description of the VQoL_CYP and FVQ_CYP instruments preceded the survey. The survey was piloted with a clinical member of the research team who was not part of the clinical department, with an aim to identify any improvements to the wording or presentation of individual questions. Participants submitted their responses anonymously, to encourage candid responses. Given the relatively small size of the clinical department (which is very well known in the UK) and to avoid any risk of disclosure, we deliberately did not collect potentially identifiable information such as participants’ gender, age, experience in clinical practice or their specific role in the department. Three reminders to participate were sent to the whole department over the course of 3 months. 
In a follow-up/feedback session, the results of the survey were presented to the clinical team. A one hour long focus group discussion was led by researchers (AR and JR), who are experienced in collecting qualitative data, to enable a ‘deep dive’ into the findings including development of a consensus on the optimal approach to presenting analysed PROMs data to clinicians. Qualitative data were audio recorded. Analysis Quantitative analysis comprised descriptive statistics using SPSS . Qualitative data from open-ended survey items and the focus group discussion were transcribed and entered into NVivo . Data were analysed by two researchers (AR and JR) using qualitative thematic analysis, including open and axial coding techniques , to identify key themes, derived from the data, which were then cross-referenced with quantitative findings.
Eighteen clinicians (47% of the clinical department) completed the survey. Twenty-seven took part in the focus group discussion, representing every 'patient-facing' professional group within the department. Three themes were derived inductively from the qualitative analysis: interpretation of PROM data, responsibilities for action, and optimal PROM presentation. These were cross-referenced with the quantitative findings and are presented as complementary data. As shown in , only a minority (22.2%) of participants had any experience of using PROMs. Various training and information needs were identified, the most common (>80% of respondents) being training in how to choose the best PROM and how to interpret PROM scores. Half or more also identified a need for better understanding of both the benefits and challenges of using PROMs. Most participants preferred to view purely visual representations of PROM data (versus numeric scores), with some pre-coding: "…a traffic light system with red getting worse." Qualitative data analysis revealed a clear preference for simple presentation formats alongside objective assessments of visual function to support discussions with patients and their families. Flexibility in presentation of PROM data, enabling both overall scores and, where sought, individual item scores to be viewed over time (e.g. a scatterplot) with the option to "dig deeper" into individual item scores, was deemed optimal for facilitating interpretation. As shown in , clinicians felt most confident about explaining to their patients what PROMs are and why their patients should complete them, but less confident about explaining what scores meant and how they would be used. Complementary qualitative data revealed concerns about interpreting findings in the context of parental influence or missing data. Incorporating PROM items into electronic patient records, with supporting manuals and embedded algorithms to allow immediate analysis, was viewed as facilitating accurate interpretation: "Tablet and web or cloud based collection systems with immediate analytics, such as scoring would be ideal. It would be great if these could be incorporated into electronic patient records." As shown in , the majority of participants agreed with the well-established potential benefits of using PROMs, notably that PROMs would be useful when i) making clinical decisions, ii) detecting problems and concerns that clinical assessments would not identify, iii) monitoring a patient's condition and response to treatment, and iv) improving communication and joint decision-making with patients and their families. Equally, participants endorsed the barriers to routine use commonly reported by clinicians in other specialties, in particular the risk that using PROMs would encourage patients to discuss aspects of health beyond the control of the clinicians. Some clinicians emphasised this perspective within the focus group, discussing in more detail the possible disclosure by patients of psychosocial and emotional issues. Clinicians raised some concerns about their responsibility for action: "Are we responsible to act on these issues? When does it become a child protection issue?"
There was consensus that "pre-screening" of PROMs data before a clinical review, by one member of the clinical team with expertise in the psychosocial and emotional issues associated with visual impairment, such as the Eye Clinic Liaison Officer, was the optimal approach: "These are valuable tools that the Eye Clinic Liaison Officer would benefit from having access to and advising me of the results from, especially if they highlight areas of concern."
From this pilot study of the views and experiences of paediatric ophthalmic professionals regarding routine use of PROMs, it is clear that clinicians value the benefits of embedding routine use of vision PROMs in ophthalmic practice to improve their understanding and ability to monitor the impact of eye disease and its treatment on their patients and to enhance communications and joint decision-making with their patients and their families. The need for further training and information before implementation were clearly articulated alongside the need to find ways of allowing PROM data to be efficiently collected, analysed and reviewed before being presented in an appropriate format alongside clinical data to ensure meaningful use. We used a mixed methods approach to understand the perspectives of clinicians working in our tertiary paediatric ophthalmology service i.e. serving the UK population of children and young people with visual impairment or blindness who are the intended users of the VQoL_CYP and FVQ_CYP . This pilot study was designed to ascertain preliminary findings which will be useful to taking the next steps in routine implementation of PROMs in paediatric ophthalmology. Thus, whilst the sample size was adequate for this primary purpose of the study, it precluded formal statistical analyses, for example the relationship between clinicians’ experience and attitudes. Equally, the available resources including clinicians’ time, precluded in-depth individual interviews which could have allowed finer granularity of qualitative data. Participants in this study were clinicians within a hospital that is the partner to the research institution of the study team, and therefore potentially in a particularly good position to reflect, subjectively and accurately, on their experience of PROMs, and future routine use in an environment which they are extremely familiar with. However, only a minority had prior experience of using any vision PROMs. Whilst it is possible that participants’ prior familiarity with the overarching PROMs research programme might have influenced their participation in the study, as well as their responses, for example their positive attitudes towards using PROMs, it is notable that concerns were also identified. These warrant careful consideration, particularly issues surrounding the interpretation of PROM data which, if done incorrectly, could have serious clinical consequences. Throughout the research processes we tried to minimise any possible bias to ensure the study findings were reliable, encouraging participants to be as open as possible, and ensuring anonymity of data (collected in the survey). We also acknowledge that the single site, specialist setting in which this study took place and use of child-appropriate vision PROMs as exemplars, may preclude direct generalisations of the study findings to other ophthalmic clinical settings. Nevertheless, this study provides some important, preliminary information of generic value in ophthalmology, and notably, is anchored by the existing literature outside ophthalmology. Thus we believe the novel findings of this study to be valid and useful in planning future routine PROMs use in ophthalmology in other settings. 
Despite limited personal experience of using PROMs, most clinicians participating in this study recognised powerful benefits that are already evidenced in the literature outside ophthalmology, in particular improvements in patient-doctor communication and empowering patients to make decisions about their health and treatment. The growing literature also points to other benefits, including improved characterisation of diagnoses, the ability to capture concerns beyond the scope of functional clinical assessments, and usefulness in monitoring long-term conditions. Our findings suggest that implementation of routine use of PROMs in ophthalmology does not require further research to identify 'ophthalmology-specific' benefits, but rather that existing experience and literature on potential benefits could be utilised to provide information and plan training for ophthalmic clinicians. For example, a reported intervention model in paediatrics (incorporating educational, epidemiological, behavioural, organisational, and social interaction approaches), utilising online PROM administration accompanied by generic training about PROMs for clinicians before observing (via DVD) others using PROMs, could be translated into paediatric ophthalmology after due consideration of the population of patients being served and training in how to choose the most appropriate PROM. Whilst the generic literature points to some concerns amongst clinicians about whether routine use of PROMs truly makes a difference to clinical outcomes, the finding that the majority of clinicians in our study assigned particular value to PROMs in clinical decision-making is in keeping with other health professionals. This most likely reflects understanding that the way in which routine PROM use is undertaken is critical to realising its full benefits. Our participants' views regarding barriers and enablers aligned with two key themes in the extant literature: operationalisation (how best to collect and incorporate data) and impact (how best to interpret and act on the data to change patient care). The empirical literature shows that use of digital technology and electronic systems to administer PROMs and manage the data is efficient and effective. Moreover, if patients are able to complete PROMs ahead of their consultation and can also view their own longitudinal PROM data, this may allow priorities for discussion at the upcoming consultation to be identified and allow better pre-planning of key decision-making outpatient appointments. This approach could work well in NHS ophthalmology services, where there is generally already a member of the team (e.g. the Eye Clinic Liaison Officer (ECLO) or equivalent) who is ideally placed to review and discuss the PROM data with patients before briefing, as required, the managing clinician. With regard to presenting PROM data, this study clearly identifies that a visual format embedded within an electronic patient record system would be optimal, facilitating clinicians' interpretation and minimising time spent viewing data. Such a system needs to be flexible, offering users the ability to switch between graphical summaries and a deep dive into the raw data. The global drive towards electronic patient record systems provides the ideal vehicle for integration of PROMs into routine ophthalmic care, providing an integrated and flexible platform for PROMs collection, as has already been achieved in other paediatric areas.
Whilst ophthalmology is not yet on a par with other clinical specialities in terms of either availability of robust PROMs or routine implementation , we suggest that there is cause for optimism based on our study findings. The findings of this pilot study show that the potential benefits of routine PROM use are recognised by ophthalmic clinicians and that they have an appetite to learn about how to choose and use the most appropriate PROMs. They also suggest that the existing literature outside ophthalmology relating to overcoming barriers and exploiting enablers to routine implementation may be applicable to planning implementation in ophthalmology.
S1 Appendix (DOCX).
|
Adrenal hypoplasia congenita in identical twins | ac36b115-6d01-4236-8846-04c1628852a1 | 6452601 | Pathology[mh] | Patient information twin-A He was born on 13th February 2006, at 37 weeks of gestation to a 39-year-old gravida 6 para 6 mother after uncomplicated twin pregnancy. The parents were second degree cousins of Saudi origins. He has 2 elder brothers and 3 sisters, all are alive and healthy. His birth weight was 2.0 kg, body length was 46 cm, and head circumference was 32 cm. He was discharged with his mother in good condition after 3 days of admission to the nursery for observation. Clinical findings At the age of 18 days, he was presented to the emergency department with history of vomiting, poor feeding, and decreased activity with failure to thrive. On physical examination, his weight was 1.66 kg, blood pressure (BP) was 67/51 mmHg, heart rate was 160/min, temperature was 36.2°C, and respiratory rate (RR) was 58/min. He was dehydrated, not dysmorphic with no evidence of hyperpigmentation. Normal systemic examination, with normal male genitalia . Diagnostic assessment He was admitted to neonatal intensive care unit with impression of sepsis. Laboratory tests showed hyponatremia 128 mmol/L with hyperkalemia 6.2 mmol/L. Septic workup was carried out and were awaited. He was managed with intravenous fluids and antibiotics. He was noticed to have persisted hyponatremia and hyperkalemia, therefore endocrine consultation was requested. A provisional diagnosis of congenital adrenal hyperplasia was made as it is the most common cause of salt wasting at this age group. His endocrinological data revealed unelevated adrenocorticotropic hormone (ACTH), serum aldosterone low-normal while plasma renin was very high. Adrenocorticotropic hormone stimulation test was carried out with 0.25 mg synthectin which showed normal cortisol response at 60 minutes. Seventeen hydroxyprogesterone, dehydroepiandrosterone sulfate (DHEAS), and testosterone were normal. Chromosomal study showed 46 XY . Therapeutic intervention These results ruled out CAH. He was diagnosed to have isolated aldosterone deficiency, and managed with fludrocortisone and sodium chloride orally with excellent response. He was followed up in outpatient clinic regularly and was maintaining normal serum sodium and potassium. He was thriving well . At 18 months of age, he was noticed to have increased pigmentation specially the lips and gum, but was thriving well. Glucocorticoid deficiency was suspected. Urgent ACTH stimulation test was carried out with 0.25 mg synthectin. Basal ACTH >2000 pg/ml. Serum cortisol failed to rise in response to ACTH at 60 min. Seventeen hydroxyprogesterone was normal. Testosterone and DHEAS were normal . He was diagnosed to have primary adrenal insufficiency managed with hydrocortisone and fludrocortisone orally. At 18 months of age ultrasound done, showed right testis at the right inguinal area. He underwent orchidopexy for right undescended testis . Follow-up and outcomes He was followed up regularly in outpatient department showing normal ACTH and serum electrolytes, and was thriving well. His last visit was at the age of 12 years and 3 months . His height was 134 cm (just below 3 rd centile), his weight was 29 kilograms (below 10 th centile). Patient information twin-B He is now 12 years and 3 months old boy, his birth weight was 1.8 kg. He was discharged with his mother in good condition. He was growing normally and was not having any significant illness. 
Clinical findings At the age of 9 years and 6 months, his mother brought him to endocrine clinic accompanying his twin-A brother. The mother complained that she noticed him to have progressive weight loss, fatigue, decreased activity, and progressively increasing generalized body pigmentation, which was noticed for 3 months. There was no history of vomiting, abdominal pain, or change in bowel habit. There was no history of preceding infection. On examination, he was alert and conscious. The Glasgow Coma Scale is 15/15, lethargic, dehydrated with generalized marked hyperpigmentation. The body weight was 18 kg (below 3rd centile), the height was 122 cm (below 5th centile). His temperature was 36.5°C, heart rate was 109/min, BP was 104/59 mmHg, RR was 36/min, and oxygen saturation was 100%. Systemic examination was normal except for right undescended testis . He was admitted to Pediatric Intensive Care Unit with the impression of adrenal crisis due to adrenal insufficiency. He was managed with intravenous hydrocortisone, intravenous normal saline, kayexalate, and fludrocortisone. Orchidopexy was carried out later . Diagnostic assessment As his twin-A brother was diagnosed to have primary adrenal insufficiency, AHC was suspected and blood samples for gene study were sent for both of them which proved DAX-1 mutation. After 2 months of hydrocortisone replacement the ACTH was 45.37 pg/mL. At 12 years and 3 months of age his wight was 32 kg (above 10th centile), and his height was 135 cm (3rd centile) . Therapeutic intervention Both brothers are now on oral hydrocortisone and fludrocortisone replacement therapy. Follow-up and outcomes They are thriving well, repeated hormonal evaluation, particularly serum ACTH was performed and is maintained within the normal reference range. Both of them have normal penile length, and Tanner stage 1 for testis and pubic hair.
Primary adrenal insufficiency is a potentially life-threatening disorder that can present with salt-losing crisis or profound hypoglycemia and requires urgent resuscitation and appropriate steroid replacement. Primary adrenal insufficiency can occur at any age: in the neonatal period, in infancy, or in childhood. It is difficult to diagnose AHC in a neonate because it is often misdiagnosed as the salt-wasting form of congenital adrenal hyperplasia, which is the most common etiology of adrenal insufficiency in this age group. In fact, these 2 diseases have different steroid metabolism and can be distinguished from each other by clinical manifestations and genetic features. Adrenal hypoplasia congenita is a rare disorder that can be inherited in an X-linked or autosomal recessive pattern. The exact incidence of AHC is not known; however, for the X-linked form, the incidence is estimated between 1:140,000 and 1:1,200,000 children. More than one hundred patients with DAX-1 mutations have been described. DAX-1 mutations are more likely in patients with a positive family history of an affected male. X-linked AHC is caused by deletions or mutations in the DAX-1 gene (AHC; MIM: 300200); the majority of these mutations are frameshift or nonsense mutations leading to a truncated DAX-1 protein. There is no clear evidence for a genotype-phenotype correlation between a mutation in DAX-1 (NR0B1), its structural consequence, and the clinical phenotype. The age of onset of adrenal failure can vary within the same family, suggesting that other epigenetic factors influence the clinical course of AHC, and this is what we have described in our identical twins. Twin-A presented with salt-wasting crisis in the early neonatal period, which was followed by glucocorticoid deficiency after 18 months, while twin-B presented with adrenal crisis only years later (at the age of 9 years and 6 months), although both carried the same mutation in DAX-1. It has also been observed that in some patients with AHC the apparent mineralocorticoid deficiency can precede glucocorticoid deficiency, which explains why twin-A was initially diagnosed as having isolated aldosterone deficiency. The key treatments during an acute adrenal crisis in these patients are intravenous hydrocortisone and normal saline with glucose solutions. These patients require lifelong replacement of glucocorticoids (at physiological doses) as well as mineralocorticoids. Adrenal hypoplasia congenita is frequently associated with hypogonadotropic hypogonadism (HHG, MIM 1416110); the spectrum of presentation of HHG varies widely, from pubertal failure to infertility. Mutational (genetic) analysis of DAX-1 is important in any male infant presenting with salt-losing adrenal failure once a steroidogenic disorder (congenital adrenal hyperplasia) and adrenal hemorrhage have been excluded. Genetic analysis of the DAX-1 gene is very useful for the definitive diagnosis of X-linked AHC as well as for genetic counseling in families carrying a DAX-1 mutation with a history of unexplained death of maternal male relatives, highlighting an X-linked pattern of transmission. We have described identical twins with a DAX-1 mutation who presented at different ages and with different presentations. Twin-A presented with isolated mineralocorticoid deficiency during the neonatal period, which was followed after 18 months by glucocorticoid deficiency, whereas twin-B was entirely well until the age of 9 years and 6 months, when he presented with adrenal crisis.
Both twins had an undescended testis requiring orchidopexy, and both still have small testicular volumes at 12 years of age. They will be followed up closely for their pubertal development, as they are likely to have HHG. In conclusion, genetic analysis is important for the diagnosis of AHC and for genetic counseling.
|
The interplay between toothbrush stiffness and charcoal-containing dentifrice on the development of enamel topography changes | de724295-d528-447c-9043-df2dc13dbb91 | 11569616 | Dentistry[mh] | Changes in enamel surface roughness can promote plaque formation and bacterial proliferation. A variety of factors contribute to changes in enamel surface roughness, including acid exposure, which dissolves the hydroxyapatite of the enamel surface, and exposure to the abrasive agents contained in different dentifrices. The tooth can be protected from these challenges by the formation of the salivary pellicle and bacterial biofilms. The salivary pellicle formed over the tooth surface is composed of salivary proteins and glycoproteins that adhere to the enamel, providing a barrier against mechanical abrasion and chemical attack. In addition, bacterial biofilms act as a protective barrier, preventing direct contact between the tooth surface and abrasive substances. Despite these protective factors, the tooth structure is not completely immune to chemical and abrasive attack. A wide range of toothpastes is available on the market, with different abrasive particles, detergents, and therapeutic agents, and some of these products may harm dental tissues. Knowing the content and function of each toothpaste therefore allows the proper choice for the desired purpose; an ideal toothpaste provides maximum cleaning and protection of the teeth with minimum abrasion. Charcoal-containing toothpaste is currently one of the trendiest products for whitening and cleaning teeth. The characteristics of charcoal particles help remove extrinsic staining, biofilm, and food debris. However, concern has been raised about the use of charcoal particles in toothpaste, as their star-shaped, fractal form may increase tooth surface roughness. This is critical because increased enamel roughness may allow the accumulation of plaque and stains, leading to discoloration and secondary caries. Additionally, enamel wear can lead to tooth sensitivity, which negatively impacts patients' quality of life. The literature on charcoal-containing toothpastes is controversial. In one study, a significant increase in enamel surface roughness was seen after frequent use of charcoal-containing toothpaste compared with the control. In another investigation, three charcoal-based toothpastes were found to increase the surface roughness of the teeth following 2,000 cycles of brushing simulation. Conversely, another investigation found that charcoal-containing toothpaste did not affect the surface topography of tooth enamel, and similar findings were observed in a study in which different charcoal-based dentifrices induced fewer enamel changes than the control following erosion-toothbrushing abrasion cycling. The conflicting results in the existing literature may stem from variations in brushing force, the number of cycles, and the types of dentifrices used. It is also worth noting that previous investigations did not take into consideration the abrasivity of the toothbrush itself, even though toothbrush stiffness has been shown to modulate the abrasivity of such toothpastes. As a result, investigating the interaction between the use of charcoal-based toothpaste and different bristle stiffnesses may provide more insight into the factors leading to enamel abrasion.
Therefore, our study aimed to determine the abrasiveness of charcoal toothpaste on enamel when used with toothbrushes of different bristle stiffness (soft, medium, and hard). Our results may help resolve the conflicting findings in the literature, which often do not specify the type of toothbrush bristles used with activated charcoal. We hypothesized that charcoal-containing toothpaste, when used with hard bristles, would produce greater enamel topography changes than conventional toothpaste or soft bristles.
The use of extracted teeth and the design of this study were approved by the Institutional Review Board at Imam Abdulrahman bin Faisal University (IRB-2023-02-414). In this study, four main groups, each with three subgroups, were investigated (Fig. ). The first independent variable was toothpaste at four levels: (i) conventional fluoridated toothpaste without charcoal particles (Conventional TP), (ii) fluoridated toothpaste with charcoal particles (Charcoal TP), (iii) whitening toothpaste without charcoal particles (Whitening TP), and (iv) distilled water as a control. Details of the toothpastes and their composition are described in Table . The second independent variable was toothbrush bristle stiffness at three levels: soft, medium, and hard (Table ). The diameter and length of the bristles were determined using a 4.5× magnification device (LUXO, Elmsford, New York, USA) and ImageJ software (The National Institutes of Health, Bethesda, USA). The toothbrushes (Tara Toothbrush Company LLC, Dammam, Saudi Arabia) were obtained from a local pharmacy. The dependent variable was enamel surface roughness, measured at two time points.
An a priori sample size calculation was carried out using G*Power 3, a statistical tool for computing power analyses for a variety of statistical tests. This calculation established the sample size necessary for a two-way ANOVA comparing the mean scores of the twelve groups at a significance level of p < 0.05. The results indicated a minimum of 19 samples in each group to detect a medium effect size (f = 0.25). The sample size was increased to 22 samples per group to allow for possible errors or sample loss. Power was set at 80%.
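For orientation, the reported figure can be approximated outside G*Power with a short script. The sketch below is illustrative only: the paper does not state which effect the G*Power run targeted, so it assumes the toothpaste × stiffness interaction (numerator df = 6) across the 12 cells, and the helper function name is ours.

```python
# Illustrative power calculation (not the authors' G*Power session).
# Assumption: the target effect is the 4 x 3 interaction (numerator df = 6)
# across the 12 cells, with Cohen's f = 0.25, alpha = 0.05, power = 0.80.
from scipy.stats import f as f_dist, ncf

def fixed_effects_anova_power(f_effect, df_num, n_per_group, n_groups, alpha=0.05):
    """Power of a fixed-effects ANOVA F test via the noncentral F distribution."""
    n_total = n_per_group * n_groups
    df_den = n_total - n_groups              # error degrees of freedom
    lam = f_effect ** 2 * n_total            # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1.0 - ncf.cdf(f_crit, df_num, df_den, lam)

n = 2
while fixed_effects_anova_power(0.25, df_num=6, n_per_group=n, n_groups=12) < 0.80:
    n += 1
print(n)  # lands close to the 19 specimens per group reported above
```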
After obtaining ethical approval, extracted human premolars were collected and stored in 0.1 wt% thymol solution at 4 °C until use. Teeth with carious lesions or cracks were excluded. A total of 132 enamel specimens were prepared from the extracted premolars using an IsoMet 4000 water-cooled precision saw (Buehler, Lake Bluff, IL, USA). Each specimen was then split into two, resulting in a total of 264 samples. An acrylic resin block was used to hold each specimen, and the outer enamel surface was flattened with #600-, #1200-, and #2000-grit silicon carbide paper (Wirtz-Buehler, Düsseldorf, Germany) and diamond pastes to produce a flattened enamel window of 3 × 3 mm. Based on a list generated by RANDOM.ORG, all specimens were assigned unique numbers and distributed randomly among the twelve groups, such that both the type of toothpaste and the bristle stiffness were randomized.
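As an illustration of this allocation step, the 264 numbered specimens can be distributed evenly over the 12 cells; the shuffle below merely stands in for the external RANDOM.ORG list used in the study, and the group labels are shorthand.

```python
# Illustrative allocation of the 264 numbered specimens to the 12 cells (22 per cell).
# random.shuffle stands in for the RANDOM.ORG list used in the study;
# group labels are shorthand for the toothpaste and bristle-stiffness levels.
import random

specimens = list(range(1, 265))
groups = [(paste, bristle)
          for paste in ("conventional", "charcoal", "whitening", "water")
          for bristle in ("soft", "medium", "hard")]

random.shuffle(specimens)
allocation = {group: specimens[i * 22:(i + 1) * 22] for i, group in enumerate(groups)}
```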
A custom-made V-8 toothbrushing machine (model ZM-3.8, SD Mechatronik, Feldkirchen-Westerham, Germany) was used to position the samples with their long axis perpendicular to the brushes’ long axis. Each toothpaste group was prepared by diluting 60 mL of dentifrice with distilled water at a 1:3 ratio, ensuring consistent application across samples. Custom-designed plastic trays were utilized to protect reference areas on the teeth, preventing unintended abrasion . Then, teeth were brushed for 1,250 and then for another 1,250 (total 2,500) double strokes, using the same toothbrush, at brushing load of 200 g to simulate typical brushing pressure. Following each specimen, the toothbrush was replaced by another one to exclude any impact related to bristles deformation. After finishing the brushing strokes, the specimens were thoroughly rinsed with distilled water to remove any residual toothpaste and debris .
The average surface roughness (Ra) value for each group before and after the brushing simulation was determined with a non-contact optical profilometer (Contour GT-K1 optical profiler; Bruker Nano, Tucson, AZ, USA). With the aid of a regular camera, an area of approximately 0.43 × 0.58 mm² at three locations (center, right side, and left side) of the same specimen was scanned, and the average Ra was documented. The center of the sample was initially identified by a mark made in the acrylic material. The change in Ra produced by the brushing simulation challenge was determined as the difference between the post-brushing value and the pre-brushing (baseline) value.
Data were recorded and analyzed in SigmaPlot. Descriptive statistics (mean, standard deviation, frequency, and percentage) were used to summarize the data. In addition, two-way ANOVA followed by Tukey multiple comparisons was used to compare the outcomes. A P-value < 0.05 was considered statistically significant.
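The same model can be expressed in open-source tooling. The authors used SigmaPlot, so the following is only an equivalent sketch; the file name and column names (toothpaste, stiffness, delta_ra) are hypothetical.

```python
# Equivalent sketch of the reported analysis (the study itself used SigmaPlot).
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("delta_ra.csv")  # one row per specimen: toothpaste, stiffness, delta_ra

# Two-way ANOVA: change in Ra by toothpaste, bristle stiffness, and their interaction
model = smf.ols("delta_ra ~ C(toothpaste) * C(stiffness)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey multiple comparisons across the 12 toothpaste x stiffness cells
cell = df["toothpaste"].astype(str) + " / " + df["stiffness"].astype(str)
print(pairwise_tukeyhsd(df["delta_ra"], cell, alpha=0.05))
```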
The effects of toothpaste type, bristle stiffness, and their interaction after 1,250 and 2,500 cycles of brushing simulation are described in Tables and . The two-way ANOVA revealed that after 1,250 brushing simulation cycles (Table ), toothpaste type ( P = 0.017) and bristle stiffness ( P = 0.022) were significant factors modulating the enamel surface roughness, and there was a significant interaction between the two factors ( P = 0.013). Table illustrates the two-way ANOVA results following 2,500 brushing simulation cycles. Similarly, both toothpaste type (P < 0.001) and bristle stiffness (P < 0.001) were significant factors for enamel surface roughness, with no significant interaction ( P = 0.227). Following 1,250 brushing simulation cycles (Table ; Fig. ), the whitening toothpaste (39.24 ± 17.07) significantly ( P = 0.037) increased the enamel surface roughness compared with the negative control (22.20 ± 18.34) when soft-bristle toothbrushes were used. When bristles of medium stiffness were used, the whitening toothpaste was also associated with the highest surface roughness change (47.68 ± 19.90), which was significantly ( P = 0.003) higher than with the charcoal toothpaste (25.93 ± 16.09). Table ; Fig. illustrate the enamel roughness values after 2,500 cycles of brushing simulation. The whitening and charcoal toothpastes were associated with a greater surface roughness change than the conventional toothpaste and the negative control, especially when medium and hard bristles were used. With soft bristles, the charcoal toothpaste was associated with the highest surface roughness change (55.86 ± 41.18), which was significant ( P = 0.024) compared with the negative control. With medium bristles, the whitening (68.23 ± 48.58) and charcoal (73.62 ± 34.66) toothpastes significantly (P < 0.05) increased the enamel surface roughness compared with the conventional toothpaste (36.53 ± 22.56). The figure illustrates the profilometer scans for the investigated groups. In general, more topographic changes were observed when hard bristles were used and when the whitening and charcoal toothpastes were applied.
The alternative hypothesis of this study was supported: charcoal toothpaste increased enamel surface roughness compared with the conventional toothpaste and the negative control, especially when hard bristles were used. The enamel surface roughness change following the application of the charcoal toothpaste was comparable to that of the whitening toothpaste, suggesting that both may harm the enamel surface. Our findings suggest that people should be cautious when using dental products containing charcoal or whitening ingredients for a prolonged period, as they may abrade the enamel. The number of toothbrushing cycles applied in this study served as a representation of prolonged toothbrushing. In real-life situations, toothbrushing usually occurs at a rate of 4.5 strokes per second. Since 2 minutes is the suggested brushing time to eliminate plaque, around 90 strokes are expected in each sextant, which equates to 15 brushing strokes per tooth or 5 brushing strokes per surface (buccal, lingual, occlusal). Consequently, if patients brush their teeth three times a day, a total of 5,475 strokes would be delivered to a particular surface in a year (this arithmetic is written out in the short calculation below). For this study, simulating approximately three and six months, the number of brushing cycles was set at 1,250 and 2,500, respectively. These durations are expected to induce initial changes in the enamel surface. However, it is important to note that the parameters used in this study, as well as the number of cycles, are limited by the absence of saliva, which serves as a potential remineralizing reservoir, and of the salivary pellicle, which acts as a protective layer. In several in vitro studies, the brushing force used was between 0.2 and 4.2 N, with an average of 2 to 3 N. In this study, we applied a force equivalent to 2 N to be consistent with the current literature. The charcoal and whitening toothpastes showed the largest increases in the Ra value, a measure of surface roughness, which was mainly attributed to the abrasive materials in these toothpastes. Apart from silica and hydrated silica, the Charcoal Formula toothpaste also contains activated carbon (charcoal). The charcoal particles in the toothpaste have a star-shaped or fractal form, which may contribute to its abrasivity. We intended in this study to use toothpastes with comparable ingredients to emphasize the possible impact of the charcoal powder in the charcoal toothpaste. Statistical analysis comparing the conventional toothpaste (positive control) and the charcoal toothpaste, which contain similar types of abrasive materials, revealed significant differences in the change in Ra (surface roughness) after three months of simulated brushing. This suggests that the charcoal component may have played a role in altering the surface roughness of the tooth enamel despite the similar abrasives used in both toothpastes. It is also possible that the size and quantity of the silica particles differ between the two toothpastes; larger and more abundant silica particles may induce greater surface changes. This is more likely when comparing the ingredients of the conventional and whitening toothpastes, as the only difference between them is the number of pigments, which probably has little to do with the abrasivity of the toothpastes.
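To make the brushing-dose arithmetic above explicit, the figures from the text can be laid out as a short calculation (the month conversion assumes roughly 30-day months and is only indicative):

```python
# Worked restatement of the brushing-dose arithmetic given in the text.
strokes_per_session = 4.5 * 120                 # 4.5 strokes/s for 2 minutes = 540
per_sextant = strokes_per_session / 6           # 90 strokes per sextant
per_tooth = per_sextant / 6                     # 15 strokes per tooth
per_surface = per_tooth / 3                     # 5 strokes per surface per session
per_surface_per_year = per_surface * 3 * 365    # 5,475 strokes per surface per year

# 1,250 and 2,500 cycles correspond to roughly three and six months of brushing
days_simulated = [cycles / (per_surface * 3) for cycles in (1250, 2500)]  # ~83 and ~167 days
```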
The relative dentin abrasivity (RDA) values of the conventional and whitening toothpastes used in this study were found to be 70 and 124, respectively. This suggests that the conventional toothpaste used here has low abrasivity, while the whitening toothpaste has high abrasivity, mainly related to the size and number of silica particles. No information is available on the RDA value of the charcoal toothpaste used in this study, but based on the present results it is expected to be close to that of the whitening toothpaste. This study also suggests that the use of medium and hard bristles may lead to greater enamel wear than soft bristles. While dental practitioners typically recommend soft-bristled toothbrushes to their patients, it is essential to investigate the extent of enamel wear associated with medium and hard bristles. Additionally, the potential interactions between these bristles and toothpastes with varying ingredients and abrasivity warrant exploration. Interestingly, the effects of medium and hard bristles may be comparable to those of soft bristles when used with a low-abrasive toothpaste, as also demonstrated in the study by Turssi et al. Therefore, a key objective of this article was to examine the interactions between the various toothpastes investigated and the different bristle stiffnesses. The results of our study agree with previous investigations showing that charcoal toothpaste may increase enamel surface roughness. However, our results contradict other findings that revealed no difference in enamel wear between conventional and charcoal toothpastes. The conflicting results in the existing literature may arise from several methodological differences, including variations in the brushing force applied during the experiments, which can significantly influence enamel wear. Additionally, the number of brushing cycles varies between studies, potentially leading to differing severities of abrasion. Furthermore, the types of dentifrices used, each with distinct formulations, active ingredients, and abrasive properties, can also contribute to the discrepancies in outcomes, as can the use of a single bristle stiffness, which may itself be an important modulator of enamel wear. These factors underscore the need for standardized methodologies to allow more reliable comparisons across studies. We therefore used different bristle stiffnesses in this study, with clinically relevant parameters, to investigate the interaction between the charcoal toothpaste and bristle stiffness. This study found a significant difference in enamel surface roughness between the conventional toothpaste and the charcoal and whitening toothpastes when used with hard- and medium-bristle toothbrushes, whereas only a slight difference was observed with soft bristles. These findings suggest that the risk of enamel wear may be greater when charcoal and whitening toothpastes are combined with hard- or medium-bristle toothbrushes, especially with long-term use. While some may wonder whether it is safe to use charcoal toothpaste even for a short period, it should be noted that, despite its abrasiveness, which can remove extrinsic stains, charcoal toothpaste does not provide any intrinsic whitening benefit and can still harm the enamel surface, as indicated by the results of this study.
Additionally, there is growing evidence that charcoal toothpastes are not particularly effective at improving teeth color . Therefore, it may be advisable to avoid the use of charcoal-based toothpastes and instead opt for home or professional bleaching/whitening treatments for safe and effective teeth whitening. Our results suggest that individuals should be cautious when using highly abrasive toothpastes and stiff-bristled toothbrushes, as they may have significant adverse effects on an individual’s oral health. One of the primary concerns is the gradual wear down of the tooth enamel over time. Tooth enamel is the outermost, hardest layer of the tooth, and it plays a crucial role in protecting the underlying dentin and pulp. However, the aggressive abrasive action of these dental products can gradually abrade the enamel, leading to increased sensitivity . This increased sensitivity can make everyday tasks like eating and drinking a challenging and unpleasant experience. Furthermore, the loss of enamel can also compromise the aesthetic appearance of the teeth, potentially leading to an aged or unattractive look. Another significant consequence of using abrasive toothpastes and stiff-bristled toothbrushes is the increased risk of gingival recession . To mitigate these issues, it is recommended that individuals use a soft-bristled toothbrush and a non-abrasive, fluoride-containing toothpaste with proper brushing technique. Besides, seeking professional advice from dental practitioners can ensure using the most effective tools for oral hygiene practice. This laboratory study yielded significant findings regarding the impact of charcoal toothpaste and toothbrushes with varying bristle stiffness on enamel surface topography. However, there are some limitations that should be acknowledged. While efforts were made to select and polish the teeth within specific criteria, it is impossible to perfectly standardize the baseline surface roughness and mineral content of the enamel samples . This variability in the starting conditions of the samples could have introduced some inherent differences. Furthermore, various patient-related factors, such as individual differences in oral health status, overall physiological conditions, and general health, can significantly influence the enamel’s response to the wear challenge. These clinical variables are not easily replicated in a controlled laboratory setting. Besides, in the oral environment, the formation of salivary pellicles over the tooth surface may potentially reduce the degree of wear induced by toothbrushing . Putting all these factors into consideration in addition to the need to investigate other commercially available charcoal-based toothpastes, the results obtained here can not be generalized. It is crucial to validate the findings of this in vitro study through well-designed in-vivo models. Despite these limitations, the current study provides valuable insights into the potential abrasive effects of charcoal toothpaste and different bristle stiffnesses on enamel topography. Further clinical research is warranted to fully understand the implications for dental health and to establish safe and effective oral hygiene practices.
The results of this study suggest that the type of toothpaste used, and the stiffness of the toothbrush bristles can impact the surface roughness of dental enamel. Specifically, the investigated whitening and charcoal toothpastes were associated with increased enamel roughness. Further research, including well-designed clinical studies and investigating a broader sample of commercially available products, is warranted to corroborate these in vitro findings and develop evidence-based recommendations for dental consumers and practitioners.
|
Conservative endometrioma surgery: The combined technique versus CO | 65c78bd2-7892-4698-bd46-53729ec1423f | 11884717 | Surgical Procedures, Operative[mh] | Endometriosis is defined by the presence of endometrium-like epithelium and/or stroma outside the endometrium and myometrium, usually with an associated inflammatory process . This chronic disease affects 2 to 10% of women of reproductive age and its prevalence increases up to 50% in women with infertility . Endometrioma(s) are present in 17 to 44% of patients with the disease . Laparoscopic surgery is a well-established treatment of endometrioma(s) however caution is required to minimize ovarian damage . Surgical treatment of endometrioma(s) can be performed by several techniques : cystectomy , i.e., excision of the cyst wall; ablation using CO 2 -laser vaporization or plasma energy to destroy the inner surface of the cyst wall in situ; or a partial ovarian cystectomy combining excisional and ablative surgery. The latter technique will hereafter be referred to as “the combined technique”, which includes first of stripping 80–90% of the cyst wall surface, followed by a second step consisting of ablation of the remaining 10–20% cyst surface attached to the ovarian vascular hilus . Although the technique of coagulation or fulguration (with destruction of the inner surface of the cyst wall using electrosurgery) has been described before, we will not further discuss this technique since the last ESHRE guideline strongly recommend cystectomy above coagulation or fulguration in terms of recurrence and endometriosis-associated pain . Cystectomy has been the first line surgical treatment for a long time . However, based on findings of studies on detrimental effect on ovarian reserve due to endometrioma surgery, alternative techniques should be envisaged, as mentioned in the last ESHRE guideline where CO 2 -laser vaporization is suggested as an alternative to cystectomy . More specifically, a small randomized controlled trial (RCT) by Tsolakidis et al. showed that endometrioma surgery by CO2-laser vaporization (as part of three stage management ) had a lower impact on ovarian reserve (measured by decline in AMH) than cystectomy. The RCT by Candiani et al directly compared cystectomy with ‘one-step’ CO 2 -laser ablation and showed a higher AFC after CO 2 -laser ablation. In this same RCT, better results were seen for AMH levels postoperatively in the CO 2 -laser ablation group (no reduction versus significant reduction in the cystectomy group). The combined technique has also been proposed as an alternative technique for the classical cystectomy. In comparison with the contralateral normal ovary, no difference in ovarian volume and AFC was seen postoperatively suggesting that this technique has limited deleterious effect on the ovarian reserve . The RCTs of Tsolakidis and Candiani have used serum anti-Müllerian hormone levels to describe the ovarian reserve. Indeed, serum AMH is the most accurate marker of ovarian reserve . In addition, a recent systematic review and meta-analysis suggested that in women with endometrioma, AMH levels may be of greater utility than AFCs in the assessment of the risk of iatrogenic depletion of the ovarian reserve. This was based on the observation of a significant reduction in AMH levels (which were consistent at the early- intermediate- and late- postoperative time points) after cystectomy but not for AFC (8). 
Since CO2-laser vaporization and the combined technique may be safer for normal ovarian tissue than cystectomy, they are considered more conservative surgical techniques. In patients wishing to preserve their reproductive potential, the least harmful technique should be preferred when planning ovarian surgery. However, to the best of our knowledge, these different conservative techniques have not been compared directly regarding their effect on ovarian reserve and recurrence rate. We therefore designed this multicenter, non-blinded RCT with parallel groups and 1:1 allocation. We aimed to determine whether and to what extent these two surgical procedures for endometrioma(s) (the combined technique (group 1) versus CO2-laser vaporization (group 2)) may affect ovarian reserve by comparing changes in serum AMH concentrations after treatment.
This study protocol used the SPIRIT 2013 checklist (see ): recommended items to address in a clinical trial protocol and related documents. Setting This is a multicenter, national, non-blinded, randomized controlled trial with parallel groups and 1:1 allocation. In group 1 a combined technique will be performed versus a complete CO2-laser vaporization in group 2. Four different centers in Belgium will be involved, of which the first three are university hospitals: University Hospitals Leuven (Leuven, Belgium) Hôpital de La Citadelle (Liège, Belgium) Cliniques Universitaires Saint-Luc (Brussels, Belgium) GZA (Gasthuiszusters Antwerpen) Sint-Augustinus (Antwerp, Belgium) Patients are randomly assigned according to a computer-generated randomization list using the method of block randomization (using varying block sizes) to allocate them in a 1:1 ratio. Randomization will be done no more than 2 months and no less than 1 hour before the start of the intervention by the (sub)investigator of each center. Block randomization per study center will be used to ensure allocation of equal numbers of subjects in each group per center. Participants Patients planned for laparoscopic surgery for endometriotic cysts are eligible to participate in the study. Diagnosis of the endometrioma(s) will be done using transvaginal ultrasound by an experienced sonographer following the International Ovarian Tumor Analysis (IOTA) criteria for reliable diagnosis of endometriomas in premenopausal women: ground-glass echogenicity of the cyst fluid, one to four locules, and no papillary projections with detectable blood flow. Further mapping of the endometriosis lesions will be done by transvaginal ultrasound using the International Deep Endometriosis Analysis (IDEA) terminology and complemented by magnetic resonance imaging (MRI) when deemed necessary. Before performing surgery for an endometriotic cyst, the AMH level is routinely measured (with the Roche ECLIA AMH kit, since this assay is available in all participating centers). In women with endometrioma(s), the preoperative measurement of the AMH level is good clinical practice to assess the potential risk for iatrogenic premature ovarian insufficiency after surgery. Eligible patients will be informed about the study by their endometriosis surgeon and will receive a patient information leaflet (providing a plain-language text in Dutch, French or English). If they are willing to participate, written informed consent is signed before enrollment. To be eligible to participate in this study, a subject must meet all the following criteria: Age 18–40 years (both inclusive) Unilateral endometriotic cysts with a mean diameter of ≥ 2.5 cm and ≤ 8 cm, measured in 3 dimensions.
Complaining of infertility and/or pain BMI ≤ 35 kg/m² Use of contraception (combined hormonal contraceptives or progestogens), at least 4 weeks prior to surgery AMH level ≥ 0.7 ng/mL preoperatively (a circulating AMH level of 0.7 ng/mL has been claimed to be the threshold value for poor ovarian responsiveness to controlled ovarian stimulation) A potential subject who meets any of the following criteria will be excluded from participation in this study: Patient preference for incomplete surgery of the pelvis (for example, a patient request to only treat the endometrioma without the other associated endometriotic lesions, if present) Contra-indication for the use of contraception (combined hormonal contraceptives or progestogens) Use of gonadotrophin-releasing hormone (GnRH) analogues preoperatively and in the first 3 months postoperatively (History of) hysterectomy Prior unilateral oophorectomy Pituitary/hypothalamic disorders Suspected malignancy Contralateral endometrioma of > 2 cm Pregnancy Prior ovarian surgery (for endometriosis or other cysts) is not an exclusion criterion (as opposed to oophorectomy) but should be reported. Interventions Patients accepting to enter the study will be randomized between 2 different laparoscopic techniques (both arms are existing and accepted surgical strategies for the treatment of endometriomas): Group 1: The combined technique. A first step consisting of stripping 80% of the cyst wall surface, followed by a second step consisting of ablation of the remaining 20% of the cyst surface attached to the ovarian vascular hilus and left in situ. Group 2: CO2-laser vaporization only. CO2-laser vaporization of the complete inner cystic wall after drainage of the cyst content, irrigation, and inspection of its inner wall. A biopsy of the cyst wall will be sent for routine histologic examination to confirm the diagnosis of endometriosis. Ablation of the entire inner surface of the cyst wall is performed using the CO2-laser (Lumenis). Power settings of 30–55 watt for the CO2-laser beam and 6–10 watt for the CO2-fibre (based on animal data) are usually used. The CO2-laser will be applied in 'Surgitouch mode' so that it can ablate the cyst surface while preserving the underlying healthy tissue. If a small contralateral endometrioma is present (≤ 2 cm), it will be treated by CO2-laser vaporization only (independent of randomization). Simultaneous treatment of all visible endometriosis lesions (standard procedure). Operative techniques will be recorded as recommended by the CORDES statement. Crossover is allowed from group 1 (combined technique) to group 2 (CO2-laser vaporization) if stripping of 80% of the cyst wall is not possible. Hormonal contraceptives (combined hormonal contraceptives or progestogens) will be used by the participants for at least 4 weeks preoperatively to avoid the presence of a corpus luteum during surgery. The total duration of use of hormonal treatment will be registered. If there is no desire to conceive postoperatively, advice will be given to continue the oral contraceptives postoperatively to reduce the risk of recurrence. Blinding of the surgeon is not possible. Blinding of surgeons/patients during postoperative follow-up is not feasible and is assumed not to influence the primary outcome. Indeed, bias due to lack of blinding is expected to be negligible since the primary outcome measured is clear and unambiguous (measurement of the AMH level).
Due to divergent timing between the screening consultation and surgery, the AMH level measurement will be repeated the day before or on the day of surgery (baseline AMH level). After surgery, this will be repeated at 3, 6 and 12 months postoperatively. Variables influencing AMH levels will also be registered (age, smoking, use and duration of hormonal treatment, previous ovarian surgery, BMI). Surgical data will be registered, including operative time, the hemostatic method used to manage bleeding on an ovary (if necessary), revised American Society for Reproductive Medicine (rASRM) points and stage, Endometriosis Fertility Index (EFI), hospital stay and complications. During postoperative follow-up, pain symptoms will be evaluated, transvaginal ultrasound will be performed and AMH levels will be measured at fixed timepoints (3, 6, 12 and 24 months postoperatively, see and ). If there is a desire to conceive, patients will be managed according to their EFI, either without assisted reproductive technology (non-ART) or with ART. If a clinical pregnancy occurs postoperatively, the study-related follow-up ends, but pregnancy outcomes will be recorded. Both arms of the study are established surgical strategies. Any adverse events will be communicated to the sponsor and principal investigator without undue delay. The adverse events that may occur are those related to surgery and will not differ from what is expected in normal clinical practice. Despite this, a data safety monitoring board (DSMB) has been established, including an independent statistician of KU Leuven, to oversee the safety of the participants in the trial. Patients can leave the study at any time for any reason if they wish to do so, without any consequences. The investigator can decide to withdraw a subject from the study for urgent medical reasons. Each withdrawal must be clearly documented. End of study is reached if a patient becomes pregnant (with referral to an obstetrician for follow-up of the pregnancy) or after completion of the 24-month follow-up period. For follow-up after ending the study, patients will be referred to their general gynecologist (not necessarily in the endometriosis unit). Study objectives Primary objective. To assess the effect of conservative surgery of endometrioma(s) on ovarian reserve as reflected by the AMH level. For the primary outcome, serum AMH will be measured before (baseline) and after (at 3 months follow-up) laparoscopic treatment of the endometrioma(s) (i.e., delta AMH). The 3-month period, rather than an immediate assessment after surgery, was chosen because ovarian surgery inflicts traumatic damage to the ovarian cortex, as reflected by a sharp decline in the AMH level immediately after surgery; however, recovery of the AMH level is expected 3 months postoperatively, when edema and local inflammation have resolved. The timing of the primary analysis was based on these findings and is similar to the RCT of Candiani. Secondary objectives. AMH difference (or delta AMH)/cyst surface between baseline and 3, 6 and 12 months postoperatively, with a correction for the cyst surface (since the volume of the cyst may have a greater or smaller influence on the ovarian reserve). The cyst surface will be calculated using the formula for the surface area of a sphere (4πr²). The radius of a sphere is half its diameter; for the diameter, the mean of the three cyst diameters will be used. AMH level modifications (or delta AMH) at 6 and 12 months follow-up.
Cyst recurrence rate at 3, 6, 12 and 24 months postoperatively (visualized by transvaginal ultrasound; any recurrence will be registered). Clinical pregnancy (as defined by the International Committee Monitoring Assisted Reproductive Technologies (ICMART) as a pregnancy diagnosed by ultrasonographic visualization of one or more gestational sacs or definitive clinical signs of pregnancy; it includes ectopic pregnancy. Note: multiple gestational sacs are counted as one clinical pregnancy). Ectopic pregnancy (as defined by the ICMART as a pregnancy in which implantation takes place outside the uterine cavity). Miscarriage (defined as a spontaneous loss of pregnancy). Live birth (as defined by the ICMART as the complete expulsion or extraction from its mother of a product of fertilization, irrespective of the duration of the pregnancy, which, after such separation, breathes or shows any other evidence of life, such as heartbeat, umbilical cord pulsation, or definite movement of voluntary muscles, irrespective of whether the umbilical cord has been cut or the placenta is attached). Evolution of pain patterns pre- and postoperatively: each endometriosis-related pain complaint will be evaluated using the numerical rating scale (NRS) at each visit. Premature ovarian insufficiency (POI) postoperatively. Data collection and analysis This study will use an electronic data capture system, i.e., REDCap. REDCap is a web-based system, and all study sites will have access to it. The server is hosted within the University Hospitals Leuven and meets hospital-level security and back-up requirements. Site access will be controlled, and login to REDCap is password protected. Each user will receive a personal login name and password and will have a specific role with predefined restrictions on what is allowed in REDCap. Users will only be able to see data of patients of their own site. Any activity in this software is traced and transparent through audit trails and log files. As the randomization will be done in REDCap, a unique study number will be assigned to all subjects and subsequently used in the database. The subject identification code will be safeguarded by the site. The name and any other identifying data will not be included in the study database. An interim analysis is not planned since both arms are established surgical strategies. The recruitment phase is planned to be 60 months (the inclusion time was initially estimated at 24 months; because of an important delay in recruitment due to the COVID-19 pandemic, the inclusion time was extended by another 36 months and a notification was made to the Ethical Committee). The sample size calculation was based on the primary outcome: evaluation of the serum AMH level 3 months after laparoscopic treatment of endometrioma(s). The power calculation was based on the findings of the RCT by Candiani et al., in which AMH was a secondary outcome in the comparison of conventional cystectomy versus CO2-laser vaporization. In that paper, a postoperative AMH of 1.9 ± 0.9 ng/mL was found after CO2-laser vaporization only, and a difference in the decline in postoperative AMH of 50% was observed in favor of the vaporization group, although that study was not powered for this outcome. Since we will compare two conservative techniques for endometrioma surgery, a difference of 30% in AMH decline after surgery was considered to be clinically relevant (after consultation with all participating centers). Based on Candiani et al. (2018), a mean serum AMH of 1.9 (SD = 0.9) ng/mL is expected with CO2-laser vaporization.
Assuming a common standard deviation, a total sample size of 82 patients (or 41 patients in each group) is needed, based on a two-sided independent t-test with alpha equal to 0.05, to have at least 80% power to detect a difference of 30% between both groups (a computational cross-check of these numbers is sketched below, after the study timeline). To account for an expected 10% drop-out because of pregnancy within 3 months postoperatively, it is prudent to aim for a total sample size of 92 patients (or 46 patients in each group). Note that this calculation is based on the conservative assumption of no correlation between baseline AMH and AMH after 3 months. In practice, the power is expected to largely exceed 80%, since the final analysis will be based on an approach taking the baseline value into account. However, since the study aims to gather information on multiple endpoints, we did not deem it appropriate to lower the sample size. The full analysis set (FAS) will, according to the intent-to-treat principle, include all randomized patients according to their randomized treatment. The FAS will be used for the evaluation of all efficacy endpoints. The primary analysis will be based on the FAS. In case cross-over occurs, an as-treated analysis will additionally be performed. Patients from the FAS with major protocol deviations will be excluded from the per-protocol set (PPS). The PPS will be reviewed and finalized prior to database lock at a Blind Review Meeting. Summary tables (descriptive statistics and/or frequency tables) will be provided for all variables (baseline, surgical and postoperative follow-up). Continuous variables will be summarized with descriptive statistics (n, mean, standard deviation, range, median, p25 and p75). Frequency counts and proportions of subjects within each category will be provided for the categorical data. For the comparison of the mean AMH levels at different timepoints, a constrained longitudinal data analysis (cLDA) will be used, such that the presence of missing values (due to drop-out or to pregnancy; in the latter case AMH values after pregnancy are set to missing) can be handled. Center will be added as a fixed effect in the cLDA model. The same approach will be used for the (log-transformed) ratio of the AMH level and the cyst surface, and for the longitudinally gathered NRS pain scores. The cyst recurrence rate until 24 months postoperatively will be visualized using Kaplan-Meier estimates and compared using a stratified log-rank test. Subgroup analyses will be performed on patients without a previous history of ovarian surgery (if relevant) and according to continuation of contraception postoperatively. The datasets, including anonymized patient-level data generated during the current trial, will be available after study completion from the corresponding author upon reasonable request. Dissemination plans Results from this trial will be shared through publications and presented at international conferences. All participating investigators will be co-authors, according to the number of patients included and their intellectual contribution. Status and timeline of the study The study protocol was approved by the Ethics Committee Research UZ/KU Leuven on October 24th, 2019 (institutional review board (IRB) number: S62899) and registered on clinicaltrials.gov (NCT04151433) on November 5th, 2019. All participating patients will sign a written informed consent, as approved by the ethical committees. Coordinating center: University Hospitals Leuven. Patient recruitment is ongoing and started after approval of the ethics committee (December 1st, 2019).
Expected end of inclusion will be no later than June 30th, 2025. End of study, with collection of all secondary outcomes, is foreseen by mid-2027 (if necessary, extended by 9 months to allow full follow-up of the pregnancies until delivery). The current protocol version is 2.2 (dated 16-12-2021), see (PDF version). The English informed consent form can be found in .
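To make the reported sample size easy to reproduce, a minimal sketch in base R is given below. It only restates the figures from the protocol (expected mean AMH of 1.9 ng/mL with SD 0.9 under CO2-laser vaporization, a clinically relevant difference of 30%, two-sided alpha of 0.05, 80% power and 10% drop-out); the mean cyst diameter of 4 cm used for the cyst-surface normalization at the end is a purely hypothetical value for illustration and is not part of the protocol.

```r
# Minimal sketch of the sample size calculation described above (base R only).
mean_amh <- 1.9                 # ng/mL, expected mean AMH after CO2-laser vaporization
sd_amh   <- 0.9                 # ng/mL, common standard deviation assumed for both groups
delta    <- 0.30 * mean_amh     # 30% difference considered clinically relevant

ss <- power.t.test(delta = delta, sd = sd_amh, sig.level = 0.05, power = 0.80,
                   type = "two.sample", alternative = "two.sided")
n_per_group <- ceiling(ss$n)                             # approx. 41 patients per group (82 in total)
n_per_group_with_dropout <- ceiling(1.10 * n_per_group)  # approx. 46 per group (92 in total)

# Secondary-outcome normalization: surface area of a sphere, 4 * pi * r^2,
# with r = mean cyst diameter / 2; the 4 cm diameter below is a hypothetical example.
mean_diameter_cm <- 4
cyst_surface_cm2 <- 4 * pi * (mean_diameter_cm / 2)^2    # approx. 50.3 cm^2
```

Because the calculation ignores the correlation between baseline and 3-month AMH, it is conservative, in line with the expectation stated above that the final baseline-adjusted analysis will have more than 80% power.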
Since CO2-laser vaporization and the combined technique may be safer for normal ovarian tissue than cystectomy, they are considered more conservative surgical techniques. In patients wishing to preserve their reproductive potential, the least harmful technique should be preferred when planning ovarian surgery. To the best of our knowledge, these different conservative techniques have not yet been compared directly regarding their effect on ovarian reserve and/or disease recurrence. This study presents some challenges. First, patients may be reluctant to participate in an RCT due to the random allocation to a specific surgical treatment. In our experience, random allocation by a computer system might be difficult for our patients to accept; however, by explaining that the two proposed surgical techniques in this RCT are both considered less harmful to the ovarian reserve than classical cystectomy, we expect patients to be willing to participate. Besides ovarian reserve, another question often asked by patients concerns the risk of recurrence. Short-term follow-up (one year postoperatively) shows a higher recurrence rate in the CO2-laser vaporization group. When looking at the available evidence on long-term recurrence, similar recurrence rates have been described after cystectomy and CO2-laser vaporization or mixed techniques (including the combined technique). Second is the concern of non-adherence to the scheduled follow-up until two years postoperatively. However, the primary outcome is measured at 3 months postoperatively, simultaneously with the first routine postoperative check-up (no extra hospital visit will be required). Therefore, we expect that the drop-out rate before this timepoint will be low. If a patient becomes pregnant within 3 months postoperatively, the study-related follow-up ends (with drop-out for the primary outcome), although pregnancy outcomes will be recorded. This was accounted for when calculating our sample size. Finally, previous studies in our group on long-term outcomes showed great willingness of endometriosis patients to adhere to study protocols. Third is the challenge of performing a multicenter study. Several meetings with the participating centers were organized during protocol development to standardize preoperative treatment and discuss the study flow. Besides these challenges, the study may have some limitations. First, the AMH level was chosen as the marker for ovarian reserve; although it is the most appropriate marker available, it is not perfect. Indeed, the AMH level can be difficult to interpret for the following reasons: 1) Different reproductive and lifestyle determinants can influence AMH levels, as shown by the study of Dólleman et al. 2) Although AMH levels in serum vary significantly across the menstrual cycle (with a slight increase during the follicular phase, particularly for women over 30 years), the age-related decline of AMH seems consistent regardless of the menstrual cycle day of the AMH assessment. Therefore, sample collection can be performed on any day of the menstrual cycle for assessment of ovarian reserve. 3) The effect of GnRH analogue use on AMH values is unknown, which led us to exclude its use around the time of surgery.
On the other hand, AMH level measurement also has advantages: 1) It is an easy measurement (by blood sample with a validated assay); 2) It is an objective measurement compared to AFC, which is a subjective, ultrasound-based measurement with inter-operator and inter-technical variability and is thus prone to more variable results. Moreover, adequate measurement of AFC is challenging in the presence of ovarian cyst(s) and in patients with a high BMI. Second, we decided to include a selected group of patients in this study: unilateral endometrioma(s) with an AMH level at screening ≥ 0.7 ng/mL (a small contralateral endometrioma of ≤ 2 cm was allowed). For this reason, our results may not be entirely generalizable to patients with bilateral (large) endometrioma(s) and possibly more extensive disease. Third, we chose to perform a real-life study including patients with pain and/or infertility. This heterogeneous group of patients will differ in the postoperative need for contraception (despite the advice to continue hormonal therapy postoperatively for secondary prevention of recurrence), the desire for a natural pregnancy, or the advice to start fertility treatment. All these variables will be registered, and certain subgroup analyses are planned (as described above). Notwithstanding, the randomization process should distribute these factors equally over both study arms. Overall, we believe that this RCT will add clinically valuable information not only on ovarian reserve but also on recurrence, the evolution of pain patterns postoperatively and fertility outcomes. The ultimate goal of this RCT is to contribute significantly to optimized care with regard to surgical techniques for the treatment of endometrioma(s). Observations could then further be integrated into decision algorithms for the treatment of patients with endometriomas with (future) fertility wishes. Whereas our focus lies on ovarian reserve, the high quality of the collected data will also allow meta-analyses of the secondary outcomes.
S1 File SPIRIT 2013 checklist. Recommended items to address in a clinical trial protocol and related documents. (DOCX) S2 File Protocol. Original full protocol (PDF). (PDF) S3 File Informed consent form template (in English). (PDF)
|
Phosphorylated glycosphingolipids are commonly detected in | b8d2ac73-01c9-4c01-b4d9-9aecbeb105cd | 11842410 | Biochemistry[mh] | Lipids are an essential class of biomolecules that serve as building blocks for membranes, energy storage, and even signaling molecules. The systematic analysis of lipids is referred to as lipidomics or lipid profiling. Lipids show a large structural diversity, and depending on the lipid class, several building blocks (fatty acyls, sphingoid bases, and headgroups) can be combined, integrating multiple lipid biosynthetic and remodeling pathways (Fahy et al., ; Liebisch et al., ; van Meer, ). The regulation of lipid metabolism is complex, and many model organisms have been used for its study. One of these model organisms is the small, soil-dwelling nematode Caenorhabditis elegans (Watts & Ristow, ). Sphingolipids are crucial components among the various lipid classes present in biological membranes. Together with cholesterol, they are enriched in small membrane microdomains known as lipid rafts (Lingwood & Simons, ; Merris et al., ; Simons & Ikonen, ). These lipid rafts can also be found in the membranes of C. elegans (Rao et al., ; Sedensky et al., ). However, C. elegans membranes, compared to mammalian ones, contain a lower amount of cholesterol. Another striking difference is that sphingolipids in C. elegans contain an unusual C17 iso-branched chain sphingoid base produced from 13-methyl myristic acid and serine (Hannich et al., ). Furthermore, the N-acyls bound in these sphingolipids are usually hydroxylated at the second carbon (Chitwood et al., ; Gerdt et al., ). It is currently unknown why C. elegans produces this branched-chain sphingoid base; compensation for cholesterol might be a possible reason, though it has not been proven so far. However, an essential relationship between C. elegans sphingolipids and cholesterol seems to exist. A novel sphingolipid class, phosphorylated glycosphingolipids, was recently described in C. elegans. This class of sphingolipids was identified for the first time in a study investigating cholesterol deprivation (Boland et al., ). These novel lipids, called phosphoethanolamine glucosylceramides (PEGCs) and monomethyl phosphoethanolamine glucosylceramides (mmPEGCs) (structure shown in Fig. A), were able to rescue the larval arrest of cholesterol-deprived worms. However, these lipids have only been identified in this specific study so far. They were not described in the recent in-depth analyses of the sphingolipidome of C. elegans (Hänel et al., ; Scholz et al., ). The lack of identification of these lipids in other studies is mostly due to missing electronic reference spectra or missing structures in lipid and metabolite structure databases. Though the structures of the phosphorylated glycosphingolipids were curated into LipidMaps (O'Donnell et al., ), no electronic reference spectra have yet been deposited in public databases such as MassBank or GNPS (Horai et al., ; Wang et al., ). These novel sphingolipids were analyzed using shotgun lipidomics without prior chromatographic separation or LC-MS analysis of isolated fractions. In our approach, we aimed to determine whether PEGCs and mmPEGCs can be more widely detected using high-resolution MS coupled with LC alone or in combination with trapped ion mobility spectrometry (TIMS). Ion mobility spectrometry (IMS) is a powerful tool for describing and identifying novel lipids. For instance, maradolipids, which are exclusively found in C.
elegans dauer larvae (Penkov et al., ), have been analyzed by UHPLC-DTIM-QTOF-MS using a combination of data-independent acquisition (DIA) and ion mobility (Witting et al., ). Five novel phosphorylated glycosphingolipids were reported in the work of Boland et al., two containing a phosphoethanolamine and three containing an N-methylphosphoethanolamine group. Based on the fragmentation patterns described by Boland et al. (Boland et al., ), we searched our recently published sphingolipidomics data (Hänel et al., ) to see if the described lipid species and further lipids of that same class can be detected. As a result, we detected all lipid species described by Boland et al., and additionally found one species for PEGCs and five species for mmPEGCs. To comprehensively describe this lipid class, we used UHPLC-TIMS-TOF-MS/MS to study the chromatographic and ion mobility behavior of PEGCs and mmPEGCs. Lastly, we were able to identify ceramides and hexosylceramides containing a phytosphingosine base as found in PEGCs and mmPEGCs as potential biosynthetic precursors. Sphingolipidome data from Hänel et al. Data from Hänel et al. was reinvestigated. For details on lipid extraction and measurement, please refer to the original publication (Hänel et al., ). Chemicals Methanol (MeOH), 2-propanol (iPrOH), and acetonitrile (ACN) were of LC-MS grade (Sigma-Aldrich, Taufkirchen, Germany or Biosolve Chimie, France). All other solvents and chemicals were of the highest available purity, typically analytical grade. Lipid extraction C. elegans reference samples were obtained from the University of Georgia, Athens, Georgia, United States (Gouveia et al., ). The obtained material was suspended in MeOH, aliquoted into 50 µL aliquots, and lipid extraction was performed using multiple extraction methods described in detail below. MeOH An additional 400 µL of MeOH was added to the 50 µL C. elegans sample. The mixture was incubated at 500 rpm for one hour at 25 °C. Next, the mixture was centrifuged at 13,000 rpm for 15 min at 4 °C. The supernatant was transferred and collected, and the pellet was re-extracted with 400 µL of MeOH, followed by a second incubation at 500 rpm for 15 min at 25 °C and centrifugation at 13,000 rpm for 15 min at 4 °C. The combined supernatant was dried using a rotary vacuum concentrator until the solvent was evaporated entirely. Bligh and Dyer (Bligh & Dyer, ) An additional 100 µL of MeOH was added to the 50 µL C. elegans sample, followed by the addition of 100 µL of CHCl 3 . The mixture was incubated at 500 rpm for one hour at 25 °C. Next, 100 µL of H 2 O were added to induce phase separation. The mixture was centrifuged at 13,000 rpm for 15 min at 4 °C. The lower phase was collected in a fresh vial. The upper aqueous phase was re-extracted with 200 µL of the mixture CHCl 3 /MeOH/H 2 O (60:35:5, % v/v/v) followed by incubation at 500 rpm for 15 min at 25 °C and centrifugation at 13,000 rpm for 15 min at 4 °C. The lower organic phase was again collected and mixed with the previous CHCl 3 phase. The CHCl 3 phase was dried using a rotary vacuum concentrator until the solvent was evaporated entirely. BUME (Löfgren et al., ) 300 µL of ButOH was added to the 50 µL C. elegans sample, followed by the addition of 400 µL Heptane/Ethyl acetate (3:1, % v/v). The mixture was incubated at 500 rpm for one hour at 25 °C. Next, 400 µL of 1% acetic acid was added to induce the phase separation; the mixture was centrifuged at 13,000 rpm for 15 min at 4 °C. The upper phase was collected in a fresh vial. 
The lower aqueous phase was re-extracted with 400 µL Heptane/Ethyl acetate (3:1, % v/v), followed by incubation at 500 rpm for 15 min at 25 °C and centrifugation at 13,000 rpm for 15 min at 4 °C. The upper organic phase was collected and mixed with the previously collected organic phase. The combined organic phase was dried using a rotary vacuum concentrator until the solvent was evaporated entirely. MTBE (Matyash et al., ) 300 µL of Methyl tert-butyl ether (MTBE) was added to the 50 µL C. elegans sample, and the mixture was incubated at 500 rpm for one hour at 25 °C. This was followed by the addition of 100 µL H 2 O to induce phase separation. The mixture was centrifuged at 13,000 rpm for 15 min at 4 °C. The upper phase was collected in a fresh vial. The lower aqueous phase was re-extracted with 200 µL MTBE/MeOH/H 2 O (10:3:2.5, % v/v/v), followed by incubation at 500 rpm for 15 min at 25 °C and centrifugation at 13,000 rpm for 15 min at 4 °C. The upper organic phase was again collected and mixed with the previously collected organic phase. The combined organic phase was dried using a rotary vacuum concentrator until the solvent was evaporated entirely. Alkaline MTBE 300 µL of Methyl tert-butyl ether (MTBE) was added to the 50 µL C. elegans sample, and the mixture was incubated at 500 rpm for one hour at 25 °C. Afterwards, 195 µL of 1 M KOH in MeOH was added, and the mixture was incubated for two hours at 37 °C. After cooling to room temperature, 4 µL acetic acid and 376 µL of H 2 O were added to induce phase separation. The mixture was centrifuged at 13,000 rpm for 15 min at 4 °C. The upper organic phase was collected and dried using a rotary vacuum concentrator until the solvent was evaporated entirely. Lipidome analysis with timsTOF Pro 2 Lipid analysis was performed as described previously by Witting et al. (Witting et al., ). Briefly, lipids were separated on a Waters Cortecs C18 column (150 mm × 2.1 mm ID, 1.6 μm particle size) (Waters, Eschborn, Germany). Elution was performed with a linear gradient. Eluent A consisted of 40% H 2 O / 60% ACN + 10 mM ammonium formate / 0.1% formic acid, and eluent B consisted of 10% ACN / 90% iPrOH + 10 mM ammonium formate / 0.1% formic acid. Lipid extracts from C. elegans reference samples were analyzed using a Bruker Elute UHPLC (Bruker Daltonics GmbH & Co. KG, Bremen, Germany) coupled to a Bruker timsTOF Pro 2 (Bruker Daltonics GmbH & Co. KG, Bremen, Germany). The following gradient conditions were used: after an isocratic step with 32% B for 1.5 min, the percentage of eluent B was increased to 97% B at 21 min, held steady for 4 min, and returned to initial conditions in 0.1 min. The column was re-equilibrated for 4.9 min. The column temperature was set to 40 °C and the flow rate to 0.350 mL/min. The MS was equipped with a VIP-HESI source, and analysis was performed in positive ionization mode with the following parameters: Sheath Gas Temperature 300 °C, Dry Gas Temperature 230 °C, Dry Gas 8.0 L/min, m/z 100 to 1350, Capillary voltage 4500 V, Charging voltage 2000 V, Corona 4000 nA, End plate offset − 500 V, ESI mode, Nebulizer 2.0 bar, Sheath gas flow 4.0 L/min. MS/MS was collected using DDA-PASEF, isolating singly charged molecules and fragmenting them with a collision energy of 30 eV. Trapped ion mobility experiments were performed using nitrogen as the carrier gas and a mobility ramp from 0.55 Vs/cm² to 1.85 Vs/cm² in 100.0 ms.
Data Analysis

Data processing, which included m/z calibration, mobility calibration, peak picking, peak grouping (including de-isotoping and adduct grouping), and alignment, was performed in MetaboScape 2024b (Bruker Daltonics GmbH & Co. KG, Bremen, Germany). Both datasets discussed in Hänel et al. and the new timsTOF-based dataset acquired in this study were processed. Lipid annotation was performed using rule-based lipid annotation in MetaboScape. For the initial identification of PEGCs and mmPEGCs, a novel MassQL feature in MetaboScape 2024b was used (Fig. B). Further data inspection was performed in Microsoft Excel 365, including RT and CCS trendline analysis for filtering false-positive candidates. Lipids belonging to the same class are expected to show an increase in RT and CCS with increasing chain length. Plots of m/z vs. RT or m/z vs. CCS were therefore searched for features showing a monotonic increase with increasing m/z. If the deviation from linear or quadratic trendlines was larger than expected, peaks were removed. All plots were generated in R 4.4.0 within RStudio (2023.06.0) using ggplot2 (3.5.1). Library spectra of PEGCs and mmPEGCs were generated from .mgf files exported from MetaboScape using multiple packages from the RforMassSpectrometry collection (Rainer et al., ).
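As a minimal illustration of the trendline filtering described above, the R sketch below fits a quadratic trend of RT versus m/z for the features of one putative lipid class and flags features whose residuals exceed a chosen cutoff; the same logic applies to CCS versus m/z. The column names, example values, and residual threshold are hypothetical and would need to be adapted to the exported MetaboScape feature table.

```r
library(ggplot2)

# Hypothetical feature table for one candidate lipid class (values are invented)
feat <- data.frame(
  mz = c(840.6, 854.6, 868.6, 882.7, 896.7, 910.7),
  rt = c(10.2, 10.9, 11.5, 12.2, 14.5, 13.4)   # one value is deliberately off-trend
)

# Quadratic trendline of RT vs m/z and residuals against it
fit <- lm(rt ~ poly(mz, 2), data = feat)
feat$fitted <- fitted(fit)
feat$resid  <- residuals(fit)

# Flag features deviating more than an (arbitrary) cutoff from the trend
cutoff <- 0.5   # minutes; would be tuned per dataset
feat$keep <- abs(feat$resid) <= cutoff

ggplot(feat, aes(mz, rt)) +
  geom_line(aes(y = fitted), colour = "grey50") +
  geom_point(aes(colour = keep), size = 2) +
  labs(x = "m/z", y = "RT (min)", colour = "within trend") +
  theme_minimal()
```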
Defining fragmentation rules for PEGCs and mmPEGCs

Phosphoethanolamine glucosyl ceramides (PEGCs) and monomethyl phosphoethanolamine glucosyl ceramides (mmPEGCs) were identified as important, novel lipids that are essential for cholesterol mobilization in the nematode C. elegans (Boland et al., ). Two PEGCs and three mmPEGCs were initially identified using shotgun lipidomics, isolation via solid-phase extraction over silica gel, and NMR. We wanted to know whether more species of these two lipid classes exist and whether they are only detectable under the specific conditions of the previous study or are generally part of the C. elegans lipidome. The fragmentation of PEGCs and mmPEGCs was described by Boland et al. (Boland et al., ). They used direct infusion mass spectrometry with an Orbitrap MS instrument. No chromatographic separation was performed; therefore, potentially mixed fragmentation spectra of overlapping PEGCs and mmPEGCs were obtained. However, fragments specific to each lipid class were identified. First, both classes show characteristic headgroup losses corresponding to the glucosyl moiety and the attached phosphoethanolamine or monomethyl phosphoethanolamine group. In the case of PEGCs, this neutral loss is 285.0613 (C8H16O8NP), and for mmPEGCs the neutral loss is 299.0770 (C9H18O8NP). These neutral losses generate fragments of the corresponding ceramide. In addition, headgroup fragments can also be observed as distinct ions at m/z 286.0686 ([C8H17O8NP]+) for PEGCs and at m/z 300.0843 ([C9H19O8NP]+) for mmPEGCs. The latter often shows an additional water loss yielding m/z 282.0737 ([C9H17O7NP]+) and a monomethyl-phosphoethanolamine fragment at m/z 156.0420 ([C3H11O4NP]+) (Fig. A). Systematic searches for MS2 spectra showing distinct fragments, such as the ones described above, are recurring tasks in metabolomics and lipidomics data analysis. For the identification of lipids in shotgun lipidomics experiments, for example, the Molecular Fragmentation Query Language (MFQL) has been proposed and has already been applied for the identification of lipids in C. elegans (Herzog et al., ; Papan et al., ).
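These rules translate directly into simple numeric checks on fragment lists and, as described in the following paragraphs, into MassQL queries. The R sketch below collects the diagnostic masses quoted above, shows a small helper that tests a fragment m/z list for the mmPEGC diagnostics within a ppm tolerance, and gives MassQL-style query strings of the kind that can be pasted into MetaboScape. The helper function, its tolerance, the invented example spectrum, and the exact query wording are illustrative reconstructions, not the queries used in the study.

```r
# Diagnostic values taken from the fragmentation rules above
nl_pegc     <- 285.0613                        # neutral loss C8H16O8NP (PEGC)
nl_mmpegc   <- 299.0770                        # neutral loss C9H18O8NP (mmPEGC)
frag_pegc   <- 286.0686                        # [C8H17O8NP]+
frag_mmpegc <- c(300.0843, 282.0737, 156.0420) # [C9H19O8NP]+, its water loss, headgroup fragment

# Hypothetical helper: does a fragment list contain a target ion within a ppm tolerance?
has_ion <- function(spec_mz, target, ppm = 10) {
  any(abs(spec_mz - target) / target * 1e6 <= ppm)
}

# Example: classify one (invented) MS2 fragment list as mmPEGC-like
spec <- c(156.0421, 282.0735, 300.0846, 650.5500)
all(sapply(frag_mmpegc, function(f) has_ion(spec, f)))

# MassQL-style query strings (reconstructions following the published MassQL grammar,
# not the study's exact queries)
query_pegc   <- "QUERY scaninfo(MS2DATA) WHERE MS2PROD=286.0686:TOLERANCEMZ=0.01 AND MS2NL=285.0613:TOLERANCEMZ=0.01"
query_mmpegc <- "QUERY scaninfo(MS2DATA) WHERE MS2PROD=300.0843:TOLERANCEMZ=0.01 AND MS2PROD=156.0420:TOLERANCEMZ=0.01 AND MS2NL=299.0770:TOLERANCEMZ=0.01"
```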
The Mass Spectra Query Language (MassQL) has recently been proposed as a more general way to query mass spectra and extract potentially relevant biological information (Jarmusch et al., ). MassQL allows for the generation of human-readable queries that can be used to search MS data for features, spectra, etc., showing certain properties (such as fragmentation, retention, or ion mobility ranges) and has been implemented in different programming languages and software tools (including Bruker MetaboScape, which is used in this work). It is therefore ideally suited to searching for features associated with specific MS2 fragmentation. Based on the fragmentation described for PEGCs and mmPEGCs, we generated MassQL queries to identify features potentially representing known and novel members of these two glycosphingolipid classes (Fig. B). These queries were applied to two datasets: first, the sphingolipidome profiles obtained by Hänel et al., and second, a newly generated dataset of lipids detected in C. elegans reference samples by UHPLC-TIMS-TOF-MS/MS. Lipids are often named using a shorthand notation, which gives information on the lipid class and its composition. To correctly name PEGCs and mmPEGCs following the established shorthand notations for lipids (Liebisch et al., ), we used PE-GlcCer and PE-NMe-GlcCer to denote the lipid classes, with carbons, double bonds, and hydroxyl groups noted according to the rules for sphingolipids. One example is PE-NMe-GlcCer 16:0(15Me);O3/22:0;2OH at the level of identification according to the structures from Boland et al.; this would correspond to PE-NMe-HexCer 17:0;O3/22:0;O at the molecular species level or PE-NMe-HexCer 39:0;O4 at the species level. HexCer is used because, at this level of analysis (MS/MS), the identity of the sugar often cannot be confirmed.

Identifying PEGCs and mmPEGCs in C. elegans LC-MS/MS data

We have recently performed an in-depth description of the C. elegans sphingolipidome using UPLC-UHR-TOF-MS/MS (Hänel et al., ). Extracts were obtained from a mixed-stage sample in order to increase the number of potentially detectable sphingolipids. In that previous study, PEGCs and mmPEGCs were not investigated. We reprocessed the data using Bruker MetaboScape 2024b, which includes an implementation of MassQL as a beta version (Jarmusch et al., ). Initial MassQL queries for PEGCs and mmPEGCs were constructed from the headgroup fragments and neutral losses described above, and we were able to identify features showing one or the other of these fragmentation patterns. We detected two of the mmPEGCs (PE-NMe-HexCer 39:0;O4 and PE-NMe-HexCer 41:0;O4) described by Boland et al. in the dataset from Hänel et al. using MassQL for filtering, together with a third feature in this dataset. The measured m/z matches a hypothetical mmPEGC species, PE-NMe-HexCer 43:0;O4, not described so far. Since data-dependent acquisition (DDA) was used to generate MS2 data, further species of this lipid class might have been detected in MS1 but not selected for fragmentation during data acquisition. Therefore, we calculated the formula and exact mass of further hypothetical species in silico and compared the theoretical m/z values for these lipids with the precursor m/z of detected features. Since UPLC-UHR-TOF-MS/MS was used, we leveraged retention times as an additional level of information and for potential filtering of false-positive annotations. The three species identified by MS2 serve as anchor points in this scenario. In addition, we could putatively annotate PE-NMe-HexCer 40:0;O4 (also identified by Boland et al.)
, PE-NMe-HexCer 42:0;O4, PE-NMe-HexCer 44:0;O4, and PE-NMe-HexCer 45:0;O4 as new species of this lipid class. Plotting m/z against RT, a consistent trendline fitting the number of carbons in the side chains was observed, including the species confirmed by MS2 (see Fig. B). Details on the level of identification can be found in SI Table 1. As an independent control, we checked the additional new dataset that was generated using UHPLC-TIMS-TOF-MS/MS on C. elegans reference samples (Gouveia et al., ). Similar to the samples from Hänel et al., these were extracts from mixed-stage samples. Ion mobility separation (IMS) is a valuable addition to MS, MS/MS, and RT for lipid identification, e.g., to identify maradolipids in C. elegans (Witting et al., ). Besides the additional separation, which can potentially resolve isobaric and isomeric interferences, collisional cross sections (CCS) can be derived. The obtained data are largely complementary to MS and can be used for identification purposes. In contrast to RT, which represents a system property arising from the selected column, eluents, the analyte, and further analytical conditions, CCS values are transferable between different instruments and even instrument types (e.g., TIMS, DTIMS, TWIMS) (George et al., ). Similar to the UPLC-UHR-TOF-MS/MS data, we used MassQL to detect putative mmPEGC species in the UHPLC-TIMS-TOF-MS/MS data. All species detected in the dataset from Hänel et al. could also be detected in this dataset, with confirmation by MS2. Likewise, a consistent trendline across the retention time dimension was found. We also investigated the acquired CCS values in this dataset and observed a consistent trend in the mobility dimension, increasing confidence in the lipid annotations. Next, we checked for the presence of PEGCs using the corresponding MassQL query described above. No features with MS2 spectra fitted this query in the dataset by Hänel et al., and only one feature matched in the UHPLC-TIMS-TOF-MS/MS dataset. Inspecting the corresponding MS2 spectrum in more detail, only a low-abundance peak was found for the neutral loss of the headgroup. Therefore, the MassQL query was modified accordingly to potentially identify more PEGC species. Using this modified query, several features were found in the UHPLC-TIMS-TOF-MS/MS dataset, but only three also matched the theoretical m/z values of PEGCs, including two species found by Boland et al., PE-HexCer 39:0;O4 and PE-HexCer 41:0;O4, and a new PE-HexCer 40:0;O4. Inspecting the RT and CCS trendlines, consistent trends were found. Based on this, the data from Hänel et al. were reinvestigated. RTs of PEGCs are slightly higher compared to the corresponding isomeric mmPEGCs. Using this information and the exact m/z of the three lipids, features that potentially correspond to these lipids were found in the Hänel et al. dataset. However, based on matching of m/z values only, these annotations are highly ambiguous. Table summarizes all PEGC and mmPEGC species detected in the UHPLC-TIMS-TOF-MS/MS dataset. Based on the obtained data (m/z, RT, MS2, and CCS if available), different species from both PEGCs and mmPEGCs were identified. The NMR analysis by Boland et al. proved that these lipids are also based on a C17 iso-branched sphingoid base, similar to other C. elegans sphingolipids. However, only in one case within the Hänel et al. dataset was further evidence of a C17 sphingoid base found: PE-NMe-HexCer 17:0;O3/24:0;O was annotated as the only species at this detailed level.
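The in silico extension of the homologous series beyond the MS2-confirmed species, as described in the preceding paragraphs, essentially amounts to adding or subtracting CH2 units from an anchor feature and matching the resulting m/z values against the MS1 feature list within a tight tolerance. A hedged sketch of that logic is shown below; the anchor m/z, the feature list, and the ppm tolerance are placeholders, not values from the study.

```r
ch2 <- 14.015650   # exact mass of a CH2 increment

# Hypothetical anchor: measured precursor m/z of an MS2-confirmed species (placeholder)
anchor_mz <- 900.6200

# Expected precursors for +/- 1..4 CH2 units around the anchor
expected <- data.frame(
  delta_C = -4:4,
  mz      = anchor_mz + (-4:4) * ch2
)

# Match against an MS1 feature list (placeholder values) within 5 ppm
ms1_features <- c(858.5730, 886.6043, 900.6199, 928.6514)
ppm <- 5
expected$hit <- sapply(expected$mz, function(m)
  any(abs(ms1_features - m) / m * 1e6 <= ppm))

expected
```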
So far, C17iso-branched sphingoid bases are only known from C. elegans. Based on the assumption of a C17 sphingoid base, the N-acyls have a carbon length of 21 to 28. This matches the acyl groups known from other sphingolipid classes in the nematode (Chitwood et al., ; Hänel et al., ). To further verify lipid species from PEGCs and mmPEGCs in the dataset from Hänel et al., we performed a correlation analysis of RT values between the two datasets. A high correlation of 0.99 was found, indicating that identifications can be transferred between the datasets. Using species identified in both datasets as “anchor” points, a linear regression was used to transfer RT values. Measured RT values matched the predicted RT values with a maximum error of 1.07%.

Ceramides and hexosylceramides based on C17-phytosphingosine are precursors for PEGCs and mmPEGCs

PEGCs and mmPEGCs show a different sphingoid base than other C. elegans sphingolipids. In contrast to other sphingolipids, these lipids contain a C17iso phytosphingosine base with an additional hydroxyl group at the fourth position instead of a double bond between the fourth and fifth carbon atoms. Neither ceramides nor hexosylceramides with this base have been identified by Hänel et al. or Scholz et al., who performed deep analyses of the C. elegans sphingolipidome (Hänel et al., ; Scholz et al., ). Another study focusing on the analysis of sphingolipids was performed by Mosbech et al., but again no evidence for a C17iso phytosphingosine base was found (Mosbech et al., ). We searched other publications performing lipid profiling in C. elegans for the potential presence of these precursors of PEGCs and mmPEGCs. Lipid names were normalized to the same shorthand notation to ensure comparability. We identified four articles (Anh et al., ; Cheng et al., ; Mosbech et al., ; Smulan et al., ) that report the presence of ceramides, hexosylceramides, or both potentially containing a phytosphingosine base. For example, Anh et al. detected one lipid species annotated as Cer 41:0;4O or Cer 17:0;3O/24:0(2OH) and one lipid species as HexCer 41:0;4O or HexCer 17:0;3O/24:0(2OH) (Anh et al., ). Since no details about the database or identification strategy used were reported in most cases, we decided to reinvestigate the data from Hänel et al. to search for these sphingolipids as well. Calculating the theoretical m/z values specific to fragments of the hypothetical phytosphingosine base (m/z 286.2741 ([C17H36NO2]+), m/z 268.2635 ([C17H34NO]+), and m/z 250.2529 ([C17H32N]+)), we generated a corresponding MassQL query. Multiple features matching this pattern were found in the mass range fitting Cers and HexCers. First, Cers were investigated. Using exact m/z, fragmentation pattern, and RT trendlines, seven features were identified, including two isomers for N-acyls of length 23 and 24. One additional feature was putatively annotated based on exact m/z only, but it fitted well with the RT trendline established by the MS2-annotated features. The two different isomers detected for the lipid species with an N-acyl of 23 and 24 carbons were baseline-separated and individually confirmed by MS2. We manually inspected the spectra of putative HexCers and, additionally, the fragments from the phytosphingosine base. HexCers showed neutral losses corresponding to the hexosyl moiety and additional water losses.
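The generic phytosphingosine-base query referred to above is not reproduced in the text; a plausible reconstruction based on the three base-derived fragments, again written as a MassQL-style string inside R, might look as follows before the screen is narrowed to HexCers in the next paragraph. The tolerances and exact wording are assumptions and may differ from the query actually used in MetaboScape.

```r
# Diagnostic fragments of a C17 phytosphingosine base (values from the text):
# [C17H36NO2]+ and its sequential water losses
base_frags <- c(286.2741, 268.2635, 250.2529)

# MassQL-style reconstruction of the generic screen for phytosphingosine-based Cers/HexCers
query_phyto <- paste(
  "QUERY scaninfo(MS2DATA) WHERE",
  "MS2PROD=286.2741:TOLERANCEMZ=0.01 AND",
  "MS2PROD=268.2635:TOLERANCEMZ=0.01 AND",
  "MS2PROD=250.2529:TOLERANCEMZ=0.01"
)
```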
Therefore, a more specific MassQL query was generated and applied to search for HexCers containing a phytosphingoid base. Only three features matched this MassQL query exactly, showing sufficiently high intensities for all fragments. More species were found with more relaxed parameters, e.g., fewer required fragments. Ten features were annotated as HexCers containing a phytosphingosine base, mostly at the molecular species level based on MS2, exact m/z, and RT trendlines. Only three features were annotated at the species level, but RT trends confirmed this annotation. Sample preparation by Hänel et al. included saponification of glycero- and glycerophospholipids, which might also lead to hydrolysis of PEGCs and mmPEGCs to HexCers and Cers. We therefore additionally investigated the UHPLC-TIMS-TOF-MS/MS dataset. Although this dataset also included an alkaline MTBE extraction, several other extraction methods were used, and the presence of Cers and HexCers in these samples confirms that these lipid classes are a native part of the C. elegans sphingolipidome and do not represent artifacts from sample preparation. Using the same MassQL query for general screening for phytosphingosine-based species, we found several features with a matching fragmentation spectrum; however, they were only in the mass range of Cers, not HexCers. We additionally used the rule-based lipid annotation feature within MetaboScape as a further criterion for identification, together with trendlines along CCS values. Nine features were finally annotated as Cer species, with the 23-, 24-, and 25-carbon species each showing two features of potentially isomeric structures. All species were also correctly annotated by the rule-based lipid annotation. Although this feature in MetaboScape in principle allows for the prediction of lipid CCS values, phytoceramides are currently not covered by the CCS prediction. Investigating why no HexCers were identified, we found that only the neutral loss of the hexosyl moiety could be observed, due to the lower collision energy used in the UHPLC-TIMS-TOF-MS/MS dataset. Accordingly, the MassQL query was reduced to the hexosyl neutral loss. Many features were filtered using this rather generic query, including a HexCer with a normal sphingoid base, indicating that the query works in principle. Again using exact m/z and trendlines along RT and CCS as additional filters (Fig. ), 11 features were annotated as HexCers containing a phytosphingosine base. These lipids were found not only in extracts from the alkaline MTBE extraction method but in all extraction methods, suggesting that they occur naturally. Based on the results obtained, Cers and HexCers with a phytosphingosine base are established as part of the C. elegans sphingolipidome. As precursors for PEGCs and mmPEGCs, they potentially play important roles in the biology of the nematode. As for PEGCs and mmPEGCs, RT values were compared between the datasets, and a high correlation was found, indicating that indeed the same species were detected in both datasets.
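The RT-transfer step mentioned above for PEGCs/mmPEGCs, and again here for Cers/HexCers, can be summarized as a simple linear regression between the retention times of species identified in both datasets. The sketch below illustrates the idea with invented RT values; the real anchor species and their retention times come from the respective datasets.

```r
# Hypothetical RT pairs (minutes) for species identified in both datasets
anchors <- data.frame(
  rt_uplc = c(10.1, 10.8, 11.6, 12.3, 13.0),  # UPLC-UHR-TOF-MS/MS dataset
  rt_tims = c(11.9, 12.7, 13.6, 14.4, 15.2)   # UHPLC-TIMS-TOF-MS/MS dataset
)

# Correlation and linear model used to transfer RTs between systems
cor(anchors$rt_uplc, anchors$rt_tims)
fit <- lm(rt_tims ~ rt_uplc, data = anchors)

# Predict the RT of an additional species and express the deviation in percent
new_rt_uplc <- data.frame(rt_uplc = 12.0)
predicted   <- predict(fit, newdata = new_rt_uplc)
measured    <- 14.1                          # placeholder measured value
100 * abs(measured - predicted) / measured   # percent error, cf. the 1.07% reported
```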
Discussion

PEGCs and mmPEGCs were previously identified as important lipids with roles in cholesterol mobilization. Until now, these lipids have only been detected in the work of Boland et al. In their work, analysis was performed with shotgun lipidomics, leading to overlapping fragmentation spectra for isomeric species with overlapping m/z (e.g., PE-NMe-HexCer 40:0;O4 and PE-HexCer 41:0;O4). We argued that, besides the five phosphorylated glucosylceramides (two PEGCs and three mmPEGCs) already identified by Boland et al., more lipids of these classes were to be expected. Therefore, we conducted an in-depth search for additional PEGC and mmPEGC species in two different datasets acquired by UPLC-UHR-TOF-MS/MS and UHPLC-TIMS-TOF-MS/MS, respectively. In the first case, we reinvestigated our recently published UPLC-UHR-TOF-MS/MS data from Hänel et al., and in the second, we performed UHPLC-TIMS-TOF-MS/MS measurements of C. elegans reference samples using a timsTOF Pro 2 instrument. For the search for novel lipid species, MassQL was used as a convenient way to investigate the different datasets in a systematic manner for features matching the specific fragmentation patterns of PEGCs and mmPEGCs. The additional separation dimensions of LC and TIMS helped to identify species from both lipid classes, as well as phytosphingosine-based Cers and HexCers as their potential biosynthetic precursors. In addition to the two PEGC and three mmPEGC species identified by Boland et al., we could annotate six new phosphorylated glycosphingolipids (one PEGC and five mmPEGCs) as well as 20 Cer and HexCer species based on fragmentation patterns and trendlines along the RT and/or CCS dimension. This is the first time that these lipids (PEGCs and mmPEGCs) have been analyzed by TIMS, and the obtained CCS values and fragmentation spectra will serve as a reference for further investigations. Furthermore, this study serves as a strong case for the combination of LC and IMS, since several of the isomeric/isobaric pairs of lipids detected could not be resolved by IMS alone. Furthermore, the correlation of RT values between the two datasets showed high values of 0.99 for all investigated lipid classes, indicating that, indeed, the same species were identified in both datasets.
Our results indicate that PEGCs and mmPEGCs can be detected in C. elegans using multiple lipid extraction methods. This suggests that these lipids should also be detectable in C. elegans lipidomics datasets beyond the study by Boland et al. and the present work. We performed a retrospective analysis of several other datasets generated by us over the last years and were able to detect PEGCs and mmPEGCs in multiple datasets, e.g., Rackles et al. and Haeussler et al. (data not shown) (Haeussler et al., ; Rackles et al., ). Using the MassQL query for mmPEGCs within the MetaboScape software, we found mmPEGC(q39:0) and mmPEGC(q41:0) in both datasets. The same was true for several other unpublished datasets. Unfortunately, not many further public C. elegans lipidomics datasets were available to check for the presence of PEGCs or mmPEGCs beyond our own data. We investigated a lipid profiling dataset uploaded to the Metabolomics Workbench (Sud et al., ). Exact masses matching the theoretical masses of PEGCs and mmPEGCs could be annotated but, due to missing MS2 information, not further confirmed. Based on these results, PEGCs and mmPEGCs are part of the lipidome of C. elegans, and we suggest that they should be included in future investigations into sphingolipids.

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 27 kb)

Supplementary file2 (MB 28 kb)

Supplementary file3 (MB 57 kb)
Quality of life changes over time and predictors in a large head and neck patients’ cohort: secondary analysis from an Italian multi-center longitudinal, prospective, observational study—a study of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) head and neck working group

Head and neck carcinoma (HNC) is becoming increasingly common worldwide, and its incidence is anticipated to rise by 30%, accounting for an estimated 1.08 million new cancer cases annually by 2030. In particular, the increasing rates of human papillomavirus (HPV)-related tumors, which carry a better prognosis than their HPV-negative counterparts, have contributed to this high prevalence of HNC, especially in the United States of America and Western Europe. Currently, regardless of HPV status, evidence-based treatments are multimodal and may produce several physical complications and psychological distress, which may persist beyond treatment. The main treatment-related side effects are oral mucositis, taste impairment, salivary gland dysfunction, xerostomia, inability to chew and swallow, bacterial and fungal infections, neuropathy, trismus, and skin changes and reactions in the treated area. All these complications impair patients’ ability to perform daily activities, resulting in social withdrawal and mental and emotional distress, and affecting not only patients’ health-related (HR) quality of life (QoL) domains but also more general QoL domains. HRQoL may be described as a subjective and multi-dimensional concept related to one’s perception of well-being and satisfaction with one’s own health as well as daily life functioning, encompassing physical, psychological, and social functioning together with disease- and treatment-related symptoms and side effects. Thus, it may be considered a subset of the broader concept of QoL, defined as “an individual’s perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns”. Accordingly, we have decided to focus on the more comprehensive term QoL. As mentioned above, HNC patients face unique physical, emotional, and psychological challenges and life disruptions in comparison with patients with tumors at other sites. Hence, understanding QoL changes and patients’ needs during and after therapy is essential to manage the disease more effectively and to set up rehabilitative strategies for these patients. Longitudinal studies have reported that QoL usually decreases during radiation therapy (RT) and starts to improve 3–6 months after treatment, with a global amelioration one year after the end of RT but without a complete return to pre-treatment status, and with a pattern that varies depending on the QoL dimension evaluated. In addition, information about the clinical and treatment-related predictors of QoL improvement and recovery is so far not comprehensive. A multi-center longitudinal, prospective, observational study of consecutive HNC patients, treated at seven Italian Oncology Radiotherapy Departments, was conducted on behalf of the Italian Association of Radiotherapy and Clinical Oncology (AIRO) Head and Neck Working Group. The first endpoint was the Italian-language psychometric validation of the M.D. Anderson Symptom Inventory Head and Neck (MDASI-HN) questionnaire.
Here, we present the results of the secondary endpoints: (i) to investigate QoL in patients with HNC using the MDASI-HN module to measure symptom burden during RT and in the follow-up period (namely 1, 3, 6, and 12 months after completion of RT), and (ii) to analyze whether QoL can be predicted by socio-demographic and clinical characteristics.
Procedure

This was a multi-center prospective longitudinal observational study of consecutive HNC patients treated with RT at seven Italian Oncology Radiotherapy Departments from 2016 to 2019. Eligibility criteria were: squamous cell carcinoma of the head and neck (including oral cavity, oropharynx, larynx, and hypopharynx); age ≥ 18 years; Eastern Cooperative Oncology Group (ECOG) performance status < 2; and good knowledge of the Italian language. Exclusion criteria included a history of cognitive or psychiatric disorders, synchronous tumors, or previous RT to the head and neck region. Treatment details were previously described. Briefly, all patients were treated with (chemo)radiotherapy ((C)RT) with definitive or adjuvant (postoperative) intent, based on primary site and disease stage. If needed, the type of surgical approach and the induction chemotherapy regimen were chosen by the respective professionals. The study was approved by the Ethical Committee of Fondazione IRCCS Istituto Nazionale dei Tumori in Milan (prot. INT 29/15). All patients signed study-specific informed consent and answered the questionnaire after the physician visit. The questionnaire and the socio-demographic and clinical variables were collected at different time points: pre-treatment (before RT); weekly during RT (6–7 weeks); and in the follow-up period, specifically 1, 3, 6, and 12 months after RT.

Questionnaire and data collection

The MDASI-HN is a brief and reliable patient-reported outcome measure (PROM) questionnaire developed to investigate symptom severity, specifically general cancer-related symptoms (GC-RS), head and neck cancer-related symptoms (HNC-RS), and symptom interference with daily activities (SIDA). It contains 13 items representing the most common symptoms across all cancer types (such as fatigue, lack of appetite, and vomiting) and 9 items specific to HNC (such as problems with tasting food, choking or coughing, and difficulty swallowing or chewing). These items assess the presence and severity of symptoms during the previous 24 h, rating them on an 11-point scale from “not present” (0) to “as bad as you can imagine” (10). The last 6 items concern how these symptoms interfere with daily activities, including work, walking, and relationships with others; these assess how general and specific cancer symptoms interfere with patients’ activities during the past 24 h. These items are rated on a scale ranging from “do not interfere” (0) to “interfered completely” (10). Clinical and socio-demographic characteristics, including age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, human papillomavirus (HPV) status, RT setting (adjuvant vs. definitive), and concomitant systemic therapy, were also collected.

Statistical analysis

Data were analyzed using IBM SPSS Statistics version 25 (IBM, Armonk, NY, USA). Multi-level mixed-effects linear regression estimated the association of QoL with time as well as with clinical and socio-demographic variables. We opted for such a hierarchical approach as it (a) permits modeling random effects (intercepts and slopes) of time and (b) permits treating variables as nested within other variables; in particular, in the present study, the various timepoints are nested within each participant. We also investigated the missing and response rates at each timepoint as percentages (e.g., number of participants who responded at week x / total number of participants × 100).
The following variables were investigated: time (in weeks), age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, HPV status, RT setting, and concomitant systemic therapy. Last, we set alpha at p < 0.05.
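Although the analyses in this study were run in SPSS, the model structure described above translates directly into standard mixed-model syntax. The R/lme4 sketch below is offered only as an illustration of that structure, with random intercepts (and optionally random linear slopes) for time nested within patients and polynomial terms for time; the data frame, variable names, and simulated values are hypothetical.

```r
library(lme4)

set.seed(1)
# Hypothetical long-format data: one row per patient per timepoint (names are placeholders)
dat <- expand.grid(id = factor(1:30), week = c(1:7, 12, 24, 52))
dat$age  <- rep(sample(40:75, 30, replace = TRUE), times = 10)
dat$hpv  <- rep(sample(c("pos", "neg"), 30, replace = TRUE), times = 10)
dat$gcrs <- 3 + 0.2 * dat$week - 0.004 * dat$week^2 + rnorm(nrow(dat))

# Random intercepts only, with polynomial time trends and covariates
m1 <- lmer(gcrs ~ poly(week, 3) + age + hpv + (1 | id), data = dat, REML = FALSE)

# Random intercepts plus random linear slopes of time
m2 <- lmer(gcrs ~ poly(week, 3) + age + hpv + (1 + week | id), data = dat, REML = FALSE)

# Likelihood-ratio comparison of the two random-effects structures (stepwise selection)
anova(m1, m2)
```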
Data were analyzed using IBM SPSS Statistics version 25 (IBM, Armonk, NY, USA). Multi-level mixed-effects linear regression estimated the association between QoL and time as well as with clinical and socio-demographic variables. We opted for such a hierarchical approach as it (a) permits to model random effects (intercepts and slopes) of time and (b) permits to treat variables as nested within other variables; in particular, for the present study, the various timepoints are nested under each participant. We also investigated the missing and response rate at each timepoint as percentage (e.g., number of participants who responded at week x/total number of participants*100). The following variables were investigated: time (in weeks), age, sex, living situation, educational level, employment status, alcohol consumption and tobacco use, ECOG performance status, HPV status, RT setting, and concomitant systemic therapy. Last, we set alpha at p < 0.05.
Participants

From January 2016 to December 2019, 166 HNC patients were enrolled and received (C)RT. The response rate at the beginning of the study was high in all three dimensions; at time 1, it ranged from 93.37% (SIDA) to 95.78% (GC-RS). However, it slowly decreased from the last week of treatment, and the missing rate gradually increased in the follow-up period. At week 8, the missing rate was 31.93% for all three factors of the MDASI-HN, and it rose to 60.84% at week 52. Patient socio-demographic characteristics are shown in Table , while tumor and treatment characteristics are shown in Table . Most of the patients (79%) had locally advanced disease according to the TNM 7th edition.

Socio-demographic and clinical variables and changes of QoL over time

Considering the whole sample, a hierarchical linear model analysis was first conducted with the GC-RS factor as the dependent variable in a stepwise fashion. It indicated that the best model was the one including the linear, quadratic, and cubic effects of time, with both the intercepts and the slope of time (linear) as random effects. Subsequently, the other variables were also entered in the analyses. After entering them, the random effect of the slope was no longer significant and was hence excluded. Table shows the results of this model. A second analysis was conducted with the HNC-RS factor as the dependent variable in the same stepwise fashion as for the first dimension. The analyses showed that the best fitting model included the linear, quadratic, and cubic trends of time and the random effect of the intercepts (linear). Subsequently, the other variables were entered in the analyses. None of the variables considered reached significance except for time (Table ). A third analysis was conducted with SIDA as the dependent variable, again in a stepwise fashion. The analyses showed that the best fitting model included the three effects of time (linear, quadratic, and cubic) and the random effects of the intercepts and the slope (linear). As for the first factor, once the other variables were entered in the analyses, the random effect of the slope was no longer significant and was hence excluded. HPV status and the linear, quadratic, and cubic effects of time were significant (Table ). As Fig. a shows, for all three MDASI factors there was a trend whereby the scores increased from week 1 to week 8 (with some fluctuation between week 4 and week 8), followed by a decrease from week 8 to week 52. Considering that a higher score indicates lower QoL, the results indicated a worsening in the first eight weeks, followed by a slow return to a better QoL.

Changes of QoL over time: the role of HPV

Since patients diagnosed with oropharyngeal cancer outnumbered those with other tumor locations, the same analyses as above were conducted only for cases in which the tumor was located in the oropharynx, considering HPV-positive and HPV-negative patients separately. For HPV-negative patients, as can be seen in Table , for the GC-RS factor the best fitting model included the linear, quadratic, and cubic trends of time; all the other variables; and the random effect of the intercepts (linear). This model showed that the linear, quadratic, and cubic effects of time were all significant. For the HNC-RS factor, the best model was the one including the fixed effects of the linear, quadratic, and cubic trends of time and of all the other variables, plus the intercepts of time (linear) as a random effect. Again, the linear, quadratic, and cubic effects of time were all significant. The analysis conducted on the SIDA factor showed that the best model included the three effects of time (linear, quadratic, and cubic), all the other variables, and the random effect of the intercepts (linear). The model showed that the linear, quadratic, and cubic effects of time were all significant. In all three dimensions, none of the other variables considered reached significance.

For HPV-positive patients (Table ), for the first factor the best model included the fixed effects of the linear, quadratic, and cubic trends of time and of all the other variables, plus the intercepts of time (linear) as a random effect. The model showed that the linear, quadratic, and cubic effects of time were all significant. Further, the effects of gender, age at diagnosis, educational level, surgery, and alcohol use were also significant. The estimated marginal means indicated that male patients (M = 2.16, SE = 0.42), those with a higher educational level (M = 2.11, SE = 0.33), those who had surgery (M = 2.15, SE = 0.53), and those who use alcohol (M = 2.22, SE = 0.38) had lower scores than female patients (M = 3.30, SE = 0.37), those with a low educational level (M = 3.35, SE = 0.45), those who had not undergone surgery (M = 3.31, SE = 0.32), and those who never drink alcohol (M = 3.24, SE = 0.40). For the second factor, the best fitting model included the linear, quadratic, and cubic trends of time; all the other variables; and the random effect of the intercepts (linear). The model showed that the linear, quadratic, and cubic effects of time were all significant. The effects of educational level and ECOG status were also significant. Patients with a lower educational level (M = 5.38, SE = 0.47) and those fully active (ECOG 0) (M = 4.93, SE = 0.41) showed higher scores than those with a higher educational level (M = 3.56, SE = 0.35) and those restricted in physically strenuous activity (ECOG 1) (M = 4.01, SE = 0.43). For the third factor, the best model included the fixed effects of the linear, quadratic, and cubic trends of time and of all the other variables, plus the intercepts of time (linear) as a random effect. Again, the linear, quadratic, and cubic effects of time were all significant. The effects of gender, age at diagnosis, employment status, and alcohol use were also significant. Patients who were female (M = 3.70, SE = 0.62), employed (M = 3.76, SE = 0.68), or never used alcohol (M = 3.57, SE = 0.66) showed higher scores than males (M = 2.08, SE = 0.70), unemployed patients (M = 2.02, SE = 0.63), and alcohol users (M = 2.21, SE = 0.63). As Fig. b-d shows, HPV-positive patients showed higher scores, and thus worse QoL, during treatment, whereas HPV-negative patients had worse QoL in the follow-up period, specifically for the HNC-related symptoms and symptom interference with daily activities factors.
In this prospective longitudinal study, we used the PROM MDASI-HN to detect patients' symptom burden and to implement interventions and therapy adjustments specific to each patient. A 3-factor solution, including GC-RS, HNC-RS, and SIDA, was considered, and a series of linear mixed model analyses were conducted. In both the GC-RS and HNC-RS domains, time was the only significant predictor of patients' QoL, whereas for SIDA, time and HPV status were significant, with HPV-positive patients showing worse QoL than HPV-negative ones.

HNC patients' QoL clearly declined during RT (Fig. a), especially for symptoms specific to HNC, such as problems with mucus and difficulty in swallowing, which proved particularly painful; nonetheless, QoL slowly improved once treatment ended, which is consistent with the pattern reported in other studies. Indeed, it is plausible that symptom severity is worse during RT because of the presence of the tumor as well as short-term side effects of therapy, which consequently affect patients' lives, whereas after therapy completion there should be physical relief due to tumor size reduction and, thus, an improvement in patients' perception of their quality of life. However, it is also important to consider findings in which side effects and problems persisted up to the 1-year follow-up and even beyond. In these cases, the sequelae were related to specific HNC-related symptoms, such as dry mouth, sticky saliva, or sensory dysfunction, showing that although general and global QoL recovered, the same did not happen for specific HNC symptoms. For instance, Oskam and colleagues found that the decrease in QoL related to HNC-specific symptoms persisted 8 to 11 years post-diagnosis. A possible explanation is that these problems are long-term side effects of treatment, which appear only years after therapy, whereas other symptoms, such as nausea or pain, are caused by the presence of the tumor or by treatment administration. Among the studies found, only a few employed the 28-item version of the M.D. Anderson Symptom Inventory Head and Neck module (MDASI-HN), which we used to assess symptom severity during RT as well as in the follow-up period. Most previous research used QoL measures that were longer than the MDASI-HN, although measuring similar dimensions; thus, future research could use this questionnaire to assess patients' QoL while avoiding extra burden for them.

The same analyses were conducted among oropharyngeal cancer patients, separated into HPV-positive and HPV-negative groups. For HPV-negative patients, only time predicted patients' QoL. Among HPV-positive patients, time was significant for all three factors. For the GC-RS factor, being female, having undergone surgery, having a low educational level, or never having drunk alcohol was associated with worse QoL. Moreover, older patients were likely to have decreased QoL. It seems understandable that patients who had surgery may be debilitated and thus have low QoL; similarly, patients with a low educational level may engage in unhealthy behaviors and have fewer resources to cope with their disease. For the HNC-RS factor, patients restricted in physically strenuous activity (ECOG 1) or with a high educational level had better QoL than fully active patients (ECOG 0) or those with a lower educational level. As for ECOG, our results appear contradictory at first glance. We need to underline that a good performance status is generally classified as ECOG 0 or 1, with the two often grouped together, and ECOG 0–1 is linked to better values on several QoL scales. A possible explanation of our finding is that, for patients with no functional impairment and a premorbid lifestyle corresponding to ECOG 0 before starting RT, any impact on QoL is perceived more strongly, since the difference from baseline conditions is greater than for patients with ECOG 1. For SIDA, older patients, female patients, those who were employed, and those who never used alcohol showed worse QoL. Unexpectedly, patients who never drink alcohol had worse QoL; this result needs further exploration, considering that previous studies have focused on the prognostic role of alcohol use in the development of HNC rather than on its specific role during cancer treatment.

Comparing the QoL trends of HPV-positive and HPV-negative patients over time (Fig. b-d), it can be seen that although HPV-positive patients had worse QoL during treatment and immediately after it, especially for the GC-RS and HNC-RS factors, their QoL levels increased in the follow-up period; on the other hand, HPV-negative patients had worse QoL in the weeks after concluding treatment, that is, in the follow-up period. Our results are in agreement with the literature. Indeed, the population of patients with HPV-related oropharyngeal cancer tends to be younger and healthier, with very good baseline QoL, compared with individuals with HPV-unrelated HNC. However, HPV-positive cancer patients are more likely to suffer a deterioration of their QoL during treatment. In a sub-study conducted within a prospective phase 3 randomized trial of concurrent standard radiation versus accelerated radiation plus cisplatin for locally advanced head and neck carcinoma (NRG Oncology RTOG 0129), p16-positive oropharyngeal cancer (OPC) patients had better QoL than p16-negative patients before treatment and at 1 year after treatment. However, QoL/performance status decreased more significantly from pretreatment to the last 2 weeks of treatment in the p16-positive group than in the p16-negative group. Again, in a sub-analysis of the randomized Trans-Tasman Radiation Oncology Group (TROG) 02.02 trial (HeadSTART), HPV-positive patients showed a more dramatic QoL drop with concurrent chemoradiation compared with HPV-negative ones.

The current study has some limitations that should be noted and that may affect the generalization of the results. First, due to dropout, the number of patients who completed the questionnaire up to the last time point was smaller than the number who answered at the beginning of the study. Second, our sample consisted mainly of male patients, with a prevalence of oropharyngeal tumors. Despite these limitations, using the MDASI-HN, a valid and short PROM, with a timeline that included both the treatment and the follow-up period proved fundamental to gaining a deeper understanding of patients' QoL. Future research should give further attention to treatment sequelae specific to HNC, especially in the long term; extending the follow-up period would allow a better understanding of symptom trajectories and their interference with daily life, considering that HNC-specific symptoms may persist even years after treatment ends. Furthermore, it seems important to consider other psycho-social variables (for instance, gender and financial toxicity), which may have an impact on treatment outcomes as well as on patients' QoL, and to analyze their trajectories over time, in order to understand how these variables interact with patients' physical and psychological well-being. This would help develop more specific treatments and interventions that answer patients' needs.
Although QoL is an important indicator of healthcare system quality and is included in the assessment of treatment benefits, some of its aspects are often underdiagnosed and thus undertreated by physicians. Moreover, clinical as well as socio-demographic variables may have an impact on patients' QoL. Hence, PROMs should be included as a standard procedure in the assessment of patients' condition, allowing deeper insight into their disease experience and reducing the risk of misinterpreting their responses.
MEDiCINe: Motion Correction for Neural Electrophysiology Recordings | 20838d41-a151-4008-a27b-f29853732560 | 11896784 | Physiology[mh] | Recent advances in high-density microelectrode arrays such as Neuropixels have allowed neurophysiologists to record from hundreds of neurons simultaneously. Such data scale necessitates automatic isolation and tracking of individual neurons throughout a recording session, a process called “spike sorting.” One challenge for automated spike-sorting algorithms is relative motion between the electrodes and the brain, which must be corrected to stabilize the recording. We introduce a method for estimating such motion in neural recordings. Our method outperforms existing motion estimation methods and produces more accurate spike sorting on a benchmark of simulated datasets with known ground-truth motion. Our method also performs well on primate neurophysiology datasets. We open-source our method and instructions for integrating it into common spike-sorting pipelines.
Electrophysiology studies often involve recording neural activity with laminar microelectrode arrays inserted in the brain. This data is processed to compute putative spike times of individual neurons throughout a recording session, a process termed “spike sorting.” Historically, spike sorting was primarily a manual or semimanual process . However, recent advances in recording scale afforded by high-density laminar microelectrode arrays such as Neuropixels probes have made manual spike sorting prohibitively time-consuming. This has necessitated the emergence of automated spike-sorting algorithms . Automating spike sorting is challenging for several reasons, one of which is that the laminar microelectrode array (hereafter referred to as “array”) may move relative to its surrounding neural tissue . This motion can be caused by a variety of factors, such as pulsation, changes in intracranial pressure, decompression of neural tissue after inserting an array, and instability of the mechanical apparatus holding the array. Motion is typically more extreme in nonhuman primate (NHP) and human recordings than recordings in rodents and other small animals . Estimating and correcting for motion is an important step in spike-sorting pipelines. Improvements in motion estimation yield better automatic spike sorting, both yielding more usable neurons for analysis and saving the researcher time manually curating spike-sorting results . Electrophysiology motion estimation is challenging for several reasons. First, the motion can exhibit a range of statistics, including slow drift, high-frequency noise, and discrete jumps. Second, the motion may depend on position along the array, varying as a function of depth in the brain. Third, the neural activity itself may be nonstationary: Neuron firing rates may fluctuate over time, neurons may be gained or lost throughout the session due to motion or cell death, and the relative motion between the array and the brain may cause the recorded waveform shape of single neurons to change. Existing state-of-the-art approaches to motion estimation begin by discretizing the data into temporal bins . For each temporal bin, they compute a histogram of neural activity as a function of depth along the array and neural activity features such as waveform amplitude. An estimate of the motion is computed to maximize the correlations across pairs of these histograms. This computation may involve comparing each temporal bin to a particular reference bin or may involve comparing each temporal bin only to its nearest neighbors . While these approaches work well for some recordings, there is broad agreement in the field that existing approaches struggle for some recordings and accurate motion estimation is a common difficulty for spike sorting. We introduce MEDiCINe ( M otion E stimation by Di stributional C ontrastive I nference for Ne urophysiology), an approach for motion estimation that infers motion by fitting a constrained model of the neural data. We first consider the generative process of the neural data, which has two components: (1) neural activity consisting of local voltage modulations, such as spikes or LFP power, coming from neurons that are unmoving in the brain tissue, and (2) motion of the array relative to the brain. We then formulate a nonparametric model of neural activity that captures this generative structure. We fit this model to neural data using gradient descent. We found that this approach works on a wide range of datasets without hyperparameter tuning. 
MEDiCINe outperforms existing methods on an extensive suite of simulated datasets with known ground-truth motion and a variety of motion and instability statistics. MEDiCINe also works well on all of our NHP Neuropixels recordings and on rodent Neuropixels recordings with experimentally imposed motion. Lastly, we open-source MEDiCINe, usage instructions, examples integrating MEDiCINe with SpikeInterface and Kilosort4 tools for spike sorting, and data and code for reproducing our results.
MEDiCINe method

Consider a dataset of N spikes (putative action potentials) extracted from an electrophysiology recording session. Represent the dataset as a set of triples $[(t_1, d_1, a_1), (t_2, d_2, a_2), \ldots, (t_N, d_N, a_N)]$, where $t_i$ is the time at which spike $i$ occurred, $d_i$ is the estimated depth along the laminar array at which spike $i$ was detected, and $a_i$ is the amplitude of spike $i$. If the recording has low electrophysiology motion through time, then marginalizing this dataset over time would yield a sparse distribution in depth-amplitude space, under the assumption that individual neurons have stable depth and spike amplitude in the brain. In contrast, if the recording has high motion, then marginalizing this dataset over time would not yield a sparse dataset, because spikes coming from a single neuron would be spread out over depth. Leveraging this observation, the key intuition underlying MEDiCINe is to learn a motion function that maximizes the sparsity of the time-marginalized dataset distribution. The following architecture and objective function of MEDiCINe formulate this sparsity maximization in a way that facilitates computationally efficient optimization.

To operationalize this, MEDiCINe learns two differentiable functions:

Motion function $M : (d, t) \to \Delta d$

Classification network $C : (d, a) \to p \in [0, 1]$

These functions compose to form a probability function $P$ over the joint space [time, depth, amplitude]:

$P(t, d, a) = C(d + M(d, t), a)$

$P$ is trained to classify whether an input spike comes from the dataset or from a uniform null distribution over depth and amplitude. This pressures the motion-corrected dataset spikes $d + M(d, t)$ to form a sparse distribution that is highly discriminable from a uniform null.

Specifically, $M$ and $C$ are fit using gradient descent. For each step, we draw a batch $[(t_{i_1}, d_{i_1}, a_{i_1}), \ldots, (t_{i_K}, d_{i_K}, a_{i_K})]$ of $K$ random samples from the spike dataset and a batch $[(\hat{t}_{j_1}, \hat{d}_{j_1}, \hat{a}_{j_1}), \ldots, (\hat{t}_{j_K}, \hat{d}_{j_K}, \hat{a}_{j_K})]$ of $K$ random samples from a uniform distribution with the same ranges as the spike dataset. We then apply $P$ to each of these batches to get $[p_{i_1}, \ldots, p_{i_K}]$ and $[\hat{p}_{j_1}, \ldots, \hat{p}_{j_K}]$, where $p_{i_l} = P(t_{i_l}, d_{i_l}, a_{i_l})$ and $\hat{p}_{j_l} = P(\hat{t}_{j_l}, \hat{d}_{j_l}, \hat{a}_{j_l})$. We then compute the loss function as follows:

$L = \sum_{l=1}^{K} \log(p_{i_l}) + \log(1 - \hat{p}_{j_l})$

This is the binary cross-entropy loss where the data samples are labeled 1 and the uniform samples are labeled 0. We backpropagate $L$ to update the parameters in $M$ and $C$. This loss function pressures $P$ to discriminate dataset samples from uniform samples, hence pressuring the motion function to make the time-marginalized dataset distribution after motion adjustment sparse.

We parameterize the classification network $C$ by a multilayer perceptron with two hidden layers. We parameterize the motion function $M$ as the linear interpolation of a matrix of shape [depth bins, time bins] discretizing the space of depth and time. The entries of this matrix are $\Delta d$ estimates. Using multiple depth bins allows the motion function to model motion that varies across depth.

Note that this method is not specific to datasets with only [time, depth, amplitude] representations of spikes. It can apply more generally to any dataset of neural events that has a [time, depth, feature vector] representation. This includes spike data with spike shape features beyond amplitude and LFP data with power spectrum features. Note also that this method is not specific to motion only in depth. By letting the motion function return a three-vector [Δx, Δy, Δz], it could estimate motion in three-dimensional space.

Code and data accessibility

To use MEDiCINe, please visit our MEDiCINe website https://jazlab.github.io/medicine . That website includes demos and instructions for using MEDiCINe on your own data, including interfacing with Kilosort4 and SpikeInterface. To reproduce the results in this work, please visit https://github.com/jazlab/medicine_paper for software, data, and instructions for reproducing the results in this manuscript.
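The PyTorch sketch below illustrates the objective just described: a motion matrix interpolated over (depth, time) and a small classification MLP, trained with a binary cross-entropy loss to separate dataset spikes from uniform null samples. It is a minimal illustration of the idea, not the open-source implementation linked above; the network size, optimizer settings, assumed depth range, use of logits instead of an explicit sigmoid, and the omission of details such as the sinusoidal input features, temporal smoothing, and noise annealing are simplifying assumptions.

```python
# Minimal sketch of the MEDiCINe objective (illustrative; see the official repository for the real code).
import torch
import torch.nn as nn

class MotionFunction(nn.Module):
    """Motion offsets on a [depth_bins, time_bins] grid, bilinearly interpolated at query points."""
    def __init__(self, depth_bins=2, time_bins=1800, max_motion_um=400.0):
        super().__init__()
        self.offsets = nn.Parameter(torch.zeros(depth_bins, time_bins))
        self.max_motion_um = max_motion_um

    def forward(self, depth01, time01):
        # depth01, time01 in [0, 1]; grid_sample expects coordinates in [-1, 1], ordered (x=time, y=depth).
        grid = torch.stack([time01, depth01], dim=-1) * 2 - 1
        grid = grid.view(1, 1, -1, 2)
        field = self.offsets.view(1, 1, *self.offsets.shape)
        delta = nn.functional.grid_sample(field, grid, align_corners=True).view(-1)
        return self.max_motion_um * torch.tanh(delta)   # bounded motion estimate (um)

class Classifier(nn.Module):
    """MLP mapping (motion-corrected depth, amplitude) to the logit of being a real spike."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, depth01, amp01):
        return self.net(torch.stack([depth01, amp01], dim=-1)).squeeze(-1)

def training_step(motion, clf, optimizer, times, depths, amps, depth_range_um):
    """One gradient step: real spikes labeled 1, uniform null samples labeled 0."""
    k = times.shape[0]
    null_times, null_depths, null_amps = torch.rand(k), torch.rand(k), torch.rand(k)

    def logits(t01, d01, a01):
        d_corrected = d01 + motion(d01, t01) / depth_range_um  # apply estimated motion (normalized units)
        return clf(d_corrected, a01)

    loss = nn.functional.binary_cross_entropy_with_logits(
        torch.cat([logits(times, depths, amps), logits(null_times, null_depths, null_amps)]),
        torch.cat([torch.ones(k), torch.zeros(k)]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data (all values normalized to [0, 1]).
motion, clf = MotionFunction(), Classifier()
optimizer = torch.optim.Adam(list(motion.parameters()) + list(clf.parameters()), lr=5e-4)
times, depths, amps = torch.rand(4096), torch.rand(4096), torch.rand(4096)
for _ in range(100):
    training_step(motion, clf, optimizer, times, depths, amps, depth_range_um=3840.0)
```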
To evaluate MEDiCINe and compare it to existing motion estimation methods, we quantitatively benchmarked it on a suite of simulated datasets with known ground-truth motion. We also qualitatively assessed its performance on NHP electrophysiology datasets without known ground-truth motion.

Simulated datasets

To compare the performance of MEDiCINe with existing motion estimation methods, we generated a suite of 384 simulated neurophysiology recording datasets with controlled ground-truth motion and evaluated both MEDiCINe and existing motion estimation methods on these datasets. To generate the simulated datasets, we enlarged a preexisting suite of simulated datasets to include a wide variety of motion and neuron stability statistics that occur in neurophysiology data. We used the MEArec electrophysiology data simulator to generate one 30 min Neuropixels electrophysiology session for each combination of the following dataset parameters:

Linear drift of the relative depth between the array and brain. Two options: (1) no linear drift or (2) linear drift of 0.1 μm/s.

Random walk of the relative depth between the array and brain with Gaussian steps and 1 s frequency. Three options: (1) no random walk, (2) random walk with standard deviation of 1 μm/s, or (3) random walk with standard deviation of 2 μm/s.

Discrete random jumps of the relative depth between the array and brain. Two options: (1) no jumps or (2) jump times sampled from a Poisson process with a rate of 100 s and jump displacements sampled from a uniform distribution over [−50 μm, 50 μm].

Number of neurons. Two options: (1) 20 neurons or (2) 100 neurons.

Distribution of neuron density over depth. Two options: (1) neurons uniformly distributed over depth or (2) neurons distributed bimodally over depth from a mixture of two Gaussians with means at 15 and 85% of the array length and standard deviations of 10% of the array length.

Firing rate stability. Four options: (1) constant firing rates randomly uniformly sampled between 1 and 20 Hz, (2) periodic firing rates that are synchronous over all neurons with a period of 4 min and a mean of 1.5 Hz, (3) periodic firing rates that are asynchronous over all neurons, or (4) half of the neurons have constant firing rates, while the other half appear or disappear at random times in the session with linearly ramping firing rate between 0 Hz and a random maximum value between 1 and 20 Hz.

Depth dependency of motion. Two options: (1) depth-independent (rigid) motion or (2) depth-dependent (nonrigid) motion that varies linearly over depth with coefficient 1 for the deepest electrode and 0.5 for the shallowest electrode.

From these datasets we extracted spike times, estimated depths, and amplitudes using the monopolar triangulation method. Spike raster plots of three example simulated datasets and the results of applying MEDiCINe to them are shown in the corresponding figure. We evaluated the following five motion estimation methods on the extracted spikes from each of our 384 simulated datasets:

Kilosort. The "datashift" motion estimation function from Kilosort4 with default parameters, which is currently the most recent motion estimation method in the Kilosort family.

DREDge. The official DREDge implementation in the SpikeInterface library version 0.101.2 with default parameters, currently considered the state-of-the-art motion estimation method.

DREDge Rigid. A modification of DREDge that enforces rigid motion as a function of depth and uses center-of-mass depth estimation instead of monopolar triangulation. This is implemented as the "rigid_fast" method in SpikeInterface version 0.101.2.

MEDiCINe Rigid. Our MEDiCINe method with a single depth bin, enforcing rigid motion as a function of depth.

MEDiCINe. Our MEDiCINe method with multiple depth bins. In practice, we used two depth bins, which is the same number as DREDge uses on our simulated datasets with the default parameters.

Extended Data Figure 2-1. Results on simulated data conditioned on parameters: motion estimation results conditioned on each parameter of variation of the simulated dataset suite. Error bars show the 95% confidence interval of the mean; the left column shows mean absolute error and the right column shows method ranking.

Extended Data Figure 2-2. Benchmark violin plots: violin plot representations of the results in Figure 2A and Figure 2B.

Extended Data Figure 2-3. Benchmark method rankings: for each simulated dataset, the ranking (1-5) of each of the five motion estimation methods in terms of mean absolute motion estimation error (A) and, for the datasets with spike sorting, relative sorting inaccuracy (B).

Extended Data Figure 2-4. Failure cases: for each method, motion estimation results for the simulated dataset on which the difference between that method and the best method is greatest, representing its worst failure case in our suite of simulated datasets.

Extended Data Figure 2-5. Spike sorting accuracy: accuracy as a function of unit (sorted by accuracy) for Kilosort4 sorting results for each motion estimation method on each of the 40 datasets for which we ran spike sorting.

MEDiCINe implementation

We parameterized the motion function of MEDiCINe by an array of size [depth_bins, time_bins]. For multiple depth bins, the depth bins uniformly divided the range from the deepest to the shallowest detected spike. We let time_bins equal the ceiling of the number of seconds in the dataset, allowing the model to capture motion at 1 s resolution. We also applied a triangular temporal smoothing kernel with 30 s support. We found this temporal resolution and smoothness to be sufficiently fine to capture motion well in all our datasets. To compute the change in depth at a given time and depth, we computed the linear interpolation of the temporally smoothed motion array for that time and depth. We then applied a scaled hyperbolic tangent function to bound the motion by ±400 μm. We parameterized the classification network of MEDiCINe by a multilayer perceptron with 14 input units, two fully connected hidden layers each with 256 units, and one output unit. The activation function was ReLU. We applied a sigmoid function to the output to force it to be a probability in [0, 1]. Given a depth and amplitude, to compute the probability of a corresponding spike, we did the following:

1. Normalize both the depth and amplitude to lie in [0, 1], given the depths and amplitudes of all spikes in the dataset.

2. Compute six depth features by taking sin(x · depth) for x in [1, 2, 4, 8, 16, 32]. Similarly, compute six amplitude features.

3. Concatenate the depth and amplitude with their features into a 14-dimensional vector.

4. Apply the MLP to this vector.

We added the sinusoidal features as inputs to the network because they helped optimization by allowing the MLP to more easily learn high-frequency modulations. In our experiments, these features improved optimization convergence runtime by about a factor of 10. We implemented the model in PyTorch and trained it with the Adam optimizer with a learning rate of 5 · 10−4 and gradient clipping of 1. We used batch size 8,192, where each batch had 4,096 spikes randomly sampled from the dataset and 4,096 spikes randomly sampled from a uniform distribution with the same depth, amplitude, and time bounds as the spike dataset. We trained for 10,000 gradient steps. To reduce the chance of converging to a local minimum, we added noise to the motion function output early in training. At the start of training, this noise had standard deviation equal to 0.1 times the depth range of the data. This was linearly annealed to 0 throughout the first 2,000 gradient steps of training.

Benchmark results

We evaluated performance of all motion estimators using a standard measure of the median-corrected mean absolute error with respect to the ground-truth motion. Specifically, for each dataset, we selected 11 depth levels evenly spaced from the deepest to the shallowest recorded spike. For each of these depth levels and each model, we computed the ground-truth motion $M$ through time at 1 s intervals and the motion $\tilde{M}$ estimated by the model at 1 s intervals. For each level, we computed the median-corrected absolute difference $|M - \tilde{M} - \mathrm{median}(M - \tilde{M})|$. The model's mean absolute error is the average of this quantity over time and depth levels. By this metric, MEDiCINe Rigid and MEDiCINe significantly outperformed all other methods on average. When conditioning these results on each factor of variation of the datasets, MEDiCINe always performed at least as well as all existing methods (Extended Data). These results are not due to outlier effects (Extended Data). On a per-dataset basis, MEDiCINe Rigid and MEDiCINe also ranked highest on average among all the methods (Extended Data) and did not have extreme failure modes (Extended Data).

Prior work has shown that better motion estimation correlates with better spike sorting. To verify this, we selected a random set of 40 of our simulated datasets to evaluate spike sorting. For each of these datasets and each motion estimation method, we corrected for the estimated motion in the neural data using Kriging interpolation and ran Kilosort4 spike sorting (disabling the built-in motion correction step). To evaluate sorting quality, we computed a standard metric of spike-sorting accuracy. For any motion estimation method, we define the relative spike-sorting inaccuracy on a dataset $A$ to be

$\mathrm{Inaccuracy}_{\mathrm{rel}}(A) = \mathrm{Inaccuracy}(A) - \min_{B \in \mathrm{estimators}} \mathrm{Inaccuracy}(B)$, where $\mathrm{Inaccuracy} = \sum_{1 \le i \le N_{\mathrm{neurons}}} (1 - \mathrm{Accuracy}_i)$.

MEDiCINe Rigid and MEDiCINe had lower relative spike-sorting inaccuracy than existing methods (Extended Data).
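As a concrete illustration of the evaluation just described, the sketch below computes the median-corrected mean absolute error between ground-truth and estimated motion traces sampled at 1 s intervals across several depth levels, along with the relative sorting inaccuracy. It is a straightforward reading of the formulas above, written as standalone NumPy functions for clarity; the toy values are illustrative.

```python
# Median-corrected mean absolute error and relative sorting inaccuracy (illustrative sketch).
import numpy as np

def median_corrected_mae(ground_truth, estimated):
    """Both inputs have shape (num_depth_levels, num_seconds), sampled at 1 s intervals."""
    residual = ground_truth - estimated
    # Subtract each depth level's median residual, since a constant offset is unobservable.
    corrected = residual - np.median(residual, axis=1, keepdims=True)
    return float(np.mean(np.abs(corrected)))   # average over time and depth levels

def relative_inaccuracy(per_neuron_accuracy_by_method):
    """Each method's summed (1 - accuracy) over neurons, minus the best method's value."""
    inaccuracy = {m: float(np.sum(1.0 - np.asarray(acc)))
                  for m, acc in per_neuron_accuracy_by_method.items()}
    best = min(inaccuracy.values())
    return {m: v - best for m, v in inaccuracy.items()}

# Toy example.
gt = np.zeros((11, 1800))
est = gt + 3.0   # a constant 3 um offset is removed by the median correction
print(median_corrected_mae(gt, est))                              # 0.0
print(relative_inaccuracy({"A": [0.9, 0.8], "B": [0.95, 0.9]}))   # {'A': 0.15, 'B': 0.0}
```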
Neurophysiology datasets

To test MEDiCINe in practice, we used four of our primate Neuropixels sessions with motion artifacts that we found difficult to estimate and correct using existing methods. These data were collected by acute Neuropixels recording of the dorsomedial frontal cortex of awake behaving rhesus macaque monkeys. All experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the Committee of Animal Care at the Massachusetts Institute of Technology. The recordings exhibited a range of real-world motion and instability conditions. We suspect the primary cause of motion artifacts is movement of the surface of the brain within the recording craniotomy due to changes in intracranial pressure when the animal moves its body. We used monopolar triangulation spike localization and applied MEDiCINe to the data. We found that MEDiCINe performed well under these conditions, qualitatively better than existing methods on these datasets (Extended Data).

Extended Data Figure 3-1. Non-MEDiCINe results for NHP datasets: results for all non-MEDiCINe methods for each of the NHP datasets shown in Figure 3.

Extended Data Figure 3-2. Results for rodent datasets: (A) spike raster for one rodent dataset, with motion artifacts beginning at 600 s caused by intentional movement of the micromanipulator; (B) estimated motion from each method (colors) and the motion of the micromanipulator (black) in a time window around the micromanipulator movement; (C) mean absolute error of the motion estimated by each method compared with the micromanipulator movement.

In addition to our NHP datasets, we also benchmarked MEDiCINe and existing methods on a rodent Neuropixels dataset with motion imposed by controlled movements of the micromanipulator holding the probe during recording. On these datasets, we found MEDiCINe to perform at least as well as existing motion estimation methods when compared with the ground-truth movement of the micromanipulator (Extended Data). However, all methods performed similarly on these data. We believe all methods had significant error with respect to the micromanipulator because the micromanipulator motion does not perfectly reflect the ground-truth motion between the probe and the brain tissue. Specifically, elasticity of the brain tissue and friction between the tissue and the probe cause the micromanipulator movements to be attenuated and smoothed with respect to the brain tissue.
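Once a motion estimate is available, from MEDiCINe or any of the methods above, a common way to use it downstream is to subtract the interpolated displacement from each spike's depth before clustering, or to interpolate the raw data itself. The sketch below shows the simpler depth-registration step for a [depth_bins, time_bins] motion matrix using SciPy. The bin-center construction, variable names, and toy values are assumptions for illustration; in practice, tools such as SpikeInterface and Kilosort4 perform this correction internally.

```python
# Registering spike depths with an estimated motion field (illustrative sketch).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def correct_spike_depths(spike_times_s, spike_depths_um, motion_um,
                         depth_bin_centers_um, time_bin_centers_s):
    """motion_um has shape (num_depth_bins, num_time_bins); returns motion-corrected depths."""
    interp = RegularGridInterpolator(
        (depth_bin_centers_um, time_bin_centers_s), motion_um,
        bounds_error=False, fill_value=None)   # fill_value=None extrapolates at the edges
    displacement = interp(np.column_stack([spike_depths_um, spike_times_s]))
    return spike_depths_um - displacement      # subtract estimated motion to register depths

# Toy example: rigid 20 um drift over a 100 s recording, two depth bins.
time_centers = np.arange(0.0, 100.0, 1.0)
depth_centers = np.array([1000.0, 3000.0])
motion = np.tile(np.linspace(0.0, 20.0, time_centers.size), (2, 1))
spike_times = np.array([0.0, 50.0, 99.0])
spike_depths = np.array([1500.0, 1510.0, 1520.0])  # one neuron, apparently drifting with the probe
print(correct_spike_depths(spike_times, spike_depths, motion, depth_centers, time_centers))
# Approximately [1500, 1500, 1500]: the drift is removed.
```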
To compare the performance of MEDiCINe with existing motion estimation methods, we generated a suite of 384 simulated neurophysiology recording datasets with controlled ground-truth motion and evaluated both MEDiCINe and existing motion estimation methods on these datasets. To generate the simulated datasets, we enlarged a preexisting suite of simulated datasets to include a wide variety of motion and neuron stability statistics that occur in neurophysiology data. We used the MEArec electrophysiology data simulator to generate one 30 min Neuropixels electrophysiology session for each combination of the following dataset parameters: Linear drift of the relative depth between the array and brain. Two options: (1) no linear drift or (2) linear drift of 0.1 μm/s. Random walk of the relative depth between the array and brain with Gaussian steps and 1 s frequency. Three options: (1) no random walk, (2) random walk with standard deviation of 1 μm/s, or (3) random walk with standard deviation of 2 μm/s. Discrete random jumps of the relative depth between the array and brain. Two options: (1) no jumps or (2) jump times sampled from a Poisson process with a rate of 100 s and jump displacements sampled from a uniform distribution over [−50 μ m , 50 μ m ]. Number of neurons. Two options: (1) 20 neurons or (2) 100 neurons. Distribution of neuron density over depth. Two options: (1) neurons uniformly distributed over depth or (2) neurons distributed bimodally over depth from a mixture of two Gaussians with means at 15 and 85% of the array length and standard deviations of 10% of the array length. Firing rate stability. Four options: (1) constant firing rates randomly uniformly sampled between 1 and 20 Hz, (2) periodic firing rates that are synchronous over all neurons with a period of 4 min and a mean of 1.5 Hz, (3) periodic firing rates that are asynchronous over all neurons, or (4) half of the neurons have constant firing rates, while the other half appear or disappear at random times in the session with linearly ramping firing rate between 0 Hz and a random maximum value between of 1 and 20 Hz. Depth dependency of motion. Two options: (1) depth-independent (rigid) motion or (2) depth-dependent (nonrigid) motion that varies linearly over depth with coefficient 1 for the deepest electrode and 0.5 for the shallowest electrode. From these datasets we extracted spike times, estimated depths, and amplitudes using the monopolar triangulation method . shows spike raster plots of three example simulated datasets and for results of MEDiCINe applied to these datasets. We evaluated the following five motion estimation methods on the extracted spikes from each of our 384 simulated datasets: Kilosort. The “datashift” motion estimation function from Kilosort4 with default parameters, which is currently the most recent motion estimation in the Kilosort family. DREDge. The official DREDge implementation in the SpikeInterface library version 0.101.2 with default parameters, currently considered the state-of-the-art motion estimation method . DREDge Rigid. A modification of DREDge that enforces rigid motion as a function of depth and uses center-of-mass depth estimation instead of monopolar triangulation . This is implemented as the “rigid_fast” method in SpikeInterface version 0.101.2. MEDiCINe Rigid. Our MEDiCINe method with a single depth bin, enforcing rigid motion as a function of depth. MEDiCINe. Our MEDiCINe method with multiple depth bins. 
In practice, we used two depth bins, which is the same number as DREDge uses on our simulated datasets with the default parameters. 10.1523/ENEURO.0529-24.2025.f2-1 Figure 2-1 Results on Simulated Data Conditioned on Parameters. Motion estimation model results conditioned on each parameter of variation of simulated dataset suite. Errorbars show 95% confidence interval of the mean. Left column shows mean absolute error, and right column shows method ranking. Download Figure 2-1, TIF file . 10.1523/ENEURO.0529-24.2025.f2-2 Figure 2-2 Benchmark Violin Plots (A) A violin plot representation of the results in Figure 2-A. (B) A violin plot representation of the results in Figure 2-B. Download Figure 2-2, TIF file . 10.1523/ENEURO.0529-24.2025.f2-3 Figure 2-3 Benchmark method Rankings (A) For each simulated dataset, we compute the ranking (1-5) of each of the 5 motion estimation methods on that dataset in terms of mean absolute motion estimation error. This ranking is shown on the y-axis. (B) For each simulated dataset for which we run spike sorting, we compute the ranking (1 - 5) of each of the 5 motion estimation methods on that dataset in terms of relative sorting inaccuracy. This ranking is shown on the y-axis. Download Figure 2-3, TIF file . 10.1523/ENEURO.0529-24.2025.f2-4 Figure 2-4 Failure Cases (A) Kilosort motion estimation results for the simulated dataset for which the difference between Kilosort and the best method is greatest. This represents the worst failure case for Kilosort in our suite of simulated datasets. (B) - (E) Corresponding failure cases for the other methods. Download Figure 2-4, TIF file . 10.1523/ENEURO.0529-24.2025.f2-5 Figure 2-5 Spike Sorting Accuracy. Accuracy as a function of unit (sorted by accuracy) for Kilosort4 sorting results for each motion estimation method on each of the 40 datasets for which we ran spike sorting. Download Figure 2-5, TIF file .
We parameterized the motion function of MEDiCINe by an array of size [ depth_bins, time_bins ]. For multiple depth bins, the depth bins uniformly divided the range from the deepest to the shallowest detected spike. We let time_bins equal the ceiling of the number of seconds in the dataset, allowing the model to capture motion at 1 s resolution. We also applied a triangular temporal smoothing kernel with 30 s support. We found this temporal resolution and smoothness to be sufficiently fine to capture motion well in all our datasets. To compute the change in depth at a given time and depth, we computed the linear interpolation of the temporally smoothed motion array for that time and depth. We then applied a scaled hyperbolic tangent function to bound the motion by ±400 μ m . We parameterized the activity network of MEDiCINe by a multilinear perceptron with 14 input units, two fully connected hidden layers each with 256 units, and one output unit. The activation function was ReLU. We applied a sigmoid function to the output to force it to be a probability in [0, 1]. Given a depth and amplitude, to compute the probability of a corresponding spike, we did the following: Normalize both the depth and amplitude to lie in [0, 1], given the depths and amplitudes of all spikes in the dataset. Compute six depth features by taking sin ( x ⋅ depth ) for x in [1, 2, 4, 8, 16, 32]. Similarly, compute six amplitude features. Concatenate the depth and amplitude with their features into a 14-dimensional vector. Apply the MLP to this vector. We added the sinusoidal features as inputs to the network because they helped optimization by allowing the MLP to more easily learn high-frequency modulations. In our experiments, these features improved optimization convergence runtime by about a factor of 10. We implemented the model in PyTorch and trained it with the Adam optimizer with a learning rate of 5 · 10 −4 and gradient clipping of 1. We used batch size 8,192, where each batch had 4,096 spikes randomly sampled from the dataset and 4,096 spikes randomly sampled from a uniform distribution with the same depth, amplitude, and time bounds as the spike dataset. We trained for 10,000 gradient steps. To reduce the chance of converging to a local minimum, we added noise to the motion function output early in training. At the start of training, this noise had standard deviation equal to 0.1 times the depth range of the data. This was linearly annealed to 0 throughout the first 2,000 gradient steps of training.
We evaluated performance of all motion estimators using a standard measure of the median-corrected mean absolute error with respect to the ground-truth motion . Specifically, for each dataset, we selected 11 depth levels evenly spaced from the deepest to the shallowest recorded spike. For each of these depth levels and each model, we computed the ground-truth motion M through time at 1 s intervals and the motion M ~ estimated by the model at 1 s intervals. For each level, we compute the median-corrected absolute difference abs ( M − M ~ − median ( M − M ~ ) ) . The model's mean absolute error is the average of this quantity over time and depth levels . By this metric, MEDiCINe Rigid and MEDiCINe significantly outperformed all other methods on average . When conditioning these results on each factor of variation of the datasets, MEDiCINe always performed at least as well as all existing methods (Extended Data ). These results are not due to outlier effects (Extended Data ). On a per-dataset basis, MEDiCINe Rigid and MEDiCINe also ranked highest on average among all the methods (Extended Data ) and did not have extreme failure modes (Extended Data ). Prior work has shown that better motion estimation correlates with better spike sorting . To verify this, we selected a random set of 40 of our simulated datasets to evaluate spike sorting. For each of these datasets and each motion estimation method, we corrected for the estimated motion in the neural data using Kriging interpolation and ran Kilosort4 spike sorting (disabling the built-in motion correction step; ). To evaluate sorting quality, we computed a standard metric of spike-sorting accuracy . For any motion estimation method, we define the relative spike-sorting inaccuracy on a dataset A to be as follows: Inaccurac y rel ( A ) = Inaccuracy ( A ) − min B ∈ e s t i m a t o r s Accuracy ( B ) , where Inaccuracy = ∑ 1 ≤ i ≤ N _ neurons ( 1 − Accurac y i ) . MEDiCINe Rigid and MEDiCINe had lower relative spike-sorting inaccuracy than existing methods ( ; Extended Data ).
To test MEDiCINe in practice, we used four of our primate Neuropixels sessions with motion artifacts that we found difficult to estimate and correct using existing methods. These data were collected by acute Neuropixels recording of the dorsomedial frontal cortex of awake behaving rhesus macaque monkeys. All experimental procedures conformed to the guidelines of the National Institutes of Health and were approved by the Committee on Animal Care at the Massachusetts Institute of Technology. The recordings exhibited a range of real-world motion and instability conditions. We suspect the primary cause of motion artifacts is movement of the surface of the brain within the recording craniotomy due to changes in intracranial pressure when the animal moves its body. We used monopolar triangular spike localization and applied MEDiCINe to the data. We found that MEDiCINe performed well under these conditions , qualitatively better than existing methods on these datasets (Extended Data ). Figure 3-1 (Extended Data): Non-MEDiCINe results for NHP datasets, showing the results for all non-MEDiCINe methods for each of the NHP datasets shown in Figure 3. Figure 3-2 (Extended Data): Results for rodent datasets. (A) Spike raster for one rodent dataset; note the motion artifacts beginning at 600 s caused by intentional movement of the micromanipulator. (B) Estimated motion (colors) by each method and the motion of the micromanipulator (black), in a time window around the micromanipulator movement. (C) Mean absolute error of the estimated motion by each method compared to the micromanipulator movement. In addition to our NHP datasets, we also benchmarked MEDiCINe and existing methods on a rodent Neuropixels dataset with motion imposed by controlled movements of the micromanipulator holding the probe during recording . On these datasets, we found MEDiCINe to perform at least as well as existing motion estimation methods when compared with the ground-truth movement of the micromanipulator (Extended Data ). However, all methods performed similarly on these data. We believe all methods had significant error with respect to the micromanipulator because the micromanipulator motion does not reflect the true motion between the probe and the brain tissue. Specifically, elasticity of the brain tissue and friction between the tissue and the probe cause the micromanipulator movements to be attenuated and smoothed with respect to the brain tissue.
In this work, we introduced a novel method for estimating motion in neurophysiology recordings, called MEDiCINe (Motion Estimation by Distributional Contrastive Inference for Neurophysiology). We found that MEDiCINe outperformed existing methods on an extensive benchmark of simulated datasets with known ground-truth motion. We also found that MEDiCINe performed well on real NHP neurophysiology datasets where existing methods struggle. There are two key differences between MEDiCINe and existing motion estimation methods . First, MEDiCINe is a probabilistic generative model of the spike data constrained to decompose the data into independent motion and neural activity components. In contrast, existing methods estimate motion by explicitly aligning activity histograms in different time bins throughout the data. Second, MEDiCINe's model of the data is parameterized implicitly, allowing it to leverage the continuity of time, depth, and amplitude, which helps it handle very sparse and noisy data. In contrast, existing methods discretize the data in time, depth, and amplitude, which may cause them to be sensitive to bin sizes and brittle for very sparse or noisy datasets. We envision several ways to extend and improve the MEDiCINe model:
1. LFP features. In this work, we have evaluated MEDiCINe on spike data, but using LFP data may allow it to estimate motion more accurately, particularly for datasets with few neurons. Other works have found LFP features useful for motion estimation .
2. More spike shape features. In this work, the only spike shape feature we used for motion estimation was amplitude. However, MEDiCINe could readily use other features, such as spike width or waveform shape. In fact, because MEDiCINe uses a sparsity loss based on classification, we expect using more spike features would improve its performance by increasing the sparsity of the motion-corrected time-marginalized spike distribution.
3. Fluctuations in firing rate over time. In this work, we used a time-invariant classification network to discriminate dataset spikes from uniformly sampled spikes. However, in practice, neuron firing rates change over time (e.g., due to cell death). Modeling these changes in firing rate could improve the performance of MEDiCINe. One way to do this would be to allow the classification network to depend on time subject to reasonable priors, such as only allowing sparse or slow changes in firing rates.
4. Inductive biases on motion. In this work, the motion function M was unconstrained aside from a temporal smoothing kernel. This affords the model flexibility, but causes it to sometimes find implausible solutions (Extended Data ). This could be addressed by incorporating more priors in the motion function, such as a Gaussian process prior on the motion or explicit priors for motion patterns that are likely to occur in neurophysiology data (e.g., discrete jumps and slow monotonic drift).
5. Motion in three dimensions. In this work, we only considered motion in the depth direction along the laminar array, not horizontal motion in directions orthogonal to the array. While motion in depth is the most salient and detectable motion axis for laminar arrays, the motion function in MEDiCINe could be directly augmented to model three-dimensional motion, which may offer benefits for users with three-dimensional electrode arrays. This may also allow MEDiCINe to be used for motion estimation in recording modalities other than electrophysiology, such as calcium imaging.
We open-source the MEDiCINe model implementation at https://github.com/jazlab/medicine and provide a website ( https://jazlab.github.io/medicine ) with documentation, demos, and instructions for installing and using MEDiCINe with just a few lines of code. We also open-source all code and data ( https://github.com/jazlab/medicine_paper ) necessary for reproducing our results with instructions for how to do so.
Microbiological quality assessment of

Fish and shellfish possess a high degree of perishability and are susceptible to substantial quality differences attributed to species distinctions, environmental habitats, feeding behaviors, and the influence of both endogenous autolytic enzymes and hydrolytic enzymes produced by microorganisms, which affect the fish's muscle tissue . Catfish are a diverse clade of more than 4100 ray-finned fish species, representing more than 12% of all teleosts and around 6.3% of all vertebrates . Catfish have been commonly captured and cultured for hundreds of years in Africa, Asia, South America, North America and Europe . African catfish ( Clarias gariepinus ; Burchell, 1822) is an important commercial finfish species in the fisheries and aquaculture sectors . African catfish can survive and grow in poorly oxygenated water and at high stocking densities, grow at a fast rate, resist diseases and handling stress, and produce good-tasting flesh . Bayad ( Bagrus bajad ; Forsskal, 1775) is a bagrid catfish found in a wide range of natural habitats in African lakes and rivers . Pangasius catfish ( Pangasianodon hypophthalmus ; Sauvage, 1878) is one of the most popular riverine freshwater species; its natural range is limited to the Mekong River basin, and about 1.525 million metric tons are produced annually in Vietnam , . Pangasius catfish fillet has been exported to over 138 countries with a value of about 1.6 billion USD . Fish meat serves as a primary and affordable source of animal protein, underscoring the significance of proper processing and preservation techniques. Among the various fish species, catfish stand out as particularly suitable for additional utilization due to their remarkable resilience to diverse environmental conditions and their rapid growth rate . A fillet is the edible portion of fish, after removal of the head, fins, viscera, bones, skin, and adipose tissues. The yield of catfish fillets can vary depending on several factors such as the size of the fish, the trimming and cleaning process, and any losses during processing. However, as a general guideline, the yield of catfish fillets is typically around 40–50% of the whole fish's weight . Freshness is the most important attribute when assessing the quality of fish. Sensory, microbial, chemical, and physical determination methods can be used to assess fish quality by measuring lipid oxidation, volatile compounds, TVC (total viable count) and any changes in the sensory attributes of the fish – . Psychrophilic aerobic bacteria (PAC) are prevalent in numerous environmental settings, and as a result, fish could have acquired them from various pathways including water, harvesting, transportation, handling, processing, distribution and storage . Psychrophilic bacteria include Gram-negative microorganisms within the genera Aeromonas , Vibrio , Acinetobacter , Pseudomonas , Flavobacterium , Photobacterium , Shewanella and Moritella , as well as Gram-positive microorganisms within the genera Bacillus , Arthrobacter , Micrococcus and Lactobacillus , . The total viable count represents the conventional microbiological technique employed for assessing the quality of finfish fillets and stands as a prevalent quality metric endorsed by food safety authorities .
The enumeration of specific spoilage organisms (SSOs) on iron agar provides a more reliable microbial indicator of fish freshness. SSOs play a key role in the spoilage of fish and seafood. While initially present in limited numbers, they proliferate more rapidly than other bacteria within the fish , . Foodborne pathogens, including Salmonella spp., Yersinia spp., Aeromonas spp., Pseudomonas spp., Staphylococcus aureus , Vibrio spp. and Escherichia spp., play a crucial role in fish-related food safety concerns . These pathogens are responsible for causing foodborne illnesses such as typhoid fever, gastroenteritis, diarrhea, and dysentery. These illnesses pose significant health hazards to consumers, including the risk of death . Contamination of fish fillets by Escherichia spp. is mainly associated with contaminated water or with cross-contamination during washing, filleting, and trimming of the fish . The spoilage of fresh fish and its products is predominantly attributed to Aeromonas spp. . Pseudomonas spp. are opportunistic bacteria that thrive in various environments and can be present in varying quantities . This group of microorganisms is commonly encountered in fish and other fresh foods, where it plays a role in the spoilage process . Pseudomonas luteola is a Gram-negative, non-spore-forming bacillus that is catalase-positive, urease-negative, indole-negative, cytochrome oxidase-negative, negative for H 2 S production and negative in the oxidation–fermentation test. Pseudomonas luteola is commonly found in aqueous environments, soil and plants , . Catfish fillets are highly favored by consumers because of their nutritional value and favorable sensory characteristics. However, while there is extensive literature on the quality of catfish fillets, there is limited research on the quantitative and qualitative microbiological aspects. Hence, this study aimed to evaluate the levels of spoilage and pathogenic microorganisms in fillet samples of African catfish ( Clarias gariepinus ), bayad ( Bagrus bajad ), and pangasius catfish ( Pangasianodon hypophthalmus ).
Ethics declarations

The experimental protocols and all methods were performed in accordance with the ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments) guidelines and regulations. All procedures and protocols for experiments were carried out in compliance with the guidelines and regulations set by the Veterinary Medicine Cairo University Institutional Animal Care and Use Committee (Vet. CU. IACUC; Vet CU18042024918).

Sample size calculation

The determination of the sample size for this study followed the formula for an unknown population, as outlined by Kothari : n = Z²SD²/e², where Z represents the value of the standard variate (1.96) at a 95% confidence level, SD denotes the standard deviation of the population derived from the trial sample (0.11), and e stands for the tolerable sampling error or precision (0.05) within a 95% confidence interval. Subsequently, the sample size was computed as:

$$n = \frac{Z^{2}\,SD^{2}}{e^{2}} = \frac{(1.96)^{2}(0.11)^{2}}{(0.05)^{2}} \approx 19\ \text{samples}$$

Therefore, the minimum sample size for each catfish fillet type was 19; accordingly, 20 samples of each of the four fillet types (African catfish fillets, bayad fillets, white basa fillets and red basa fillets) were collected, giving a total of 80 samples.

Samples collection

In total, 80 samples were collected. This included 20 African catfish ( C. gariepinus ) fillets weighing between 600 and 1000 g, 20 bayad ( B. bajad ) fillets weighing between 500 and 2500 g and 40 skinless-frozen pangasius catfish ( P. hypophthalmus ) fillets weighing between 600 and 800 g. These samples were obtained from fish markets and retailers in Egypt (Kafr El Sheikh, Alexandria and Beheira governorates) between 2021 and 2023. Pangasius catfish fillets on the Egyptian market can be divided into two grades: white basa fillets (20 samples) and red basa fillets (20 samples). All catfish fillet samples were transported in separate iceboxes filled with ice bags to the Department of Food Hygiene, Alexandria University. Microbiological analyses were performed immediately on the catfish fillet samples.

Microbiological analyses

Enumeration of bacterial load

Upon arrival at the laboratory, sterile scalpels and tweezers were used to aseptically collect 25 g samples of fish fillet, which were then placed in a sterile stomacher bag. Next, 225 mL of sterile 0.1% peptone water (Difco, UK) was added to the bag, and the mixture was homogenized for 1 min using a Stomacher at normal speed (Stomacher lab-blender 400, Seward Medical, UK) according to ISO 6887-3:2017 . Subsequently, a tenfold serial dilution series was prepared, and the counts were determined using the pour plate technique according to ISO 6887-3:2017 . Each analysis was conducted in duplicate to ensure accuracy. The number of viable microorganisms was then counted, calculated, and expressed as the logarithm of colony-forming units per gram (log CFU/g). Plate Count Agar (PCA) is a widely employed solid culture medium for enumerating the viable bacterial population in a sample; it provides a nutrient-rich environment that supports the growth of a wide range of bacteria and contains a combination of peptones, yeast extract, and agar according to ISO 4833-1:2013 . The mesophilic aerobic count and psychrophilic aerobic bacteria were determined using Plate Count Agar (PCA, Oxoid) incubated at 30 °C for 48 h and at 7 °C for 7–10 days, respectively, according to ISO 6887-3:2017 and ISO 4833-1:2013 . The hydrogen sulfide producing bacteria were enumerated using iron agar (14 g agar, 3 g beef extract, 20 g peptone, 5 g sodium chloride, 3 g yeast extract, 0.6 g L-cysteine, 0.3 g sodium thiosulfate and 0.3 g ferric citrate per 1 L autoclaved distilled water) according to Gram et al. . Staphylococcus spp. and Staphylococcus aureus counts were determined using Baird-Parker agar medium according to ISO 6888:2021 . Enterobacteriaceae counts were enumerated using Violet Red Bile Glucose (VRBG) agar incubated at 37 °C for 24 h according to ISO 21528:2017 . The Most Probable Number (MPN) method was used for the enumeration of Coliforms and fecal Coliforms according to Feng et al. ; ISO 7251:2005 ; ISO 4831:2006 ; Oblinger & Koburger .

Isolation of pathogenic bacteria

Isolation of E. coli was conducted according to ISO 16649:2018 . Approximately 1 g of homogenized fish fillet was mixed with 9 mL of modified Tryptone Soya Broth (mTSB, HiMedia).
The samples were thoroughly mixed and incubated overnight at 41 °C. Following selective enrichment, 50 µL of the resulting mixture was spread onto MacConkey agar (HiMedia) plates to isolate E. coli , and the plates were incubated aerobically at 37 °C for 24 h. The plates were then examined for the presence of E. coli growth, characterized by pink colonies indicating lactose fermentation. A single, isolated colony exhibiting these characteristics was chosen and transferred to Eosin Methylene Blue agar (EMB, HiMedia) to observe the formation of a metallic sheen. At the same time, another colony displaying similar characteristics was subjected to Gram staining according to ISO 16649:2018 . Isolation of Salmonella spp. was performed according to ISO 6579:2017 . Briefly, 225 mL of buffered peptone water was inoculated with 25 g of fish fillet and incubated at 37 °C for 18 h (pre-enrichment in non-selective liquid medium); 1 mL of this broth was then inoculated into 9 mL of Rappaport–Vassiliadis medium with soya (RVS broth, Oxoid) and finally plated onto Xylose Lysine Deoxycholate agar (XLD, Oxoid) and Salmonella Shigella agar (SS, Oxoid), which were incubated at 37 °C for 24 h . Isolation of Yersinia spp. was performed according to ISO 10273:2017 , starting from enrichment in non-selective PSB broth (HiMedia), followed by direct plating from the PSB broth onto Cefsulodin-Irgasan-Novobiocin (CIN, HiMedia) agar plates, which were incubated at 30 °C for 24 h . Isolation of Vibrio spp. was performed according to ISO 21872:2017 , starting from primary enrichment in alkaline saline peptone water (ASPW, Oxoid) incubated at 37 °C for 6 h, followed by streaking onto thiosulfate citrate bile sucrose agar (TCBS, Oxoid) and incubation at 37 °C for 24 h . Representative colonies were selected from the plate count agar (PCA, Oxoid) after incubation at 7 °C for 7–10 days. Selected colonies were streaked onto Rimler-Shotts (RS, HiMedia) agar supplemented with novobiocin (HiMedia) and onto Pseudomonas agar base enriched with cetrimide, fucidin, and cephalosporin ( Pseudomonas CFC agar, Oxoid), and then incubated at 25 °C for 24 to 72 h. Typical Aeromonas spp. produce greenish-yellow to yellow colonies, or yellow colonies with black centers (H 2 S producing bacteria), on RS agar, while typical Pseudomonas spp. produce blue-green colonies on Pseudomonas CFC agar.

Phenotypic characterization of isolates

The presumptive identification of isolates was accomplished by assessing their phenotypical characteristics according to the criteria described by Bergey . Biochemical characterization of the isolates using the commercial miniaturized API-20E system (Biomérieux, France) was performed according to the manufacturer's instructions. Confirmed isolates were maintained until further use at − 20 °C in nutrient broth containing 16% glycerol.

Serological identification of Escherichia spp. isolates

A total of ten isolates of E. coli and E. fergusonii , which were identified based on their phenotypic characteristics, underwent serological identification according to Ewing .

Genotypic characterization, 16S rRNA sequencing and phylogenetic analysis

Four bacterial isolates were selected for sequencing studies based on their morphological and biochemical characteristics.
The selected bacterial isolates were Aeromonas hydrophila , isolated from Bagrus bajad ; Pseudomonas luteola , isolated from Pangasianodon hypophthalmus ; and E. coli and E. fergusonii , isolated from Clarias gariepinus . DNA extraction from these isolates was performed using QIAamp DNA Kits (Qiagen, USA) following the instructions provided by the manufacturer. The DNA was maintained at − 20 °C until further use. The genotypic identification of isolates was confirmed by employing universal 16S rRNA gene primers (Forward: 5′-AGAGTTTGATCCTGGCTCAG-3′; Reverse: 5′-GGTTACCTTGTTACGACTT-3′) . The 16S rRNA gene was amplified in a 50 µL reaction volume with Maxima Hot Start PCR Master Mix (ThermoFisher, USA), per the manufacturer's instructions. The PCR procedure consisted of an initial denaturation step at 95 °C for 10 min, followed by 35 cycles of denaturation at 95 °C for 30 s, annealing at 65 °C for 60 s, and extension at 72 °C for 90 s. The final extension step was conducted at 72 °C for 10 min. The PCR products were examined using a 1% (w/v) agarose gel stained with ethidium bromide . The sequences were determined using an ABI 3730xl DNA sequencer from Applied Biosystems™ at the Sigma Scientific Services Laboratory in Cairo, Egypt. The acquired 16S rRNA sequences were then compared to existing databases through BLASTN on NCBI to determine their closest phylogenetic affiliations . The neighbor-joining algorithm in MEGA X was utilized to construct the phylogenetic trees . The evolutionary history of the analyzed taxa was represented by constructing a bootstrap consensus tree generated from 500 replicates. The Maximum Composite Likelihood method, developed by Tamura et al. , was used to calculate evolutionary distances, which are expressed as the number of base substitutions per site .

Statistical analyses

Statistical analyses were performed with the R program (R 4.3.1) . The homogeneity of variances and the normality of residuals were assessed using Levene's and the Shapiro–Wilk tests, respectively. Microbiological assessment data of catfish fillets were presented as mean ± SEM (standard error of the mean). Microbiological data were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test for multiple comparisons between groups. The significance level was set at a probability value of less than 0.05 ( p < 0.05).
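The statistical workflow above was run in R 4.3.1. Purely as an illustration of the same steps (Levene's test, Shapiro–Wilk on residuals, one-way ANOVA, then Tukey's post hoc test at p < 0.05), a minimal Python equivalent is sketched below; the file name and column names are hypothetical and not taken from the paper.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per sample, with the fillet type and a
# microbial count in log CFU/g (file and column names are illustrative only).
df = pd.read_csv("catfish_counts.csv")                   # columns: fillet_type, log_cfu
groups = [g["log_cfu"].values for _, g in df.groupby("fillet_type")]

# Assumption checks: homogeneity of variances (Levene) and normality of residuals (Shapiro-Wilk).
print(stats.levene(*groups))
residuals = df["log_cfu"] - df.groupby("fillet_type")["log_cfu"].transform("mean")
print(stats.shapiro(residuals))

# One-way ANOVA across the four fillet types, followed by Tukey's post hoc test (alpha = 0.05).
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(df["log_cfu"], df["fillet_type"], alpha=0.05))
```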
Quantitative microbiological analyses

Mesophilic aerobic count, psychrophilic aerobic bacteria, H 2 S producing bacteria, Staphylococcus spp., Staphylococcus aureus , Enterobacteriaceae , Coliform and fecal Coliform counts for each type of fillet are shown in Table . The mesophilic aerobic count, Staphylococcus spp., Staphylococcus aureus , hydrogen sulfide producing bacteria, Enterobacteriaceae and Coliform counts displayed no significant differences ( p > 0.05) among the examined samples. Psychrophilic aerobic bacterial counts in bayad fillets (> 5 log CFU g −1 ) were significantly higher than those in African catfish fillets (> 4 log CFU g −1 ) and pangasius catfish fillets (white basa fillets only: ≤ 4 log CFU g −1 ). Although the Staphylococcus spp. count in pangasius catfish fillet samples was higher than 4 log CFU g −1 , the Staphylococcus aureus count was estimated to be 1.33 to 1.51 log CFU g −1 (Table ). Hydrogen sulfide producing bacterial counts were higher in pangasius catfish fillets (red basa fillets; 2.91 log CFU g −1 ) than in African catfish fillets (2.65 log CFU g −1 ), bayad fillets (2.33 log CFU g −1 ) and pangasius catfish fillets (white basa fillets; 2.57 log CFU g −1 ). The Enterobacteriaceae count in pangasius catfish fillet samples was 2.86 to 3.01 log CFU g −1 for white basa and red basa. African catfish fillets (1.27 log MPN g −1 ) showed significantly higher fecal Coliform counts than bayad fillets (0.91 log MPN g −1 ) and pangasius catfish fillets (0.50–0.56 log MPN g −1 ). The fecal Coliform counts are shown in Table .

Qualitative microbiological analyses

In the present study, no colonies of Salmonella spp., Yersinia spp. or Vibrio spp. were isolated from our catfish fillet samples. On the other hand, Escherichia spp. was detected in five African catfish fillet samples, three bayad fillet samples and two pangasius catfish fillet samples (Table ). The incidence of Escherichia spp. in catfish fillets was 25%, 15%, 5% and 5% for African catfish, bayad, white basa and red basa fillet samples, respectively. Similarly, the incidence of Aeromonas spp. was 25%, 20%, 5% and 20% for African catfish, bayad, white basa and red basa fillet samples, respectively. Additionally, the incidence of Pseudomonas spp. in pangasius catfish fillets was 5% and 15% for white basa and red basa fillet samples, respectively (Table ).

Phenotypic and genotypic characterization of isolates

The results of the phenotypic analyses conducted on P. luteola and A. hydrophila in the current study are presented in Table . The pure cultures of A. hydrophila and P. luteola were confirmed by sequencing 16S rRNA genes. Compared to the sequences available in GenBank, the 1485 bp 16S rRNA gene of A. hydrophila (MT847230) showed 99.5% homology with the 16S rRNA sequences of A. hydrophila subsp. hydrophila (LC420139, KX012004 and LC420130), whereas the 1490 bp 16S rRNA gene of P. luteola (MT845202) showed 99.6% homology with the 16S rRNA sequences of P. luteola (KT728842, KY194220 and KY194291). The phylogenetic trees based on the 16S rRNA gene sequences of A. hydrophila and P. luteola are shown in Figs. and , respectively. The results of the phenotypic analyses conducted on E. coli and E. fergusonii in the current study are presented in Table . The serological identification of eight E. coli and two E. fergusonii isolates is shown in Table . In this study, we isolated E. coli serotype O26 from African catfish, bayad, and pangasius catfish fillets, while E. fergusonii serotype O78 was found in two African catfish fillet samples. Following 16S rRNA sequencing analysis, the phenotypically and serologically identified Escherichia isolates were assigned to E. coli and E. fergusonii . Additionally, the phylogenetic tree constructed from the 16S rRNA gene sequences, with lengths of 1432 bp ( E. coli , MT845092) and 1386 bp ( E. fergusonii , MT844056), is shown in Fig. . Compared to the sequences available in GenBank, the E. fergusonii 16S rRNA gene (MT844056) showed 99.5% homology with the 16S rRNA sequences of E. fergusonii (JQ838153 and MH040100), while the E. coli 16S rRNA gene (MT845092) showed 99.9% homology with the 16S rRNA sequences of E. coli (KT260583 and MF754138). This study reports the first isolation of E. fergusonii from African catfish fillet samples.
In this study, we evaluated the levels of spoilage and pathogenic microorganisms in catfish fillet samples. Most of the evaluated microbiological properties were within the permissible limits set by the International Commission on Microbiological Specification for Food . Psychrophilic aerobic bacterial counts in bayad fillets (> 5 log CFU g −1 ) were significantly higher than those in African catfish fillets (> 4 log CFU g −1 ) and pangasius catfish fillets (white basa fillets only: ≤ 4 log CFU g −1 ). Dambrosio et al. indicated that the average psychrophilic aerobic bacteria count in fillet samples of P. hypophthalmus , acquired from an Italian trade import services company, was 4.44 log CFU g −1 , which is comparable with our results. Moreover, the counts of aerobic psychrotrophic microorganisms found in pangasius catfish varied from 4.6 to 5.9 log CFU g −1 . Nevertheless, although the counts obtained in this study were moderately high, they did not exceed the acceptable permissible limit for total bacterial load (5.5–7.0 log CFU g −1 ) for fresh and frozen fish, as established by the International Commission on Microbiological Specification for Food . The elevated levels of psychrophilic aerobic bacteria found in bayad fillets may be attributed to the preservation method commonly used for bayad in Egyptian markets, which involves storing it with an equal amount of crushed ice. Staphylococcus aureus does not naturally inhabit the microbiota of fish. Consequently, its occurrence in fish is possibly linked to unsanitary practices during handling by fish handlers, processors, or sellers, as well as potential cross-contamination throughout handling, transportation, storage, and processing, stemming from the presence of this pathogen in the microbiome of most humans – . Although the Staphylococcus spp. count in our pangasius catfish fillet samples was higher than 4 log CFU g −1 , the Staphylococcus aureus count was estimated to be 1.33 to 1.51 log CFU g −1 . These moderately high levels suggest that product contamination is possibly linked to unsanitary practices during handling, processing, selling and storage – . Tong Thi et al. indicated that the detection of Staphylococcus aureus on the hands of food operators during fish processing, especially in the packaging area, was deemed indicative of inadequate personal hygienic practices. A lower Staphylococcus aureus level (1.14 log CFU g −1 ) in pangasius catfish fillets was previously reported by Dambrosio et al. . Our findings were lower than the acceptable permissible limit of Staphylococcus aureus (˂ 3 log CFU g −1 ) in fish fillets set by the Egyptian Organization for Standardization and Quality . The quantification of specific spoilage organisms (SSOs) on iron agar is a more reliable microbial measure of fish freshness; SSOs are responsible for the deterioration of fish and seafood , . Hydrogen sulfide producing bacterial counts were higher in pangasius catfish fillets (red basa fillets; 2.91 log CFU g −1 ) than in African catfish fillets (2.65 log CFU g −1 ), bayad fillets (2.33 log CFU g −1 ) and pangasius catfish fillets (white basa fillets; 2.57 log CFU g −1 ). The Enterobacteriaceae count in pangasius catfish fillet samples was 2.86 to 3.01 log CFU g −1 for white basa and red basa, which is higher than the Enterobacteriaceae count in pangasius catfish fillet samples (2.29 log CFU g −1 ) previously reported by Dambrosio et al. .
Enterobacteriaceae and Coliform levels in fish fillets are an indicator of general bacteriological conditions, and an index for the presence of pathogenic enteric organisms . Mossel and Tamminga adopted a reference value (3 log CFU g −1 ) for Enterobacteriaceae in fish fillets. In the present study, the mesophilic aerobic counts, psychrophilic aerobic bacteria, Staphylococcus aureus and Enterobacteriaceae counts of catfish fillets were within the acceptable permissible limits set by the Egyptian Organization for Standardization and Quality ; International Commission on Microbiological Specification for Food . Fecal Coliforms are a group of bacteria most commonly used as pollution indicators in food and water and are easily affected by freezing storage . The fecal Coliform count in African catfish fillets (1.27 log MPN g −1 ) was significantly higher than in bayad fillets (0.91 log MPN g −1 ) and pangasius catfish fillets (0.50–0.56 log MPN g −1 ). These levels were lower than the upper acceptable permissible limits of fecal Coliforms for fish fillets set by the Egyptian Organization for Standardization and Quality ; International Commission on Microbiological Specification for Food . Comparable results were previously documented by Budiati et al. , who observed that the fecal Coliform content for catfish ranged between 0.48 and 1.63 log MPN g −1 . The lower levels of fecal Coliforms found in white basa (0.50 log MPN g −1 ) and red basa (0.56 log MPN g −1 ) fillets may be attributed to the freezing preservation method commonly used for basa fillets in Egyptian markets. Boyd and Tanner reported that high organic matter, poor water quality, inferior feed quality and high stocking density of catfish in ponds could be associated with rising Coliform and fecal Coliform loads in catfish fillets. Additionally, Budiati et al. suggested that the type of feed can influence the bacterial burden in fish. Utilizing chicken offal and spoiled eggs as fish feed may pose potential sources of bacterial contamination in both the fish and the aquatic environment. Enterobacteriaceae , Staphylococcus spp., and various other microorganisms may be present in the initial microbial population, primarily as contaminants . No Salmonella spp. was detected in our fillet samples, and comparable findings were previously documented by Dambrosio et al. , who did not detect Salmonella spp. in P. hypophthalmus fillet samples imported into Italy. Similar findings were reported regarding Vibrio spp. by Noseda et al. , who did not detect Vibrio spp. in P. hypophthalmus fillets. Nevertheless, in contradiction to our results, Tong Thi et al. detected V. cholerae in 1/9 of the pangasius catfish samples taken at the filleting step. Escherichia spp. is the predominant Coliform found in the intestinal flora of warm-blooded animals and is primarily linked to fecal contamination . During the processing stage, high levels of E. coli were detected in samples collected from hands and surfaces due to cross-contamination between food contact surfaces (hands, cutting boards and knives) and fish fillets , . Moreover, the presence of Escherichia spp. in fish fillets might be attributed to the contamination of fishponds by livestock waste . Yagoub claimed that the fertilization of fishponds using farm animal and poultry manure could be a source of E. coli in the fish samples. In contrast, a low prevalence of Escherichia spp. was detected in pangasius catfish fillets, suggesting that the freezing process had a lethal effect on Escherichia spp.
Contamination of fish fillets by Escherichia spp. may be associated with contaminated water or with cross-contamination during washing, filleting, and trimming of the fish . The incidence of Aeromonas spp. in catfish fillets was 25%, 20%, 5% and 20% for African catfish, bayad, white basa and red basa fillet samples, respectively. According to Henin & Ibrahim et al. , the incidence of Aeromonas spp. in imported frozen fish, fresh catfish and freshwater fish was reported as 15.2%, 11.6% and 9.7%, respectively. Wong et al. detected Aeromonas spp. in 10% of the frozen fish samples they examined. In contrast, Pseudomonas spp. was detected exclusively in pangasius catfish fillet samples. Higher results were reported by Yagoub, who isolated Pseudomonas spp. from 62% of the examined fish samples, and Rahmou, who isolated Pseudomonas spp. from 28% of the examined fish fillet samples. The specific spoilage organisms (SSOs) in the present study were Aeromonas spp. and Pseudomonas spp.; these results agree with Viji et al. . Previous studies mostly defined the SSOs in aerobically stored fish and fish products as Gram-negative psychrotrophic bacteria, including Pseudomonas spp., Aeromonas spp., Vibrio spp., and Shewanella spp. , , , . Pseudomonads are among the most significant spoilage organisms, as their rapid growth contributes to the breakdown of nitrogenous compounds, ultimately resulting in the deterioration of the product . Aeromonas hydrophila ( A. hydrophila ) is ubiquitous in the aquatic environment and has been found in freshwater fish, including catfish and tilapia – . Pseudomonas luteola is a Gram-negative, aerobic, oxidase-negative rod commonly found in aqueous environments, soil and plants , . Pseudomonas luteola is not a common pathogen in aquaculture; the first record of P. luteola infection in rainbow trout ( Oncorhynchus mykiss ) was reported by Altinok et al. . This study reports the isolation of P. luteola in pangasius catfish fillets imported into Egypt. E. coli is ubiquitous, as it naturally inhabits the intestines of warm-blooded animals without causing any symptoms, and it is extensively spread throughout the environment . Thus, E. coli is a reliable indicator of fecal contamination, water pollution and mishandling , . E. fergusonii is an emerging opportunistic pathogen and is occasionally isolated from the intestinal contents of humans and other warm-blooded animals. Several studies have isolated E. fergusonii from mammals and birds with systemic or enteric infections , , whereas a few studies have isolated E. fergusonii from sewage, surface water, well water and cultured Egyptian Nile tilapia with signs of bacteremia , . E. fergusonii has been frequently identified in the fecal matter of cattle, poultry, goats, sheep, and horses exhibiting symptoms such as diarrhea, meningitis, mastitis, abortion, and septicemia , . The phenotypic profile of our E. fergusonii isolates was nearly identical to that of isolates recovered from Egyptian Nile tilapia, except that our isolates were positive for ADH (arginine dihydrolase) and negative for ONPG (ß-galactosidase). E. coli serotype O26 was frequently isolated from African catfish, bayad, and pangasius catfish fillets, while E. fergusonii serotype O78 was found in two African catfish fillet samples. Certain serotypes (O26) found in African catfish were comparable to the predominant Escherichia serotypes identified in broiler chickens in Egypt , .
coli is an extrinsic microorganism in the fish environment and is not part of the normal fish flora. E. coli might be introduced into fishponds through the traditional fertilization of ponds with farm animal and poultry manure, which may harbor E. coli , E. fergusonii and other Enterobacteriaceae members , , .
The present study aimed to assess the bacterial load and pathogenic bacteria in African catfish, bayad, and pangasius catfish fillets. Our findings indicate that all examined catfish fillets were acceptable and safe for human consumption. No Salmonella spp., Yersinia spp., or Vibrio spp. were detected in any of the examined catfish fillets. E. coli serotype O26 was frequently isolated from African catfish, bayad, and pangasius catfish fillets, while E. fergusonii serotype O78 was found in two African catfish fillet samples. Furthermore, this study reports the first isolation of E. fergusonii from African catfish fillets and of Pseudomonas luteola from pangasius catfish fillets. The isolation of E. fergusonii from African catfish fillet samples highlights the need for more research on emerging pathogens and their prevalence in catfish production. To prevent contamination, recontamination, or the survival of biological hazards during handling, processing, distribution, and storage of catfish fillets, we highly recommend implementing Good Manufacturing Practices (GMP), Good Hygiene Practices (GHP), and a meticulously planned HACCP program. Continued surveillance and investigation of bacterial species can contribute to a better understanding and management of the risks associated with catfish fillets. Overall, the catfish industry, producers and consumers will benefit from using our microbiological quality assessment data on catfish fillets for stringent process control.
|
Association of online activities with obstetrics and gynecology specialty choice: a nationwide online survey | b2c1fec1-f6c0-46c1-a47e-dacfa390a20d | 9905000 | Gynaecology[mh] | Sexual and Reproductive Health and Rights (SRHR) for women are incorporated into the Sustainable Development Goals and are important for peace and prosperity in all nations. To achieve SRHR, obstetricians and gynecologists play a crucial role; thus, ensuring an increased number of new obstetricians and gynecologists is a global issue. Choosing a specialty is an important decision for medical doctors. Most medical students are interested in deciding on a specialty when they enter university, but some develop the interest through clerkships or working as junior residents. , As promoting factors for choosing Obstetrics and Gynecology (OB-GYN) as a specialty, previous studies showed that clerkship experience was important. , This could be explained by the fact that clerkship experiences could provide opportunities for surgery, various clinical experiences, interaction with OB-GYN senior residents or other applicants, and fast-paced and acute experiences, attracting applicants to OB-GYN. , However, severe acute respiratory syndrome coronavirus-2 infection has completely disrupted the traditional face-to-face clinical practice and the standard process of intern work, thus preventing interactions with senior residents and other applicants. - Student participation in clinical practice has become unfeasible nationwide to prevent the spread of infection from patients to students and vice versa and to limit personal protective equipment use. Since most of the clerkships in obstetrics and gynecology involve procedures that require face-to-face interaction, such as deliveries and surgeries, medical students or junior residents from this field are vulnerable to the loss of clinical practice opportunities due to the COVID-19 pandemic in terms of conveying the appeal of the specialty. Online education is one of the methods utilized to counteract the lack of real interaction caused by the COVID-19 pandemic. A survey of 3348 medical students in Libya regarding the use of e-learning six months after the COVID-19 pandemic began showed that 65% (n = 2176) of the students participated in online study groups and discussions, and 54.1% (n = 1811) reported that two-way communication was possible online. Previous studies have reported the benefits of online education and the ambivalent attitudes of students toward online education in OB-GYN. , A cross-sectional study of 121 students in Germany showed high satisfaction with e-learning in the OB-GYN program consisting of online lecture notes, video materials, and online webinars (median score: 3.6-3.9 using a 5-point Likert scale). Another survey conducted following an online course on 98 students in Germany reported that online education programs, including online lectures, video tutorials based on real patients, and digital teaching on practical gynecological skills and examinations, achieved ratings as "good" or "excellent" among > 80% of the students. In contrast, 74% of the students desired bedside learning with real patients. Using online tools may positively impact specialty selection because they enable medical students or junior residents to experience the appeal of OB-GYN and to collect information on the specialty. However, no studies have focused on the association of online activities with specialty selection during the COVID-19 pandemic. 
In this study, we conducted a nationwide hospital-based survey to determine the association between online activities and the number of new senior residents majoring in OB-GYN. Study design and Participants We used the data from a nationwide web-based, self-administered anonymous survey to investigate the recruitment activities under the COVID-19 pandemic conducted by The Japanese Society of Obstetrics and Gynecology (JSOG) between December 21, 2020, and January 31, 2021. The questionnaire was provided to all 576 obstetrics and gynecology training facilities from the JSOG as online participants, and a letter was sent to the training directors of the facilities as a reminder. These facilities ranged from urban perinatal centers to regional obstetric care facilities and covered eastern and western parts of Japan. Of 576 participating facilities, 334 facilities (response rate: 58.0%) that sent valid responses were included in this study. Completion of the web-based questionnaires implied informed consent. All the data were collected anonymously in that survey, and no correspondence table exists. Since this study used data that had already been unlinked and anonymized prior to the study, and informed consent was obtained upon completion of the web-based questionnaire, Ethics Review Board approval was not required. Measurement The primary interest outcome, the number of new OB-GYN senior residents, was asked by using the following question: "How did the number of people who decided to come to your hospital for obstetrics and gynecology training change this year compared with previous years? " Please choose the opinion that most closely matches your own: include no one, under 0.5 times, 0.5 to less than 1.0 times, same as average years, 1.0 to less than 1.5 times, 1.5 to less than 2.0 times, and more than twice. The reasons for asking the percentage compared with previous years rather than the number of new OB-GYN residents were as follows: some of the participating OB-GYN training facilities have many residents yearly. In contrast, others admit almost no new residents. We asked each facility for the percentage of new residents compared with that in previous years rather than the exact number to clarify the association between online activities and the number of new OB-GYN applicants during the COVID-19 pandemic. We defined a >1.0-fold increase as an increase and <1.0-fold increase as a decrease. The primary exposure was online activities, including some recruitment and clerkship activities, which could affect the specialty choices of junior residents, such as information sessions, hospital tours, interviews, hands-on seminars, convivial parties, lectures, and clinical practice in inpatient or outpatient settings, such as physical or pelvic examination, ultrasound, surgery, and surgical training (e.g., ligation, suture, or dry box training for laparoscopic surgery). , , The content validity of recruitment and clerkship activities was reviewed when the questionnaire was made by obstetricians and gynecologists in charge of recruitment in Japan. Implementation of online activities was asked by using the following question: "Regarding recruitment activities and clinical practice at your hospital, please select the status of implementation after the COVID-19 pandemic (multiple selections are possible): never implemented, cessation after the pandemic, implemented face-to-face after the pandemic, implemented online after the pandemic, and implemented in other ways after the pandemic." 
We defined implementation of online activities as positive if participants selected "implemented online after the pandemic" for any one or more of the items. As covariates, data on the location of the hospital, hospital status (i.e., university hospital or general hospital), the number of full-time obstetricians and gynecologists, and implementation of face-to-face activities were collected using an online questionnaire. The implementation of face-to-face activities was assessed using the same question for the implementation of online activities described above. In addition, to examine the effect of the pandemic on recruitment activity, the following questions were asked: "Do you think the COVID-19 pandemic has affected the way you recruit obstetricians and gynecologists? please select each of the following: not at all, not significantly, partially, and significantly." "To what extent you were able to convey the appeal of obstetrics and gynecology to students and residents rotating at your hospital this year compared to previous years? Please indicate this on a scale of 0–10," Statistical analysis The chi-square test and Student's t-test were used to examine discrete and continuous variables, respectively. They compared background characteristics stratified by the number of new obstetrics and gynecology senior residents in that year compared with previous years. We then performed simple and multiple logistic regression analyses to examine the association between online activities and the number of new obstetrics and gynecology senior residents. In these analyses, two models were utilized. In model 1, we considered the institution's area, type, and number of full-time obstetricians and gynecologists in each facility. This was because these factors can affect lifestyle, stress levels, and the time demands of specialty work, which were reported as key factors influencing the application for obstetrics and gynecology residencies. Regarding institution areas, 47 prefectures in Japan were categorized into 10 areas commonly used in Japan (i.e., Hokkaido, Tohoku, Kanto, Tokyo, Hokuriku, Chubu, Kinki, Chugoku, Shikoku, Kyusyu, and Okinawa). The number of full-time obstetricians and gynecologists was categorized in increments of 5. In model 2, we considered the variables used in model 1 and the implementation of face-to-face activities as covariates because face-to-face activities such as clerkships and hands-on seminars are well-known factors that positively affect the increased number of applicants for obstetrics and gynecology residencies. - To clarify the association between online activities and the number of new OB-GYN senior residents considering face-to-face activities, we examined the interaction effect of online and face-to-face activities. We conducted stratified analyses according to the implementation of face-to-face activities. Statistical analyses were performed using Stata SE 15 (STATA Corp, College Station, TX, USA), and p<.05 was considered a statistically significant difference.
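The modelling strategy described above (crude and adjusted logistic regression, an interaction term, and stratified analyses) can be illustrated with the following minimal sketch. It is not the authors' Stata code; it assumes a hypothetical pandas DataFrame with columns named increase, online, face, area, university, and staff_cat.

```python
# Minimal sketch (not the authors' Stata code) of the logistic regression strategy described above.
# Assumed columns: increase (1 = more new OB-GYN senior residents than in previous years, 0 = fewer),
# online (1 = any online activity), face (1 = any face-to-face activity), area (categorical),
# university (1 = university hospital), staff_cat (categorical staff-size band).
import pandas as pd
import statsmodels.formula.api as smf

def fit_models(df: pd.DataFrame):
    # Model 1: online activities adjusted for area, institution type, and staff size
    m1 = smf.logit("increase ~ online + C(area) + university + C(staff_cat)", data=df).fit(disp=False)

    # Model 2: additionally adjusts for face-to-face activities and tests the online*face interaction
    m2 = smf.logit("increase ~ online * face + C(area) + university + C(staff_cat)", data=df).fit(disp=False)

    # Stratified analysis: refit the online-activity model within each face-to-face stratum
    strata = {
        face_val: smf.logit("increase ~ online + C(area) + university + C(staff_cat)",
                            data=sub).fit(disp=False)
        for face_val, sub in df.groupby("face")
    }
    return m1, m2, strata

# Adjusted odds ratios and 95% CIs follow by exponentiating the coefficients, e.g.
#   import numpy as np; np.exp(m2.params); np.exp(m2.conf_int())
```

Exponentiating the fitted coefficients and their confidence bounds yields adjusted odds ratios comparable to those reported in the results below.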
The background characteristics of the facilities and results of the questionnaire are shown in . The number of new OB-GYN senior residents increased in 187 facilities (56.0%) (defined as the increase group) and decreased in 147 facilities (44.0%) (defined as the decrease group) in 2021, compared with the number in previous years. The proportions of facilities that implemented face-to-face and online activities were significantly higher in the increase group than in the decrease group (65.78% (n=123) vs. 42.18% (n=62), χ²(1, N=334) = 18.55, p<0.01; 41.71% (n=78) vs. 28.57% (n=42), χ²(1, N=334) = 6.17, p=.01, respectively). The number of OB-GYN staff in the facilities tended to be higher in the increase group than in the decrease group, but the difference was not significant (χ²(6, N=334) = 11.14). The achievement rate for conveying the appeal of obstetrics and gynecology compared with that of previous years was higher in the increase group (M=4.24, SD=1.68) than in the decrease group (M=3.50, SD=1.49) (t(332) = -4.22, p<.01).
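As a quick plausibility check (not part of the original analysis), the chi-square value reported for face-to-face implementation can be reproduced from the counts above with an uncorrected Pearson chi-square test on the 2×2 table:

```python
# Verifying the reported chi-square for face-to-face implementation: 123/187 facilities in the
# increase group vs. 62/147 in the decrease group implemented face-to-face activities.
from scipy.stats import chi2_contingency

table = [[123, 187 - 123],   # increase group: implemented vs. not implemented
         [62, 147 - 62]]     # decrease group: implemented vs. not implemented
chi2, p, dof, expected = chi2_contingency(table, correction=False)  # Pearson, no Yates correction
print(round(chi2, 2), dof)   # ~18.55 with 1 degree of freedom, p < 0.01
```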
No other characteristics differed between the two groups. The association between the increase in new obstetrics and gynecology senior residents and online activities or covariates is presented in . Online activities were significantly associated with an increase in the number of new obstetrics and gynecology senior residents in the crude analysis and model 1 (adjusted odds ratio [AOR] = 1.94, 95% confidence interval [CI]: 1.15–3.26, p=.01). However, this association was not found in model 2, when the implementation of face-to-face activities was adopted as a covariate. The implementation of face-to-face activities was significantly associated with an increase in the number of new obstetrics and gynecology senior residents (AOR=2.67, 95% CI: 1.60–4.44, p<.01). Hospitals with 11–15 or 16–20 full-time obstetricians and gynecologists were significantly more likely to have an increase in new OB-GYN senior residents than those with 1–5 full-time obstetricians and gynecologists (AOR=3.82, 95%CI: 1.69–8.66, p<.01, and AOR=3.09, 95%CI: 1.13–8.43, p=.03, respectively). A significant interaction effect between the implementation of online and face-to-face activities was observed (AOR=0.26, 95%CI: 0.09–0.76, p for interaction=.01). Other covariates were not associated with such an increase. In the stratified analysis, the implementation of online activities was significantly associated with an increase in the number of new OB-GYN senior residents among the facilities that did not conduct face-to-face activities (AOR=3.81, 95% CI: 1.40–10.32, p=.01) but not among those that conducted face-to-face activities . To the best of our knowledge, this is the first study to investigate the association between online activities and an increase in the number of new obstetrics and gynecology senior residents. Our multiple logistic regression analysis revealed a significant association between the increase in the number of new obstetrics and gynecology senior residents and online activities after adjusting for the institution's location, type, and the number of full-time obstetricians and gynecologists as covariates. However, this association was not found when the implementation of face-to-face activities was adopted as a covariate. The interaction effect and the stratified analysis indicated that online activities were significantly associated with an increase in the number of new obstetrics and gynecology senior residents among the facilities that did not conduct face-to-face activities. The background characteristics demonstrated that 67.7% (n=226) of the total respondents partially or strongly felt the influence of COVID-19 on recruitment activity. In addition, even facilities with more senior residents in 2021 reported being less capable of conveying their appeal than in previous years. In Japan, clinical training in a hospital begins in the fourth or fifth year of medical school at most universities. After six years of medical school and two years of internship after graduation, students begin their specialty training. They select a specialty based on their lifestyle preferences and interests developed throughout their clerkships and internships. , Therefore, discontinuing clerkships and internships due to COVID-19 may have impacted the recruitment of obstetrics and gynecology applicants. Online activity was significantly associated with an increase in the number of new obstetrics and gynecology senior residents in the facilities that did not conduct face-to-face activities. However, no association was found in facilities that conducted face-to-face activities.
The possible explanation for this result is that face-to-face and online activities positively impacted the recruitment of obstetrics and gynecology senior residents by familiarizing them with obstetrics and gynecology and permitting interaction with senior residents. It has been reported that interaction with senior residents and the provision of information sessions are important in selecting a major. A study of 238 medical students in the United States regarding their decision to major in urology during the COVID-19 pandemic showed that most students considered one-on-one or small-group interactions with senior residents (83%, n=197) and learning about the facilities offering programs (72%, n=171) as very important when selecting a urology program. Further studies are needed to examine the mechanisms by which online activities promote the motivation of students and interns to become obstetricians and gynecologists. The strength of this study lies in the fact that it was a nationwide survey covering the limited number of facilities in Japan where senior residents can start major training. However, there were some limitations to this study. First, we essentially evaluated the activities of students undergoing clerkships and junior residencies. Given that the decision on a field of major can be influenced by multiple factors and is not made at a certain time, unmeasured factors could have affected the results. Second, since there was no gold standard for the categorization of online activities, we could not examine which types of activities in particular had a positive impact on recruiting new OB-GYN senior residents. Third, due to the study design using a self-assessed questionnaire, the study was susceptible to some biases, such as recall, survival, and social desirability. Facilities that did not respond might have been unwilling to answer the questionnaire, and those that responded could have reported a higher number of residents than they actually had, which may have affected the results. Finally, the increase in new obstetrics and gynecology senior residents was expressed as a percentage of the intake from previous years; thus, we could not evaluate the specific number. In conclusion, online activities were associated with an increase in the number of new obstetrics and gynecology senior residents in the facilities that did not conduct face-to-face activities. Further studies are warranted to clarify whether face-to-face or online activities are superior, the effect of the combined use of both activities, and the types of online activities that are effective for recruitment. Acknowledgments We are deeply grateful to all the facilities that responded to this survey. We are also deeply grateful to statistical experts Masashi Kizuki and Yuiko Nagamine for their statistical advice. Conflict of Interest The authors declare that they have no conflict of interest.
Characteristics of hospital pediatricians and obstetricians/gynecologists working long hours in Tokushima, Japan: A cross-sectional study | bf470be3-43a4-4f59-95e3-b033cad3eee0 | 11573208 | Gynaecology[mh] | Working hours for workers including physicians are regulated by the law in each country. For example, the workweek is 40 h in Japan , the average per week for 17 weeks is 48 h in the UK , and it is 8 h per day and 6 days per week in Germany . However, there are exceptions, including exemptions and special exceptions, which allow workers to work hours exceeding the statutory working hours. Working hours vary widely depending on the nature and amount of work. Therefore, examining the nature and quantity of work as well as to determine the hours worked for each type of job in each region is important. Long working hours are associated with an increased risk of adverse physical health outcomes, such as coronary heart disease , stroke , and diabetes , and with deleterious effects on mental health, such as depression and suicide . Additionally, regarding occupational safety, long working hours are reportedly associated with near-misses (that is, “an unplanned event that did not result in injury, illness, or damage but had the potential to do so”) and injuries . Long working hours are a public health concern that must be addressed. Burnout among physicians has recently become a global concern . Burnout comprised three symptoms: emotional exhaustion, depersonalization, and low personal accomplishment . Moreover, one of the factors associated with burnout is working long hours . In a large-scale Japanese study, approximately 40% and 10% of physicians worked >60 h/week and >80 h/week, respectively . In another Japanese study, full-time hospital physicians worked 50.1%/week . In addition to them, 10.5% of full-time physicians aged 24–69 years worked more than 80 h/month overtime in the main hospital and 4.4% worked ≥80 h/month overtime in side work . In Taiwan, approximately 35% of physicians worked >65 h/week . Notably, in a representative US study, 58.9% of surgical residents worked >80 h/week . Additionally, in a representative Japanese study, 20.1% of postgraduate residents worked >80 h/week . In another Japanese study, 67% and 27% of resident physicians worked ≥60 and ≥80 hours /week in their hospitals respectively . However, reports on the working hours of pediatricians and obstetricians/gynecologists (OB/GYNs) are limited. Tsutsumi suggested that doctor shortages and their uneven distribution between regions and specialties are related to the long doctors’ working hours in addition to a lack of task-sharing . In Japan, pediatricians and OB/GYNs are unevenly distributed among medical specialties. Nomura et al. reported that many rural hospitals in Japan have closed their pediatric and OB/GYN departments . By correcting long working hours, rural hospitals would have sufficient pediatricians and OB/GYNs. One possible solution is improving hospital and regional retention of physicians. One of the factors associated with regional retention of physicians is the training environment and career support . If the characteristics of pediatricians and OB/GYNs’ long working hours in hospitals are identified, they can be reduced through effective collaboration between the hospital staff and task-sharing. In Japan, exceeding the upper limit of working hours was determined by each institution, such as hospitals, based on a labor-management agreement. 
Previously, there was no uniformity in the overtime limits for workers including physicians based on the law. However, the Labor Standards Act will be revised to apply overtime regulations to physicians starting in the fiscal year (FY) 2024. The overtime limits for resident doctors and physicians working in emergency departments are 1,860 h (equivalent to an 80-h work/week) and 960 h (equivalent to a 60-h work/week), respectively . Pediatricians and OB/GYNs are responsible for policy-based medicine, such as pediatric emergencies and perinatal and neonatal medicine. Compliance with the law may affect local medical care systems. Therefore, it is necessary first to estimate hospital pediatricians and OB/GYNs’ working hours, including side work hours. Hence, this study aimed to determine the actual working conditions, including working hours and desired future work styles of hospital pediatricians and OB/GYNs in Tokushima Prefecture, Japan, and determine the characteristics of those working long hours. 2.1 Ethics of the research This study was conducted in accordance with the Declaration of Helsinki and National Ethical Guidelines. Written informed consent was obtained from all the participants. The study protocol was approved by the Ethics Committee of Tokushima University Hospital (approval number: 4077, approval date: 27 September 2021, reference number of ethics committee: 11000161). 2.2 Study aim, design, and setting This study aimed to determine the actual working conditions, including working hours, and desired future work styles of hospital pediatricians and OB/GYNs in Tokushima Prefecture. This cross-sectional study was conducted in the Tokushima Prefecture, Japan. 2.3 Study participants This study included pediatricians and OB/GYNs working as full staff in 14 hospitals in Tokushima Prefecture, Japan. The Ministry of Health, Labor, and Welfare’s study of physicians, dentists, and pharmacists conducted every 2 years reported that in 2020, 62 pediatricians and 57 OB/GYNs were working in hospitals in the Tokushima Prefecture. A letter explaining the study, self-administered questionnaire, and return envelope were mailed to each physician supervising pediatrics and obstetrics/gynecology departments at 14 hospitals. The supervising physicians distributed these forms to physicians working in the respective hospital departments. The cover page of the study form stated that participation is voluntary and that no personally identifiable data will be provided to the medical institutions where they work. Moreover, the study form included a box to confirm participants’ informed consent. An identification number was assigned to each hospital, and a participant’s name was not required. Each physician who gave consent completed the questionnaire and returned it in a sealed envelope. No hospital’s administrative office permission was needed to conduct this survey. Between 1 October 2021 and 31 January 2022, 96 participants were included in the analysis. The participants’ characteristics are listed in . 2.4 Measurements The questionnaire, supporting information1, included items related to the physicians’ age, sex, specialty, and type of practice at their workplace: day shift work- and night and holiday work- arrangements. 
The former included “primary attending physician system,” “multiple attending physician system,” and “others (for example, working in neonatal intensive care unit [NICU]/ maternal-fetal intensive-care unit [MFICU]),” while the latter included “on-call system,” “shift work system,” and “others (for example, working in NICU/MFICU).” The following items are related to their work status: Number of medical institutions where the respondents worked in September 2021; average hours worked per week (medical activities only, excluding research, training, or teaching activities, and if the respondent works at more than one medical institution, the total number of hours worked per week was calculated for the entire month); number of times per month that the participant worked night and day-off duties (at all medical institutions where they worked); number of annual paid leave days in 2020; status of task-shifting/task-sharing between physicians and nonphysicians; and desired future work style. The following six items were included based on previous studies on task-shifting/task-sharing status between physicians and nonphysicians . “Explanation and consensus building with patients,” “Taking basic vitals, such as blood pressure,” “Simple procedures to securing an intravenous line for intravenous infusion, taking blood samples, and data acquisition,” “Inputting medical records (electronic medical record entry),” “Medical clerical work (preparation of medical certificates and other documents, such as patient appointments),” and “Transporting and restocking supplies in the hospital and transporting patients to and from laboratories.” Three responses were used for each item: “I can always share” (considered as “Always”), “I can sometimes share,” and “I cannot share at all” (considered as “Not always”). The following four items were included in the desired future work style: Decrease in overtime work hours per week, number of day-off duty shifts per month, number of night duty shifts per month, and number of on-calls per month. The respondents were asked to answer “yes” if their current working style was applicable and “no” if not. 2.5 Statistical analysis Total working hours were classified into <60, 60–80, and ≥80 h/week. The 60 h/week and 80 h/week working hours are equivalent to 1,000 and 2,000 h, respectively, as annual overtime levels, given that the legal working hours stipulated by the Japanese Labor Standards Act is 40 h/week. In Japan, the Labor Standards Act will be revised to apply overtime regulations to physicians starting in the fiscal year (FY) 2024. The overtime limits for resident doctors and physicians working in emergency departments are 1,860 h (equivalent to an 80-h work/week) and 960 h (equivalent to a 60-h work/week), respectively . This category can be used to evaluate the portions of the overtime regulations that need to be addressed. First, the number of pediatricians and OB/GYNs working ≥60 h/week and ≥80 h/week was calculated based on age group. Second, we calculated the mean and standard deviation (SD) of their age by sex, weekly working hours, number of night and day-off duties, number of medical institutions being worked at concurrently, and number of annual paid leave days. Sex differences in the means of those continuous variables were analyzed using Student’s t-test. Third, Pearson’s Correlation Coefficient was used to evaluate the relationship between the weekly working hours and other variables. 
Subsequently, we analyzed the comparative effects on weekly working hours using hierarchical regression. In Model 1, the regression model was populated with working at night and day-off duties as independent variables. In Model 2, the number of medical institutions being worked at concurrently was added to the independent variable in Model 1. In Model 3, the number of annual paid leave days was added to the independent variable in Model 2. Adjusted R-squared values were calculated to check the degree of deviation of each model. The multicollinearity of independent variables was examined using the variation inflation factor (VIF). Independent variables indicating 10 ≥VIF were assumed to be multicollinear. The VIF of all independent variables was less than 1.4. Forth, the chi-square test was used to analyze the differences in day and nighttime working status and work-sharing status between pediatricians and OB/GYNs who worked long hours (≥60 h/week or ≥80 h/week). Finally, we analyzed the differences in the desired working environment between pediatricians and OB/GYNs using the chi-square test, dividing the working hours into <60 h/week, ≥60 h/week, <80 h/week, and ≥80 h/week. The statistical tests used are listed in the legend of Tables and Figures. Statistical tests were based on two-sided probabilities, and a p -value <0.05 was considered significant. All statistical analyses were performed using IBM SPSS Statistics version 28.0 for Windows (IBM; Armonk, NY, USA). This study was conducted in accordance with the Declaration of Helsinki and National Ethical Guidelines. Written informed consent was obtained from all the participants. The study protocol was approved by the Ethics Committee of Tokushima University Hospital (approval number: 4077, approval date: 27 September 2021, reference number of ethics committee: 11000161). This study aimed to determine the actual working conditions, including working hours, and desired future work styles of hospital pediatricians and OB/GYNs in Tokushima Prefecture. This cross-sectional study was conducted in the Tokushima Prefecture, Japan. This study included pediatricians and OB/GYNs working as full staff in 14 hospitals in Tokushima Prefecture, Japan. The Ministry of Health, Labor, and Welfare’s study of physicians, dentists, and pharmacists conducted every 2 years reported that in 2020, 62 pediatricians and 57 OB/GYNs were working in hospitals in the Tokushima Prefecture. A letter explaining the study, self-administered questionnaire, and return envelope were mailed to each physician supervising pediatrics and obstetrics/gynecology departments at 14 hospitals. The supervising physicians distributed these forms to physicians working in the respective hospital departments. The cover page of the study form stated that participation is voluntary and that no personally identifiable data will be provided to the medical institutions where they work. Moreover, the study form included a box to confirm participants’ informed consent. An identification number was assigned to each hospital, and a participant’s name was not required. Each physician who gave consent completed the questionnaire and returned it in a sealed envelope. No hospital’s administrative office permission was needed to conduct this survey. Between 1 October 2021 and 31 January 2022, 96 participants were included in the analysis. The participants’ characteristics are listed in . 
The participants' characteristics are presented in ( n = 96). The number of pediatricians and OB/GYNs working ≥60 h/week is shown in . shows the number of pediatricians and OB/GYNs working ≥80 h/week. The number of pediatricians and OB/GYNs working ≥60 h/week was the largest, at 12, in the 30–39 y.o. and 50–59 y.o. groups. The breakdown was six pediatricians and six obstetricians in the 30–39 y.o. group, and five pediatricians and seven obstetricians in the 50–59 y.o. group. The number of pediatricians and OB/GYNs working ≥80 h/week was the largest, at four, in the 50–59 y.o. group. The participants' characteristics by gender are presented in ( n = 96). The mean and SD of the age for males and females were 49.5 (13.3) and 40.8 (7.7) years, respectively; the mean age of men was significantly higher than that of women ( p <0.001). Men worked longer hours than women; however, the difference was not significant ( p = 0.115). The mean and SD of the number of night and day-off duties (per month) were 6.5 (4.3) times for men and 4.8 (3.9) times for women, respectively; this frequency was significantly higher for males than for females ( p = 0.042). The mean and SD of the number of medical institutions concurrently being worked at were 2.1 (1.6) for males and 2.0 (1.5) for females, respectively. Males worked at more medical institutions concurrently than females; however, the difference was not significant ( p = 0.235). The mean and SD of the number of annual paid leave days (last year) were 6.4 (5.4) for men and 7.5 (5.1) for women, respectively. Women had more annual paid leave days than men; however, the difference was not significant ( p = 0.338). Participants with missing data were excluded from the analysis. For the calculation of the p values, Student's t-test was used.
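The sex comparisons summarized above rest on Student's t-tests for each continuous variable. A minimal sketch, assuming a DataFrame with hypothetical sex and outcome columns, is:

```python
# Sketch (assumed column names, not the study's SPSS syntax) of the sex comparisons:
# Student's t-test per continuous variable, after dropping participants with missing data.
import pandas as pd
from scipy.stats import ttest_ind

def compare_by_sex(df: pd.DataFrame,
                   variables=("age", "hours", "duties", "n_institutions", "paid_leave_days")):
    rows = {}
    for var in variables:
        sub = df[["sex", var]].dropna()
        male = sub.loc[sub["sex"] == "male", var]
        female = sub.loc[sub["sex"] == "female", var]
        t, p = ttest_ind(male, female, equal_var=True)  # classic Student's t-test (equal variances)
        rows[var] = {"mean_male": male.mean(), "mean_female": female.mean(), "t": t, "p": p}
    return pd.DataFrame(rows).T
```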
Relationships between the working hours and other variables are presented in . Working hours (per week) correlated positively with the number of night and day-off duties (r = 0.56, p<0.001) and the number of medical institutions being worked at concurrently (r = 0.46, p<0.001). Working hours (per week) were negatively correlated with the number of annual paid leave days (r = -0.30, p = 0.004). Furthermore, the number of night and day-off duties correlated positively with the number of medical institutions being worked at concurrently (r = 0.39, p<0.001). Working hours (per week) and age were not significantly associated (r = -0.14, p = 0.180). Participants with missing data were excluded from the analysis. For the calculation of the linear correlation coefficients and p values, Pearson correlation analysis was used. Associations between the working hours and other variables are presented in . The hierarchical regression analysis showed that weekly working hours were significantly associated with working at night and day-off duties (β = 0.42, p<0.001) and the number of medical institutions being worked at concurrently (β = 0.26, p = 0.011). Working at night and day-off duties had a greater impact on weekly working hours than the number of medical institutions being worked at concurrently. Participants with missing data were excluded from the analysis. For the calculation of the standardized regression coefficients and p values, hierarchical regression analysis was used to predict weekly working hours. Among the day shift work arrangements, the percentage of physicians working ≥60 h/week was lowest under the "multiple attending physician system" (34.0%, p = 0.022) . Among the night and holiday work arrangements, the "on-call system" had the lowest percentage of physicians working ≥60 h/week (25.0%, p = 0.012) . The percentage of physicians working ≥60 h/week was significantly lower among those who could always share the measurement of vital signs, the inputting of medical records, and the preparation of documents (certificates) with nonphysicians than among those who could not always do so (37.8% vs. 71.4%, p = 0.019; 29.5% vs. 53.8%, p = 0.016; and 31.4% vs. 55.6%, p = 0.017, respectively) . The percentage of physicians working ≥80 h/week was significantly lower among those who could always share the transport of patients than among those who could not always do so (9.8% vs. 28.6%, p = 0.043) . The percentage of physicians who wanted to decrease the number of night duties, the number of day-off duties, and overtime work increased significantly as the number of hours worked per week (<60 h/week, ≥60 to <80 h/week, or ≥80 h/week) increased (10.9% vs. 48.3% vs. 66.7%, p <0.001; 10.9% vs. 17.2% vs. 50.0%, p = 0.006; and 25.5% vs. 48.3% vs. 75.0%, p = 0.003, respectively) . Our study revealed the working hours of hospital pediatricians and OB/GYNs in Tokushima Prefecture, Japan, the factors associated with working hours, and the characteristics of pediatricians and OB/GYNs who work long hours. Approximately 80% of the pediatricians and OB/GYNs working in Tokushima Prefecture, Japan, participated in this study. This study is representative of Tokushima Prefecture and not of Japan as a whole; therefore, its results should be used as a guide for conducting further, larger studies on pediatricians and OB/GYNs. Maternal and perinatal mortality rates in Japan are among the lowest worldwide . However, the uneven distribution of pediatricians and OB/GYNs is becoming increasingly apparent. To continuously maintain local medical care systems, including pediatric emergency and perinatal/neonatal care, the results of this study should be used to improve pediatricians' and OB/GYNs' working environments.
Working hours In this study, approximately 40% and 10% of pediatricians and OB/GYNs worked for ≥60 h/week and ≥80 h/week, respectively . Reports on physicians’ working hours vary according to the country where the study was conducted and the categories of physicians surveyed (for example, specialties and residents). However, these results were consistent with those of a large-scale study conducted in Japan before the coronavirus disease 2019 (COVID-19) pandemic . In a study conducted in Japan during the COVID-19 pandemic, 51.7% and 14.4% of pediatricians worked for ≥60 h /week and ≥80 h/week, respectively . In another study conducted in Japan after the COVID-19 pandemic, 84% and 47% of Japanese OB/GYNs worked ≥60 h/week and ≥80 h/week, respectively . Compared with previous studies in Japan during or after the COVID-19 pandemic, pediatricians and OB/GYNs worked shorter hours, which could be due to the recent decline in the birth rate in Tokushima Prefecture compared with that of Japan (Japan: 7.0 vs. Tokushima: 6.3 per 1,000 population, 2019) . Japan’s total fertility rate has recently remained flat, and many regions of Japan are expected to continue to experience a declining birthrate . Discussions on the required number of pediatricians and OB/GYNs based on projections of future patients and the number of births are needed. The COVID-19 pandemic has not necessarily increased working hours . A decrease in the number of pediatric emergency visits to hospitals during the pandemic has been reported . We believe the decline in pediatric and OB/GYN patient numbers may be owing to a decrease in infectious diseases or refrain from visiting clinics owing to the fear of infection. Moreover, the annual number of births in Japan was 840,835 in the pre-pandemic (2019) which decreased to 811,622 and 770,759 during the pandemic (2020 and 2021) . Our study was conducted during the COVID-19 pandemic, and it is possible that fewer labor hours were reported than before. Therefore, checking whether the number of patients and births after the COVID-19 pandemic has recovered to pre-pandemic levels is essential. Pediatricians and OB/GYNs in their 50s worked ≥60 h/week and 80h/week the most , with the latter more than the former . A large survey of physicians in all medical specialties in Japan reported that the adjusted odds ratio for working long hours was significantly lower for those aged ≥40 compared than those aged <30 . We believe certain reasons exist for the long work hours specific to obstetricians and gynecologists. Ishikawa suggested that Japan has increased the management responsibility for middle-aged OB/GYNs in addition to excessive expectations from the public to maintain high-quality medical care . Merlier et al. suggested that French OB/GYNs are at high risk of burnout because of the highly demanding nature of the profession that requires continuous (24-h a day, 7 days a week) care services . Additionally, OB/GYNs are exposed to litigation risks . Therefore, reducing the burden on OB/GYNs, particularly middle-aged OB/GYNs, is essential. In Japan, the maximum overtime hours in a year for resident doctors and physicians working in emergency departments will be limited to 1,860 h (equivalent to an 80-h work/week) and 960 h (equivalent to a 60-h work/week) for others, based on the Labor Standards Act starting in FY2024 . 
Assuming legal working hours of 40 h/week and 50 weeks/year, the 60-h and 80-h work/week corresponds to 1,000 h/year and 2,000 h/year of overtime excluding legal working hours, respectively. Pediatricians and OB/GYNs who work ≥80 h/week must work <80 h/week at the start of FY2024. Until FY2024, the law in Japan had not set uniform standards for physician’s overtime hours. The items identified in this study (factors associated with long working hours and how to reduce them) should be used to improve hospital pediatricians and OB/GYNs’ working environments. Long working hours In this study, weekly hours of work by pediatricians and OB/GYNs were significantly associated with working at night and day-off duties and the number of medical institutions being worked at concurrently . Moreover, the association was more related to working at night and day-off duties than the number of medical institutions being worked at concurrently . Many physicians who work long hours want to reduce the number of days of night and day-off duties . We suggest a shortage of pediatricians and OB/GYNs to cover night and day-off duties in hospitals due to the uneven distribution of pediatricians and OB/GYNs and working hours, and the number of night and day-off duties has increased to compensate for it. Therefore, it is necessary to strengthen the recruitment and retention of pediatricians and OB/GYNs at medical institutions in the future. However, previous studies have reported that many OB/GYNs perceive their salaries as low because of the nature of their work . The average annual salary paid to hospital physicians in Japan by their hospitals is lower than that of medical practitioners . An independent association was observed between long working hours and high annual income for pediatricians ; hence, due to financial circumstances, physicians working at the hospital may possibly be working at other medical institutions. From FY2024, the maximum annual working hour limit for physicians will be introduced . The shortage of pediatricians and OB/GYNs in hospitals may make maintaining local medical care systems difficult. From a long-term perspective, in addition to increasing the number of pediatric and OB/GYN residents, consolidating hospitals that provide pediatric emergency and perinatal/neonatal care may be necessary. Efforts to reduce long working hours In this study, working hours were significantly shorter in the “multiple attending physician system” than in the “primary attending physician system,” and in the “on-call system” than in the “shift work system” ( p <0.05) . We suggest that increasing the number of pediatricians and OB/GYNs per medical institution is necessary to achieve this working status. However, in previous studies, a higher number of on-call schedules per month resulted in a higher percentage of burnout, whereas a higher number of on-call physicians resulted in a lower percentage of burnout . Hence, we believe that the fundamental solution to this problem is increasing the number of physicians per medical institution. In 2012, the World Health Organization recommended that task-shifting from physicians to nonphysicians is effective in protecting maternal and newborn health . The Japanese Health Ministry recommends task-shifting and task-sharing . For example, some tasks have been shifted from physicians to non-physicians, such as clinical radiologists, clinical laboratory technicians, and clinical engineers. 
Work-style reforms for hospital physicians, such as the physician team system and cross-functional teams of hospital staff (eg, physicians, pharmacists, and nurses), are being promoted. In a study of pediatricians and OB/GYNs in Japan regarding task shifts, approximately 60% of pediatricians and 50% of OB/GYNs favored task shifts. In the same study, pediatricians and OB/GYNs indicated that they could reduce their working hours by approximately 2 h/day. In contrast, some participants objected to the promotion of task shifts for specific tasks of pediatrics and OB/GYN, such as “Venous blood sampling (newborn/infant)” and “Fetal echogram at prenatal check-up.” In this study, the proportion of physicians who worked longer hours (≥60 h/week or ≥80 h/week) may have declined significantly with task shifting ( p <0.05) . Notably, the task-shift duties surveyed in this study are not among the items judged objectionable in the previous studies . Therefore, task shifts for pediatricians and OB/GYNs are essential to improve their working environment. Desired working status Our study found that pediatricians and OB/GYNs at hospitals with long working hours (over 80 h/week) were less willing to work long hours and more often wished to decrease their night and day-off duties than those working fewer hours . Particularly, pediatricians working >60 h/week desired a decrease in night duties. Night duties not only adversely affect physicians’ quality of life but are associated with short sleep duration and sleep disorders , which are associated with burnout among physicians . Additionally, long working hours without breaks threaten the safety of medical care . To reiterate, the fundamental solutions would require more young physicians in pediatrics and OB/GYN at hospitals and the consolidation of hospitals that provide pediatric emergency and perinatal/neonatal care. Working hours and environment may be related to physicians’ careers and job satisfaction . High career and job satisfaction are negatively associated with burnout . Additionally, high job satisfaction may keep physicians in the hospital. In addition to actively recruiting new physicians at the hospital level, efforts must be made to reduce the burden on physicians. A virtuous cycle must be created in which physicians retain their hospital positions. Therefore, a large-scale study on the details of the desired working status should be conducted in the future. Limitations This study had some limitations. First, the sample size was small because it targeted pediatricians and OB/GYNs working at hospitals in Tokushima Prefecture, Japan. Because the analysis pooled all professionals involved in perinatal and neonatal care, specific trends by hospital facility and department are unclear. Second, this was a cross-sectional study; therefore, causal relationships were unclear. Third, selection and recall bias are possible in this study. Approximately 20% of those surveyed did not respond, and nonrespondents may have included particularly busy pediatricians and OB/GYNs. The number of working hours per week was based on self-reported data. However, this is a common method used in epidemiological studies of physicians’ working hours because the responses must be simple. Fourth, some items reportedly related to long working hours were not included in the questionnaire. These included the type of hospital where participants worked, details of their job, salary, and family and living arrangements.
Particularly, we did not ask the OB/GYNs about combined obstetric and surgical activities. Previous studies have reported that approximately 60% of the physicians did not combine the activities . Additionally, some reports suggest a relationship between gender roles, child-rearing, and working hours . In the future, a larger-scale study that includes these items, while reducing the burden on respondents, should be considered.
This study revealed that approximately 40% and 10% of hospital pediatricians and OB/GYNs in Tokushima, Japan, work ≥60 h/week and ≥80 h/week, respectively. Their weekly working hours were associated with night and day-off duties and with the number of medical institutions worked at concurrently. Physicians who worked long hours identified issues with sharing work with medical or non-medical workers and desired a reduction in the number of night and day-off duties and in their working hours. Our findings provide insights into improving the working environments of pediatricians and OB/GYNs. Hence, conducting a detailed and large-scale study of the working environments of pediatricians and OB/GYNs in the future is necessary. |
Instagram as a Tool to Improve Human Histology Learning in Medical Education: Descriptive Study | 43df49e6-ec8a-4708-b8b7-3d3a4cf474c5 | 11888019 | Anatomy[mh] | Social media platforms are web-based technologies particularly suited to facilitate the exchange of ideas through collaboration, interaction, and discussion. The accessibility and low cost of internet access, together with the high number of users of these platforms, make social media one of the easiest and most effective ways to disseminate information. In fact, 4.65 billion people, equivalent to 58.4% of the world’s population, use social media . In addition, most current medical students are far more knowledgeable and experienced with emerging technologies than preceding generations. Unlike traditional media (journals or television), social media emphasizes interactivity, motivation through social connections, and immediacy . In this sense, the “social constructivism theory” states that interaction and socialization may help students learn and construct their knowledge and personal learning processes, supporting the use of social media for educational activities as a different tool for teaching and learning . For all these reasons, social media platforms have progressively been incorporated into health care and medical education . Technological advances have enabled rapid dissemination of medical updates through social networks such as Facebook, X (previously known as Twitter), or Instagram. Thus, students often have access to significant amounts of information, including content taught during traditional classes. Nevertheless, this information has not always been rigorously verified or is outdated, representing a formative disadvantage for medical students. Moreover, there is a lack of engagement and even dropout from classes because traditional education methodology is considered by the student body to be boring, unnecessary, or repetitive. Therefore, the faculty must adapt to meet their specific needs, changing traditional teaching styles and implementing new e-learning technologies . Recently, the COVID-19 pandemic forced teaching staff to move further into a virtual education environment and highlighted the importance of communication between educators and learners through social media platforms. The rapid and efficient dissemination of information during the pandemic illustrated the significant influence of social media in the dissemination of medical literature and knowledge, not only among health care professionals but also among the student body . For instance, Chan et al demonstrated the benefits of using tools such as infographics posted on social media platforms (such as X and WeChat) to educate frontline health care workers about respiratory tract management and infection control in the setting of COVID-19. Thus, the pandemic prompted a paradigm shift in learning for students and medical residents by using different platforms (eg, YouTube, Zoom, Microsoft Teams) as e-learning tools under the new circumstances. Although face-to-face teaching is possible and desirable today, the use of social networks as educational instruments must continue with apps such as Instagram and aim to share image-based educational content to complement the classes. Histology has long been an integral part of the medical curriculum and continues to provide key information about biological tissues, physiology, and disease; it is therefore highly valued in clinical medicine and research. 
Furthermore, histopathology is a fundamental tool for diagnosis and prognosis. In addition, a thorough knowledge of histology is necessary for the surgical field and in general practice. However, histology and its nomenclature can be difficult for novice medical students to understand, and consequently, it is often perceived as a secondary subject without clinical relevance . From a pedagogical point of view, one of the main goals of histology courses is to ensure that students acquire the competencies necessary to understand histophysiology. For example, histology requires students to develop pattern recognition skills. Specifically, they must be able to identify what they are observing based on specific histologic features. Consequently, histology courses commonly include laboratory practices for students to train and develop these abilities. In this context, the study of histology through digital imaging might be a relevant alternative for developing these competencies. Instagram is a social networking service owned by Meta Platforms Inc that was launched in October 2010. Instagram allows photo and video sharing accompanied by text. The information can be shared either publicly or privately. Followers can archive shared posts, and the account’s owner can track the number of people reached and give feedback to their followers. The literature shows that this social network is being used for educational purposes in medical schools, predominantly in imaging-related subjects such as radiology , ophthalmology , dermatology , anatomy , fertility , pathology , plastic surgery , dentistry , and (with very few proposals) in histology . Understanding how students interact with these novel social media–based teaching environments and their approaches during e-learning processes is a matter of high relevance . On the other hand, there is a lack of evidence on how the use of social networks impacts the learning and follow-up of Spanish medical students in the first years of their training in the field of histology. In the first years, the curricula of a Spanish medical degree include a basic thematic area with fundamental core subjects that provide the essential knowledge for the subsequent study of pathological alterations. Among these subjects, some necessarily require the use of images, such as anatomy, cytology, histology, and microbiology. With this in mind, an educational experience was carried out using the social network Instagram to make the subject more attractive to the students of the official degree of Medicine at the University of Malaga in Spain during the 2022-2023 academic year. Our main objective was to test whether the use of Instagram might facilitate knowledge acquisition and increase engagement with histology, leading to a positive impact on students’ grades. Additionally, we aimed to elucidate which type of visual material was most useful for medical students. Finally, we determined students’ perceptions of the integration of this tool in medical education.
Content of Histology in the Degree of Medicine at the University of Malaga According to the syllabus for the degree in Medicine at the University of Malaga, histology is divided into 2 subjects (Human Histology 1 and 2) that are taught during the first and second years, respectively. The different didactic content is distributed sequentially, progressively increasing the theoretical difficulty . First-year students learn general histology along with some special histology topics (eg, the immune system). The remaining systems and organs are studied during the second school year in the subject Human Histology 2. Some of the potential skills to be developed during these courses are knowledge about the architecture, morphology, and function of the different tissues or systems; recognizing the morphology and structure of tissues by microscopy and imaging techniques; and handling basic laboratory equipment and methodology. In addition, our curricula include the acquisition of some transversal competencies such as the capacity for analysis and synthesis, problem-solving, and critical reasoning, together with other abilities and skills (autonomous work, information management, and oral or written communication skills). Sample Size This study was carried out with 167 students enrolled in the subject Human Histology 2 in the degree in Medicine at the University of Malaga during the 2022-2023 academic year. The final examination was taken by most of the students (n=153), of whom 143 participated until the end of the Instagram experience. Thus, only 10 (6.5%) of the 153 involved students did not follow our account. Design of the Instagram Profile After downloading the free app on a smartphone, a private Instagram account (username: @histologiauma) was created for the subject Human Histology 2 at the University of Malaga. The Instagram profile was linked to an institutional email address created to receive questions and comments from the student body of this subject. Students were notified of the availability of this account and were informed about the procedure to participate. Specifically, they had to register using their real first and last names. Once we had confirmed that they were enrolled in the subject, the students were accepted as followers of @histologiauma. Virtual Microscope Images The images published in @histologiauma belong to the image bank of the Histology Unit of the Department of Human Physiology, Human Histology, Anatomical Pathology and Physical-Sports Education of the Medical School at the University of Malaga. During the COVID-19 pandemic, we introduced a highly interactive, web-based digital microscope system to view histological images during online practical lessons, either from classroom or personal computers. This virtual microscope is currently based on the digitization of 66 slides, providing the element of real-time dynamic microscopy and offering students a truly innovative experience at exceptionally high resolution. Interestingly, this virtual microscope offers the possibility to capture specific tissue areas and use these pictures to formulate specific questions. Account Content Feed Two software applications were used to design the images: Canva and Microsoft Office PowerPoint. The free basic mode of Canva offers access to thousands of templates and 1 million free photos.
Both applications allow drag-and-drop operations familiar to both average users and design professionals and feature templates, photo filters, images, icons, and shapes useful for customizing histological images (eg, including different shapes to highlight structures or cells within a tissue, adding numbers or letters). During a 3-month trial period, 35 posts were published, of which 5 were announcements about the account rules. The general process for uploading new content to the @histologiauma account is summarized in . Ethical Considerations The repository of digital images is composed of scanned slides with anonymized tissue remnants from Virgen de la Victoria University Hospital, whose patients provided signed informed consent for educational purposes. The account @histologiauma was created as a private profile to be exclusively accessed by those second-year students of the degree in Medicine at the University of Malaga who voluntarily requested to participate. Images displaying captures from @histologiauma have been edited to make students’ profiles unidentifiable. Moreover, all the surveys were anonymously filled out by students. The manuscript is a retrospective case report that does not require ethics committee approval at our institution since no demographic or clinical data from patients were used. Type of Questions Posted on @histologiauma The following sections were included in the Instagram platform for human histology education. Image-Based Multiple-Choice Questions There were 13 posts with image-based multiple-choice questions . Histological images of several organs studied during the academic year were posted. Different structures or cells were highlighted with arrows or other shapes (eg, stars, circles, asterisks). Multiple-choice questions with a single correct answer were posted, and 4 options were marked with labels A, B, C, and D in each question. For example, shows a post that asked students to select the incorrect option from the following answer choices: “A) Organ: cerebellum; Large yellow arrow: Pia mater; Red circle: Cerebellar folia; Green star: white matter; Red star: white matter; Orange star: granular layer; Blue star: molecular layer; Small yellow arrows: Purkinje cell layer. Example of pathology: Cerebellar syndrome, nystagmus as a quick and involuntary eye movement is included. B) Organ: cerebellum; Large yellow arrow: Dura mater; red circle: Cerebellar folia; Green star: gray matter; Red star: white matter; Orange star: granular layer; Blue star: molecular layer; Small yellow arrows: Purkinje cell layer. Example of pathology: Cerebellar syndrome, ataxia as a problem to speak is included. C) Organ: cerebellum; Large yellow arrow: Pia mater; red circle: Cerebellar folia; Green star: white matter; Red star: white matter; Orange star: granular layer; Blue Star: molecular layer; Small yellow arrows: Purkinje cell layer. Example of pathology: Cerebellar syndrome, ataxia or incoordination of movements is included. D) Organ: cerebellum; Large yellow arrow: Pia mater; red circle: Cerebellar folia; Green star: white matter; Red star: white matter; Orange Star: granular layer; Blue star: molecular layer; Small yellow arrows: Purkinje cell layer. Example of pathology: Cerebellar syndrome patients develop dysarthria (difficulty in speaking).” The correct answer is B. No negative scores were given in the case of wrong answers. All questions had a single correct answer.
Each student provided their answer, including a brief justification in the form of a private message on Instagram. They were then notified about their success or encouraged to try again in case of failure. The answers were made public 5 days later in a comment, accompanied by a summary of the most common mistakes. Descriptions or Questions Associated With Histological Images This section, consisting of 10 posts, showed histological images pointing out different structures and components to be identified by the students. For example, shows a post that included the following questions: “1) Identify the organ shown in the image. Is it a tubular or a parenchymatous organ? 2) What is the green star (A) pointing at? And the yellow star (B)? And the blue arrow?” Occasionally, comparisons between pathological and healthy tissues were posted, along with an introduction to clinical medicine. This section was conceived in accordance with the curricular competency entitled “From Histology to Medicine,” which aims to highlight the clinical aspects of human histology. The correct answer was published 5 days later as a comment on the post, and feedback was given to the students, as explained for the multiple-choice questions. Didactic Schemes This section included 7 posts based on student requests for visual or explanatory diagrams of the content they found most difficult. Teachers then prepared specific outlines based on these requests, avoiding the inclusion of new content. An example is shown in . Diagrams were created using free-design and educational software, such as PowerPoint or Canva and stored in a shared Google Drive folder. The link to access the content was posted on the Instagram account and made available for 1 week. The content of these diagrams was derived from the theoretical material already provided to the students, as they were conceived as a complementary tool to the study. Students were also encouraged to make their own schemes to learn how to summarize concepts. “Do It Yourself” Section During the practical classes, students were encouraged to take images of histological slides through the eyepiece of the microscopes with their own smartphones. Later, they posted the images for 24 hours in the form of a story on @histologiauma accompanied by a specific question to be solved by their classmates. In total, 25 images were shared as stories, and an example is shown in . Teaching Commitment The teaching staff played a crucial role in providing feedback to the students, notifying them about their successes or mistakes through private messages and designing diagrams. Overall, 1 hour of work was required daily during the experience. Rating The activity was conceived as a voluntary pilot study. Students who actively participated could add a maximum of up to 0.50 points to their final mark, regardless of whether their interventions were successful. Thus, students earned points in proportion to their level of participation. A score of 10 was assigned to students who answered 100% of the questions. The student body was organized in 4 groups according to the rewards received on @histologiauma (group 1=0-4.4 points; group 2=4.5-6.5 points; group 3=6.5-8.5 points; group 4=8.5-10 points). Therefore, these points served as an indicator of participation. 
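As an illustration of the scoring just described, the sketch below computes a participation score, the corresponding extra credit, and the participation group. The code is ours and purely illustrative: the function names and example numbers are hypothetical, and the handling of the overlapping group boundaries (6.5 and 8.5) is an assumption, since the original grouping was done manually by the teaching staff.

```python
# Sketch: participation score (0-10), extra credit (max 0.50), and participation group.
# Assumptions: the score is proportional to the fraction of questions answered, and the
# shared boundary values 6.5 and 8.5 are assigned to the higher group.

def participation_score(answered: int, total_questions: int) -> float:
    """0-10 score; 10 corresponds to answering 100% of the questions."""
    return 10 * answered / total_questions

def extra_credit(score: float, max_bonus: float = 0.50) -> float:
    """Bonus added to the final mark, proportional to participation."""
    return max_bonus * score / 10

def participation_group(score: float) -> int:
    """Groups used in the analysis: 1 (0-4.4), 2 (4.5-6.5), 3 (6.5-8.5), 4 (8.5-10)."""
    if score < 4.5:
        return 1
    if score < 6.5:
        return 2
    if score < 8.5:
        return 3
    return 4

score = participation_score(answered=18, total_questions=23)  # hypothetical student
print(round(score, 1), round(extra_credit(score), 2), participation_group(score))  # 7.8 0.39 3
```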
Evaluation of the Activity as an Educational Innovation To explore the influence of this innovative learning tool, data on student engagement and perceptions were collected during the lectures and at the end of the course through a final evaluation. The results were gathered throughout the academic semester with 3 anonymous surveys using a 5-point Likert scale. Each topic in the surveys covered a gradient of agreement with the statement presented (1=strongly disagree, 5=strongly agree). All questions were designed in Spanish by members of the teaching staff (AES, RSV, and DB) and later translated into English for publication. At the beginning of this educational experience, the opinions of students about the inclusion of new technologies and the implementation of social media in our medical school were assessed through a first survey (called the pre-experience survey; ). This first survey also included specific questions regarding students’ perceptions of the use of Instagram in the histology course . The second survey (middle experience; ) was conducted 2 months after the start of the project. This questionnaire focused on the general operation of the account and the students’ early perceptions of the experience. The third and final survey contained the same questions and was carried out during the last week of theory classes. General questions in the initial survey (pre-experience) were as follows: (1) The university’s educational systems are up to date and adapted to the times; (2) Most teachers use social networks as an educational resource; (3) Currently, the use of alternative educational tools is essential; (4) Bringing the basic subjects of a medical degree closer to the reality of professional practice is essential; and (5) In general, teachers are concerned about updating educational tools. Specific questions for the initial survey (pre-experience) were as follows: (1) Instagram facilitates access to educational content on histology; (2) Accessing Instagram allows me to consult the content anywhere at any time (eg, bus, train); (3) Test questions are a useful tool to review theoretical or practical content; (4) Downloading the subject outlines helps me to complete histology concepts; and (5) I expect to improve my academic grades thanks to this academic experience. Specific questions for the 2nd (mid-experience) and 3rd (end of the experience) surveys were as follows: (1) I followed the @histologiauma updates daily; (2) I answered test questions; (3) I answered questions shared in the @histologiauma stories; (4) I shared some photographs in the “Do it yourself” section; (5) I answered the image-associated questions; (6) I used @histologiauma when traveling by public transport; (7) I used @histologiauma during class exchanges; (8) The test questions were adequate; (9) The schemes were useful; (10) We received highly personalized attention; (11) Circle the most useful sections of @histologiauma: “Do it yourself,” “image-associated questions,” “multiple-choice questions,” “schemes”; and (12) I will use @histologiauma to prepare for my final exams. Statistical Analysis Raw data from the survey responses were collected and analyzed using SPSS v.24 (IBM Corp). Descriptive statistics were used to characterize the data, with a frequency study carried out for each of the variables evaluated in the different surveys. Homoscedasticity (equality of variances) and normal distribution of the data were checked. Data corresponding to students’ marks (score range 0-10) are expressed as mean (SD).
Mann-Whitney tests were conducted to compare overall grades between cohorts (grades from different academic years, and the current cohort’s grades with versus without the extra points). Differences in the degree of acquired knowledge evidenced by the final exam (global marks) across the different groups (previously categorized into groups 1-4 according to the level of participation) were assessed using 1-way ANOVA followed by a post hoc Bonferroni test. Pearson correlation coefficients were calculated between the individual scores reflecting students’ participation in @histologiauma and their final grades. An r >0 indicated a positive linear correlation between the 2 variables. Significance was set at a 95% confidence level.
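For readers who wish to reproduce this type of analysis outside SPSS, a minimal sketch is given below. It uses Python with SciPy on placeholder arrays; the variable names and numbers are hypothetical, and the Bonferroni step is implemented as Bonferroni-corrected pairwise t tests, which is one common realization of the post hoc comparison described above rather than the exact SPSS procedure used here.

```python
# Sketch of the main tests described above, run on hypothetical score arrays.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prev_cohort = rng.normal(6.5, 1.9, 158)   # placeholder: previous year's final marks
curr_cohort = rng.normal(7.1, 1.7, 153)   # placeholder: current year's final marks

# Mann-Whitney U test between cohorts (nonparametric comparison of grades).
u_stat, p_mw = stats.mannwhitneyu(prev_cohort, curr_cohort, alternative="two-sided")

# One-way ANOVA across the 4 participation groups, then Bonferroni-corrected
# pairwise t tests as the post hoc step.
groups = [rng.normal(m, 1.5, 35) for m in (6.0, 7.2, 7.4, 7.5)]  # placeholder groups 1-4
f_stat, p_anova = stats.f_oneway(*groups)
pairs = list(combinations(range(4), 2))
posthoc = {(i + 1, j + 1): min(stats.ttest_ind(groups[i], groups[j]).pvalue * len(pairs), 1.0)
           for i, j in pairs}

# Pearson correlation between participation scores and final marks.
participation = rng.uniform(0, 10, 143)   # placeholder participation scores
final_marks = 5 + 0.25 * participation + rng.normal(0, 1, 143)
r, p_corr = stats.pearsonr(participation, final_marks)

print(p_mw, p_anova, posthoc, round(r, 3), p_corr)
```

In practice, the normality and homoscedasticity checks mentioned above would precede these steps (eg, Shapiro-Wilk and Levene tests, both available in scipy.stats).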
Participant Demographics

From the very first post, 73 of 167 students enrolled in Human Histology 2 followed the account. At the end of the learning experience, there were 143 followers (143/167, 85.6%), of which 106 students had actively participated during the entire period. Analytics from these 143 students showed that 76.2% (109/143) were women and 23.8% (34/143) were men. Most (133/143, 93%) of them were 18 years to 22 years old and in their second year of the medical degree (139/143, 97.2%). There were only 4 repeaters of this subject, who were concomitantly in the third year of the degree. Full-time students represented 97.9% (140/143) of the respondents. The full demographic profile of the students is shown in .

Pre-Experience Test About Students’ Perceptions of the Use of Social Media and New Technologies in Medical School Curricula

Of the participants, 83.9% (120/143) considered that the current educational system requires a significant update. Thus, 97.9% (139/143) of them strongly believed that the use of social networks should be significantly improved. Of the students, 99% (141/143) considered that using alternative educational tools is relevant, and 77.1% (110/143) of the students agreed that the use of social media such as Instagram facilitates access to the didactic content. Thereby, 95% (133/143) of the respondents found the subject more accessible thanks to @histologiauma, and 99.5% (142/143) believed they might improve their academic marks thanks to this experience.

Survey About the Students’ Perceptions During the Experience

Multiple-choice questions (65/143, 45.7%) and image-based questions (33/143, 22.8%) were the students’ favorite sections, with 96.5% (138/143) and 99.3% (142/143) of the students, respectively, considering them highly useful for learning the subject. In contrast, the schemes and “Do it yourself” sections were the favorite sections for 12.6% (18/143) and 0.8% (1/143) of the students, respectively. The remaining 18.1% (26/143) reported a preference for combinations of the different sections (multiple-choice questions + image-based questions: 17/143, 11.8%; multiple-choice questions + schemes: 5/143, 3.9%; image-based questions + schemes: 3/143, 2.4%). No other activities were included in the experience. Additionally, 96.7% (138/143) of the students felt well supported and guided by the teaching staff throughout the experience. In addition, students showed no preference between public transport and the changeover between classes as moments for viewing the didactic content available on Instagram.

Impact of the Experience on Final Marks

The average grade obtained by the students from the Histology course during the academic year prior to the implementation of the experience (2021-2022: n=158 students) was 6.49 (SD 1.87) out of 10, whereas marks from the 2022-2023 cohort were significantly higher (mean 7.13, SD 1.68; P<.002; n=153), regardless of the extra points for participating in the experience. Once the earned points were included, the final outcome was not significantly different (2022/2023 without extra points: mean 7.29, SD 1.70; 2022/2023 with extra points: mean 7.13, SD 1.68; P=.03). Furthermore, the mean final grade from the 4 previous academic courses showed homogeneity in terms of having lower results (mean 6.12, SD 0.27; n=628 students) in comparison to our cohort. Overall, our data support that the use of social media produced a positive impact on students’ performance, even without considering the points for participating in @histologiauma.

Interestingly, a positive linear correlation between individual participation scores and final marks (not including the extra reward points) was found (r=0.439, P<.001). Moreover, the ANOVA showed significant differences between students’ marks according to their degree of participation (P<.001; ). There was a trend of higher ratings according to the level of participation. The Bonferroni test showed that group 1 (the least engaged group with 0-4.4 points) achieved significantly lower global mean scores than the other 3 groups (all P<.001). Finally, there were no significant differences among groups 2 to 4 (all P=.99).
Background

Histology is one of the first morphological disciplines faced by medical students. Since it is necessary to integrate basic knowledge from other fields (eg, anatomy, cytology, biology, biochemistry) with spatial awareness, histology is perceived as a difficult subject by most learners. Moreover, students consider histology irrelevant to their board examination (ie, the “Spanish Specialized Health Training examination”) and even to their future clinical practice. Most current medical students use social networks daily and demand considerable effort from educators to make the subjects more attractive and dynamic. Creating a social media account is a free educational option that enables access to information and allows users to easily connect with others. Thus, in this work, we analyzed the impact of using a specific Instagram account (@histologiauma) as a teaching resource during a histology course (2022-2023).

Principal Findings and Implications

Overall, our data demonstrate that medical students who followed and interacted with @histologiauma improved their exam scores compared with those who did not. In fact, a complete lack or a low level of participation generated significant differences in comparison with students who actively engaged with the activity. Most importantly, the enhancement of final grades compared with previous cohorts was not a direct consequence of the extra points awarded to the participants. Thus, improved test performance may serve as indirect and tangible evidence of better long-term knowledge acquisition. These results are supported by the prior opinion of the majority of our students about the positive impact of this experience on academic results. In the first instance, the pre-experience survey already showed that most of our students believed that social media is rarely used in educational contexts and considered that it may be relevant to include social media platforms as teaching tools, not only to increase accessibility to the content but also to improve their marks. In fact, the results demonstrated a positive disposition toward this innovative approach, since 99.5% of the participants believed they could improve their academic grades thanks to this experience even before participating in it. Research on the strength or quality of motivation as a predictor of academic success has yielded mixed findings. In this work, higher engagement and interaction with the content through the proposed interactive activities may have helped in the learning process, which was later reflected in the scores. Indeed, motivation is a determining factor not only for medical students but for all students in developing sophisticated and successful learning strategies. A study on small group learning found that increased knowledge and understanding of subject matter increase students’ motivation for studying and interest in the course content. The “social constructivist theory” states that socialization can also help students during their personal learning processes. In this sense, social media facilitates active interaction and collaboration by enabling instant communication and motivation. Furthermore, our Instagram activities also served as additional virtual tests. Testing is no longer considered only a tool for evaluation but also for learning.

Thus, using Instagram for educational purposes incorporates not only these phenomena in the process (as could have been done through a virtual platform like Moodle) but also other factors that are particularly relevant for current young students: direct interaction with their classmates and immediacy, in addition to their own behavior and daily routine with smartphones and social media. We believe that all these factors increased the motivation and engagement of students with histology, leading to greater retention of the content that was finally reflected as higher scores. Unfortunately, the information available on social media platforms might not be updated or subjected to peer review; thus, it may be invalid, incorrect, or even false. Conversely, @histologiauma is a platform controlled by our group of specialized teachers, prepared to guide learners toward appropriate knowledge according to the content of the subject. Therefore, the creation of a platform adapted by the teaching staff to the curricular content is ideal not only to boost interest but also to prevent students from accessing unreliable information. Additionally, it is essential to comprehend the preferences of learners in order to create a quality digital learning environment. During the experience, the image-based questions, multiple-choice questions, and histological descriptions were considered very useful by the students. Ultimately, this knowledge may help teachers to understand the strengths and weaknesses of the subject matter as well as its impact on adherence.

Comparison With the Literature

Numerous social media accounts disseminate information about many different types of pathologies to the general public. Although our work focuses on a course within the medical degree program, it is evident that Instagram serves as an optimal and cost-effective platform for capturing attention through passive learning in the field of histology and pathology. In this sense, Nguyen et al reported that 92.5% of students visit Instagram for educational purposes. Accordingly, 97.9% of our respondents strongly believed that social networks should be implemented in higher education. As far as we know, many accounts share educational content about pathology, but very few are specifically targeted at histology or assess the impact of sharing this information on social media with medical students. Another novelty of our approach is that we used Instagram as an educational tool specifically tailored to our students, offering personalized content directly aligned with the course curriculum. Although many other studies have examined the use of social media in education, few have focused on how a targeted, image-based platform like Instagram can enhance engagement and learning outcomes in medical education, particularly in a subject highly reliant on visual materials. For instance, Essig et al from the School of Medicine at the University of North Carolina created an experience with the Instagram profile @InstaHisto in 2020, which is the most similar to our work in the existing literature. However, one of the main differences between these profiles could be summarized by the word “personalization.” Our private account was created solely and exclusively for second-year students in the medicine degree program at the University of Malaga, in contrast with their public profile. Moreover, they examined the impact of the posts based on the number of views, focusing not on student interaction but rather on the general public.

Instead, our work aimed to potentiate students’ knowledge acquisition and to increase engagement with the subject. We also intended to understand students' perceptions about this educational tool. Similarly, the work by Essig et al focused on the National Board of Medical Examiners final exams, reporting that 77% of their students found the histology content from @InstaHisto useful for passing the test. In this line, our survey data reflected a high degree of satisfaction with the utility of these virtual activities in the educational environment (96.5% for multiple-choice questions and 99.3% for image-based questions). More recently, Prabhu and Munawar evaluated 49 Instagram profiles dedicated to the dissemination and teaching of radiology, concluding that radiology teaching is a particularly good application of this image-based social media platform due to its easy accessibility and appeal to students. In this line, our data reflect that 95% of students believe that using Instagram would enhance their perception of the course and its appeal.

Limitations

In this work, grades increased after use of @histologiauma, even before adding the points awarded for students’ participation in Instagram. It is noticeable that scarce involvement led to no or low improvement, and although there was no significant difference between the most active groups (G2 to G4), we did find a trend toward higher ratings according to the level of participation. In this sense, we cannot completely rule out the influence of other factors masking the impact of this experience, including other curricular or extracurricular activities performed during the school year or personal preferences regarding social media. Nevertheless, to the best of our knowledge, standard students from the same academic course share identical academic schedules. All other activities performed during the histology course (such as problem-based learning or the section “From Histology to Medicine”) were developed during theoretical classes or practice and were mandatory. However, we are aware that every academic group is different, and repeating the experience during additional academic years would yield more reliable data. Relevant to this, the algorithm used by social media platforms like Instagram tends to favor posts from accounts with which users interact more frequently or with related content, creating information bias for users. Given this scenario, it is reasonable to interpret that students who interacted with @histologiauma above a threshold were later shown our account or similar profiles in their personal feed more often than those who barely engaged or did not participate in the voluntary activity. This interaction likely results in students passively reviewing content each time they visit the platform, which ultimately has a positive impact on their acquisition and assimilation of knowledge and therefore their final results. However, this may also minimize the differences between the most active groups. Another specific limitation of this work is that participating in these activities was not mandatory, which may have led to potential selection bias. We cannot rule out that students following the @histologiauma account were more eager to participate in additional activities than the general medical student population. However, the high participation rate (85.6% of students enrolled) notably reduced the impact of this possibility. Even so, eliminating the optional nature of this activity would have yielded clearer data.

In addition, it is possible that some of our students do not have an Instagram account because they do not find social networks attractive. Nevertheless, this was rarely the case, since we detected very few students who sat the final exam without participating in Instagram (10 of 153 students). For future editions, we intend to propose @histologiauma as an educational instrument in a public mode, encouraging users to create their own hashtags and to track the reach of the posts. On the other hand, the use of social media platforms during the educational process of histology should be recommended only as a complement to regular teaching. Evidence that social media is not a panacea was provided in a separate analysis of 131 students who were using the microblog X during class with the aim of fostering student-faculty interaction on two campuses. Although it facilitated discussion, 71% of students found it distracting. For this reason, it is important to find a balance between the usual lecture-based methodology and the inclusion of social media in higher education, not only to meet the curricular needs of students but also to ensure their engagement with their studies. For future research, the sample size may be broadened to increase the validity and reliability of our findings and to include cohorts from other courses or health science degrees such as podiatry, physiotherapy, or nursing. Moreover, specific tests about the content shown on the Instagram account could be implemented. The inclusion of a longitudinal study to track students’ performance and engagement over multiple semesters would allow better understanding of the long-term impact of Instagram-based learning. Although, in general, we detected typical mistakes of pattern recognition of histological structures, a range of accuracy-based rewards could be incorporated into the activity to avoid participation without true commitment. Finally, future experiences could explore the impact of different types of Instagram content (eg, video, live question-and-answer sessions, different quizzes).

Conclusions

Medical students consider that there is inadequate use of social networks for teaching purposes, probably due to a lack of updated methodological approaches in the context of university subjects. Compared with the conventional educational system, social media platforms have a considerable impact on both teachers and students as they offer the possibility to easily connect and collaborate. In fact, one of the main objectives of medical education is to capitalize on the engaging nature of social media tools as part of an overall strategy to use a learner-centered approach. In addition, to increase student engagement during the first year of the degree in Medicine, it is desirable to use attractive didactic methods for learning histology. In this regard, the visual nature of histology is particularly appropriate for the introduction of new image-based tools. Thus, the aim of this study was to investigate an innovative online educational approach for histology based on an Instagram account specifically designed for medical students. In this work, we showed that the use of Instagram has great potential to improve not only the knowledge but also the scores of students of human histology. Our results provide evidence that this teaching strategy boosts students’ learning motivation. In the near future, the classical practical lessons based on the physical microscope might not be enough to meet the needs of medical students. Therefore, Instagram may be considered a relevant tool for current students to achieve their curricular objectives in a more dynamic, friendly, and enjoyable way under the supervision of the faculty.
Best practice for the selection, design and implementation of UK Kidney Association guidelines: a modified Delphi consensus approach | b63b7b24-09c9-4ccc-9252-7db867faa5d9 | 11191819 | Internal Medicine[mh] | The standard for best practice in modern healthcare is based on the ever-expanding body of evidence provided by clinical trials, studies and evidence synthesis. Treatment pathways across many clinical areas are directed by the creation of clinical practice guidelines (CPGs), intended to reduce variation of care and optimise patient outcomes. The area of nephrology is no different, with a myriad of national and international guidelines all designed to help healthcare professionals, commissioners and providers of healthcare and people with kidney disease, their families and carers. However, the development of CPGs is a complex undertaking, with challenges in selecting areas of review, prioritising their importance, development methodology and implementing the uptake of recommendations into clinical practice. Many international and national organisations have their own CPG development groups and standards with examples including the WHO, the National Institute for Health and Care Excellence (NICE), the Scottish Intercollegiate Guidelines Network (SIGN) and the Australian National Health and Medical Research Council. There have also been attempts to standardise the CPG development process by the Guidelines International Network (GIN) and the Institute of Medicine (IOM). Overall, these standards are similar in that they advocate for transparency in the CPG development process, as well as the need for external peer review and stakeholder consultation. However, the exact stepwise process for each differs, with varying stages and processes. Within nephrology it has similarly been highlighted that guidelines can often lack uniformity; importantly this can have a knock-on effect with the development of robust quality metrics, as one of the key components of metric validity is discordance with the latest evidence. Only by producing guidelines that are meaningful to all stakeholders can we then drive meaningful improvements in patient-centred care. Alongside standards proposed by GIN and IOM, many development and implementation toolkits have been created to measure the strength and quality of CPGs. Internationally recognised models such as Grading of Recommendations Assessment, Development and Evaluation (GRADE) and Appraisal of Guideline Research and Evaluation (AGREE/AGREE II) assist developers in summarising evidence to provide recommendations, and to measure the strength of their recommendations in a structured way. GRADE standards are currently recommended by WHO, and used by NICE and SIGN. However, the use of ratification toolkits is not universal and up to 50% of guidelines may be considered unreliable or biased on the basis of having unclear development processes. Despite the use of more rigorous development tools, many guidelines are still underused, representing a significant guideline-implementation gap. Evidence suggests that implementation can take up to 17 years and ultimately, only 14% of guideline recommendations are translated into clinical practice. As such, some have questioned the effort, relevance and utility of these (often cumbersome) pieces of documentation. 
However, successful implementation is important: research suggests that correct utilisation of NICE guidelines for treating kidney disease has the potential to increase early patient referrals and lower long-term treatment costs. Research into implementation strategies reports similar barriers to guideline uptake, including lack of time, skills and knowledge, funding issues, complex and impractical guidelines, and resistance to change within the healthcare community. However, research does suggest that guidelines can be successfully integrated into clinical practice through the use of multi-faceted approaches to implementation, combining communication, education and practical design strategies. Limited research in nephrology indicates that educational interventions have helped to improve guideline adherence, physician competence and kidney function in diabetes patients with chronic kidney disease (CKD). A key part of CPG development and implementation is engaging stakeholders, including empowering people with long-term conditions like kidney disease to take part in joint decision-making processes. Previous barriers to this have been identified, including a lack of understanding of how to incorporate the views of those living with kidney disease into guidelines, and of how to effectively communicate with, and educate, people living with kidney disease. GIN has recently developed a toolkit to help developers engage with the public and patients and to try to overcome some of these barriers. In line with this, many now see the inclusion of patient and carer contributions as a fundamental part of the legitimacy and transparency of CPGs. It has also been suggested that patient input into CPG development will help improve the impact of guidelines and encourage their use. Due to the lack of standardised methods to develop CPGs, and the variability with which they are implemented, this research was undertaken on behalf of the UK Kidney Association (UKKA) Clinical Practice Guidelines Committee with the aim of ascertaining how pertinent topics in kidney care within the UK should be chosen, prioritised, designed and implemented. The research was designed using a modified Delphi method in order to gather consensus from healthcare practitioners (HCPs) across multiple clinical specialties in primary and secondary care who have input into the management of people with CKD, as well as people living with CKD, to ensure a breadth of stakeholder opinions was heard.

The study was conducted using a modified Delphi methodology (see ), overseen by an independent facilitator (Triducive Partners), and is reported in accordance with the ACCORD guidelines. Initially, a scoping meeting was conducted in October 2022 between the UKKA Clinical Practice Guidelines Committee and the independent facilitator to agree the aims and scope of the project and discuss potential steering group members. A multi-professional panel of experts in renal healthcare (the study authors) from across the UK was selected on account of their leadership in UK societies, clinical expertise and standing as patient representatives. The group was invited by the UKKA Clinical Practice Guidelines Committee via email. Nine individuals agreed to participate; this number was chosen to ensure the accuracy and reliability of the study by representing all stakeholders without overcomplicating the process.
During round 1, the group convened in January 2023 to discuss challenges in designing and implementing UKKA guidelines, using the nominal group technique. In this session, the panel created a list of problem areas that needed to be addressed within guideline development. The panel discussed these areas and consolidated them into a final list covering:
1. Value of guidelines to healthcare professionals.
2. Value of guidelines to kidney patients.
3. Selecting areas of focus for future UKKA guidelines.
4. Design of future UKKA guidelines.
5. Implementing future UKKA guidelines.

Following the agreement of these domains, the group discussed each area in detail and created a series of 42 draft consensus statements. These were then reviewed anonymously and independently by the panel, and the feedback was collated by the independent facilitator. Based on this feedback, eight statements were deleted, eight were edited and one new statement was added. The amended statements were then ratified independently and anonymously by the group. This process involved qualitative feedback and comprised round 2 of the process.

The finalised 35 statements provided the basis of a consensus survey, which constituted round 3 of the process. Two separate surveys were created and distributed across the UK. One survey, including all 35 statements, was sent to healthcare professionals in primary and secondary care with any involvement in treating people with kidney disease (not limited to the renal healthcare community). The survey was then streamlined to contain only the most patient-relevant statements (n=20) before being sent to patients and patient representatives. This survey was designed to collect quantitative opinion data, as is standard for the Delphi process. In both surveys, each statement was presented alongside a 4-point Likert scale (‘strongly agree’, ‘tend to agree’, ‘tend to disagree‘ and ‘strongly disagree’) to allow respondents to indicate their level of agreement. While the survey was anonymous, some demographic data were captured for further analyses (role of the respondent, location within the UK and years of experience). The minimum consensus level was set at ≥75%, a widely accepted threshold, with a further category of ‘very high agreement’ at ≥90%. Instead of aiming for a set response rate, the panel agreed minimum stopping criteria for the survey: a threshold of 400 responses (distributed between secondary care doctors, general practitioners (GPs), nurses, dietitians and other allied healthcare professionals) was set. The survey was distributed by the steering group to colleagues, professional and patient societies, and through social media. Respondents were anonymous to the steering group and did not receive incentives for participation.

Completed surveys were anonymously collated and analysed by the independent facilitator to produce an agreement score for each statement; this was calculated by adding the percentages of respondents who agreed or strongly agreed with each statement (a minimal illustration of this scoring is sketched below). This information was then evaluated and discussed by the expert panel in a second group meeting (June 2023, round 4). Analysis was undertaken by the facilitator to assess whether there were differences between respondents by role, experience or location, which was also validated by the expert panel. As the stopping criteria were fulfilled, the group used the results to select key statements from each topic. These provided the basis for draft study recommendations.
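As an illustration only, the following short Python sketch shows how an agreement score of this kind could be computed from hypothetical Likert responses and classified against the ≥75% consensus and ≥90% ‘very high agreement’ thresholds described above; the response counts are invented and the function names are not part of the study.

```python
# Minimal sketch (hypothetical data only) of the agreement scoring and
# consensus classification described in the methods above.
from collections import Counter

CONSENSUS = 75.0   # minimum consensus threshold (%)
VERY_HIGH = 90.0   # 'very high agreement' threshold (%)

def agreement_score(responses):
    """Percentage of respondents answering 'strongly agree' or 'tend to agree'."""
    counts = Counter(responses)
    agree = counts["strongly agree"] + counts["tend to agree"]
    return 100.0 * agree / len(responses)

def classify(score):
    if score >= VERY_HIGH:
        return "very high agreement"
    if score >= CONSENSUS:
        return "consensus reached"
    return "consensus not reached"

# Hypothetical responses to a single statement on the 4-point Likert scale.
statement_responses = (
    ["strongly agree"] * 250
    + ["tend to agree"] * 120
    + ["tend to disagree"] * 30
    + ["strongly disagree"] * 19
)

score = agreement_score(statement_responses)
print(f"Agreement: {score:.1f}% -> {classify(score)}")
```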
Following the meeting, these were independently and anonymously reviewed and ratified by the group.

Patient and public involvement

International guideline standards include patient and public involvement as a core principle for developing high-quality evidence-based CPGs. As such, this study was designed to include patient representation at each stage, including the steering panel, who co-designed a modified questionnaire for patient respondents and their representatives. Wide distribution of the survey (and the results) was ensured using patient and charity networks.

The questionnaire was undertaken by 419 respondents, comprising 364 HCPs and 55 patients and patient representatives. Participants came from across the UK, with representation from England, Northern Ireland, Scotland and Wales. Respondents by role are shown in . HCPs were also asked to state how long they had been in role. The majority (n=123) had over 20 years of clinical experience, followed by 11–20 years of experience (n=109). Only 55 respondents had less than 5 years of experience.

Consensus among HCP respondents was high, with 28/35 (80%) statements achieving ≥90% consensus and 4/35 (11%) reaching ≥75% but <90% consensus. Only three statements (9%) did not reach the 75% consensus threshold. Among patients and patient representatives, agreement was also high, with all statements reaching the consensus threshold. Of the statements presented to this group, 13/20 (65%) achieved ≥90% consensus and 7/20 reached ≥75% but <90% consensus.

The results were further analysed by subgroup. There were some variations in the consensus levels for statements between HCPs based on their role, years in role and region. Consensus levels were more consistent when analysed by years of professional experience. However, the only statements to consistently vary from the mean by >10% across role and region were statements 5, 6 and 26. These were also the only statements not to achieve consensus. Among the patient group, differences were noted for two statements (9 and 13); however, this variation arose from lower agreement among carers (n=3) and so may not be representative. Each of the statements and their individual consensus levels are presented in . The consensus score distribution across the 4-point Likert scale is shown in .

Value of UKKA guidelines

Across the responses from HCPs there was very high agreement on the importance and utility of guidelines (statements 1–4, 7–11, ≥86%). Responses to statement 4 (99%) also highlight that, while national guidelines are essential, there needs to be capacity for local variations in practice. This is supported by research into CPG implementation, which has found that flexibility and autonomy are key to encouraging HCPs to change their behaviour. In the past, the UKKA has produced ‘commentaries’ on guideline documents from other specialist societies (eg, Kidney Disease: Improving Global Outcomes) with specific advice on how these relate to UK practice or require specific considerations around their implementation.
Responses to statements 5 (61%) and 6 (59%) (suggesting the UKKA should not provide commentaries and that they are less useful than guidelines) did not reach the consensus threshold. This suggests UKKA commentaries on national and international guidelines, along with recommendations for UK implementation, may be an important addition. shows a central tendency bias for these statements, with the majority of responses being ‘tend to agree’ and ‘tend to disagree’. A lack of strong opinion on these statements could mean some respondents were unsure of how to respond, particularly as there was no definition provided for what constituted a UKKA ‘commentary’. However, the literature repeatedly finds that improving CPG visibility through promotion, education and short communications can help to increase their implementation. Therefore, commentaries created by the UKKA could help encourage adoption of new CPGs by simplifying and clarifying their recommendations. Further, where commentaries cover international guidelines, they can give a discussion and interpretation relevant to the UK context.

Patient responses also showed strong consensus on the importance of guidelines, particularly documents which are jargon-free and person-centred. Survey results, alongside input from the lay representative, clearly show that people with kidney disease feel empowered when they have access to resources which they can read and process in their own time outside of appointments. The availability of such person-centred guidelines will help patients to make more informed decisions and potentially lead to improved health communication between patients and their healthcare providers. The lay representative also highlighted that people with long-term conditions want reputable information, sourced from the same sites accessed by HCPs, which is reflected in statement 8 (HCPs 98%, patients 95%). A strength of the UKKA guidance is that all documents can be found on a single site and are accessible to people with CKD and HCPs. This is something that will need to continue with future guidelines to ensure these documents remain accessible.

UKKA guideline selection and design

Following on from aspects of accessibility highlighted in topics A and B, responses to statements in topics C and D emphasised a need for engagement and user-friendliness. Statements 12 (93%) and 13 (91% HCPs, 87% patients) show there is a need to include a variety of stakeholders when selecting potential topics for guidelines, a view strongly supported within the literature. This is reinforced by the agreement with statements 15–18 (≥92%), showing respondents believe a multi-professional approach should be taken to identifying and prioritising guideline topics. While it has been acknowledged that cross-specialty CPGs can be difficult to develop, a lack of consideration for comorbidities or age (as a proxy for comorbidity) limits the applicability of CPGs. When considering the design of future guidelines, statements 14 (98% HCPs, 91% patients) and 20 (88% HCPs, 84% patients) demonstrate the overarching need for guidance to be simple, equitable and non-discriminatory. On the basis of this, it is suggested that not only will the UKKA make all guidelines available, but it will also have a section of its website where all HCPs and people with CKD can suggest topics for future guidelines, as seen in recommendations by Blackwood et al. Following this, it is clear there will be a need to prioritise the development of guidelines.
This process will need to be both rigorous and transparent. Therefore, it is suggested that a ‘RAG’ (red, amber, green traffic light) system be used to standardise the process by which guideline development is prioritised. The literature states that CPGs are crucial as they provide management pathways and treatments based on evidence. Currently there is a reliance on clinical trials to provide this evidence base. However, trials may not be available, or even necessary, to back every recommendation. It has been argued that other forms of evidence should be seen as valid when compiling data for guidelines. The current research found strong support (statement 25; 92% HCPs, 80% patients) for the use of consensus-based evidence within guidelines. Banno et al have also argued for the value of consensus-based evidence, underlining the need for more Delphi studies to provide clear, documented consensus on the content of guidelines. There needs to be an element of caution here, however: consensus-based guidelines can generate inappropriately strong recommendations compared with evidence-based guidelines, so it is important to ensure appropriate alignment of the quality of evidence with the strength of recommendations. Wider use of such practice would ensure the quality of guidelines and allow for more inclusive guideline design, by assimilating patient input with clinical data and HCP-recommended treatment pathways.

Consensus with statements 22 (93%) and 23 (98%) shows that HCPs are keen to take a multi-disciplinary approach to guideline development. The need for alignment across professional guidelines is further reinforced by consensus with statement 30 (98%). As discussed, developing cross-specialty CPGs can be complex, hindered by lack of time, resources and standard CPG development methodologies. To ensure that future UKKA guidelines reflect the results of the current research, it is evident that the UKKA’s standards for guideline development will need to be updated to embed this multi-stakeholder approach. When developing future guidelines, the UKKA will need to reach out to other societies and professional bodies for input, strengthening cross-discipline ties and communication. Collaboration across disciplines must be seen as pivotal, and the focus should be on how working together can create more broadly applicable, practical guidelines by pooling knowledge and resources. We believe that this approach could be a roadmap for optimised clinical management across medical specialties in the UK.

As set out in the Climate Change Act (2008), the UK National Health Service (NHS) has made a commitment to halve greenhouse gas emissions by 2025 and reach net zero by 2050. Within the NHS, the provision of kidney care is a carbon-intensive specialty when considered in terms of the numbers of patients treated with renal replacement therapy. The strong consensus with statement 24 (92%) shows the commitment to this agenda: to pay particular attention within guidelines to carbon-reduction strategies that live up to the UKKA sustainability agenda, to meet or exceed NHS carbon net zero goals, and to realise the ambition to reduce waste to landfill or incineration by 80%.

Implementing future UKKA guidelines

The UKKA is committed to reinforcing an agile approach to implementing and updating its guidelines.
Although it is positive to see from the HCP response to statement 26 (‘guidelines are difficult to implement’; 49%) that many respondents believe guidelines are not difficult to implement, it still means that nearly half of the respondents feel that guideline implementation can be challenging. There was no difference seen in the agreement levels across experience or geographic region. While literature around guideline implementation highlights potential areas where translating recommendations into practice can fail, it is heartening to see that within nephrology negative beliefs about guideline implementation may not be the central issue. However, it will still be necessary to prepare for other potential implementation pitfalls in the future. In order to address the needs of all HCPs and patients, the use of a multi-faceted implementation approach (eg, easy-to-use and practical guidelines, combined with promotion, education and monitoring) is recommended. Consensus with statements 28 (HCPs 93%, patients 91%) and 29 (93%) shows there is a need for guidelines to be practical and of use for both HCPs and patients, particularly within consultations. A more efficient structure to guidelines, including a jargon-free summary, could help make guidance more accessible to all audiences. When taken alongside the need for simplicity, it could be suggested that the ideal guideline is delivered in two parts:
1. A central guideline designed to educate and support HCPs and people with CKD by providing focused descriptions of healthcare issues and the evidence base for treatments, alongside concise, actionable recommendations.
2. A supplementary document with technical details, which enriches the information and evidence provided in the main guideline.

Further to this, promotion of new guidelines could be encouraged by creating a calendar of guideline release dates, as supported by consensus with statement 34 (98%). This approach will keep stakeholders abreast of developments and ensure transparency and accountability in the development process.

Recommendations

Based on the levels of consensus seen within this study, the steering group were keen that the UKKA’s process for creating guidelines should be updated in response to the results of this work. As such, the steering group posed the following recommendations:
1. A more equitable approach to proposing guideline topics should be adopted, allowing input from HCPs, patients and their representatives.
2. UK commentaries on international guidelines that outline regional applicability and more focused implementation are as valued as full UK guidelines.
3. All guidance should focus on the end user, with simple and appropriate language to ensure accessibility for HCPs and people with CKD, and encourage engagement.
4. Standardised, multi-faceted implementation techniques or ‘practice points’ to maximise the uptake of recommendations into clinical practice should be developed and included.
5. Connections across disciplines should be fostered, not only to ensure a multi-disciplinary approach to guideline development but also to ensure perspectives from nephrology are considered in CPGs created by other professional bodies.
6. Guideline groups should outline strategies to address the sustainability agenda, wherever possible.

Study strengths and limitations

The large number of experienced specialists that responded to the consensus questionnaire lends weight to the validity of the recommendations proposed by the steering group.
The presence of patients and their representatives in the research, through the steering group and questionnaire respondents, increases the inclusivity and applicability of these findings. It highlights the opinions of these individuals and emphasises the need to acknowledge and act on them throughout guideline development. Responses were sought from both HCPs and patients across the UK in an attempt to reduce geographic bias. While some areas had fewer respondents (eg, Northern Ireland), overall, there was good representation across the UK. The survey was distributed by the steering group, however, data was collected and analysed anonymously by a third party, helping to limit bias. The 4-point Likert scale was used so respondents had no ‘neither agree or disagree’ option and had to form an opinion on each statement. However, as discussed, some statements did show a central tendency bias, which could have been due to the language used within the statements. As this study only undertook one round of survey with no adjustments to the statements, it is possible that some of the statements were too agreeable and did not sufficiently challenge the status quo. Further research on this in this area should refine the statements generated herein to determine any greater variance that may exist.
This research explored the views of HCPs, patients and patient representatives on the best practice for selecting, designing and implementing CPGs from the UKKA. Based on the levels of consensus seen across respondents, the steering group were able to develop a strong set of recommendations.
Successful implementation of guidelines within nephrology has been shown to improve patient outcomes and is theorised to have long-term cost-effectiveness benefits. Actioning the suggested recommendations has the potential to improve the transparency and accountability of the guideline development process within the UKKA, as well as making UKKA CPG documentation more accessible and understandable for all stakeholders. This, in the long term, can only benefit clinical practice and patient outcomes within UK kidney care.
Pregnancy Outcomes After Transvaginal Radiofrequency Ablation of Leiomyomas | a41ca299-9b87-4768-a3e7-17fe231d6c45 | 11837962 | Surgical Procedures, Operative[mh] | We conducted a retrospective review of the medical records of 226 pregnant patients after transvaginal radiofrequency ablation of leiomyomas from January 1, 2017, to February 28, 2022, in Victoria Rey Clinic. Preoperatively, all patients desired future fertility and had symptomatic FIGO type 2–5 leiomyomas with benign morphology on ultrasonography. Transvaginal radiofrequency ablation technique can be found in Appendix 1, available online at http://links.lww.com/AOG/D961 . Leiomyoma volume was measured before transvaginal radiofrequency ablation and at 6 and 12 months after treatment. A Voluson E8 ultrasound scanner was used to obtain three measurements of each leiomyoma. The volume was calculated with the formula 4/3π× a × b × c , where a , b , and c are the three measurements. The total leiomyoma volume per patient was the sum of all leiomyoma volumes. Pregnancy outcomes measured included miscarriage, preeclampsia, preterm delivery, fetal growth restriction, uterine rupture, placental abruption, placenta accreta spectrum, and mode of delivery (vaginal or cesarean). Perioperative outcomes included operative time, time to discharge, time to return to normal activities, and postoperative complications. Postoperative complications were evaluated according to a reproducible and reliable system, the Clavien–Dindo classification. The Ethical Committee of Ministry of Health of the Government of Andalusia (Comité Coordinador de Investigación Biomédica de Andalucía) approved the study. Frequencies and percentages were reported for categorical variables, and means±SDs, along with 95% CIs, were provided for normally distributed continuous variables. Continuous variables with nonnormal distribution were summarized with medians and interquartile ranges. To compare continuous variables at multiple time points, the Student t test for paired samples was applied for normally distributed data, and the Wilcoxon signed-rank test was used for nonnormally distributed data. Differences in categorical variables (eg, miscarriage and cesarean delivery rates between the assisted reproductive technology [ART] and spontaneous groups) were assessed with χ 2 or Fisher exact tests as appropriate. For correlations involving initial leiomyoma volume, time to pregnancy, and miscarriage rates, the Spearman rank correlation coefficient ( r ) was calculated to assess nonparametric relationships, with statistical significance set at P <.05. The Pearson product–moment correlation coefficient was used for parametric variables when applicable. The segmented regression test was used to evaluate cutoff points. All analyses were conducted with SAS 9.4M7. The mean±SD patient age was 37.4±4.5 years (range 28–47 years). All patients were White and had one or more FIGO type 2–5 leiomyomas. Indications for transvaginal radiofrequency ablation of leiomyomas were abnormal uterine bleeding in 41 patients (18.14%), unsuccessful ART and abnormal uterine bleeding in 151 patients (66.81%), and a history of more than three miscarriages or implantation failure after embryo transfer in 34 patients (15.04%). Patients with other known causes of infertility were excluded from the study. Euploid or donor egg embryos were transferred to 32 patients, and all of the miscarriages analyzed were found to have normal chromosomes. 
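The ellipsoid volume formula quoted in the Methods above lends itself to a brief worked sketch. The snippet below is illustrative only: it takes the formula literally as 4/3 × π × a × b × c, treating the three ultrasound measurements as semi-axes (if they were recorded as diameters, each value would first be halved), and the function names and example measurements are hypothetical rather than taken from the study data.

```python
import math

def leiomyoma_volume(a_cm, b_cm, c_cm):
    """Ellipsoid volume in mL from three measurements in cm, applying the
    formula 4/3 * pi * a * b * c as written in the Methods (1 cm^3 == 1 mL)."""
    return (4.0 / 3.0) * math.pi * a_cm * b_cm * c_cm

def total_volume(leiomyomas):
    """Total leiomyoma volume per patient = sum of the individual volumes."""
    return sum(leiomyoma_volume(a, b, c) for a, b, c in leiomyomas)

# Hypothetical patient with two leiomyomas, measured at baseline and 12 months (cm)
baseline = [(2.1, 1.8, 1.6), (1.2, 1.0, 0.9)]
follow_up_12m = [(1.6, 1.4, 1.2), (0.9, 0.8, 0.7)]
reduction = 100 * (total_volume(baseline) - total_volume(follow_up_12m)) / total_volume(baseline)
print(f"Baseline {total_volume(baseline):.1f} mL, 12-month reduction {reduction:.1f}%")
```

The per-patient percentage reduction reported at follow-up corresponds to (baseline volume minus follow-up volume) divided by baseline volume, multiplied by 100.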
The median number of leiomyomas was 2.7 (interquartile range 1.3–3.5). A single leiomyoma was ablated in 124 patients (55.8%), and two to five leiomyomas were ablated in 101 patients (44.7%). The median operative time was 12.7 minutes (interquartile range 10.5–25.3 minutes), and the median time to discharge was 145 minutes (interquartile range 120–190 minutes). Patients resumed normal activities after a median of 4.2 days (interquartile range 2.8–6.1 days) (Table ). There were no intraoperative complications. Type I postoperative complications were noted in 45 patients (19.9%). Type II occurred in eight patients (3.5%), all of whom experienced a postnecrotic leiomyoma syndrome requiring intravenous dexamethasone and antibiotics for 3 days. Type IIIa complications occurred in 27 patients (11.9%) with FIGO type 2 leiomyomas. They required hysteroscopic removal of a free intrauterine leiomyoma within 26–51 days after the ablation. There were no type IV or V complications (Table ). The median leiomyoma volume per patient before transvaginal radiofrequency ablation of leiomyomas was 52.4 mL (interquartile range 22.3–101.7 mL) and at 6 and 12 months was 26.5 mL (interquartile range 12.1–49.8 mL) and 15.8 mL (interquartile range 6.2–33.7 mL), respectively. The median percentage of leiomyoma volume reduction at 6 and 12 months was 49.4% (interquartile range 26.8–64.7%) and 69.8% (interquartile range 45.9–82.4%), respectively (Table ). At a follow-up of 3 and 6 months, 78% and 91% of the 192 patients with abnormal uterine bleeding, respectively, reported normal menstruation. Spontaneous conception was reported by 78 patients (34.5%). Pregnancy was achieved by ART in 148 patients (65.5%), in vitro fertilization in 27 patients (18.2%), and donor eggs in 121 patients (81.7%). In all in vitro fertilization cases, an euploid embryo was transferred, and all cases had previously undergone unsuccessful ART. There was a significant difference in mean patient age between the spontaneous and ART conception groups (34.5 years vs 39.8 years, respectively, P =.01). Two or more pregnancies were reported by 24 patients (10.6%). Subsequent pregnancies were spontaneous in 14 patients, spontaneous after previous ART pregnancy in three patients, and after ART in six patients. Among the patients receiving ART, 89 (60.13%) underwent two or more separate embryo transfers. The initial median volume of leiomyomas was significantly higher in the ART group compared with the spontaneous pregnancy group (68.3 mL, interquartile range 40.8–110.5 mL vs 56.5 mL, interquartile range 35.1–84.2 mL, respectively, P =.04). Although the number of leiomyomas was somewhat higher in the ART group (median 2, interquartile range 1–4) compared with the spontaneous pregnancy group (median 1, interquartile range 1–3), this difference was not statistically significant ( P =.08) (Table ). The median interval time from transvaginal radiofrequency ablation to pregnancy was 9.3 months (interquartile range 5.6–15.1 months) and was statistically longer in the ART group at 14.7 months (7.8–19.3 months) compared with the spontaneous pregnancy group at 6.3 months (4.2–11.3 months, P <.001) (Table ). A positive correlation was observed between the preoperative leiomyoma volume and the interval time to pregnancy ( r =0.73, spontaneous group r =0.66, ART group r =0.78, P <.05). This correlation was most notable when the leiomyoma volume was greater than 58.6 mL (55.1 mL in spontaneous group, 69.2 mL in ART group). 
A preoperative leiomyoma volume greater than 58.6 mL appeared as an inflection point at which the relationship between leiomyoma volume and pregnancy time showed a notable change in trend, with a stronger correlation above this cut point ( r =0.77, r =0.74 in spontaneous group, r =0.82 in the ART group) than below it ( r =0.43, r =0.41 in spontaneous group, and r =0.53 in ART group, P <.05) (Fig. ). No correlation was found between the number of leiomyomas and the time to pregnancy. The miscarriage rate was 15.9% (n=36), with a higher miscarriage rate in the ART group compared with the spontaneous pregnancy group (17.5% vs 12.8%, respectively, P <.01). A negative correlation was found between the interval time to pregnancy and the miscarriage rate in both groups ( r =−0.34, spontaneous group r =−0.28, ART group r =−0.52, P <.05). The initial leiomyoma volume or number of leiomyomas did not correlate with miscarriage in the spontaneous pregnancy group. However, a statistically increased miscarriage rate was observed in the ART group when the interval time to pregnancy was shorter than 5.7 months ( r =−0.55 vs r =−0.36 behind and above this time, respectively, P <.05) (Fig. ). The overall cesarean delivery rate was 26.4% (n=51), with a significant difference between the ART and the spontaneous groups (27.0% and 24.0%, respectively, P <0.01). The initial leiomyoma volume or number of leiomyomas did not influence the cesarean delivery rate in either group. There was one instance of placenta accreta discovered during a cesarean delivery at 37 2/7 weeks of gestation in a patient with a previous open myomectomy 2 years before the transvaginal radiofrequency ablation of leiomyomas. One patient required medical treatment for uterine atony after a spontaneous vaginal delivery at 40 3/7 weeks of gestation. The premature delivery (before 37 weeks of gestation) rate was 4.1% (eight births), with a significant difference between the ART (3.9%) and spontaneous (4.3%) groups ( P <.01). The preeclampsia rate was 4.3%, similar between the ART (4.4%) and spontaneous (4.1%) groups ( P =.05). There were no instances of uterine rupture, placental abruption, or fetal growth restriction. Intramural leiomyomas have been associated with lower pregnancy rates and increased likelihood of miscarriage, and myomectomy has not been shown to improve fertility. , However, few data exist examining the pregnancy outcomes after radiofrequency ablation of leiomyomas. This study is the second report examining pregnancy outcomes after transvaginal radiofrequency ablation of leiomyomas. To contextualize these findings, a comprehensive literature search was conducted using PubMed, Embase, and the Cochrane Library. The search covered studies published from January 2000 to September 2024. Search terms included combinations of “pregnancy radiofrequency fibroids,” “pregnancy radiofrequency myoma,” “pregnancy outcomes radiofrequency,” “obstetrics outcomes radiofrequency,” and “fertility results radiofrequency.” The initial report included eight pregnancies, and as in the present series, no obstetric complications were directly attributable to the transvaginal radiofrequency ablation. The reported cesarean delivery rates of 65% after transvaginal radiofrequency ablation of leiomyomas and 75% after the transcervical route , are higher than our rate of 24%. Cesarean delivery rates, however, are influenced by many other factors. 
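The Methods state that a segmented regression test was used to evaluate cutoff points, and the analysis above identifies 58.6 mL as an inflection point. Purely as an illustration of the general idea behind such a breakpoint analysis (not the authors' actual SAS procedure), a two-segment least-squares grid search could look like the sketch below; the simulated data, variable names and candidate grid are hypothetical.

```python
import numpy as np

def two_segment_sse(x, y, bp):
    """Combined sum of squared errors of two separate least-squares lines,
    one fitted below and one above the candidate breakpoint bp."""
    sse = 0.0
    for mask in (x <= bp, x > bp):
        if mask.sum() < 3:          # require a few points on each side
            return np.inf
        coef = np.polyfit(x[mask], y[mask], 1)
        sse += float(np.sum((np.polyval(coef, x[mask]) - y[mask]) ** 2))
    return sse

def best_breakpoint(x, y, candidates):
    """Return the candidate cutoff with the smallest combined error."""
    errors = [two_segment_sse(x, y, bp) for bp in candidates]
    return candidates[int(np.argmin(errors))]

# Simulated example: time to pregnancy rises with volume only above ~60 mL
rng = np.random.default_rng(0)
volume = rng.uniform(10, 150, 200)                      # preoperative volume, mL
months = np.where(volume > 60, 4 + 0.15 * (volume - 60), 4) + rng.normal(0, 1.5, 200)
grid = np.linspace(20, 140, 121)
print(f"Estimated cutoff: {best_breakpoint(volume, months, grid):.1f} mL")
```

Fitting separate lines on either side of each candidate value and choosing the breakpoint that minimises the combined error is the simplest form of segmented regression; dedicated procedures additionally provide confidence intervals for the estimated breakpoint.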
Abnormal placentation may occur as a result of leiomyoma necrosis secondary to radiofrequency ablation, particularly with those near the endometrium or with a high submucosal component. However, our pregnancy outcomes are similar to those observed in a general population, including premature birth and preeclampsia rates, and similar to those reported after radiofrequency ablation by conventional laparoscopy , and by a transcervical approach. Placenta accreta spectrum is more common after hysteroscopic myomectomy compared with open and laparoscopic routes. Instances of abnormal placental implantation have been reported after transvaginal radiofrequency ablation of leiomyomas. Our single instance of placenta accreta was most likely secondary to a previous open myomectomy performed 2 years before transvaginal radiofrequency ablation of leiomyomas. No instances of uterine rupture have been reported in pregnancies after radiofrequency ablation. – The miscarriage rate of 15.9% after radiofrequency in this series is similar to 15.3% reported in a general population and 13.8% and 18.9% reported after other techniques such as laparoscopic and robotic myomectomy, respectively. A recent study reported a higher miscarriage rate associated with laparoscopic radiofrequency ablation and open myomectomy. This could be a result of a different radiofrequency technique, a higher volume of treated leiomyomas, surgeon experience, or differences in the way that miscarriage data were collected and analyzed. Confounding factors in all studies, including ours, include patient age and the use of ART, as highlighted by Hartmann et al. The higher miscarriage rate and the longer time to pregnancy in the ART group may be related to the higher incidence of age-related embryonic pathology in this group compared with the spontaneous pregnancy group. No previous reports have evaluated the optimal time to proceed with conception after radiofrequency ablation. In the present series, patients proceeded with pregnancy at their discretion. We observed a higher rate of miscarriage with pregnancies starting within the first 5.7 months after transvaginal radiofrequency ablation of leiomyomas. For that reason, it appears advisable to delay conception for at least 6 months after transvaginal radiofrequency ablation of leiomyomas. We also observed a shorter time to conception in patients with a preoperative leiomyoma volume below 58.7 mL. This seems to be a result of a rapid decrease in leiomyoma volume, reaching nearly 50% at 6 months. The mean leiomyoma volume reduction of 67.3% at 12 months is similar to the previously reported 78% and 63% with transvaginal and transcervical approaches, respectively. Transvaginal radiofrequency ablation appears to be effective in resolving the abnormal uterine bleeding associated with leiomyomas. We observed that 91% of our patients with abnormal uterine bleeding had returned to normal menstruation at 6 months, which is consistent with previous reports. , , , Our operating time, 14.3 minutes, is similar to those previously reported with transvaginal radiofrequency ablation of leiomyomas, 25 and 18 minutes, which are shorter than those associated with the transcervical and laparoscopic approaches of 44 and 73 minutes, respectively. , , , However, comparisons are difficult because of differences in volume and location of leiomyomas.
The times to discharge and return to normal activities, 2.7 hours and 3.2 days, respectively, are shorter compared with 6.8–10 hours and 20 days reported with a laparoscopic approach. , There appears to be a general agreement that transvaginal radiofrequency ablation of leiomyomas is associated with shorter operating times and lower complication and recurrence rates compared with other minimally invasive techniques such as uterine artery embolization or magnetic resonance–guided high-intensity focused ultrasonography. , , It should be noted that hysteroscopic removal of free endometrial cavity leiomyomas was required within 3 months after radiofrequency treatment of 27 leiomyomas. All cases were FIGO type 2 leiomyomas, with a high submucosal component and a volume greater than 35 mL, equivalent to a largest diameter of 4 cm. Necrosis of the pseudocapsule, which is in direct contact with the myometrium, appears to be responsible for the detachment of the leiomyoma. Pregnancy outcomes after transvaginal radiofrequency ablation of leiomyomas in this series were reassuring, with no instances of uterine rupture, placental abruption, or fetal growth restriction. Our study is limited by its retrospective nature. The pregnancy rate after transvaginal radiofrequency ablation of leiomyomas remains unknown because this study included only those patients who desired pregnancy after radiofrequency ablation and the majority of pregnancies were conceived with some form of ART. To evaluate pregnancy rates and outcomes after transvaginal radiofrequency ablation of leiomyomas, a prospective randomized trial comparing radiofrequency ablation with myomectomy is needed to compare the time to pregnancy, embryo implantation rates, and pregnancy outcomes.
Renaissance of a new era of ophthalmology residency training: Silver linings after three waves of COVID-19 | c1430d88-182c-4ec5-9b15-6bd207bb8cee | 9672799 | Ophthalmology[mh] | The rising trend of webinars and online symposiums has alleviated the need for physical interaction. With multiple e-learning opportunities, the exchange of ideas and remote education have become viable alternatives. Attendance via video conferencing is a boon for residents. These adaptations have stood the test of time over the past 2 years.
Wet lab training, usually secondary to hands-on surgical training, gained renewed recognition. In the early days of the pandemic, it proved to be a valuable tool for the initial phases of training. Institutes have demonstrated how a single-microscope and single goat’s eye set-up can supplement surgical training.
Distancing norms emphasized the need for non-contact ocular examinations. This need spurred simple and effective do-it-yourself innovations such as the i-verter demonstrated by Tagare et al. Ramakrishnan et al. shared the spring-action apparatus for the fixation of eyeball (SAFE), a notable innovation during wet-lab procedures. Akkara et al. demonstrated how available household items may work as effective surgical simulators for the beginner. These techniques come with a short learning curve and promise reproducibility and reliability.
The ocular impact of COVID-19 has been extensively documented, with an increasing number of these publications receiving an expedited review process. From the second wave onward, there was a surge in acceptance of mucormycosis-associated publications. This has provided, and continues to provide, thriving research opportunities for trainees. Residents during the pandemic have incurred substantial losses. However, current trainees can utilize the multifaceted gains stated above. As these adaptations mature, they may be incorporated into the regular residency program; how they can be expanded further still needs to be explored.
Nil.
There are no conflicts of interest.
|
What hypertensive patients want to know [and from whom] about their disease: a two-year longitudinal study | 8ffdf035-4ebe-4ee8-bf03-29fa5ad96c2e | 7068893 | Health Communication[mh] | Hypertension is globally the strongest modifiable risk factor for cardiovascular disease (CVD) and related disability; it causes 9.4 million deaths worldwide every year and it remains the leading risk factor for disability-adjusted life years (DALYs) . Despite extensive knowledge about ways to both prevent and treat hypertension through individual lifestyle changes, healthy behaviors and medication adherence are still suboptimal, leading to adverse cardiovascular effects . To improve management of hypertension, the Lancet Commission issued a 10-point action plan in which a key action was to improve communication between provider and patient and to tailor education about hypertension throughout the life course . Effective health communication is fundamental to achieving optimal adherence to recommended health behaviors and treatment . Extensive research has shown that as the information provided becomes more tailored to the personal features of the patients, it becomes more effective in influencing their behaviors . In the design and delivery of tailored health messages, two key variables are patients’ information needs and preferences for sources of information, but the role of these indicators has not been sufficiently investigated . Especially in the case of chronic diseases, meeting patients’ information needs and preferences is positively associated with their global satisfaction, quality of life, psychological well-being, and improved health status . Health providers perceive information needs differently than patients. When patients’ needs are left unresolved, lower adherence rates may result . Research in this field found that patients with acute coronary syndrome, myocardial infarction, and heart failure judge all types of information as important, with a preference for information on medication (names, dosage and side effects), risk factors (especially how to modify incorrect behaviors), and physiology (knowing how to manage signs and symptoms) . However, the majority of research has been conducted using a cross-sectional methodology; few studies have been conducted using a longitudinal approach and there is still a lack of knowledge about how the specific need for health information changes as the disease progresses. One recent study that evaluated change in information needs over twenty-four months after the first diagnosis of acute coronary syndrome showed a reduction in information needs, but this decrease was significant only for topics related to daily life activities, behavioral habits, and risk and complications . These results suggest that information needs do not represent stable interests; rather, they change across the different stages of the disease. In addition to the content of health messages, a crucial role is played by the sources through which the information is delivered. Today, information on how to correctly manage hypertension is available from multiple sources, such as expert opinions, web pages, media, blogs, personal experience, and books/journals/magazines. This plurality of sources implies the need for updated knowledge on patients’ use and trust in various sources of information to better deliver health information.
Nevertheless, a few studies have been conducted, with limitations in terms of sample size and heterogeneity of composition [not only patients but also non-clinical population, like medical students]. Moreover, these studies have found inconsistent results: some have shown that traditional mass media such as television, radio, and newspapers were major information sources , whereas others have reported that people’s primary hypertension information sources were their doctors and relatives . Understanding patients’ information needs and preferences for sources of information is crucial to help health care providers in giving the right information at the right time in order to tailor health messages and, thus, make communication relevant for the patients. To the best of our knowledge, no studies have been conducted on patients affected by hypertension, especially through the application of a longitudinal approach. Hence, the purpose of the study reported here was to investigate levels of and change over time in hypertensive patients’ self-reported need for information about the disease and the perceived relevance of different sources of information. A further aim was to explore the relationships between need and preferences with socio-demographic and clinical variables. Due to the exploratory nature of the study and the scarcity of previous studies on the issue addressed, it was difficult to develop specific research hypotheses. Based on previous studies with different populations (e.g. ), it was hypothesized that the need for information would change over a two-year period, with a greater need for information on risk and complications and drug treatment at baseline and an increased desire for information on disease management as time progresses. It was also hypothesized that the primary source of information would be health care practitioners. No hypotheses about the role of socio-demographic and clinical variables were developed.
This is a secondary analysis from a multisite, longitudinal study of personality, resilience and self-regulation process on a large cohort of ACS and hypertensive patients in Italy. The research methodology was the same used in previous studies . Participants and procedure Patients who were already receiving pharmacological treatment or had a diagnosis of essential arterial hypertension (SBP > =140 mmHg and/or DBP > =90 mmHg) were recruited between January 2011 and April 2012 during their regular cardiological examinations in the Clinica Medica (medical clinic) unit of a hospital in Northern Italy. Patients were selected by convenience sampling method and they were eligible for this study if they met the following inclusion criteria: > 30 years old; good understanding of the Italian language; no moderate-severe cognitive impairment, psychiatric disorders or diseases with limited expected survival. Eligible patients were told about the aim of the study and its longitudinal design with three follow-ups at 6 (t1), 12 (t2), and 24 months after baseline (t3). After the sign of the informed consent form, a physician collected clinical data related to a) body mass index (BMI); b) waist circumference; c) blood pressure values; d) diabetes mellitus; e) the presence of different CVD risk factors, including gender, age, smoking behavior, dyslipidemia (abnormal level of total, high-density, and low-density lipoprotein cholesterol, and triglycerides), obesity, abdominal obesity, and family history of premature CVD. After the clinical examination patients answered some questions related to their need for health information and the perceived relevance of information sources. This procedure was repeated in the three follow-ups, during which a physician collected further clinical information related to the number of a) specialist visits, b) emergency room visits, c) hospitalizations related to hypertension, and patients’ blood pressure values. The Ethical Committee of the University of Milan-Bicocca and of the healthcare center from which patients were recruited approved the study. Measures Information needs As done in a previous study with patients affected by acute coronary syndrome , information needs were evaluated with two questions that examined the need for additional information in one of six domains related to hypertension and its management: "Pharmacological Treatment"; "Knowledge About the Disease"; "Daily Activities"; "Behavioral Habits"; "Impact of the Disease"; "Risk and Complications". The first question asked patients to determine, on a five-point Likert scale ranging from 1 (" I want to know nothing about the topic ") to 5 (" I want to know everything about it ") the amount of additional information needed by the patients in the six domains (" Indicate how much information you would like to receive about the following topics connected to the management of your cardiovascular disorders "). The second question asked patients to judge the importance of the six domains assigning a score from 1 to 6 (" Now please rate the importance of the topics listed below; you must assign a value from one for the most important topic to six for the least important one ). To avoid the propensity of patients to evaluate all knowledge as "very" or "extremely" important, a balanced index was calculated by multiplying the score on question 1 by the reversed score on question 2. The balanced index had a score range from 1 to 30 with higher scores indicating a greater need for information. 
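To make the scoring concrete, the sketch below computes the balanced index for a single domain. It assumes that the "reversed score" on question 2 means 7 minus the assigned rank, which is the reading that reproduces the stated 1 to 30 range; the function name and example values are illustrative.

```python
def balanced_index(amount_wanted, importance_rank):
    """Balanced information-need index for one domain.

    amount_wanted:   Likert score 1-5 (how much information is desired).
    importance_rank: rank 1-6 assigned to the domain (1 = most important).
    The rank is reversed (7 - rank) so that more important domains score
    higher; the product therefore ranges from 1 to 30.
    """
    if not (1 <= amount_wanted <= 5 and 1 <= importance_rank <= 6):
        raise ValueError("scores out of range")
    return amount_wanted * (7 - importance_rank)

print(balanced_index(5, 1))   # 30: wants everything, most important domain
print(balanced_index(1, 6))   # 1: wants nothing, least important domain
```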
Information sources Regarding sources of information, one dichotomous question investigated whether patients had received information from one of nine sources of information: “General Practitioners” (GPs), “Specialists”, “Relatives”, “Friends”, “Information Leaflets given by Physician”, “Information Leaflets given by Associations”, “Magazines”, “Internet”, and “Television”. A second question asked patients to assess, on a five-point Likert scale ranging from 1 (“not at all”) to 5 (“very relevant”), the perceived relevance of the nine sources (“ Think about how you have learned about your disease from the time you became aware you had the illness. For each of the sources listed below, indicate how relevant the source was in providing you with information ”). Socio-demographic variables Personal details were obtained about gender, age, marital and employment status, and education level. Time from the diagnosis of hypertension Patients were given an open-ended question on how many months/years it had been since they were diagnosed with hypertension (“How long you been diagnosed with hypertension from a healthcare provider?”). The responses that were reported in years were converted in months, and this variable was called “time from the diagnosis of hypertension”. Total cardiovascular risk index” (TCRi) For each patient a “Total Cardiovascular Risk Index” (TCRi) was determined based on the sum of the clinical data evaluated during clinical examination, with 1 point assigned for each cardiovascular risk factor present. Following the “2018 ESC/ESH Guidelines for the Management of Arterial Hypertension” were considered risk factors: male sex, age [men > = 55 years; women > = 65 years], smoking, obesity [BMI > = 30 kg/m2 [height2]], abdominal obesity [waist circumference: men > = 102 cm, women > = 88 cm], diabetes mellitus, dyslipidemia [total cholesterol > 190 mg/dL and/or LDL-C > 115 mg/dL and/or HDL-C: men < 40 mg/dL, women < 46 mg/dL and/or triglycerides > 150 mg/dL], elevated blood pressure values [SBP > = 140 mmHg and/or DBP > = 90 mmHg], and a family history of premature CVD [men aged < 55 years; women aged < 65 years]. Statistical analysis Analyses of Variance (ANOVA) for repeated measures were performed to assess statistical differences among information needs and the perceived relevance of sources over time, with a check for sphericity using Mauchly’s test of sphericity. Post hoc tests (0.05) were conducted using Bonferroni analysis. Cochran’s Q-test was used to assess changes in the proportion of patients receiving information from information sources over the four time points. The relationships among socio-demographic (i.e., gender, age, marital and employment status, educational level) and clinical (i.e., time from the diagnosis of hypertension, SBP, DBP, and TCRi) variables, information needs and the perceived relevance of sources were analyzed using regressions analyses. Missing data were substituted using hot deck imputation , a statistical procedure that replaces a missing value with the value of a similar “donor” in the dataset. This method is recommended when the percentage of missing data is lower than 10% regardless of the pattern of the missing data . In this study the percentage of missing data was 0.3%. Therefore, values were imputed using hot deck imputation; only one case was excluded from the analysis. The “donor” was selected according to the gender and age of the participants. The significant level was set at p ≤ 0.05 for all the analyses. 
Statistical Package for Social Sciences version 24.0 for Windows (SPSS Inc., Chicago, USA) was used to analyze the data.
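The gender- and age-matched hot deck imputation described in the statistical analysis can be illustrated with a short pandas sketch. This is a simplified re-implementation under stated assumptions, not the procedure actually used: the matching tolerance is not reported in the text, so the example matches donors of the same gender within a ±5-year age band and takes the first eligible donor.

```python
import pandas as pd

def hot_deck_impute(df, column, age_band=5):
    """Replace missing values in `column` with the value of a similar
    'donor': same gender, age within +/- age_band years."""
    out = df.copy()
    for idx, row in out[out[column].isna()].iterrows():
        donors = out[
            out[column].notna()
            & (out["gender"] == row["gender"])
            & (out["age"].sub(row["age"]).abs() <= age_band)
        ]
        if not donors.empty:
            out.loc[idx, column] = donors.iloc[0][column]
    return out

# Hypothetical toy data: two respondents with a missing information-need score
data = pd.DataFrame({
    "gender": ["F", "F", "M", "M"],
    "age":    [54, 56, 60, 62],
    "need_drug_info": [24, None, 18, None],
})
print(hot_deck_impute(data, "need_drug_info"))
```

Each missing value is filled from an observed respondent of the same gender and similar age, which preserves plausible between-group differences better than filling with an overall mean.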
Participants’ characteristics Two hundred and seventy-one consecutive patients were enrolled at baseline; twenty-five patients declined to participate at t1 (attrition rate = 9.2%), seventeen at t2 (attrition rate = 6.9%) and twenty-three at t3 (attrition rate = 10%). One patient died of causes not directly related to hypertension before t2, and three patients died before t3. To exclude any possible differences, the distributions of the collected variables for the 271 participants measured at baseline were compared between the sample used in the analysis ( N = 202) and the dropouts ( N = 69). The Mann-Whitney non-parametric test was used both because of the difference in the sample sizes of patients used in the analysis and the dropouts . Patients who refused to participate at t1, t2 and t3 did not differ from the final study group with respect to socio-demographic variables, clinical data, information needs, and the perceived relevance of information sources, as evaluated at baseline. Two hundred and two patients participated in this study. Patients had a mean age of 54.3 years (range 21–78; SD = 10.4), were mainly married (78.7%) with a high school degree (49%) and employed (56.4%); women were 42.6% of the sample. Table shows full information about the demographic characteristics of the participants. The “ time from the diagnosis of hypertension ” variable varied from less than two months to more than thirty years. Roughly half of the sample (45.8%) had family histories of CVD. Furthermore, slightly less than one-third, 30.2%, presented with obesity, 15.1% had dyslipidemia, and 8.1% presented with diabetes; moreover, 4.5% had had prior cardiac events, and 1.6% had nephropathy (Table ). Table reports patients’ blood pressure values, as recorded in all measurements, and the frequency of each rank, according to the blood pressure classification . At baseline, half of the sample’s pressure values ranged between “Optimal” and “Pre-Hypertension” (56%); these values changed in the subsequent follow-ups. Table also describes the number and percentage of patients who monitored their blood pressure at home during the three follow-ups of the study. As it can be seen, almost the totality of patients monitored their pressure at home. Information needs All the information needs showed a violation of the assumption of sphericity: “Pharmacological Treatment” (x 2 (5) = 14.73, p < .05), “Knowledge About the Disease” (x 2 (5) = 23.49, p < .001), “Daily Activities” (x 2 (5) = 29.13, p < .001), “Behavioral Habits” (x 2 (5) = 12.90, p < .05), “Impact of the Disease” (x 2 (5) = 21.76, p < .01), and “Risk and Complications” (x 2 (5) = 21.38, p < .01). The degrees of freedom were therefore adjusted using Greenhouse-Geisser estimates of sphericity (ε = .95, ε = .94, ε = .91, ε = .96, ε = .94, and ε = .94, respectively). Results showed that information need decreased over time for “Knowledge About the Disease”, “Daily Activities”, “Behavioral Habits”, “Impact of the Disease”, while no reduction was found for “Pharmacological Treatments”, and “Risk and Complications”. Table presents the mean scores, standard deviation, test F and p levels. Information sources At baseline patients received information from “Specialists” (92.6%), “GPs” (86.1%), “Relatives” (78.7%), and “Television” (63.3%). Roughly half of the sample received information from “Magazines” (56.9%), “Internet” (55.9%), “Information Leaflets given by Physician” (53.5%), and “Friends” (52.5%). 
Less than one third of the sample received information from “Information Leaflets given by Associations” (29.7%). Table presents the number and percentage of patients who received information from each source over time. Results showed a reduction in information provision for almost all the sources during the three follow-ups. Cochran’s Q test indicated that this reduction was significant for: “GPs” (χ2(3) = 60.16, p < .001); “Specialists” (χ2(3) = 69.87, p < .001); “Relatives” (χ2(3) = 35.25, p < .001); “Friends” (χ2(3) = 33.06, p < .001); “Information Leaflets given by Physician” (χ2(3) = 22.95, p < .001); “Information Leaflets given by Associations” (χ2(3) = 12.43, p < .01); and “Television” (χ2(3) = 7.93, p < .05). Only for “Magazines” (χ2(3) = 4.18, p = .243) and “Internet” (χ2(3) = 5.29, p = .152) was the reduction between the different time points not significant. Mauchly’s test showed a violation of the assumption of sphericity for “GPs” (χ2(5) = 16.85, p < .01), “Specialists” (χ2(5) = 15.08, p < .01), “Information Leaflets given by Physician” (χ2(5) = 11.52, p < .05), and “Television” (χ2(5) = 14.65, p < .01). The degrees of freedom were therefore adjusted using Greenhouse-Geisser estimates of sphericity (ε = .88, .91, .80, and .89, respectively). A repeated-measures ANOVA showed a significant decrease in the perceived relevance of “Relatives” (F(3;240) = 4.28, p < .01), “Magazines” (F(3;153) = 5.99, p < .01), “Internet” (F(3;153) = 3.61, p < .05), and “Television” (F(2.66;159.63) = 3.10, p < .05); no significant changes were found for the other sources (Fig. ).
Relationship between information needs, information sources, socio-demographics, and clinical variables
At baseline, more educated patients desired less information about “Knowledge About the Disease” (β = −.183, p < .05) and more information about “Behavioral Habits” (β = .164, p < .05) and “Daily Activities” (β = .176, p < .05). Employment status was related to the information needs about “Knowledge About the Disease” and “Daily Activities”; in particular, housewives wanted more information on the first topic (β = −.155, p < .05), while retired patients and retired patients who had some work activities desired more information on daily life activities (β = .231, p < .05; β = .230, p < .05, respectively); retired patients who had some work activities also desired more information on “Behavioral Habits” (β = .247, p < .01). Age and SBP were related to “Impact of the Disease” (β = −.305, p < .01 and β = .228, p < .05, respectively): younger patients and those with higher levels of SBP were more interested in information on how to handle hypertension-related stress. Gender was related to “Knowledge About the Disease” (β = .180, p < .05): women desired more information on the anatomical/functional nature of hypertension. No relationships were found between time from the diagnosis of hypertension, DBP, TCRi and information needs. Repeated-measures ANOVA showed that the information need about “Knowledge About the Disease” was related to patients’ marital status (F(2.80;467.85) = 3.36, p < .05), with married patients wanting more information over time. Gender was related to the information need about “Daily Activities” (F(2.73;455.55) = 3.30, p < .05), with women wanting more information over time.
“Impact of the Disease” was related to SBP (F(2.79;465.54) = 2.89, p < .05), with patients with higher levels of SBP being more interested in information on how to handle hypertension-related stress. “Risk and Complications” was related to DBP (F(2.82;468.28) = 3.20, p < .05), with patients with higher levels of DBP being more interested in information on this topic. No relationships were found with employment status, educational level, time from the diagnosis of hypertension, and TCRi. The perceived relevance of “GPs” was related to education (β = −.205, p < .05) and employment status (β = −.255, p < .05), with more educated and retired patients perceiving this figure as less relevant. Age was related to the perceived relevance of “Specialists” (β = .246, p < .05), with older patients perceiving this source as more relevant. Education level and employment status were related to “Family” (β = .270, p < .05), with more educated and retired patients perceiving this source as more relevant. Age, employment status and time from the diagnosis of hypertension were related to the perceived relevance of “Information Leaflets given by Associations” (β = −.441, p < .05; β = .536, p < .01; β = .317, p < .05, respectively), with younger patients, retired patients, and those with a longer history of the disease perceiving information from this source as more relevant. Gender was related to the perceived relevance of “Magazines” (β = .360, p < .01), with women perceiving this source as more relevant. Older patients perceived information from the “Internet” as less relevant (β = −.293, p < .05). No relationships were found between SBP, DBP, TCRi and the perceived relevance of information sources. When the relationships between information sources and demographic and clinical variables were analyzed over time, employment status was associated with the perceived relevance of “Relatives” (F(12;174) = 1.92, p < .05). DBP was positively associated with the perceived relevance of “Internet” (F(3;102) = 3.18, p < .05). Time from the diagnosis of hypertension was positively associated with the perceived relevance of “Relatives” (F(3;174) = 2.75, p < .05). No relationships were found with gender, age, education, marital status, SBP, and TCRi.
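For readers who wish to reproduce this type of longitudinal analysis outside SPSS, the sketch below shows how sphericity-corrected repeated-measures ANOVAs and Cochran’s Q tests of the kind reported above can be run in Python, assuming the pingouin package is available. The long-format columns (patient, time, need_knowledge, received_gp) and the synthetic values are hypothetical and do not correspond to the study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data standing in for the study variables:
# one row per patient per time point (t0-t3); values are random.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "patient": np.repeat(np.arange(40), 4),
    "time": np.tile(["t0", "t1", "t2", "t3"], 40),
    "need_knowledge": rng.normal(3.5, 0.8, size=160),  # 1-5 information-need score
    "received_gp": rng.integers(0, 2, size=160),       # 1 = received information from GP
})

# Mauchly's test of sphericity for one information-need scale
print(pg.sphericity(data=df, dv="need_knowledge", within="time", subject="patient"))

# Repeated-measures ANOVA; the Greenhouse-Geisser correction is applied
# automatically when sphericity is violated
print(pg.rm_anova(data=df, dv="need_knowledge", within="time",
                  subject="patient", correction="auto"))

# Cochran's Q test for the binary "information received" indicator
# across the four time points
print(pg.cochran(data=df, dv="received_gp", within="time", subject="patient"))
```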
The current study aimed to investigate hypertensive patients’ need for information about their disease and the perceived relevance of different sources of information, and how these variables change over twenty-four months. The results showed a general decrease in both desired information and the perceived relevance of sources over time. Patients desired less information on their pathology and on how to self-manage it, whereas they continued to desire medical information related to medical treatment and complications of the disease. This preference for medical information compared with lifestyle information has been identified in previous research and deserves attention from health practitioners. Research continuously shows that patients fail to adhere correctly to medical advice or to change their unhealthy behaviors ; thus, it is essential to help them understand what they can do to self-manage their health condition and to prevent complications. The fact that patients report being less interested in information about behavioral habits and daily life activities is discouraging and should prompt research to find new ways to communicate this important knowledge. The higher interest in the risks and complications of hypertension could indicate patients’ fears and worries about the possible worsening of their health condition; however, the participants reported no interest in what they could do in terms of self-care behaviors to reduce these complications. These results likely suggest that patients did not fully understand the degree to which lifestyle changes are necessary to manage hypertension and their general condition. It could be supposed that patients’ need for information on relevant topics decreases over time because they already have a full understanding of their disease. However, previous research showed that hypertensive patients have a knowledge deficit and hold erroneous explanations for their hypertension . Future studies could consider the role played by the amount of information already known by patients in the self-reported need for information about the disease. Regarding information sources, the results showed that at baseline patients’ major sources were health care providers, both GPs and specialists, followed by relatives and television. Roughly half of the sample reported having received information from magazines, the Internet and information leaflets given by physicians. Only brochures given by associations played a negligible role in informing patients, probably because this kind of material is not widely distributed. The lower use of magazines, the Internet, and information leaflets compared with interpersonal sources of information could be indicative of patients’ general tendency not to actively search for information: these sources give information mainly to people who seek it out. Moreover, interpersonal sources are more reassuring and allow patients to take some control over their health. This result is consistent with previous studies . Surprisingly, television was reported to be more informative than the Internet. This result is inconsistent with previous studies , in which the Internet was a widely used source of information for the management of long-term conditions, but it is similar to results found by Stavropoulou with Greek patients affected by hypertension. It is, however, less surprising given that Italy lags behind other European countries in the use of the Internet .
The sample’s age [mean = 54; higher than in Akter et al., 2014] and education [25% of the sample had not obtained a high school diploma; lower than in Akter et al., 2014] could also explain this result. This hypothesis is also supported by the significant relationship found between age and the perceived relevance of the Internet, with older patients perceiving information from this source as less relevant. Regarding patients’ perception of the relevance of the multiple sources, results showed a significant decrease in the relevance of relatives, magazines, the Internet, and television. It is important to note that the scores for the majority of the sources are above level three, indicating that these sources are perceived to be fairly relevant. Multiple patterns of relationships emerged between socio-demographic characteristics and the variables under analysis. Higher educational level was significantly associated with needing more information on knowledge of the disease and behavioral habits. This could reveal a greater understanding of their crucial role in disease management among patients with higher education. Age and SBP were related to the need for information on how to manage the distress related to hypertension, with younger patients and those with higher levels of SBP being more interested in information on how to control distress related to the disease. Marital status was related to a greater need for information on knowledge about the disease over time. Although research has shown that relatives often play a crucial role in patients’ hypertension self-management , they are rarely included in patient-physician discussions. Information appeared to be the greatest need of family members of critically ill patients . It is possible that the greater information need in married patients was influenced by the spouse’s own need for information. This hypothesis should be investigated in future research. Gender was related to the need for information on daily life activities over time, with women desiring more information on this topic. This result is consistent with a previous study . The level of DBP was associated with the need for information about possible risks and complications due to the disease. This could reveal a higher level of worry in patients with greater severity of hypertension. This hypothesis should be investigated in future research. Regarding information sources, some relationships emerged at baseline between the sources and the socio-demographic and clinical variables of gender, age, education level, and time from the diagnosis of hypertension. However, these relationships disappeared when analyzed over time, except for time from the diagnosis of hypertension, which reflects patients’ history of the disease and was positively associated with the perceived relevance of family. It had been expected that this variable would be more strongly related to patients’ needs and to the perceived relevance of sources. Future research should further explore the possible relationships between health communication topics and clinical variables. Health information needs and sources of information are under-investigated areas. To the best of our knowledge, this is one of the first studies to provide information on the current state of patients’ information needs and preferences for sources of information with respect to hypertension, and on how these variables change as the disease progresses. The two-year longitudinal design and the large sample size represent the study’s strengths.
Despite its merits, the study also presents some limitations. First, the generalizability of these findings could be limited because patients were recruited from a single health care center where they were being followed to manage their disease. Furthermore, only patients’ socio-demographic and clinical variables were considered as possible factors correlated with needs and preferences; other variables associated with needs for health information and health outcomes, such as health literacy and psychological variables, were not assessed in this study. Additionally, the use of volunteer participants may have resulted in an overrepresentation of those who were more interested in the topics analyzed. Future research should consider population-based surveys to limit the effect of this possible bias.
The results presented here have multiple implications for health professionals involved in developing interventions to improve patients’ adherence and behaviors. Educational, communicational, and awareness-raising interventions should provide patients with the information, education or skills needed to modify their unhealthy behaviors. Education and communication are related tools that offer great potential to improve the global management of elevated blood pressure, helping patients to understand their condition and their role in the healthcare process. Considering patients’ needs, preferences, and the change in these variables over time will allow professionals to deliver the correct information at the right moment, avoiding misconceptions and misinformation. The study shows that there are areas of information, such as behavioral habits, that patients consistently treat as a low priority yet that are crucial to their overall well-being. New and better ways to deliver information should be explored, and patients need to be educated about the importance of the information received to enable them to focus on primary and secondary prevention. It is highly recommended that research on patients’ information needs and preferences continues to be conducted, especially for those diseases that have been under-investigated, such as hypertension.
Enhancing consistency in arbuscular mycorrhizal trait-based research to improve predictions of function | 0e40029c-849f-42a7-9f72-1fd44171f5b9 | 11865136 | Microbiology[mh] | The scientific literature on AM fungal life-history traits ( i.e. , the biological characteristics and features that influence their growth, reproduction, and survival) has predominantly centered on aspects related to plant growth and nutrition, largely through an agronomic lens. Although not explicitly reported as such, early studies employing experimental approaches to assess, for example, AM fungal root colonization, abundance of external hyphae, and spore counts for specific species under certain experimental conditions have yielded insights into AM fungal trait variation (Abbott ; Reich ; Jakobsen et al. ; Gazey et al. ; Bever et al. ). Given the wide variation observed, these and other seminal studies provided a foundation for further inquiry into the complex dynamics of AM fungal life-history traits and their broader implications to the AM symbiosis. Studies of distinct traits within a taxonomic framework started with the comparison of mycelium form and function, and root colonization strategies among major families of the Glomeromycota. For example, Dodd et al. compared the morphology and mycelial architecture of different AM fungal genera, discussing form and function. In a comparative study of 21 AM fungal isolates ( i.e. , defined as an AM fungus isolated in the laboratory into pure culture but without genetic characterization, at which point it becomes a certified strain with a collection number) spanning 16 species from North America, Hart and Reader showed that the isolates of the Glomeraceae family, on average, colonized roots before those of Acaulosporaceae and Gigasporaceae families. Additionally, the proportion of fungal biomass in roots versus soil also diverged, on average, among the isolates of those families. Glomeraceae fungi exhibited high root colonization but low soil colonization, and vice-versa for Gigasporaceae. Acaulosporaceae fungi displayed low colonization in both roots and soil. These findings revealed a strong association between AM fungal morphological characteristics and taxonomy, as isolates of the main families in the phylum could be differentiated based on root colonization rate and biomass allocation patterns. These observations were corroborated by subsequent studies, albeit using AM fungi from the same community and, possibly, the same isolates (Hart and Reader , ; Maherali and Klironomos ; Powell et al. ; Sikes et al. ). In fact, using the same data, Aguilar-Trigueros et al. showed that large-spore species produced, on average, fewer spores than small-spore species, suggesting that AM fungi experience similar resource allocation constraints during reproduction as plant seeds (Moles et al. ). However, to what extent plant trait-frameworks may be applicable to AM fungi is unknown. At present, evidence suggests differences between Glomeraceae and Gigasporaceae concerning life-history traits and their relationship with host benefits. However, new comparative studies that include more fungal species isolated from other ecological contexts are necessary to confirm these differences. More recently, a distinction between ‘rhizophilic’ and ‘edaphophilic’ life-history strategies has been introduced to categorize AM fungi that allocate more biomass to growth within roots versus soil (Weber et al. 
), and data show that long-term phosphorus (P) enrichment in subtropical forests shifts AM fungal communities toward edaphophilic guilds (Wang et al. ). Since there was more P directly available to the host plant in the soil, the observed shift towards edaphophilic guilds (i.e., Gigasporaceae vs Glomeraceae) suggests that these fungi may offer benefits other than P uptake (e.g., water uptake, nitrogen acquisition, pathogen resistance) or are better adapted to the new soil conditions (Rúa et al. ). The patterns described above demonstrate the utility of employing a comparative framework to test hypotheses concerning AM fungal function by examining trait expression. For instance, based on soil mycelium production, Gigasporaceae would be expected to outperform Glomeraceae in nutrient uptake (Maherali and Klironomos ). However, evidence suggests that the relationship between mycelium production and nutrient acquisition is not straightforward. If early or extensive root colonization (with abundant coils/arbuscules), rather than growing an extensive soil mycelium, is more important for nutrient delivery to the host, then Glomeraceae could be more beneficial partners than Gigasporaceae under nutrient-limiting conditions (e.g., Horsch et al. ). Despite inconsistencies among studies, which may to some extent be explained by variability in mycorrhizal dependency among hosts (Pringle and Bever ; Sikes et al. ), a meta-analysis (Yang et al. ) suggested that, on average, fungi of the family Glomeraceae were better at acquiring P and reducing pathogen growth compared to other AM fungal families. It is also of interest that this family appears to be the most abundant in many locations (Öpik et al. ). Perhaps this potential ability of Glomeraceae to outperform Gigasporaceae in P acquisition, despite producing a less extensive mycelium, can be more accurately understood by considering ‘response traits’ (how organisms adapt to environmental changes) versus ‘effect traits’ (how organisms influence their environment and ecosystem processes) (Koide et al. ). The evolution of greater mycelium production, either intra- or extra-radically, could reflect an adaptive response to increased susceptibility to soil disturbances (response trait) rather than directly enhancing soil P acquisition (effect trait). Considering these distinctions in a trait-based framework could help refine our understanding of how AM fungal traits affect soil, hosts and the fungi themselves. Despite these advances towards better consistency in predicting functional outcomes from morphological and taxonomic data, we argue that only by integrating into databases morphological, physiological, and genetic trait data obtained across environmental conditions can we establish a basis for more accurately predicting the functions of these fungi. Previous studies lacked a comprehensive environmental perspective. For instance, considering diverse environmental conditions, such as varying soil types or climatic factors, could unveil how AM fungal traits respond and adapt. Currently, most data reporting the impact of different AM fungi on their host originate from short-term experiments, using fungal taxa that readily sporulate and are easily amenable to pure cultures (Ohsowski et al. ). This may not reflect the reality in natural environments. Both the study by Sikes et al. investigating differences in plant pathogen protection between AM fungal taxa, as well as that by Lerat et al.
on C-sink strength among different AM fungal families suggest that certain functional outcomes resulting from the symbiosis depend on the combination of plant and fungal traits (Johnson et al. ). As such, considering fungal traits alone ( i.e. , in absence of plant and soil characteristics) may limit predictions of functional outcomes of the symbiosis (see Chaudhary et al. ). This brings an additional layer of complexity to the study of AM fungal ecophysiology and trait-based ecology, as intricate relationships between fungal and plant traits are to be expected (Chagnon et al. ). Van Der Heijden and Scheublin conducted the first comprehensive review of AM fungal traits to predict plant growth and ecosystem functioning. They provided a list of 13 AM fungal functional traits categorized into morphological traits ( e.g., hyphal length, rate and extent of root colonization, spore production) and physiological traits ( e.g. , fungal C acquisition, host preference, nutrient uptake efficiency, exudation of compounds into the hyphosphere). Subsequently, Behm and Kiers noted substantial intraspecific trait variation among AM fungal species (also see Koch et al. ; Schoen et al. ), complicating the characterization of traits and their incorporation into functional trait models. To address this issue, they proposed a five-part framework for characterizing intraspecific trait variation of AM fungal species within the context of nutrient cycling, based on experimental design and trait measurement considerations. According to Behm and Kiers , AM fungal genetic units should be subjected to diverse environmental conditions ( e.g. , host plants, soil nutrient concentrations). Measurements would encompass the degree of variation, trait reversibility, relationships among traits, the adaptive nature of variation, and the potential for variation to evolve. Through these five dimensions, researchers could map traits onto an evolutionary tree and incorporate them into functional models for predicting nutrient cycling dynamics. Chaudhary et al. highlighted the challenges in defining traits for organismal networks such as those formed by mycorrhizal fungi. They proposed a unified trait framework, complemented by a standardized vocabulary, with the objective of establishing a clear connection between trait-based mycorrhizal ecology, AM fungal niches and community assembly rules, categorizing traits into three main groups: morphological, physiological, and phenological. Within each of these categories, they pinpointed distinctive AM traits specific to both the host plant ( e.g., root:shoot ratio, growth form, photosynthetic pathways) and the fungal partner ( e.g. , spore size, hyphal length, and melanin content). Beyond these discrete traits for plants or fungi, they introduced the concept of mycorrhizal traits as unique attributes that emerge during symbiosis and are co-dependent on both partners. These encompass aspects such as root colonization-induced structures, plant mycorrhizal response, and resource exchange rates. This novel framework provides an enriched understanding of mycorrhizal ecology and serves as a basis for the empirical framework proposed here. Chagnon et al. put forth an AM fungal trait-based framework building on Grime's CSR (competitive, stress-tolerant, ruderal) framework, which identifies stress, disturbance and competition as the major filters driving trait selection and evolution in plant natural communities. 
By allowing speculative connections to be made regarding potential linkages between fungal traits (e.g., hyphal fusion, sporulation phenology, C sink strength, growth rates) and environmental filters (e.g., soil disturbances, scarce C transfer from host, low soil pH), this framework could tentatively identify priority traits for measurement, and combinations of host and fungal traits that may lead to the highest mutual benefits. Building on the apparent family-level conservatism of many traits or responses to environmental filters, parallels were drawn between the major AM fungal families and C, S and R strategies. However, as stressed by Chagnon et al. , this family-to-strategy association is simplistic and struggles to predict AM fungal responses in complex multi-stress scenarios (Heuck et al. ). In addition, it fails to consider several AM fungal families (e.g., Pacisporaceae, Entrophosporaceae, Diversisporaceae, or more basal lineages like Paraglomeraceae, Archaeosporaceae, and Ambisporaceae). It also fails to consider the relative distribution of different AM fungal families in certain biomes or at certain latitudes. For example, Acaulospora is a common genus in the tropics, where it can be dominant both in natural forests and under intensive land-use where ruderal traits are crucial (e.g., González-Cortés et al. ). The primary significance of the CSR framework in AM fungal trait-based ecology does not lie merely in associating families with strategies. Instead, it should be recognized as a tool for leveraging well-established life-history trade-offs in plant ecology to pinpoint pertinent fungal traits that should be incorporated into our research agenda. We build upon prior frameworks, emphasizing two significant barriers to achieving a more predictive understanding of AM fungal ecology. First, discrepancies among studies often arise due to non-standardized experimental approaches. Second, the absence of a comprehensive database on AM fungal traits further complicates progress in this field (TraitAM is expected to become publicly available in 2025; Chaudhary, personal communication). Moreover, the validity and relevance of the isolates and species employed in these studies are reliant on the taxa available in culture collections or from a few natural communities. A deliberate effort to include the numerous uncultured taxa, as well as hitherto overlooked fungal mutualists that occur in conjunction with AM fungi, such as Mucoromycotina, as suggested by Hoysted et al. , remains an important task. Given the existing data showing large variability in plant and soil responses to the AM symbiosis both among and within AM fungal species, we must address these issues to assess if, and to what extent, AM fungal traits determine plant growth responses or effects on ecosystems. Arbuscular mycorrhizal fungal traits, including for example hyphal length, arbuscule morphology, or the robustness of hyphal and spore walls, can modulate key functions and processes with ramifications not only for the health of the fungus itself but also for the associated plant and the soil environment (see Fig. and Table for detailed descriptions of key traits, their hypothesized functions, and methods for trait measurement). Here, we define AM fungal traits primarily as “functional markers,” which serve as indicators of mycorrhizal function and depend on the morphological, physiological, or phenological characteristics of the fungal partner (Chaudhary et al. ).
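To make the idea of a consistent, shareable trait database more concrete, the following sketch (ours; not part of any published framework or of TraitAM) shows how a single trait observation might be recorded so that measurements remain comparable across studies, grouped into the morphological, physiological and phenological categories used above, plus a genetic category anticipating the traits discussed later in this section. All field names and values are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TraitRecord:
    """One trait measurement for one fungal isolate under one set of conditions.
    Field names are illustrative, not a published standard."""
    isolate_id: str                  # culture-collection or voucher identifier
    species: str                     # e.g., "Rhizophagus irregularis"
    family: str                      # e.g., "Glomeraceae"
    trait_name: str                  # e.g., "spore_diameter", "hyphal_length_density"
    trait_category: str              # "morphological" | "physiological" | "phenological" | "genetic"
    value: float
    unit: str                        # e.g., "um", "m g-1 soil"
    host_plant: Optional[str] = None          # None for asymbiotic measurements
    soil_conditions: dict = field(default_factory=dict)  # e.g., {"pH": 6.2, "P_mg_kg": 12}
    method: str = ""                 # how the trait was measured
    reference: str = ""              # study or dataset the value comes from

# Example record (values invented for illustration only)
rec = TraitRecord(
    isolate_id="DAOM-197198", species="Rhizophagus irregularis", family="Glomeraceae",
    trait_name="extraradical_hyphal_length", trait_category="morphological",
    value=4.2, unit="m g-1 soil", host_plant="Daucus carota",
    soil_conditions={"pH": 6.5}, method="grid-line intersect", reference="hypothetical")
print(rec)
```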
In addition, genetic traits are becoming increasingly well understood (see below). In this context, AM fungal traits are most likely instrumental in defining ecosystem resistance, resilience and adaptability to environmental stress, as certain fungal isolates with specific traits may demonstrate superior robustness or flexibility under changing conditions. Conceptualizing the form and function of AM fungal traits becomes clearer when contextualized within the lifecycle of the fungal organism. We can broadly categorize the lifecycle of an AM fungus into two phases: (1) the asymbiotic phase, in which the dispersed spores (or other propagules) are activated, germinate and explore the soil for a compatible host, and (2) the symbiotic phase, which includes four stages: a) initiation of root colonization; b) formation of structures within the root cortex; c) extension of mycelium into the soil matrix and possibly other hosts; and d) spore production and dispersal. Briefly, spores, hyphal networks, and colonized root fragments, identified as the three principal types of propagules, remain dormant until the proper abiotic/biotic conditions emerge (MacLean et al. ; Lanfranco et al. ). Hyphae emerging from these propagules perceive a host root, adhere to its surface, and commence root colonization. A swollen hyphopodium forms, from which a single hypha penetrates the root epidermis to access the cortex. A series of morphogenetic and molecular processes come into play at these initial stages, enabling the plant to recognize the presence of the AM fungus (as reviewed by Bonfante and Perotto ; Gianinazzi-Pearson et al. ; Bonfante and Genre ; Luginbuehl and Oldroyd ). Upon reaching the root cortex, the fungus colonizes intercellular spaces, forming the intraradical mycelium (IRM). This mycelium then differentiates into structures such as arbuscules or coils, and, in some taxa, vesicles and intraradical spores. Upon attaining a certain threshold of root colonization, hyphae extend beyond the root system into the soil matrix, forming the extraradical mycelium (ERM), which consists of runner hyphae, branched absorbing structures (BAS), spore-associated BAS, and spores. The expansive hyphal network, comprising the IRM and ERM, embodies the traits that underpin several ecosystem-level processes attributed to AM fungi (e.g., nutrient cycling, soil C sequestration, water regulation, soil formation, pathogen regulation, etc.). As we will explore next, these traits impact not just the host plants and soil environment, but also the fungal organism itself.
AM fungal spores
Arbuscular mycorrhizal fungal spores are among the largest (Aguilar-Trigueros et al. ) and most multinucleated spores (Cooke et al. ; Kokkoris et al. ) known in the kingdom Fungi and exhibit the phenotypic characteristics that enable species’ identification. Three types of spore formation are recognized (Walker et al. ). Glomoid spores are formed blastically at the tip of a hypha or by intercalary inflation of a hypha. Acaulosporoid spores involve the blastic formation of a sporiferous saccule with a neck, followed by the differentiation of spores laterally, inside the neck, or within the sporiferous saccule. Gigasporoid spores are differentiated at the tip of a small bulb or suspensor cell. Spores range widely in their traits including size, shape, color, and wall thickness (see Morton for a review) across and within species.
In fact, single isolates of some species are known to produce more than one type of spore (even the model fungus, Rhizophagus irregularis (Kokkoris et al. )). Spores have been observed to form individually in the soil, in loose clusters, or within small to large compact sporocarps. The spores’ cytoplasm contains not only nuclei (ranging from hundreds to thousands) but also lipid reserves that assist in germination and early colonization. Based on spore ontogeny, three main phenotypic characteristics are observed in AM fungal spores: spore wall, germinal walls, and germination structure, with the latter two absent in many species (Morton et al. ). Additionally, the spore walls of many species exhibit different types of ornamentation. Some AM fungi produce sporocarps (i.e., aggregations of spores) that function in reproduction (Yamato et al. ) and dispersal, including dispersal by mammals (Mangan and Adler ). Overall, variation in spore traits across species is hypothesized to reflect differences in reproduction (investing in fewer larger spores or many small spores), dispersal (long or short, different dispersal vectors), survival in the absence of the host (e.g., resistance to desiccation and pathogenesis) and early colonization strategies (Chaudhary et al. ). While the potential functions of most spore traits remain poorly understood (e.g., we could not find a study exploring the potential functional implications of spore ornamentation; see Table for more examples), trait-based studies are starting to emerge. For example, traits such as color, size, and abundance can mediate the effects of disturbances like fire and grazing (Hopkins and Bennett ).
Intraradical mycelium (IRM)
The AM fungal mycelial system colonizes two distinct environments: the IRM, which develops within plant roots in a consistent environment, and the ERM, which extends into the soil, where it encounters, by comparison, highly variable environmental conditions (Smith and Read ). Two broad anatomical groups of IRM can be recognized in mycorrhizal roots, the Arum-type, dominated by arbuscules, and the Paris-type, dominated by coils, although evidence suggests a continuum between these types, depending on the host plant and the fungus (Dickson ). Why these two types exist and can be formed by the same AM fungus is not well established. Studies making direct comparisons of trait efficiency under varying environments are needed to address this knowledge gap. Presence of ‘H’ branches in the IRM is more common in Glomeraceae taxa compared to Acaulosporaceae, while looping hyphae and hyphae with small-bumped projections are prevalent in species of the Gigasporaceae family (see Dodd et al. for a review). Arbuscules are highly branched structures with a turnover rate ranging from 7 to 16 days (Alexander et al. ) or longer in woody plants (Brundrett and Kendrick ), and they serve as the primary site of nutrient exchange between the fungus and the host. The main differences observed in arbuscule architecture relate to branching patterns. In Gigasporaceae, the trunk is wide, and branching is abrupt, whereas the trunk is narrow, and branching is gradual in Acaulosporaceae and Glomeraceae. Vesicles are thick-walled, globose to lobed structures that store lipids and contain many nuclei (Smith and Read ). They are not formed by members of Gigasporaceae, and there is some evidence that the same is true for basal families such as Ambisporaceae, Archaeosporaceae, and Paraglomeraceae.
There is a paucity of studies solely investigating the ecological role of vesicles, particularly in symbiotic efficiency under environmental stress. However, given that they serve as energy reserve structures, understanding the role of vesicle formation in edaphic processes such as C turnover is an important knowledge gap.
Extraradical mycelium (ERM)
The ERM is composed of two types of hyphae: unbranched runner hyphae, which run parallel to the root length to initiate secondary colonization, and highly branched absorptive hyphae responsible for soil nutrient uptake and translocation to the host (Friese and Allen ). Bago et al. and Dodd et al. observed the formation of ‘branched absorbing structures’—small groups of dichotomous hyphae—within the ERM in species of Glomeraceae. The ERM is also responsible for the formation of spores and auxiliary cells in the soil. Phenotypic variables associated with the ERM, such as hyphal length and density, interconnectedness, and hyphal diameter, have been studied in some AM fungal species (Dodd et al. ; Avio et al. ). However, ERM morphology, encompassing hyphal diameter and architectural configuration, which, for example, regulate nutrient and C transport efficiency, is poorly understood. We posit that this is largely governed by fundamental physical principles. Hyphae with smaller diameters possess a higher surface-area-to-volume ratio, potentially enhancing their capacity for nutrient absorption. However, according to the Hagen–Poiseuille law (Sutera ), smaller diameters increase resistance to fluid flow, thereby reducing efficiency in long-distance transport. Conversely, hyphae with larger diameters exhibit reduced internal resistance to flow, but this advantage is offset by a lower surface-area-to-volume ratio, which may diminish nutrient uptake efficiency. Additionally, hyphal architecture is likely to determine fluid transport efficiency; simple, linear structures minimize resistance for direct transport, while highly branched hyphae enhance nutrient scavenging capabilities, but this might increase internal transport resistance. The extraradical mycelium can form Common Mycorrhizal Networks (CMNs), where a single AM fungus can interconnect multiple plant hosts, facilitating resource exchange and, possibly, communication among hosts and AM fungi (Barto et al. ; Babikova et al. ). Perhaps the capacity of an AM fungus to form CMNs can be considered a fungal trait (Karst et al. ; Lehmann and Rillig ). Furthermore, novel traits specifically associated with CMNs could emerge. For example, hosts in a CMN can detect when one of the hosts in the network is under stress (e.g., herbivory) (Babikova et al. ; Song et al. ); however, the mechanisms for this remain elusive.
It can be hypothesized that AM fungi could gain an advantage by actively producing warning signals (Scott and Kiers ). However, the cost–benefit of producing such signals is unclear, considering competition among AM fungi forming CMNs and the fact that plants in communities may associate with different AM fungi (Schamp et al. ). A more plausible hypothesis for plant alert mechanisms via AM fungi is that these fungi simply act as passive conduits for unavoidable host cues, in which case such “signalling communication” may not be a trait per se . More research is certainly necessary to clarify the role of CMNs in natural ecosystems and to assess the extent to which the ability to form CMNs can be considered a fungal trait (Karst et al. ; Lehmann and Rillig ). The form and function of the main components of AM fungi ( i.e. , spores, IRM, and ERM) are intrinsically associated with physiological traits. These consist mainly of mechanisms ( e.g. , signaling) involved in spore germination and host recognition, enzyme activity, membrane transporters, and a wide range of biomolecules. Ultimately, they determine the resistance and resilience of AM fungi to specific environmental conditions and influence host and soil responses. For example, spores capable of germinating under extreme conditions or of storing energy to sustain asymbiotic growth for extended periods can be particularly important in some ecosystems. Despite their relevance, the physiological traits involved, and potential trade-offs remain poorly understood (see Akiyama and Hayashi ; Martin and van der Heijden ; Klein et al. ). Emerging research links AM fungal physiological traits to enzyme production for soil nutrient acquisition and storage. Perhaps the most relevant and widely studied are acid and alkaline phosphatases secreted by hyphae, which facilitate the release of inorganic P from organic compounds. This is a critical trait particularly in low-P soils (Joner et al. ; Plassard et al. ). For example, the proportion of arbuscules exhibiting alkaline phosphatase activity showed a positive correlation with shoot weight and P content (Joner et al. ). AM fungi can also enhance the uptake, transport, and assimilation of soil NO₃⁻ and NH₄⁺ into amino acids through the action of various enzymes (reviewed by Govindarajulu et al. and Jin et al. ). As experimental techniques such as in vitro root organ cultures, isotope labeling, and biochemical analyses of enzyme activity continue to evolve, so does the potential to link these physiological traits to AM functional roles in soil processes, plant community dynamics, and ecosystem function ( e.g. , Hestrin et al. ). A particularly relevant physiological trait is the ability of AM fungi to transport and store inorganic polyphosphate (Poly-P) (Ezawa et al. ). The uptake of inorganic phosphate (Pi) from the soil by the AM fungal mycelium is mediated by high-affinity Pi/H⁺ transporters, which have been identified in R. intraradices , F. mosseae , Gigaspora margarita , and Diversispora versiformis (reviewed by Rui et al. ). Once Pi is transported into the fungal cytoplasm, it is incorporated into Poly-P and subsequently translocated to the IRM and arbuscules. Poly-P molecules may serve as a reservoir when soil P is abundant, which can then be used by the fungi when P becomes scarce, possibly driven by source-sink relations with host(s) (Bunn et al. ). 
The capacity of different AM fungi to rapidly store and transport Poly-P can be seen as a key functional ‘response’ trait, reflecting both fungal resilience and variations in phosphate metabolism among different fungal taxa. For instance, Boddington and Dodd observed accumulation of Poly-P in the ERM of Gigaspora rosea but not in R. manihotis after a 10-week period, which could be related to different life-cycle strategies. Given the key roles of Pi transporters and Poly-P in the AM symbiosis, comparative studies should examine Pi transporter expression and Poly-P storage and transport in the IRM and ERM across Glomeromycota taxa. AM fungi have distinct lipid and fatty acid metabolism compared to saprotrophic fungi and rely on host-derived lipids for growth and development (Luginbuehl et al. ). The lipid and fatty acid profiles are unique AM fungal traits used as independent criteria for testing phylogenetic hypotheses (Bentivenga & Morton ) and to estimate mycelium biomass in soil and roots (Olsson et al. ). Thus, fatty acid content is an indicator of C allocation to storage (Olsson et al. ). Recently, the phospholipid fatty acid 16:1ω5 has been used as an indicator of the presence of AM fungi; however, studies should also include non-mycorrhizal plants or mycelium-free compartments (Olsson & Lekberg ). While lipid and fatty acid profiles have been primarily characterized in Glomeraceae and Gigasporaceae, expanding investigations to other taxa is essential for a more comprehensive understanding of these physiological traits. Another idea stemming from nutrient-storage processes may be the development of traits associated with post-mortem (necromass) ecological consequences of AM fungal traits. Koide et al. introduced the response-effect trait framework, which could be relevant for understanding how AM fungal traits influence ecological processes after fungal and plant death. For example, the potential roles of melanization in the survival of AM fungi ( e.g. , protection against UV radiation) could be relevant here (Deveautour et al. ). Moreover, this trait could play a role in persistence post-mortem (Fernandez and Kennedy ). A recent study on saprobic fungi providing a structured framework to understand physiological traits, can also help understand AM fungi (Camenzind et al. ). Specifically, the authors posit that stoichiometric flexibility in saprobic fungi is a key trait to maintain growth under resource-limited conditions. Stoichiometric flexibility could also apply to the ability of AM fungi to adjust nutrient exchange rates with host plants or reallocate resources to support hyphal networks under stress (Camenzind et al. ). For example, in P-deficient but nitrogen (N)-rich soils, AM fungi may increase P uptake efficiency by expressing more P transporters and phosphatases, and extending hyphae into nutrient-depleted zones. However, this change alters source-sink relations between symbionts with AM fungi requiring more C and the host more of the scarce P resources. Under drought, stoichiometric flexibility could enable AM fungi to mobilize stored resources ( e.g. , specific lipids, polyP) to sustain existing hyphae over growing new ones. Tolerance to drought, heavy metals, and fungicides involves various physiological traits in saprotrophic fungi, which in some aspects may be analogous to AM fungi ( e.g. , Hage-Ahmed et al. ; Riaz et al. ; Oliveira et al. ). 
We consider that further comparative studies between saprobic and AM fungi could help refine trait-based frameworks including physiological traits associated with nutrient acquisition, transfer and storage, C metabolism and symbiotic interaction traits such as those involved in host recognition. We define genetic measurements that have been proven or have the potential to reflect differences in life history strategies as “genetic traits”. Among these traits are the genetic organization of AM fungal strains, spore nuclear content, genome size, rDNA copy number and G + C content of the genome ( i.e. , percentage of nitrogenous bases in the DNA that are either guanine, G, or cytosine, C). Recent findings demonstrated that AM fungal strains belonging to one species carry thousands of nuclei in their coenocytic mycelia that either belong to one ( i.e. , homokaryotic) or two nuclear genotypes ( i.e. , dikaryotic (Ropars et al. )) with each of these genotypes having unique structure, genetic content and epigenetics (Sperschneider et al. ). Interestingly, the relative abundance of the coexisting genotypes in the dikaryotic strains appears to be deterministic and their regulation to be responsive to biotic ( e.g. , plant host identity) (Kokkoris et al. ) and abiotic factors ( e.g. , pH, temperature, nutrient content) (Cornell et al. ). Carrying two genomes instead of one may reflect differences in life-history strategies or different life stages if the same is shown in multiple species (Serghi et al. ). Homokaryotic strains exhibit higher and faster germination rates compared to the lower germination rates observed in dikaryotic strains. Conversely, dikaryotic strains demonstrate faster growth and produce larger, more interconnected ERM compared to their homokaryotic counterparts. This difference in nuclear organization can significantly influence the mycorrhizal response of their plant hosts. Specifically, and in contrast to expectations that two genomes might result in more mutualistic interactions, dikaryotic strains were inferior mutualists compared to the homokaryons when interacting with multiple potato cultivars (a highly mycorrhizal dependent crop) in greenhouse conditions (Terry et al. ). While we recognize that nuclear organization may be an important functional trait, until homo- versus dikaryons are found in more AM fungal species it might be premature to suggest this trait should be included in a program for standardization of trait measurement across AM fungal taxa. The spore’s nuclear content also seems to be associated with life history-traits, although further experimental evidence is needed. The range of nuclei present in spores correlates with spore size, ranging from 35,000 nuclei for spores of Gigaspora decipiens which have an average diameter of 400 µm, to 130 nuclei for the spores of Glomus cerebriforme with an average diameter of 80 µm (Kokkoris et al. ). These huge differences in nuclear content could be associated to spore viability and germination, and overall colonization ability after dispersal. For example, multiple re-germination events have been observed for Gigasporaceae spores when no host is encountered initially (Sward ), a trait that does not appear in Glomus species. It has been hypothesized that the numerous nuclei could serve as resource reserve via nucleophagy when facing starvation, a phenomenon observed in other fungi (Shoji et al. ; Kokkoris et al. ). 
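To put the nuclear counts quoted above into perspective, the short calculation below compares the reported values with the scaling expected if nuclei simply tracked spore volume; spores are treated as ideal spheres, which is a simplifying assumption, and the only inputs are the diameters and nuclear counts cited in the text.

```python
import math

# Diameters and nuclear counts cited in the text (Kokkoris et al.);
# spores are treated as ideal spheres, which is a simplifying assumption.
spores = {
    "Gigaspora decipiens": {"diameter_um": 400.0, "nuclei": 35000},
    "Glomus cerebriforme": {"diameter_um": 80.0,  "nuclei": 130},
}

for data in spores.values():
    radius = data["diameter_um"] / 2.0
    data["volume_um3"] = 4.0 / 3.0 * math.pi * radius**3

big = spores["Gigaspora decipiens"]
small = spores["Glomus cerebriforme"]
print(f"volume ratio (big/small): {big['volume_um3'] / small['volume_um3']:.0f}x")
print(f"nuclei ratio (big/small): {big['nuclei'] / small['nuclei']:.0f}x")

# A 5x difference in diameter corresponds to a 125x difference in volume,
# whereas the reported nuclear counts differ by roughly 270x, so the larger
# spore also carries more nuclei per unit volume under these assumptions.
```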
Despite the number of genotypes and the number of nuclei present in AM fungal networks and spores, the overall genome size might influence the reproductive rate, environmental adaptability and overall resource economy of a species/strain. Although not very common for fungi, linkage of genome size to life-history affiliations is not a novel concept in plant community ecology. Grime and Mowforth linked plant genome size to climate growth conditions, Veselý et al. to early flowering events and preference for humid conditions, and Bhadra et al. to multiple functional traits related to plant morphology, physiology, performance and survival. Our knowledge on the variation of genome size in AM fungi is limited due to the low number of sequenced genomes. Regardless, we know that the variation is extreme, with larger species (Gigasporaceae) having genomes that reach 740 Mb and smaller species ( e.g. , Rhizophagus clarus ) 116 Mb (Kokkoris et al. ). Expanding our datasets with this information is important for uncovering connections between genome size and the morphological, physiological, and phenological traits of AM fungi. A lesser explored but potentially significant genomic trait in AM fungi is the ribosomal DNA (rDNA) copy number per genome. In bacterial communities, variation in ribosomal RNA operon copy number has been linked to ecological strategies, where bacteria with low copy number are slow-growing but well adapted to resource-limited environments. In contrast, individuals with higher copy numbers tend to thrive in nutrient-rich conditions due to their rapid growth potential (Roller et al. ). Similarly, fungi can vary considerably in the number of rDNA copies they carry in their genomes, with previous estimates ranging from 14 to more than 1400 copies across 91 taxa exhibiting a strong phylogenetic signal but no clear correlation to ecological lifestyle (Lofgren et al. ). In AM fungi, the copy number of rDNA has been examined only for R. irregularis where a single genome contains up to 11 copies of rDNA (Maeda et al. ); an extremely low number when compared to other fungi and eukaryotes. Eukaryotes with exceptionally low rDNA copy numbers often share similar characteristics, such as symbiotic and asexually reproductive lifestyles, indicating niche preference (e.g., Dalrymple ; Gardner et al. ). It would be of interest to examine the rDNA copy variation across the AM fungal phylogeny to determine whether differences could provide insights into whether certain AM fungal species are more specialized for high- or low-nutrient environments or correspond to rhizophilic versus edaphophilic taxa, potentially linking rDNA copy number to habitat preference, colonization efficiency, and symbiotic compatibility. Finally, a particular genetic trait to be considered is the G + C content of genomic DNA, which has the potential to reflect ecological niche or pathogenicity in fungi (Yoder and Turgeon ). Once again, despite limited data as few complete genomes are available, substantial variation in G + C content exists in AM fungi (range from 25 to 36%), which could potentially reflect differences observed in mycorrhizal response and host preference (Malar et al. ). With recent advancement in single-cell genomics, collection and characterization of genetic traits becomes increasingly more feasible, even for environmentally derived samples ( e.g. , single spores). 
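Since G + C content is one of the simplest genomic quantities to derive once sequence data are available, a minimal sketch of the calculation is shown below; the toy sequence is a made-up placeholder and not a real AM fungal contig.

```python
def gc_content(sequence):
    """Return the G + C fraction of a nucleotide sequence, ignoring Ns and gaps."""
    counted = [base for base in sequence.upper() if base in "ACGT"]
    if not counted:
        raise ValueError("sequence contains no unambiguous bases")
    return sum(base in "GC" for base in counted) / len(counted)

# Made-up placeholder sequence for illustration only, not a real AM fungal contig.
toy_contig = "ATGCGTANNATCGCGGCTATTAAGC--GCGT"
print(f"G+C content: {gc_content(toy_contig):.1%}")
```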
Considering the above discussion, to overcome challenges related to the measurement of AM fungal traits, and to obtain a more accurate understanding of AM fungal-plant interactions, we suggest the following points for future research.

Share trait data through a centralized database of AM fungal traits. This is a keystone task to integrate data into analyses aimed at predicting plant responses and ecosystem processes. Published databases of AM fungal traits include FUN^FUN, which describes spore size and shape (Aguilar-Trigueros et al. ), or are limited to mycorrhizal type and root colonization primarily at early stages of plant development (Soudzilovskaia et al. ). Certain trait data are also collated along with AM fungal culture collections ( e.g. , INVAM, CICG, CCAMF, BEG), though public accessibility of such data is limited. Currently these trait datasets are not interoperable or harmonized. A new trait database, TraitAM, expands on FUN^FUN by incorporating additional spore traits, new calculated indices, and an updated phylogenetic tree to examine trait conservatism. It is expected to be publicly accessible and downloadable in full format in 2025 and could provide a generic structure on which to build future AM fungal trait data efforts (Chaudhary, personal communication). We propose either a new database or the integration of a generic structure for individual AM fungal taxa stemming from the traits proposed in Table , consistent with the principles of 'Observation and Measurement Ontology', into existing ones (Madin et al. ). The central focus of this database is the taxonomy at the species and strain levels, along with accession codes, if available. Ancillary data relate to: a) the site of origin, including latitude, longitude, date, and observer; b) the source, whether from field observations, a culture collection, or literature data; and c) metadata and information about experimental treatments used to measure traits, including but not limited to the levels of replication and variation associated with each estimate. A trait and measurement component integrates all information related to a specific trait (spores, mycelium, arbuscules, vesicles) and its measurement values and units.

Determine traits at fine levels of taxonomic resolution. Results from distinct experimental approaches indicate that AM fungal traits exhibit some conservation at the family level, although variation within these clades was also observed (Hart and Reader ; Maherali and Klironomos ). Seemingly inconsistent with this finding, high intraspecific variability in root colonization, ERM length and plant responses has been demonstrated for several species (Mensah et al. ; Schoen et al. ; Stahlhut et al. ). As argued above, there is a need to conduct additional comparative studies using different species within the same genus to investigate trait conservatism.

Expand the scope of research to include a broader range of AM fungi, with a particular focus on uncultured and underrepresented taxa. Traits have been studied focusing on Glomeraceae, Acaulosporaceae, and Gigasporaceae, harboring 76% of the total number of species in the Glomeromycota. Other families such as Diversisporaceae, Entrophosporaceae, and basal families such as Paraglomeraceae and Archaeosporaceae are rarely included in experiments and there is very little information on their traits, despite being commonly present in communities.
Glomeromycota comprise 12 families and 357 species but likely surpass that number by a factor of 5 to 15x (Öpik et al. ; Lutz et al. ). However, we estimate that only ca. 88 species are represented in culture collections worldwide.

Measure and report AM fungal traits using standardized experimental approaches. AM fungal traits reported in the literature have been assessed using diverse methodologies, complicating direct comparisons across studies. To address this challenge, we propose a standardized set of minimum parameters for studying AM fungal traits (see 'Experimental approaches' section below). A priority will be to validate the reproducibility of AM fungal trait measurements across different research teams, using the same starting inoculum material.

Determine the variability (plasticity) of AM fungal trait expression. Our understanding of AM fungal traits is limited by insufficient knowledge of their consistency under varying environmental conditions. Targeted experiments are needed to assess how specific environmental factors affect these traits. Furthermore, studying trait variation across traits and taxa could reveal whether some traits are more conserved or show greater variability within certain taxa.

Utilize AM fungal isolates deposited in culture collections. Culture collections worldwide maintain a considerable variety of AM fungal isolates cultivated on mineral substrates and in root organ cultures. These isolates are well characterized taxonomically, thereby representing important resources for comparative studies of trait variation. These centers also play a key role in training personnel through specialized workshops that offer both hands-on experience and theoretical knowledge on measuring AM fungal traits.

Embrace AM fungal community diversity. A basic premise of ecophysiology is that environmental filters will select for specific traits/adaptations (Lambers et al. ). Given that some traits can be measured at the community level ( e.g. , hyphal nutrient stoichiometry (Zhang et al. )), it is conceivable to conduct experiments (physical disturbance, nutrient additions, drought, etc.) on whole natural AM fungal communities and examine correlations between environmental filters and traits (Chagnon ). Coupled with rotating and static cores (Johnson et al. ), these experiments could also assess AM fungal growth and mycorrhizal function. With synthesis studies identifying major drivers of AM fungal community structure at global scales (Davison et al. ), the next frontier is to move beyond taxonomy and assess the functional biogeography of AM fungi ( e.g. , Violle et al. ).

Use microphotography, artificial intelligence (AI) and machine learning. The integration of microphotography, AI and machine learning algorithms could help standardize and accelerate trait quantification and data integration, and eliminate the subjectivity of the observer, a common issue entangled with our current quantification approaches. Successful integration of the three can create automations that will allow for large dataset acquisition, no longer limited by space and time ( e.g. , continuous growth measurements of the ERM and its traits, or continuous progression of root colonization with the help of fluorescent markers). These approaches can help reveal behavioral patterns that have so far remained undetected due to technical limitations.

As mentioned above, key ecological functions of the AM fungal symbiosis ( e.g.
, host plant growth promotion), depend on both the fungal isolate's traits and its interaction with the host. While this may limit some trait measurements, the hypothesis-driven approach of linking trait and function(s) outlined here, though not addressing all ecological questions about AM fungi, marks a significant step forward.

Experimental approaches

Various experimental approaches can be employed to investigate morphological and physiological AM fungal traits in semi-realistic conditions including soil or substrate and plant host(s). We identified five main approaches in the literature. Gazey et al. utilized a sterile mesh bag (Fig. A) approach to examine sporulation and external hyphae production in two Acaulospora species. This technique involves the use of 25 µm mesh bags (2 cm wide, 10 cm across, and 10 cm long) containing 200 g of uninoculated, steamed soil. The mesh bags can be placed in pots anywhere along the soil profile, and data encompassing external hyphal length and spore numbers are gathered periodically. Outside the sterile mesh bag, host biomass, root colonization and other variables can be measured. The approach allows measuring AM traits within a controlled sterile soil environment without the interference of propagules present in the original inoculum. The sterile soil should be free from AM fungal propagules and researchers should include controls with non-inoculated mesh bags. Furthermore, as samples are collected from a small soil volume inside the mesh bag over time, correlations between root colonization, spores, and external hyphae are more likely to represent realistic relationships among these traits. Nonetheless, this approach has potential limitations, including the need for propagules to penetrate the mesh bag, which may pose challenges for certain taxa ( e.g. , Thonar et al. ). Additionally, analyzing multiple isolates or species simultaneously can be time-consuming and labor-intensive. Direct measurement of fungal traits may require separating hyphae from the substrate altogether, which can be achieved using a slight variation of the sterile mesh bag. Zhang et al. used hyphal in-growth bags (Fig. A) (glass beads with silt/clay to which a dilute nutrient solution was added, enclosed in 38 µm nylon mesh) to harvest mycelium for C, N, and P analysis after eight weeks (Neumann and George ). Bags (2 × 10 cm) containing 40 g of the mixture were buried in the top 10 cm of pots. The mesh blocked Medicago sativa roots but not Festuca arundinacea root hairs, which required 10 µm mesh (Zhang et al. ). Soil particles adhering to hyphae must be removed before analysis. This method, unlike sterile mesh bags, better suits physiological studies. A third approach was used by Jakobsen et al. to study AM fungal hyphae abundance in soil and P uptake into roots. Root compartment bags (Fig. B) consist of cylindrical (60 mm diameter) bags constructed using a 25 µm nylon mesh and filled with AM fungal inoculum. Bags are placed in 1.5 L pots with steamed sterilized soil, and pre-germinated seeds are transplanted into each bag, confining roots within the bag while allowing AM fungal hyphae to extend into the surrounding soil. After 25 days, the bags are transplanted into rectangular PVC boxes (300 × 185 × 130 mm) containing 7 kg of steamed dry soil. Hyphal growth is measured by collecting 10 mm soil cores on five dates at various distances from the root compartment.
The root compartment method, like the sterile mesh bag approach, enables hyphal length measurements in a mycorrhiza-free environment. Transplanting the compartment into large boxes makes it ideal for studying AM fungal hyphal spread and comparing fungal taxa; however, it requires a substantial amount of soil. A variation of the method including a trap plant allows measurements of resource movement between hosts in a community (Mikkelsen et al. ), while compartments separated by an air gap allow measurements of water transport when combined with stable isotopes and dyes (Kakouridis et al. ). A fourth approach, named inoculated containers (Fig. C), was used by Hart and Reader to establish the taxonomic basis for the variation in root colonization strategies observed among AM fungal families. Fungal biomass is initially measured based on ergosterol concentration to equalize the amount of inoculum at the onset of the experiment. However, we recommend using a different approach ( e.g. , fatty acids) as it has been shown that AM fungi do not produce ergosterol (Olsson et al. ; Olsson and Lekberg ). Containers (also known as cone-tainers) (4 cm diameter × 20.5 cm deep) are 2/3 filled with soil, inoculated with spores, hyphae, and colonized root fragments, and sown with leek as a surrogate host. After 30 days, shoots are harvested, and the soil undergoes experimental treatments with different hosts. Containers are harvested six times over 12 weeks to measure root and soil colonization. This small-container approach enables studying multiple isolates over time, with standardized fungal biomass allowing direct taxonomic comparisons. However, as hyphal abundance is measured in the same container as the inoculum, distinguishing new hyphae from the original ones is not possible. Johnson et al. introduced a method using rotative cores (Fig. D) to study CMNs. Conical containers (270 mL) with a 2 × 5 cm slot covered by 40 µm nylon mesh (although the whole container can be covered with a mesh) or a hydrophobic membrane are filled with soil-sand mixtures, inoculated with AM fungi, and seeded with a host plant. After 2–3 months of CMN establishment, treatments are applied by either keeping containers static (undisturbed CMNs) or rotating them to sever hyphal networks. Despite being labor-intensive, this method is effective for studies on CMNs and the effects of hyphal disruption on soil ( e.g. , bacterial community structure, aggregation) and plant ( e.g. , biomass, nutrient allocation) properties (Babikova et al. ). It can also be used to assess physiological aspects of the ERH by adding tracers that only hyphae have access to (Lekberg et al. ). The study of genetic traits typically requires the use of techniques for isolation into in vitro culture (Declerck et al. ). For example, in vitro root-organ culture methods have enabled major breakthroughs in the understanding of genetic and physiological traits such as nutrient exchange ratios (Cranenbrouck et al. ; Kiers et al. ) and patterns of hyphal anastomosis between isolates in the same species (Giovannetti et al. ). In addition, in vitro systems may be instrumental in investigating trait interactions between AM fungi and other soil biota (Faghihinia et al. ; Vieira et al. ). Despite extensive research, a comprehensive understanding of AM fungal traits across taxa remains elusive due to experimental variability ( e.g. , host plants, soil type, fertilization, environment).
We propose standardizing key experimental items while collecting non-standardizable factors ( e.g. , soil type, lighting) as metadata. This approach enables experiments across labs using the same AM fungal taxa under varied conditions ( e.g. , disturbance, salinity, drought, CO 2 , temperature, light) to assess trait conservation and prediction accuracy based on taxonomy.

Standard plant-growth conditions

Standardizing mycorrhizal fungal trait quantification improves data quality, reduces bias, and enhances study comparability and reproducibility. This consistency strengthens meta-analyses and fosters collaboration, advancing our understanding of the AM symbiosis. Based on the authors' expertise, we propose guidelines for standardizing trait measurements. However, potentially necessary deviations from these recommendations, if well-documented, remain valuable for understanding AM fungal life histories.

Pot size and type: Measuring AM fungal traits often involves numerous experimental units. To ensure feasibility, pot size and substrate are crucial considerations. For mesh bags, pots over 2 L are recommended to avoid root-bound plants. For inoculated containers or rotative cores placed in a larger pot, conical containers (4 cm diameter × 20.5 cm deep) with open bottoms are ideal (Weremijewicz and Janos ).

Soil texture: Soil texture influences AM fungal mycelium production and sporulation by affecting the pore space available for growth. Inert media like sand:expanded clay could be used to standardize the substrate. While these media have the advantage of not containing AM fungal propagules and facilitate spore and ERM hyphal extraction at the end of the experiment, they hardly represent the common habitat of AM fungi. We recommend using sterilized loam soil for standard experiments, as it is easier to handle, facilitates root washing, and provides a more representative soil type. If unavailable, adding quartzite or coarse river sand is suggested to bring the texture closer to that of a loam soil.

Soil sterilization: Autoclave at 121 °C (for as long as needed; check by placing a rod with autoclave tape at the center), repeated twice with a 24 h interval. However, if possible, gamma radiation is a useful but costly alternative that limits chemical alteration of organic matter and downstream consequences on dissolved organic C, aggregate stability and manganese toxicity (Boyd ; Berns et al. ).

Nutrient solution: Plant nutrition is an important aspect to be considered as it impacts the establishment and functioning of the AM symbiosis. We suggest the use of a low-P ( i.e. , 0.4 mM of KH 2 PO 4 ) half-strength Hoagland's solution for monocots to provide the minimum amount of macro- and micronutrients to the host. A microbial wash, prepared by mixing AM fungal inocula with water followed by filtration through a 20 µm sieve, is typically added to test pots (Ames et al. ). As this step is challenging to standardize, we recommend including a control treatment without the wash to evaluate the microorganisms' impact on AM fungal traits.

Host plant: Mycorrhizal host plants vary widely in their growth habits ( e.g. , grasses, trees, forbs), growth rates, and root architecture, which affect root colonization and hyphal growth. Host preference is another factor to consider, as it impacts sporulation (Bever et al. ).
We suggest the use of Sorghum × drummondii (Sudan grass) as a standard host because: a) it has been widely used to grow and maintain a vast array of AM fungal germplasm in culture collections (Morton et al. ); b) it has a fasciculate root system that provides space for root colonization; and, c) it is mycorrhizal dependent. In addition to these recommendations, metadata should include temperature, soil/substrate type, pH, soil moisture content, soil fertility, light intensity and experiment duration.
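One way to make the standard conditions and metadata recommendations above easier to report consistently is to capture them as a structured record; the sketch below is a minimal illustration, and the field names and layout are our own placeholders rather than an established schema.

```python
# Hypothetical record of the standard conditions proposed above; the field
# names and layout are illustrative placeholders, not a published schema.
standard_conditions = {
    "pot": {"min_volume_L": 2.0,
            "container": "cone-tainer, 4 cm diameter x 20.5 cm deep, open bottom"},
    "substrate": {"type": "sterilized loam soil",
                  "texture_amendment": "quartzite or coarse river sand if needed"},
    "sterilization": {"method": "autoclave", "temperature_C": 121,
                      "cycles": 2, "interval_h": 24,
                      "alternative": "gamma radiation"},
    "nutrient_solution": {"base": "half-strength Hoagland",
                          "KH2PO4_mM": 0.4},
    "microbial_wash": {"filter_um": 20, "include_no_wash_control": True},
    "host_plant": "Sorghum x drummondii (Sudan grass)",
}

REQUIRED_METADATA = ("temperature", "soil_or_substrate_type", "pH",
                     "soil_moisture_content", "soil_fertility",
                     "light_intensity", "experiment_duration")

def missing_metadata(record):
    """Return the required metadata fields that are absent from a record."""
    return [field for field in REQUIRED_METADATA if field not in record]

example_record = {"temperature": 24, "pH": 6.5}   # toy experiment record
print(missing_metadata(example_record))           # fields still to be reported
```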
Single shot detection of alterations across multiple ionic currents from assimilation of cell membrane dynamics | 90c04e24-9dfa-4ef7-89a3-956752af4efa | 10933487 | Physiology[mh] | The complement of ion channels in a cell membrane underpins key aspects of neuronal function such as the shape of action potentials , adaptive versus non-adaptive firing response , integration of synaptic inputs , and some forms of short or long-term memory . Dysfunction of a single channel type can substantially alter cellular behaviour: for example, channelopathies, in which certain ion channels are either absent or exhibit abnormal conductances, are known to be the causative factor in forms of epilepsy , , pain disorders , cystic fibrosis , and cardiac arrhythmias . Channelopathies also form part of a more complex pathophysiology in Parkinson’s , and Alzheimer’s diseases, Rett syndrome , and autism . Such outcomes drive the development of efficient methods for detecting ion channel dysfunction. In relation to disease, channelopathies may arise from changes in ion channel density, expression, gene mutations, and loss of function, such as in autoimmune disease , . Mutational channelopathies are identified in research by high throughput sequencing, or patch-sequencing which aims to correlate morphological and electrical alterations to underlying gene mutations . These sequencing approaches face challenges in identifying disease-linked mutations, particularly in autoimmune diseases where channel function may be altered by factors beyond the mutations themselves. Patch-clamp electrophysiology is the primary technique used for profiling ion channels in vitro, both in research studies investigating ion channel dysfunction in disease models, and in screening candidate drugs targeting ion channels. However, in both scenarios this approach is low throughput and labor-intensive , particularly if neurological causes are a priori unknown or involve multiple ionic currents. A single shot method is therefore highly desirable which can reconstruct ionic currents from multiple channel types from the effects they induce in the electrical response of a neuron. This would have the benefit of removing guess work by inferring changes across all ion channels simultaneously and by mapping the range of genes encoding altered subunits to the exclusion of all others whose effects are unknown. In this work, we demonstrate a powerful method based on statistical data assimilation (DA) that extracts information on multiple ionic currents simultaneously from chaotically driven current-clamp recordings. The method synchronizes a Hodgkin–Huxley-like model to the membrane voltage oscillations of a hippocampal neuron to estimate ion channel parameters – such as maximal conductances , voltage thresholds, slopes of activation curves, and recovery time constants that constitute the fingerprints of individual ion channels – . The predictive power of our approach is based on the observation that ionic currents reconstructed from estimated parameters carry an uncertainty three times lower than the parameters themselves. We subsequently use the ionic charge transferred per action potential as a reliable metric to predict ion channel alterations induced by channel antagonists. We find that changes in predicted ionic charge match the selectivity and potency of well-characterized inhibitory compounds applied to block BK, SK, A-type K + , and HCN channels. 
This approach is to our knowledge unique in inferring actual changes across a range of ion channel types from subtle changes in membrane voltage dynamics. The method can be applied to primary tissue, including animal models of disease, rather than being limited to cell cultures. The statistical readouts indicate any changes across the range of ionic currents simultaneously in the cell of interest in response to a drug or treatment. This method may prove beneficial in early drug screening, and in research studies aiming to detect functional changes amongst a wide range of ionic currents.

Statistical data assimilation of pharmacologically altered neurons

The statistical DA workflow is schematically depicted in Fig. . The membrane voltage of a hippocampal CA1 neuron is recorded whilst being driven by a chaotic sequence of current waveforms designed to elicit hyperpolarizing and depolarizing responses across many time scales and amplitudes (Fig. a). For each experiment the current-clamp protocol was applied twice: first in the natural state and a second time after applying an antagonist to block a specific ion channel (Fig. ). We then synchronized the neuron model to electrophysiological recordings using interior point optimization, a constrained nonlinear optimization framework. The neuron model was a single-compartment Hodgkin–Huxley-type system incorporating the 8 ion channels most prevalent in the CA1 soma (Table ). Each modelled ion channel represents an amalgam of the possible subtypes of that channel: for example, the 'SK' channel represents the gating and response dynamics of both SK1 and SK2 subunits. Interior point optimization inferred the 67 parameters that best synchronize the model to electrophysiological data over an 800 ms long assimilation window (Table ). One set of 67 parameters was obtained from pre-drug data ( p pre ) and another from post-drug data ( p post ). Preliminary assimilations of model-generated data successfully recovered the 67 parameters of the original model to within 0.2% and with a 100% convergence rate. Convergence was achieved irrespective of starting conditions and the positioning of the assimilation windows in a 2000 ms long epoch. This indicates that the observability and identifiability criteria, which are necessary to reconstruct the model's state variables and parameters from measurements, are fulfilled. In contrast, the problem of assimilating biological neuron data is complicated by our lack of knowledge of the exact model. Model error introduces correlations between some parameter estimates. As a result, parameter search tends to converge towards multiple solutions depending on the choice of starting conditions. In order to mitigate the uncertainty on parameters, we generated a statistical sample of parameters p pre,1 … p pre,R and p post,1 … p post,R (Fig. b) by assimilating R windows offset by 80 ms from each other (Figs. a). We then completed 2 R conductance models by inserting the pre-drug and post-drug parameters in the model equations. The ionic current waveforms (Fig. c,d) and membrane voltage oscillations (Fig. e) were predicted by forward-integrating the stimulating protocol (Fig. a) with the pre-drug and post-drug completed models. We then numerically integrated the current waveform of each ion channel to obtain the ionic charge transferred per action potential, pre-drug and post-drug. We repeated this process for the R assimilation windows to generate a statistical distribution of the ionic charge transferred (Fig. c).
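The charge-transfer metric described above amounts to integrating each reconstructed ionic current over a short window centred on an action potential. A minimal sketch of that step is given below; the Gaussian current pulse and the window limits are synthetic placeholders standing in for the model output, not data from the study.

```python
import numpy as np

def charge_per_spike(current_uA_cm2, t_ms, window_ms):
    """Integrate an ionic current over a time window around one action potential.

    A current in uA/cm^2 integrated over ms yields a charge in nC/cm^2."""
    t0, t1 = window_ms
    mask = (t_ms >= t0) & (t_ms <= t1)
    y, x = current_uA_cm2[mask], t_ms[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoidal rule

# Synthetic stand-in for one reconstructed current waveform (not model output).
t = np.linspace(0.0, 20.0, 2001)                    # time, ms
i_k = 30.0 * np.exp(-((t - 10.0) / 1.5) ** 2)       # current, uA/cm^2

q = charge_per_spike(i_k, t, window_ms=(5.0, 15.0))
print(f"charge per action potential: {q:.1f} nC/cm^2")
```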
All R current waveforms were calculated at the site of one action potential chosen for being in the short time interval overlapped by all assimilation windows. The statistical distributions of ionic charges were plotted (Fig. c) and analyzed (Mann–Whitney) to estimate the median and mean predicted inhibition for each channel. To ensure that our predictions are not affected by the firing frequency of neurons, which we found to be particularly sensitive to potassium channel inhibition, we calculated the charge transfer at the site of a single action potential instead of over the entire assimilation window. Ionic current waveforms were reconstructed and analyzed in parallel allowing all current alterations to be predicted in one shot (Fig. d). Forward integration of the model also generated the predicted membrane voltage time series (Fig. e). The agreement between the experimentally observed and the predicted voltage provides an intermediate validation point of our method.

Accuracy of current and parameter predictions

The main challenge to inferring biologically relevant information from actual neurons as opposed to model data is to minimize the error introduced in the parameter field by model error and, to a lesser extent, measurement error. In order to quantify the impact of model or data error, we calculated 100 sets of parameters by assimilating model data corrupted by 100 different realizations of white noise (Fig. ). The parameters that deviate significantly from their true values (Fig. a) are few and mainly associated with gate recovery times (t, ϵ) (Table ). In order to clarify the nature of parameter correlations, we calculated the 67 × 67 covariance matrix of this dataset (Fig. b). We find that the covariance matrix exhibits a block structure whereby the correlations between parameters pertaining to the same ionic current are greater than those pertaining to different ionic currents. These findings suggest that the greater parameter correlations might compensate each other in the calculation of ionic currents. This underpins our key hypothesis that ionic currents might be calculated with a higher degree of confidence than their underlying parameters. A calculation of standard deviations of ionic currents and parameters over a range of noise levels (Fig. c) validates this hypothesis by predicting a three times lower uncertainty on ionic currents. This finding allows us to focus on ionic current as a metric of ion channel alterations, and to validate the magnitude of alterations against the effect of antagonists of known selectivity and potency. Figure d plots the eigenvalues of the covariance matrix which measure the lengths of semi-axes of the data misfit ellipsoid. There are six outliers at the left which point to six principal directions along which parameters are very loosely constrained, with Δp/p ≈ 100%. Along the 61 other principal directions, Δp/p varies between 7% and 0.001%, confirming that most parameter estimates are well constrained as observed in Fig. a. We now use these findings to predict the selectivity and potency of four ion channel antagonists applied to rodent hippocampal neurons.

Predicting the alterations of ion channels induced by four antagonists in hippocampal neurons

BK channel blockade

The analysis of neurons subjected to BK channel blocker iberiotoxin (IbTX; 100 nM; Fig.
; R = 15 pre-drug and post-drug) predicted a 12.1% reduction in median and 14.8% reduction in mean BK-mediated charge per action potential. This was one statistical discovery across all channels in the drug-applied data (Fig. a; U = 25; q < 0.01; mean ranks 21.2 [pre-IbTX], 9.8 [post-IbTX]). Charge transfer decreased from 29.4 nC cm −2 to 25.9 nC cm −2 . A compensatory increase in leak current was also identified, likely due to decreased K + permeability caused by IbTX (U = 37.5; q < 0.01; mean rank 10.5 [pre-IbTX], 20.5 [post-IbTX]). Leak charge transfer increased from 6.3 to 10.3 nC cm −2 , with a mean increase of 46%. There were no statistical discoveries for any other channels. This demonstrates that models constructed by DA correctly predict the selectivity of IbTX. Figure b predicts the reduction in charge transfer through the BK channel targeted by IbTX. Identically driven action potentials measured pre-IbTX and post-IbTX (Fig. c) are compared to the action potentials predicted from our pre-IbTX and post-IbTX models (Fig. d). The model correctly predicts the reduction in afterhyperpolarization (fAHP) observed post-IbTX. BK current waveforms were also predicted by forward-integration of the pre-IbTX and post-IbTX conductance models (Fig. e). The area under both waveforms yielded the drop in BK-mediated charge transfer plotted in Fig. b.

SK channel blockade

Following application of the SK-specific channel blocker apamin (150 nM; Fig. ; R = 18 pre-drug and post-drug), our model predicted lower SK-mediated charge transfer (Fig. a; U = 65; q < 0.01; mean rank 23.9 [pre-apamin], 13.1 [post-apamin]). Median charge transfer dropped from 1.66 nC cm −2 to 0, with a mean reduction of 74.0%. This was the sole statistical discovery across all channels in the spike-normalized data. Our model thus correctly predicts that apamin is an antagonist of the SK channel. Figure b shows the predicted potency of apamin by plotting the reduction in SK-mediated charge transfer from the pre-apamin state to the post-apamin state. The identically driven action potentials measured pre-apamin and post-apamin (Fig. c) are compared to the action potentials predicted by our conductance models (Fig. d). The models correctly predict the reduction in medium afterhyperpolarization (mAHP) observed post-apamin in the tail end of the action potential. Forward-integration of the models also predicted the SK current waveforms at the site of an action potential pre- and post-apamin (Fig. e). These waveforms were integrated in time to obtain the predicted amounts of SK-mediated charge transfer which were then plotted in Fig. b.

Kv channel blockade

Following application of 4-Aminopyridine (4-AP) to block the voltage-gated potassium channels (300 µM; Fig. ; R = 19 pre-drug; R = 18 post-drug), our completed models predicted a reduction in charge transfer mediated by A-type K + channels (Fig. a; U = 52; q < 0.001; mean rank 25.3 [pre 4-AP], 12.4 [post 4-AP]). Median charge transfer dropped from 26.1 to 19.7 nC cm −2 with a 19.0% mean reduction. In addition, the model predicts a 10.0% increase in median charge transfer (8.8% mean) through the BK channel (U = 73; q < 0.01; median charge 41.2 nC cm −2 [pre 4-AP], 45.3 nC cm −2 [post 4-AP]); and a reduction in Ca 2+ -mediated charge transfer (U = 79; q < 0.01; mean rank 23.8 [pre 4-AP], 13.9 [post 4-AP]). Ca 2+ -mediated charge dropped from 9.65 to 9.23 nC cm −2 with a mean reduction of 3.0%.
Figure b predicts the reduction in charge transfer through the A-type K + channels targeted by 4-AP. Action potentials measured pre 4-AP and post 4-AP (Fig. c) match the action potentials predicted by our pre 4-AP and post 4-AP models (Fig. d). The model correctly predicts the widening of action potentials induced by 4-AP, which follows from a slower AHP repolarization. Figure e plots the predicted A-type K + current waveforms elicited within the same action potential. The predicted current amplitude drops sharply in response to 4-AP. The K + charge amounts transferred per action potential are obtained by integrating the pre 4-AP and post 4-AP current waveforms and plotted in Fig. b.

HCN channel blockade

We finally applied the ZD7288 antagonist to block the HCN channels (50 μM, Fig. ; R = 19 pre-drug and post-drug). Our completed models predict a reduction in HCN-mediated charge transferred across the full length of the assimilation window (Fig. a; U = 81; q < 0.01; mean rank 24.7 [pre-ZD7288], 10.5 [post-ZD7288]). Median charge transfer was reduced from 1.618 µC cm −2 [pre-ZD7288] to 0.0, with a mean reduction of 85.5%. In addition, our model predicts an increase in leak current (U = 77; q < 0.01; mean rank 14.1 [pre-ZD7288], 25.0 [post-ZD7288]), with median charge transfer increasing from 3.20 to 4.19 µC cm −2 , a mean increase of 25.1%. These numbers represent the HCN charge amounts transferred across one 800 ms long assimilation window rather than per action potential as above. This is because the HCN current contributes to subthreshold oscillations, unlike the SK, BK and A-type currents which contribute to action potentials. Figure b predicts the total blockage of the HCN channel targeted by ZD7288. The membrane voltage response to a hyperpolarizing current step applied before and after ZD7288 (Fig. c) is compared to the responses predicted by the pre-ZD7288 and post-ZD7288 models to the same current step (Fig. d). The model correctly predicts the faster adaptation and the reduced amplitude of the membrane voltage change post-ZD7288. In order to validate the results of data assimilation, we now compare the predicted changes in ionic charge transfer to the selectivity and potency of each ion channel antagonist determined by IC50 analysis. The results are summarized in Table . The predicted reductions in charge transfer are in good agreement with the degree of inhibition expected for the SK, BK, A-type and HCN channels. We further discuss below the inhibition of sub-types of the SK, BK, A, and HCN channels. Besides correctly identifying the selectivity of known antagonists, DA is sensitive enough to pick up correlations between ion channels driven by the modulation of reversal potentials (Fig. a) or compensation mechanisms (Fig. a). We also determined the degree of confidence in our predictions by computing the coefficient of variation (Table ). The results consistently show a ± 11% uncertainty on charge estimates.
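For readers who wish to reproduce the style of comparison used above, the sketch below applies a Mann–Whitney test to pre- and post-drug charge distributions for several channels and then adjusts the p-values across channels with a Benjamini–Hochberg correction; the sample values are invented placeholders, and the correction shown is one reasonable choice rather than necessarily the exact procedure used in this study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

# Invented placeholder samples of predicted charge per spike (nC/cm^2),
# one value per assimilation window, before and after an antagonist.
rng = np.random.default_rng(1)
channels = {
    "BK":   (rng.normal(29.0, 2.0, 15), rng.normal(26.0, 2.0, 15)),
    "SK":   (rng.normal(1.7, 0.4, 15),  rng.normal(1.6, 0.4, 15)),
    "Leak": (rng.normal(6.5, 1.0, 15),  rng.normal(10.0, 1.5, 15)),
}

names, pvals = [], []
for name, (pre, post) in channels.items():
    _, p = mannwhitneyu(pre, post, alternative="two-sided")
    names.append(name)
    pvals.append(p)

for name, q in zip(names, benjamini_hochberg(pvals)):
    print(f"{name}: q = {q:.4f}")
```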
We then synchronized the neuron model to electrophysiological recordings using interior point optimization , a constrained nonlinear optimization framework. The neuron model was a single compartment Hodgkin–Huxley-type system incorporating the 8 ion channels most prevalent in the CA1 soma (Table ). Each modelled ion channel represents an amalgam of the possible subtypes of that channel: for example, the ‘SK’ channel represents the gating and response dynamics of both SK1 and SK2 subunits. Interior point optimization inferred the 67 parameters that best synchronize the model to electrophysiological data over an 800 ms long assimilation window (Table ). One set of 67 parameters was obtained from pre-drug data ( p pre ) and another from post-drug data ( p post ). Preliminary assimilations of model-generated data successfully recovered the 67 parameters of the original model to within 0.2% and with a 100% convergence rate , . Convergence was achieved irrespective of starting conditions and the positioning of the assimilation windows in a 2000 ms long epoch. This indicates that the observability and identifiability criteria which are necessary to reconstruct the model’s state variables and parameters from measurements, are fulfilled. In contrast, the problem of assimilating biological neuron data is complicated by our lack of knowledge of the exact model. Model error introduces correlations between some parameter estimates. As a result, parameter search tends to converge towards multiple solutions depending on the choice of starting conditions. In order to mitigate the uncertainty on parameters, we generated a statistical sample of parameters p pre ,1 … p pre , R and p post ,1 … p post , R (Fig. b) by assimilating R windows offset by 80 ms from each other (Figs. a). We then completed 2 R conductance models by inserting the pre-drug and post-drug parameters in the model equations. The ionic current waveforms (Fig. c,d) and membrane voltage oscillations (Fig. e) were predicted by forward-integrating the stimulating protocol (Fig. a) with pre-drug and post-drug completed models. We then numerically integrated the current waveform of each ion channel, to obtain the ionic charge transferred per action potential, pre-drug and post-drug. We repeated this process for the R assimilation windows to generate a statistical distribution of the ionic charge transferred (Fig. c). All R current waveforms were calculated at the site of one action potential chosen for being in the short time interval overlapped by all assimilation windows. The statistical distributions of ionic charges were plotted (Fig. c) and analyzed (Mann–Whitney) to estimate the median and mean predicted inhibition for each channel. To ensure that our predictions are not affected by the firing frequency of neurons, which we found to be particularly sensitive to potassium channel inhibition, we calculated the charge transfer at the site of a single action potential instead of over the entire assimilation window. Ionic current waveforms were reconstructed and analyzed in parallel allowing all current alterations to be predicted in one shot (Fig. d). Forward integration of the model also generated the predicted membrane voltage time series (Fig. e). The agreement between the experimentally observed and the predicted voltage provides an intermediate validation point of our method. 
The main challenge to inferring biologically relevant information from actual neurons as opposed to model data is to minimize the error introduced in the parameter field by model error and, to a lesser extent, measurement error – . In order to quantify the impact of model or data error, we calculated 100 sets of parameters by assimilating model data corrupted by 100 different realizations of white noise (Fig. ). The parameters that deviate significantly from their true values (Fig. a) are few and mainly associated with gate recovery times (t, ε) (Table ). In order to clarify the nature of parameter correlations, we calculated the 67 × 67 covariance matrix of this dataset (Fig. b). We find that the covariance matrix exhibits a block structure whereby the correlations between parameters pertaining to the same ionic current are greater than those pertaining to different ionic currents. These findings suggest that the more strongly correlated parameters might compensate each other in the calculation of ionic currents. This underpins our key hypothesis that ionic currents might be calculated with a higher degree of confidence than their underlying parameters. A calculation of standard deviations of ionic currents and parameters over a range of noise levels (Fig. c) validates this hypothesis by predicting a three times lower uncertainty on ionic currents. This finding allows us to focus on ionic current as a metric of ion channel alterations, and to validate the magnitude of alterations against the effect of antagonists of known selectivity and potency. Figure d plots the eigenvalues of the covariance matrix, which measure the lengths of the semi-axes of the data misfit ellipsoid. There are six outliers at the left which point to six principal directions along which parameters are very loosely constrained, with Δp/p ≈ 100%. Along the 61 other principal directions Δp/p varies between 7% and 0.001%, confirming that most parameter estimates are well constrained, as observed in Fig. a. We now use these findings to predict the selectivity and potency of four ion channel antagonists applied to rodent hippocampal neurons. BK channel blockade The analysis of neurons subjected to the BK channel blocker iberiotoxin (IbTX; 100 nM; Fig. ; R = 15 pre-drug and post-drug) predicted a 12.1% reduction in median and a 14.8% reduction in mean BK-mediated charge per action potential. This was the only statistical discovery across all channels in the drug-applied data (Fig. a; U = 25; q < 0.01; mean ranks 21.2 [pre-IbTX], 9.8 [post-IbTX]). Charge transfer decreased from 29.4 nC cm−2 to 25.9 nC cm−2. A compensatory increase in leak current was also identified, likely due to decreased K+ permeability caused by IbTX (U = 37.5; q < 0.01; mean rank 10.5 [pre-IbTX], 20.5 [post-IbTX]). Leak charge transfer increased from 6.3 to 10.3 nC cm−2, with a mean increase of 46%. There were no statistical discoveries for any other channels. This demonstrates that models constructed by DA correctly predict the selectivity of IbTX. Figure b predicts the reduction in charge transfer through the BK channel targeted by IbTX. Identically driven action potentials measured pre-IbTX and post-IbTX (Fig. c) are compared to the action potentials predicted from our pre-IbTX and post-IbTX models (Fig. d). The model correctly predicts the reduction in fast afterhyperpolarization (fAHP) observed post-IbTX.
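As an illustration of the parameter-covariance analysis above (rather than of the BK data), the spectrum of parameter uncertainties can be computed in a few lines. The parameter matrix here is synthetic and merely stands in for the 100 parameter sets estimated from noise-corrupted model data; it is a sketch, not the actual analysis.

import numpy as np

# Stand-in for 100 assimilations x 67 parameters, each normalized by its true value.
rng = np.random.default_rng(1)
params = 1.0 + 0.02 * rng.standard_normal((100, 67))

cov = np.cov(params, rowvar=False)            # 67 x 67 covariance of the parameter estimates
eigvals = np.linalg.eigvalsh(cov)[::-1]       # variances along the principal directions
relative_spread = 100.0 * np.sqrt(eigvals)    # % spread along each semi-axis of the misfit ellipsoid
print(relative_spread[:6], relative_spread[-6:])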
BK current waveforms were also predicted by forward-integration of the pre-IbTX and post-IbTX conductance models (Fig. e). The area under both waveforms yielded the drop in BK-mediated charge transfer plotted in Fig. b. SK channel blockade Following application of the SK-specific channel blocker apamin (150 nM; Fig. ; R = 18 pre-drug and post-drug), our model predicted lower SK-mediated charge transfer (Fig. a; U = 65; q < 0.01; mean rank 23.9 [pre-apamin], 13.1 [post-apamin]). Median charge transfer dropped from 1.66 nC cm−2 to 0, with a mean reduction of 74.0%. This was the sole statistical discovery across all channels in the spike-normalized data. Our model thus correctly predicts that apamin is an antagonist of the SK channel. Figure b shows the predicted potency of apamin by plotting the reduction in SK-mediated charge transfer from the pre-apamin state to the post-apamin state. The identically driven action potentials measured pre-apamin and post-apamin (Fig. c) are compared to the action potentials predicted by our conductance models (Fig. d). The models correctly predict the reduction in medium afterhyperpolarization (mAHP) observed post-apamin in the tail end of the action potential. Forward-integration of the models also predicted the SK current waveforms at the site of an action potential pre- and post-apamin (Fig. e). These waveforms were integrated in time to obtain the predicted amounts of SK-mediated charge transfer, which were then plotted in Fig. b. Kv channel blockade Following application of 4-Aminopyridine (4-AP) to block the voltage-gated potassium channels (300 µM; Fig. ; R = 19 pre-drug; R = 18 post-drug), our completed models predicted a reduction in charge transfer mediated by A-type K+ channels (Fig. a; U = 52; q < 0.001; mean rank 25.3 [pre 4-AP], 12.4 [post 4-AP]). Median charge transfer dropped from 26.1 to 19.7 nC cm−2 with a 19.0% mean reduction. In addition, the model predicts a 10.0% increase in median charge transfer (8.8% mean) through the BK channel (U = 73; q < 0.01; median charge 41.2 nC cm−2 [pre 4-AP], 45.3 nC cm−2 [post 4-AP]); and a reduction in Ca2+-mediated charge transfer (U = 79; q < 0.01; mean rank 23.8 [pre 4-AP], 13.9 [post 4-AP]). Ca2+-mediated charge dropped from 9.65 to 9.23 nC cm−2, a mean reduction of 3.0%. Figure b predicts the reduction in charge transfer through the A-type K+ channels targeted by 4-AP. Action potentials measured pre 4-AP and post 4-AP (Fig. c) match the action potentials predicted by our pre 4-AP and post 4-AP models (Fig. d). The model correctly predicts the widening of action potentials induced by 4-AP, which follows from a slower AHP repolarization. Figure e plots the predicted A-type K+ current waveforms elicited within the same action potential. The predicted current amplitude drops sharply in response to 4-AP. The K+ charge amounts transferred per action potential are obtained by integrating the pre 4-AP and post 4-AP current waveforms and are plotted in Fig. b. HCN channel blockade We finally applied the ZD7288 antagonist to block the HCN channels (50 μM, Fig. ; R = 19 pre-drug and post-drug). Our completed models predict a reduction in HCN-mediated charge transferred across the full length of the assimilation window (Fig. a; U = 81; q < 0.01; mean rank 24.7 [pre-ZD7288], 10.5 [post-ZD7288]). Median charge transfer was reduced from 1.618 µC cm−2 [pre-ZD7288] to 0.0, with a mean reduction of 85.5%.
In addition, our model predicts an increase in leak current (U = 77; q < 0.01; mean rank 14.1 [pre-ZD7288], 25.0 [post-ZD7288]), with median charge transfer increasing from 3.20 to 4.19 µC cm−2, a mean increase of 25.1%. These numbers represent the HCN charge amounts transferred across one 800 ms long assimilation window rather than per action potential as above. This is because the HCN current contributes to subthreshold oscillations, unlike the SK, BK and A-type currents, which contribute to action potentials. Figure b predicts the total blockage of the HCN channel targeted by ZD7288. The membrane voltage response to a hyperpolarizing current step applied before and after ZD7288 (Fig. c) is compared to the responses predicted by the pre-ZD7288 and post-ZD7288 models to the same current step (Fig. d). The model correctly predicts the faster adaptation and the reduced amplitude of the membrane voltage change post-ZD7288. In order to validate the results of data assimilation, we now compare the predicted changes in ionic charge transfer to the selectivity and potency of each ion channel antagonist determined by IC50 analysis. The results are summarized in Table . The predicted reductions in charge transfer are in good agreement with the degree of inhibition expected in SK, BK, A-type and HCN. We further discuss below the inhibition of sub-types of the SK, BK, A, and HCN channels. Besides correctly identifying the selectivity of known antagonists, DA is sensitive enough to pick up correlations between ion channels driven by the modulation of reversal potentials (Fig. a) or compensation mechanisms (Fig. a). We also determined the degree of confidence in our predictions by computing the coefficient of variation (Table ). The results consistently show a ± 11% uncertainty on charge estimates.
This proof-of-concept study demonstrates that the DA approach we present can concurrently infer functional alterations across a range of ionic currents. Single hippocampal CA1 neurons in acute brain slices were characterized by driving them with a chaotic current clamp protocol designed to extract the maximum of information for parameter identifiability, and with fast synaptic neurotransmission blocked. The ionic currents were reconstructed with a high degree of confidence from the model parameters estimated by DA. This technique is potentially applicable to assaying multiple ionic currents during functional drug screening, or research studies targeting neurological disease. We now discuss the two factors limiting its predictive accuracy. The first is parameter estimation in the presence of model error. The second is the variation in subunits making up each ion channel, when we are limited to modelling channels using an aggregate contribution of all subunits. Figures a, a, a and a demonstrate the ability of the method to disentangle the contributions of 8 different ionic channels from the membrane voltage time series and to assign drug-induced changes to the correct ion channel being blocked. This identification relies on the uniqueness of the mathematical equations of each ionic current in the model (Table ), and notably the correct ion channel type can still be identified even when we know that model equations only approximate biological reality. There are, however, reasons to believe that model error is merely residual because the completed models make excellent predictions of the membrane voltage pre-drug and post-drug (Figs. e, d, d, d, d) and because DA assigns stable, sensible values to a majority of parameters. In order to evaluate the effect of model error on current estimates, we deliberately introduced an erroneous gate exponent in the sodium current (NaT), changing the gate exponent from m^3h to m^2h. We find that the current waveforms estimated with the wrong model deviate only by a few percent from their true shape. We also find that any drop in sodium current (NaT) induced by model error is compensated by a drop in potassium (A) current.
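The gate-exponent perturbation can be mimicked with a toy comparison of a transient sodium current computed under the correct m^3h kinetics and under the deliberately wrong m^2h exponent. The gating trajectories and parameter values below are illustrative placeholders; note that the few-percent deviation reported above refers to waveforms obtained after re-assimilation, where the remaining parameters partially compensate, which this fixed-parameter sketch does not capture.

import numpy as np

def na_current(V, m, h, g_na=50.0, E_na=50.0, exponent=3):
    # J_NaT = g_NaT * m^k * h * (V - E_Na), with k = 3 in the model and k = 2 as the deliberate error
    return g_na * m**exponent * h * (V - E_na)

t = np.linspace(0.0, 4.0, 400)                          # ms
V = -65.0 + 100.0 * np.exp(-((t - 1.0) / 0.5) ** 2)     # crude spike-shaped voltage trace
m = 1.0 / (1.0 + np.exp(-(V + 30.0) / 6.0))             # illustrative quasi-steady activation
h = np.full_like(V, 0.6)                                # inactivation held at a mid value for simplicity

J_m3h = na_current(V, m, h, exponent=3)
J_m2h = na_current(V, m, h, exponent=2)
rel_rms = np.sqrt(np.mean((J_m2h - J_m3h) ** 2)) / np.sqrt(np.mean(J_m3h ** 2))
print(f"relative RMS deviation between m^3h and m^2h waveforms: {rel_rms:.1%}")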
The findings of this perturbation test indicate that a slightly wrong model still retains its ability to discriminate ion channel types, as observed in Figs. a, a, a, a. This robustness to model error is important as it allows for the construction of larger, more inclusive models that do not rely on prior assumptions about which ion channel to include, making them applicable to a wide range of biological neurons. As the model size grows, simulations on model data ought to verify that the stimulation protocol still satisfies the identifiability criteria , as a prerequisite. Multicompartment models may also be used; however, single-compartment models have by and large been sufficiently detailed to accurately predict voltages and currents – . The computational efficiency of single-compartment models is a significant advantage, particularly when processing large datasets. Moreover, single-compartment models are critical in conserving the observability of the neuron state and robustness against overfitting; they focus on capturing essential dynamics pertinent to the treatment response. We assume that while the electrophysiological data also include features such as transmission line delays, and the effects of spatial parameters and sensing domains, it is reasonable to consider these invariant to drug application, thus not influencing the predicted charge alterations in Figs. a, a, a, a. Absolute physiological realism of the model is not necessary, as we focus on the ability to detect alterations in reconstructed currents in response to a treatment, rather than absolute accuracy in the modelled ion channel responses. This approach provides the benefit of increasing DA speed and increasing the success rate of assimilations. We now discuss the predicted channel block in relation to the subunit variation within each ion channel. Prediction of BK channel alterations Our inference method correctly identifies a reduction in the BK-mediated current in response to application of IbTX, which was the statistical discovery validated by the false discovery rate criterion (q < 1%) among all 7 ion channels analyzed (Figs. a,b). DA thus correctly identifies the effect of IbTX, a highly selective inhibitor of BK channels , . BK channels have a very high unitary conductance , , and contribute to both the repolarization of the action potential and to fast afterhyperpolarization (fI AHP), as seen in Fig. c. The contributions of the BK channel to both repolarization and fI AHP were correctly predicted by the model, in addition to the reduction in overall BK current induced by IbTX (Fig. d). This is validation that DA transfers biologically relevant information to the complete model. Results predicted a 12.1% median reduction in the BK current (Table ). The response of BK channels to IbTX is heavily modulated by the presence of up to four auxiliary subunits (β1–β4) , , . Generally, β1 and β3 do not appear to be expressed in the brain. β2 is highly expressed in astrocytes, and β4 is expressed in neurons. It has been suggested that a full complement of four β subunits (1:1 stoichiometry) may be required to confer full IbTX resistance : channels with fewer than four β subunits would exhibit toxin sensitivity similar to channels totally lacking β4 subunits . As the stoichiometry is unknown in these neurons, a mix of configurations would result in the partial inhibition of BK-mediated currents by IbTX. The potency of IbTX has previously been evaluated for several configurations of β subunits and is listed in Table .
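Where IC50 values are tabulated, the fractional block expected at the applied antagonist concentration can be approximated with a Hill relation. The sketch below assumes a Hill coefficient of 1 and uses placeholder IC50 values; it is only meant to show the arithmetic behind such comparisons, not the values tabulated for this study.

def expected_block(concentration_nM, ic50_nM, hill=1.0):
    # Fractional inhibition from a Hill equation: 1 / (1 + (IC50 / [drug])^n)
    return 1.0 / (1.0 + (ic50_nM / concentration_nM) ** hill)

# Illustrative only: IbTX at 100 nM against hypothetical IC50s for different beta-subunit mixes.
for label, ic50 in [("no beta4", 5.0), ("partial beta4", 50.0), ("full beta4 complement", 5000.0)]:
    print(label, f"{expected_block(100.0, ic50):.0%}")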
Given this subunit uncertainty, the precise degree of expected BK channel block is not verifiable in CA1 neurons, but we do find that the channel is correctly selected by the method. Prediction of SK channel alterations SK channels have a major role in generation of the AHP. Our predictions from recordings made using the highly specific SK channel blocker, apamin , showed a median 100% reduction in SK-mediated current (Figs. a,b). The reduction in ranks for SK was the only statistical discovery, and DA did not predict any notable attenuation in charge transfer across any other type of ion channel. This validated the predictive power of our inference method against the specificity of inhibition by apamin. Apamin is a highly selective inhibitor of SK2 and SK3 channels, which mediate medium AHP currents (mI AHP) with a relatively fast inactivation and decay . Whilst apamin-sensitive I AHP currents have been shown to be present in the CA1 soma, their blockade is often masked by the activity of other voltage-gated potassium channels , . The majority of SK channel subunits in CA1 neurons are SK2 , , with SK3 showing relatively low expression. SK1 is expressed at moderate levels and is apamin insensitive, but does not contribute to mI AHP , . At 150 nM, apamin is expected to completely block SK2/3-mediated mI AHP , and our prediction of an almost complete block (Fig. b) is therefore in excellent agreement with the expected effects of apamin in CA1 neurons. Prediction of A-type and K channel alterations 4-aminopyridine was applied to inhibit voltage-dependent K+ channels. These are accounted for by the A-type and K-delayed rectifier channels in our model, each representing an amalgam of actual Kv channel subtypes, and we applied 4-AP to inhibit these two modelled channel groups simultaneously. A-type channels are known to be present in CA1 neurons where they give fast activating and fast inactivating K+ currents which can suppress excitatory postsynaptic potentials and delay action potentials . In CA1 neurons, A-type K+ currents are mediated by either Kv1.4 or Kv4.2 channels , with Kv4.2 being more abundant , . Our results show a predicted 24.3% median (19.0% mean) reduction in A-type K+ current (Figs. a,b), within the 13–25% inhibition expected from 4-AP at 300 μM (Table ) , . This result is good evidence that the method is sensitive to smaller alterations in current. The K-delayed-rectifier channels include the Kv1–3, 5 and 6 subfamilies which are also inhibited by 4-AP, with an IC50 of 200–1500 µM , . We expected to see a reduction in predicted activity for the K channel; however, whilst a reduction was clearly visible under 4-AP (Fig. a), this did not generate a discovery in the Mann–Whitney test. This is because the amount of charge transferred in the natural state is very low compared to other channels (NaT, A and BK). To statistically confirm the K-channel block, a larger sample of parameter estimates (R ≫ 19) would be needed to reduce the variance on the predicted charge transfer. Prediction of HCN channel alterations Hyperpolarization-activated and cyclic nucleotide-gated (HCN) channels belong to the superfamily of voltage-gated pore loop channels. They are unique in possessing a reverse voltage-dependence that leads to activation upon hyperpolarization . The HCN1 and HCN2 subunits are the most abundant in CA1 neurons – , and both are amalgamated in our model of the HCN channel. Under ZD7288, predicted HCN current was reduced by 100% median (85.5% mean) (Fig. a,b).
This result is a good match to the degree of block expected from previous work on CA1 neurons indicating a 70–85% mean reduction in HCN-mediated current , . Similar values were obtained from specific studies of the HCN1 and HCN2 subunits (Table ). In the predictive simulations of the membrane voltage response to stimulation (Fig. d), the average waveform of 19 iterations is presented. Notably, while the characteristic 'sag' of the HCN current is not prominently visible due to the averaging process, the 'rebound' phenomenon remains evident. An example of non-averaged predicted membrane voltage demonstrating the HCN 'sag' current is plotted in Fig. . Prediction of secondary effects of pharmacological inhibition in other channels DA also predicted alterations in ion channels not specifically targeted by 4-AP and ZD7288 (Figs. a, a). The observation of collateral alterations is consistent with modification of the electrochemical driving force by the antagonist, which alters current flow through other ion channels, particularly at times when the blocked channel would otherwise have been activated. For example, a reduction in K+ permeability during the AHP will change the electrochemical driving force of other ions during that period. The driving force of Cl− into the cell will increase whereas the Na+ driving force will be reduced. In addition, potassium current through the BK channel can compensate for the blocked Kv channels and vice-versa . The collateral effect of IbTX is to increase the leak current as predicted in Fig. a. This effect is likely to be caused by the reduction in K+ permeability when the large conductance BK channel is inhibited. It is also notable that within the 4-AP dataset (Fig. a), BK current increases when the A-type channel is blocked. This is a well characterized effect of 4-AP which causes a persistent K+ current and increases the spike width , (Fig. c). Our model correctly predicts this spike broadening (Fig. d), and the DA method has sufficient sensitivity to pick up the second order increase in BK current (Fig. a). A small (4.4%) reduction in median Ca2+ channel current was also predicted in Fig. . This is likely to result from a reduction in the electrochemical driving force on Ca2+ caused by the decreased potassium permeability following each action potential. The DA inference method has the potential to provide unbiased quantitative assessment of alterations among a range of ionic currents simultaneously, including current compensation between ion channels. The effects we observe are in good agreement with the selectivity and potency of antagonists, and notably we also detect well-characterized second order effects caused by blocking those channels. Sodium channels were not targeted in our study to avoid suppressing action potentials, which would have impeded the ability of DA to estimate the parameters of all ion channels from the overall neuronal response. We decided therefore to prioritize other clinically relevant channels. However, future work would benefit from studying the partial inhibition of voltage-gated Na channels. The method we describe may be applied in various scenarios where assessing the functional effects of drug or disease on specific ionic currents is desirable. Patch-clamp electrophysiology remains the primary technique used for profiling ion channels in vitro. A major limitation of this technique when applied traditionally is its low throughput, with a single ion channel being addressable at a time.
This is especially limiting as a counter-toxicity screen when it is desirable to know the effects of a drug on more than one ionic current or when thousands of candidate drugs have to be screened. In drug screening, non-electrophysiological high throughput screening (HTS) methods such as ligand binding and ion flux assays can alternatively be applied. However, binding assays measure binding affinity rather than functional changes to ionic currents, and fluorescent assays are an indirect measure of such currents as well as being unsuitable for use with voltage-gated channels due to the lack of control over membrane voltage , . Ion flux assays are widely used in drug discovery, but also lack control over membrane voltage, as well as suffering from low temporal resolution and often weak ionic signal, rendering them inferior to voltage-clamp experiments . Due to these limitations, the desire to apply patch-clamp assays early in the drug discovery process has led to the development of automated HTS patch-clamp systems : such systems provide large amounts of functional data on the channels being targeted; however, they are expensive to purchase and run, and like the above techniques, can normally only be applied in cell cultures directed to overexpress a specific ion channel rather than in primary tissue , . This renders them unable to infer the functional impact of candidate drugs in physiologically relevant systems such as acute primary tissue slices . The DA method we present has the potential to be far faster than traditional patch clamp methods at interrogating multiple ionic currents at once, whilst retaining the ability to characterise the effects of a compound or treatment on individual neurons within a brain slice. The approach demonstrated in this proof-of-concept study may be applied to assay functional channel alterations in many other neuron types, in drug screening, and potentially in animal models of disease. When targeting other neurons, it may be necessary to add or remove channel groups from the model as appropriate, based on information from prior studies. However, depending on the degree of overlap in the characteristics of modelled channel types, the addition of extraneous channels may not affect the accuracy of current reconstruction, as discussed previously. Further work will verify this in practice. In relation to disease studies, the method could be complementary to transcriptomic and proteomic sequencing. Bottom-up sequencing methods do not discriminate alterations which are relevant to electrical function from those which are not. Our top-down approach infers only the alterations in ion channels which are functionally relevant to neuronal electrical activity. Whilst the approach should be successful in theory, further validation of the method using animal models of channelopathy is necessary. In summary, the present study demonstrates that it is possible to reliably reconstruct multiple specific ionic currents by assimilating the membrane voltage of a neuron driven by a complex current waveform. Accuracy on the reconstructed ionic currents is sufficient to predict alterations in currents in agreement with the expected effects of inhibitory compounds, as well as predicting well-characterized second order compensation effects. This data assimilation method requires no prior assumption as to which channel might be affected as it provides a quantitative assessment of functional alterations among a range of ionic currents in one shot, which to our knowledge has not previously been achieved.
With further validation it therefore has the potential to be widely applied in drug screening pipelines, and additionally in studies aiming to characterize ion channel dysfunction in disease models. It has the benefit of application in acute tissue slices or primary neuron cultures and may substantially reduce workload.
Current clamp electrophysiology CA1 hippocampal neurons were driven and recorded using a Molecular Devices MultiClamp 700B amplifier. This type of amplifier uses a voltage follower circuit that was necessary to drive rapidly varying currents. A LabView controller (National Instruments) interfaced with a National Instruments USB-6363 DAQ card delivered the clamp protocol signal to the amplifier and recorded the membrane voltage returned by the neuron. Prior to each series of experiments, the gain of the protocol (via a multiplier) was adjusted to elicit a maximum number of action potentials per measurement epoch without causing depolarization block through excessive current amplitudes. The calibration protocol is described in Fig. . Current clamp protocols were designed to fulfil the identifiability criterion of the inverse problem, that is, to excite the full dynamic range of the neuron. Each protocol comprised a mixture of hyperpolarizing and depolarizing current steps of different amplitudes and durations, and chaotic oscillations generated by the Lorenz96 system.
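A current command in the spirit of the protocol described here can be sketched by integrating the Lorenz96 system and rescaling one of its variables into a zero-mean waveform. The system size, forcing, integration step and current gain below are arbitrary illustrative choices, not the calibrated protocol, and the interleaved current steps are omitted.

import numpy as np

def lorenz96_deriv(x, forcing=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def chaotic_current(duration_ms=800.0, dt_ms=0.01, n=6, gain_pA=150.0, seed=0):
    # Euler-integrate Lorenz96 and rescale the first variable into a current command in pA.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    steps = int(duration_ms / dt_ms)
    trace = np.empty(steps)
    for k in range(steps):
        x = x + (dt_ms * 0.05) * lorenz96_deriv(x)      # arbitrary time scaling of the chaos
        trace[k] = x[0]
    trace -= trace.mean()
    return gain_pA * trace / np.abs(trace).max()

I_inj = chaotic_current()   # one 800 ms chaotic segment of the stimulus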
Both the current stimulus and the membrane voltage were sampled at a rate of 100 kHz. This time resolution gave 20 datapoints per action potential which is sufficient for interpolating the finer features of the neuron response. Whole-cell current-clamp recordings were performed in acute brain slices from male Han Wistar rats at P15–17. Following decapitation, the brain was removed and placed into an ice-cold slicing solution composed of (mM): NaCl 52.5; sucrose 100; glucose 25; NaHCO 3 25; KCl 2.5; CaCl 2 1; MgSO 4 5; NaH 2 PO 4 1.25; kynurenic acid 0.1, and carbogenated using 95% O 2 /5% CO 2 . A Campden 7000 smz tissue slicer (Campden Instruments UK) was used to prepare transverse hippocampal slices at 350 μm, which were then transferred to a submersion chamber containing carbogenated artificial cerebrospinal fluid (aCSF) composed of (mM): NaCl 124; glucose 30; NaHCO 3 25; KCl 3; CaCl 2 2; MgSO 4 1; NaH 2 PO 4 0.4 and incubated at 30 °C for 1–5 h prior to use. Synaptic transmission was inhibited pharmacologically in order to prevent network feedback or random postsynaptic potentials from disrupting the trace. To this end all experiments were performed in the presence of (μM) kynurenate 3, picrotoxin 0.05, and strychnine 0.01, to inhibit ionotropic glutamatergic, γ-aminobutyric acid (GABA)-ergic, and glycinergic neurotransmission respectively. For patching, slices were transferred to the stage of an Axioskop 2 upright microscope (Carl Zeiss) and pyramidal CA1 neurons identified morphologically and by location using differential interference contrast optics. The chamber was perfused with carbogenated aCSF (composition as above) at 2 ml min −1 at 30 ± 1 °C. Patch pipettes were pulled from standard walled borosilicate glass (GC150F, Warner Instruments) to a resistance of 2.5–4 MΩ, and filled with an intracellular solution composed of (mM): potassium gluconate 130; sodium gluconate 5, HEPES 10; CaCl 2 1.5; sodium phosphocreatine 4; Mg-ATP 4; Na-GTP 0.3; pH 7.3; filtered at 0.2 µm. Inhibitory compounds were selected for the predictability of their effects on ion channel types known to be present in hippocampal pyramidal neurons: SK channels were inhibited using apamin (150 nM); BK channels were inhibited with iberiotoxin (100 nM); HCN channels were inhibited with ZD7288 (50 µM); A and K channels were inhibited using 4-AP (300 µM). The potency of each drug was obtained from IC50 values tabulated in the literature (Table ), which we compared to the reduction in ionic charge transfer predicted by our DA method. A total of 13 animals were used in development of the methodology. The data presented in this proof-of-concept study were collected from 4 animals. Each current datapoint was computed from one assimilation window, from a single neuron, in the pre-drug or the post-drug state using a single compound as specified. The chaotic current clamp protocol was applied pre-drug, before immediately switching to drug-containing aCSF at the specified concentration and allowing to wash in for a further 3 min, before initiation of the same protocol. Time between the start of the pre-drug clamp protocol and termination of the drug-applied protocol was in every case < 5 min. Model description A single-compartment model of the CA1 pyramidal neurons was built using a conductance-based framework incorporating eight active ionic currents identified in the physiological literature as being prevalent in the soma of CA1 neurons , , , in addition to a voltage-independent leak current , . 
The complement of ionic channels includes transient sodium (NaT), persistent sodium (NaP), delayed-rectifier potassium (K), A-type potassium (A), low-threshold calcium (Ca), large- and small-conductance Ca2+-activated potassium (BK and SK respectively), and the hyperpolarization-activated cation channel (HCN). The density of calcium channels in the soma of CA1 neurons is much lower than in distal dendrites ; however, the internal Ca2+ concentration activates the transfer of K+ ions through the Ca-dependent BK and SK channels. Therefore our model equations need to include the calcium current. The equation of motion for the membrane voltage is:

$$C\,\frac{dV(t)}{dt} = -J_{NaT} - J_{NaP} - J_{K} - J_{A} - J_{Ca} - J_{BK} - J_{SK} - J_{HCN} - J_{Leak} + I_{inj}(t)/A, \qquad (1)$$

where C is the membrane capacitance, V is the membrane potential, I_inj(t) is the injected current protocol (Fig. a), A is the surface area of the soma, and J_NaT … J_Leak are the ionic current densities across the cell membrane. The equations describing individual ionic currents are given in Table . These currents depend on maximum ionic conductances (g_NaT, g_K, g_HCN, …), reversal potentials (E_Na, E_K, E_HCN, …), and gating variables (m, h, n, p, …). The kinetics of each ionic gate is described by a first-order equation and each gate activates or inactivates according to a sigmoidal function of the membrane voltage. The equations for each ion channel are as follows: Sodium channels The activation gate variables of the NaT and NaP channels were respectively:

$$m_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{m}}{\delta V_{m}}\right)\right], \qquad (2)$$

$$p_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{p}}{\delta V_{p}}\right)\right], \qquad (3)$$

where V_m, V_p are the activation thresholds and δV_m, δV_p are the widths of the gate transition from the open to the closed state. The activation time of NaT and NaP being very rapid (~0.1 ms) compared to other channels, we have assumed it to be instantaneous. This simplification reduces model complexity and improves parameter identifiability in DA. The kinetics of the NaT inactivation gate is given by:

$$\frac{dh(V,t)}{dt} = \frac{h_{\infty}(V) - h(V,t)}{\tau_{h}(V)}, \qquad (4)$$

where the steady-state inactivation curve is:

$$h_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{h}}{\delta V_{h}}\right)\right], \qquad (5)$$

and the recovery time depends on the membrane voltage as:

$$\tau_{h}(V) = t_{h} + \epsilon_{h}\left[1 - \tanh^{2}\!\left(\frac{V - V_{h}}{\delta V_{\tau h}}\right)\right]. \qquad (6)$$

V_h is the inactivation threshold and δV_h the width of the open-to-closed transition of the inactivation gate. t_h is the recovery time away from the depolarization threshold and t_h + ε_h the recovery time at the depolarization threshold. δV_τh is the width of the peak at half maximum.
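The tanh-based steady-state and time-constant functions of Eqs. (2)–(6) translate directly into code. A minimal sketch follows; the numerical parameter values are arbitrary placeholders rather than entries of Table , and the same functional forms are reused for the other gates by substituting the appropriate subscripts.

import numpy as np

def gate_inf(V, V_half, dV):
    # Sigmoidal steady-state curve of Eqs. (2), (3) and (5)
    return 0.5 * (1.0 + np.tanh((V - V_half) / dV))

def gate_tau(V, t0, eps, V_half, dV_tau):
    # Bell-shaped voltage dependence of the recovery time, Eq. (6)
    return t0 + eps * (1.0 - np.tanh((V - V_half) / dV_tau) ** 2)

def gate_step(g, V, V_half, dV, t0, eps, dV_tau, dt):
    # One forward-Euler step of the first-order gate kinetics, Eq. (4)
    return g + dt * (gate_inf(V, V_half, dV) - g) / gate_tau(V, t0, eps, V_half, dV_tau)

# Example: NaT inactivation gate h relaxing at a clamped voltage (placeholder parameters).
h = 1.0
for _ in range(1000):
    h = gate_step(h, V=-20.0, V_half=-45.0, dV=-6.0, t0=0.5, eps=6.0, dV_tau=10.0, dt=0.01)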
Potassium channels The non-inactivating delayed-rectifier current (K) and the rapidly inactivating A-type potassium current (A) have the form given in Table . The kinetics of the A-type activation gate is:

$$\frac{da(V,t)}{dt} = \frac{a_{\infty}(V) - a(V,t)}{\tau_{a}(V)}, \qquad (7)$$

where a_∞(V) and τ_a(V) are given by Eqs. , where the subscript h is replaced with a (Table ). The inactivation kinetics of the K and A-type channels are respectively given by:

$$\frac{dn(V,t)}{dt} = \frac{n_{\infty}(V) - n(V,t)}{\tau_{n}(V)}, \qquad (8)$$

$$\frac{db(V,t)}{dt} = \frac{b_{\infty}(V) - b(V,t)}{\tau_{b}(V)}, \qquad (9)$$

where n_∞(V), τ_n(V), b_∞(V) and τ_b(V) are given by Eqs. , with the appropriate substitution of indices (Table ). Although the muscarinic potassium current (I_M) is present in certain CA1 neurons, it was excluded from our model because of its relatively minor conductance and its persistent activity, which primarily modulates the resting potential of the CA1 neuron . We determined that the characteristics of the I_M can be adequately captured by the parameters governing the A-type potassium current, thereby avoiding an unnecessary increase in model complexity. Calcium activated potassium channels The BK and SK currents are Ca2+-activated potassium currents found in the soma of hippocampal pyramidal cells . The BK current is sensitive to both membrane voltage and internal Ca2+ concentration whereas the SK current only depends on the Ca2+ concentration (Table ). Both currents are dependent on the internal calcium concentration, given by :

$$\frac{d[Ca]_{in}}{dt} = \frac{[Ca]_{\infty} - [Ca]_{in}}{\tau_{Ca}} - \frac{J_{Ca}}{2wz}, \qquad (10)$$

where [Ca]_∞ is the equilibrium concentration, τ_Ca is the recovery time, z is Faraday's constant, w is the thickness of the surface across which Ca2+ fluxes are calculated (w = 1 μm), and J_Ca is the calcium current whose expression is given in Table . The calcium current had voltage-dependent activation and inactivation gates, s and r, respectively . The kinetics and activation curves of s and r are given by Eqs. 4–6, where the subscript h is replaced with the s and r subscripts of the Ca parameters (Table ). The relaxation time constant for Ca2+ in our model is set between 1 and 2 ms, a range that was chosen based on the slow dynamics characteristic of Ca2+ signaling. In the context of the model equations governing calcium, small variations within this interval were found to substantially influence the rate of change of internal Ca2+ concentration, and so this range was deemed sufficient to accurately reflect the diversity of calcium dynamics in CA1 neurons. The BK current has two gate variables, c and d, while the SK channel has one, w. The form of the ultrafast SK activation gate, w, is given by Warman et al.
as:

$$w \equiv w_{\infty}([Ca]_{in}) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{w} + 130\left\{1 + \tanh\!\left([Ca]_{in}/0.2\right)\right\} - 250}{\delta V_{w}}\right)\right] \qquad (11)$$

The slower activation gate of the BK channel, c, follows a first-order rate equation:

$$\frac{dc}{dt} = \frac{c_{\infty}(V,[Ca]_{in}) - c}{\tau_{c}} \qquad (12)$$

with a steady-state activation curve given by:

$$c_{\infty}(V,[Ca]_{in}) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{c} + 130\left\{1 + \tanh\!\left([Ca]_{in}/0.2\right)\right\} - 250}{\delta V_{c}}\right)\right] \qquad (13)$$

The inactivation gate of the BK channel, d, similarly follows a first-order rate equation:

$$\frac{dd}{dt} = \frac{d_{\infty}(V,[Ca]_{in}) - d}{\tau_{d}(V)} \qquad (14)$$

with

$$d_{\infty}(V,[Ca]_{in}) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{d} + 130\left\{1 + \tanh\!\left([Ca]_{in}/0.2\right)\right\} - 250}{\delta V_{d}}\right)\right] \qquad (15)$$

$$\tau_{d}(V) = t_{d} + \epsilon_{d}\left[1 - \tanh^{2}\!\left(\frac{V - V_{d}}{\delta V_{\tau d}}\right)\right] \qquad (16)$$

The existence of the SK and BK ionic currents was validated by much improved fits of the height and shape of action potentials, and their AHP region (Fig. e). Without the SK and BK currents, the model clips action potentials at 80% of their maximum height. In total, our conductance model had the 67 adjustment parameters listed in Table . Parameter estimation and current prediction Our interior point method optimizes the parameter vector p* and the initial state vector x*(t = 0) by minimizing the misfit between the experimental membrane voltage, V_data, and the membrane voltage variable, V, at each time point t_i (i = 0, …, N) of the assimilation window (Fig. a). This misfit is evaluated by the least-square cost function:

$$C(\mathbf{p},\mathbf{x}(0)) = \frac{1}{2}\sum_{i=0}^{N}\left\{\left[V_{data}(t_{i}) - V(t_{i},\mathbf{p},\mathbf{x}(0))\right]^{2} + u(t_{i})^{2}\right\}. \qquad (17)$$

The cost function is minimized under both equality and inequality constraints using the variational approach of Lagrangian optimization. The equality constraints are the model equations (Eqs. 1–16). These were linearized at each time point t_i of the assimilation window , which was 800 ms long and was meshed by N = 40,000 intervals of equal duration. The inequality constraints are given by the lower and upper boundaries of the parameter search range, LB and UB, in Table . These are set by the user. The 67 parameter components of the parameter vector p* are listed in Table . The state vector has 14 state variables, x(t) ≡ {V(t), m(t), h(t), p(t), n(t), a(t), b(t), s(t), r(t), c(t), d(t), w(t), z(t), [Ca]_in}, that hold the membrane voltage, gate variables, and internal calcium concentration. State variable V(t) is observed, and is synchronized to the data, whereas the other state variables m(t), …, [Ca]_in are unobserved and must be inferred.
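The least-squares cost of Eq. (17) is straightforward to write down. The sketch below only evaluates the misfit for a candidate voltage trace; it leaves out the defining feature of the actual formulation, in which the discretized model equations enter as equality constraints, the parameter bounds as inequality constraints and the u(t_i) as decision variables of the interior-point solver.

import numpy as np

def assimilation_cost(V_data, V_model, u=None):
    # 0.5 * sum over mesh points of [(V_data - V_model)^2 + u^2], cf. Eq. (17)
    residual = np.asarray(V_data) - np.asarray(V_model)
    penalty = 0.0 if u is None else np.sum(np.asarray(u) ** 2)
    return 0.5 * (np.sum(residual ** 2) + penalty)

# V_data would hold the measured membrane voltage at the N + 1 mesh points of one
# 800 ms assimilation window; V_model the voltage produced by a candidate (p, x(0)).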
We used symbolic differentiation within Python to compute the Jacobian of the state variables with respect to parameters and the Hessian of the cost function. Both matrices were then inserted in the interior-point-optimization algorithm developed by Wächter and Biegler that iteratively determines $\mathbf{p}^{*}$ and $\mathbf{x}^{*}(t_{i})$ at each point of the assimilation window. Thus, data assimilation infers observed and unobserved state variables, and model parameters. It estimates both parameters that relate in a nonlinear way to the membrane voltage (gate voltage thresholds, activation slopes, gate recovery times) as well as linear parameters (ionic conductances). In order to stabilize the convergence of the parameter search, a control term $u(t_{i})[V_{data}(t_{i}) - V(t_{i})]$ was added to the right-hand side of Eq. 1, and as $u^{2}(t_{i})$ in Eq. 17. In well-posed assimilation problems, the Tikhonov regularization term $u(t_{0}),\ldots,u(t_{N})$ uniformly tends to zero as $\mathbf{p}$ converges to the solution $\mathbf{p}^{*}$. The model error and experimental error encountered with biological neurons make the problem ill-posed. Model error introduces correlations between some parameters, which take multi-valued solutions when the initial guesses on state variables, parameters or data intervals vary. In this case the $u(t_{i})$ also converge to zero except at times that coincide with action potentials. Models configured with optimal parameters that include a small subset of correlated parameters reliably predict membrane voltage oscillations and ionic current waveforms for a wide range of current injection protocols (Figs. , ). When the $u(t_{i})$ failed to converge uniformly across the assimilation window, the estimated parameters were discarded from the statistical analysis of the ion channels. Models configured with such parameters were unable to predict the experimental membrane voltage, as for example in Fig. e. Prior to the analysis of biological recordings, we verified that our current protocol and DA procedure fulfilled the conditions of observability and identifiability on model data. These preliminary studies showed that DA recovered all 67 parameters to within 0.1% of their original value in the model used to produce the assimilated data. We verified the uniqueness and accuracy of solutions using the R = 19 assimilation windows offset by 80 ms (Fig. a) and varying the starting values of $\mathbf{p}^{*}$ and $\mathbf{x}^{*}$. The predicted ionic currents and membrane voltages were generated by forward integration of each completed model over the 2000 ms long epoch, both pre- and post-drug. Current waveforms were integrated to obtain the total charge transferred through each channel in that epoch (Figs. e, e, e). In order to eliminate the dependence on the neuron firing frequency, we divided the total charge transferred across the epoch by the number of action potentials to obtain the net charge transferred per spike, per ion channel (Figs. a, a, a). To verify that the predicted inhibition is not affected by integrating currents over one action potential rather than the entire assimilation window, we plotted the changes in total charge transferred over the full epoch both pre- and post-drug (Fig. ). We verified that both methods gave similar results, with small differences arising from sub-threshold current flow between action potentials.
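The charge-per-spike calculation described above can be illustrated with a short sketch: each predicted current waveform is integrated over the epoch and normalised by the number of detected action potentials. The spike-detection threshold, the rectangle-rule integration, and the placeholder traces below are assumptions for illustration rather than the authors' exact procedure.

```python
import numpy as np

def charge_per_spike(current, voltage, dt_ms, spike_threshold_mv=0.0):
    """Integrate an ionic current over the epoch (rectangle rule) and
    normalise by the number of detected action potentials.

    current : ionic current samples over the epoch (same grid as voltage)
    voltage : membrane voltage samples in mV
    dt_ms   : sampling interval in ms
    """
    total_charge = np.sum(current) * dt_ms       # total charge over the epoch
    # Count upward threshold crossings as action potentials.
    above = voltage > spike_threshold_mv
    n_spikes = np.count_nonzero(above[1:] & ~above[:-1])
    return total_charge / max(n_spikes, 1)

# Placeholder traces standing in for one 2000 ms predicted epoch at 100 kHz.
dt = 0.01                                   # ms per sample
t = np.arange(0.0, 2000.0, dt)
voltage = np.full_like(t, -65.0)
voltage[::10_000] = 20.0                    # crude stand-in spike every 100 ms
current = np.where(voltage > 0.0, -0.5, 0.0)  # crude inward current during spikes
print(charge_per_spike(current, voltage, dt))
```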
The model equations were differentiated symbolically using our custom-built Python library pyDSI to generate the C++ code of the optimization problem. This code was then inserted in the open-source IPOPT software [ www.coin-or-org/ipopt ] implementing the MA97 sparse linear equation solver [ http://www.hsl.rl.ac.uk/catalogue ]. The optimizations were run on a 16-core (3.20 GHz) Linux workstation with 64 GB of RAM and a University of Bath minicomputer with 64-core processors and 320 GB of RAM. Model equations were linearized according to Boole's rule.

Statistical analysis

Extreme outliers in the predicted charge data were detected using the ROUT test with the maximum desired false discovery rate, Q, set at 0.1%, based on values for the NaT channel. Only 3 outliers were identified out of a total of 138. The corresponding parameter solutions $p^{*}$ could also be identified by their failure to predict the membrane voltage oscillations over the 2000 ms epoch. Due to the non-Gaussian distributions of some of the total predicted charge data, multiple two-tailed Mann–Whitney U rank-sum tests were applied, with multiple comparisons corrected for using the two-stage step-up method of Benjamini, Krieger and Yekutieli, with Q at 1%. Mann–Whitney U values are reported, and multiplicity-corrected significance values (q) are therefore reported for all discoveries. In figures, asterisks are applied based on these q values. For comparisons where predicted charge transfer distributions differed pre-drug and post-drug, we report the mean rank values in relation to the Mann–Whitney U test output. GraphPad Prism version 9 was used for all statistical analyses.

Ethical statement

Experiments on rodents were performed under Schedule 1 in accordance with the United Kingdom Scientific Procedures Act of 1986. CA1 hippocampal neurons were driven and recorded using a Molecular Devices MultiClamp 700B amplifier. This type of amplifier uses a voltage follower circuit that was necessary to drive rapidly varying currents. A LabView controller (National Instruments) interfaced with a National Instruments USB-6363 DAQ card delivered the clamp protocol signal to the amplifier and recorded the membrane voltage returned by the neuron. Prior to each series of experiments, the gain of the protocol (via a multiplier) was adjusted to elicit a maximum number of action potentials per measurement epoch without causing depolarization block through excessive current amplitudes. The calibration protocol is described in Fig. . Current clamp protocols were designed to fulfil the identifiability criterion of the inverse problem, that is, to excite the full dynamic range of the neuron. The protocol comprised a mixture of hyperpolarizing and depolarizing current steps of different amplitudes and durations, and chaotic oscillations generated by the Lorenz96 system. Both the current stimulus and the membrane voltage were sampled at a rate of 100 kHz. This time resolution gave 20 datapoints per action potential, which is sufficient for interpolating the finer features of the neuron response. Whole-cell current-clamp recordings were performed in acute brain slices from male Han Wistar rats at P15–17. Following decapitation, the brain was removed and placed into an ice-cold slicing solution composed of (mM): NaCl 52.5; sucrose 100; glucose 25; NaHCO3 25; KCl 2.5; CaCl2 1; MgSO4 5; NaH2PO4 1.25; kynurenic acid 0.1, and carbogenated using 95% O2/5% CO2.
A Campden 7000 smz tissue slicer (Campden Instruments UK) was used to prepare transverse hippocampal slices at 350 μm, which were then transferred to a submersion chamber containing carbogenated artificial cerebrospinal fluid (aCSF) composed of (mM): NaCl 124; glucose 30; NaHCO3 25; KCl 3; CaCl2 2; MgSO4 1; NaH2PO4 0.4, and incubated at 30 °C for 1–5 h prior to use. Synaptic transmission was inhibited pharmacologically in order to prevent network feedback or random postsynaptic potentials from disrupting the trace. To this end, all experiments were performed in the presence of (μM) kynurenate 3, picrotoxin 0.05, and strychnine 0.01, to inhibit ionotropic glutamatergic, γ-aminobutyric acid (GABA)-ergic, and glycinergic neurotransmission respectively. For patching, slices were transferred to the stage of an Axioskop 2 upright microscope (Carl Zeiss) and pyramidal CA1 neurons were identified morphologically and by location using differential interference contrast optics. The chamber was perfused with carbogenated aCSF (composition as above) at 2 ml min−1 at 30 ± 1 °C. Patch pipettes were pulled from standard walled borosilicate glass (GC150F, Warner Instruments) to a resistance of 2.5–4 MΩ, and filled with an intracellular solution composed of (mM): potassium gluconate 130; sodium gluconate 5; HEPES 10; CaCl2 1.5; sodium phosphocreatine 4; Mg-ATP 4; Na-GTP 0.3; pH 7.3; filtered at 0.2 µm. Inhibitory compounds were selected for the predictability of their effects on ion channel types known to be present in hippocampal pyramidal neurons: SK channels were inhibited using apamin (150 nM); BK channels were inhibited with iberiotoxin (100 nM); HCN channels were inhibited with ZD7288 (50 µM); A and K channels were inhibited using 4-AP (300 µM). The potency of each drug was obtained from IC50 values tabulated in the literature (Table ), which we compared to the reduction in ionic charge transfer predicted by our DA method. A total of 13 animals were used in the development of the methodology. The data presented in this proof-of-concept study were collected from 4 animals. Each current datapoint was computed from one assimilation window, from a single neuron, in the pre-drug or the post-drug state using a single compound as specified. The chaotic current clamp protocol was applied pre-drug, before immediately switching to drug-containing aCSF at the specified concentration and allowing it to wash in for a further 3 min, before initiation of the same protocol. The time between the start of the pre-drug clamp protocol and the termination of the drug-applied protocol was in every case < 5 min.

A single-compartment model of the CA1 pyramidal neurons was built using a conductance-based framework incorporating eight active ionic currents identified in the physiological literature as being prevalent in the soma of CA1 neurons, in addition to a voltage-independent leak current. The complement of ionic channels includes transient sodium (NaT), persistent sodium (NaP), delayed-rectifier potassium (K), A-type potassium (A), low threshold calcium (Ca), large- and small-conductance Ca2+-activated potassium (BK and SK respectively), and the hyperpolarization-activated cation channel (HCN). The density of calcium channels in the soma of CA1 neurons is much lower than in distal dendrites; however, the internal Ca2+ concentration activates the transfer of K+ ions through the Ca-dependent BK and SK channels. Therefore our model equations need to include the calcium current.
The equation of motion for the membrane voltage is:

1 $$C\frac{dV(t)}{dt} = -J_{NaT} - J_{NaP} - J_{K} - J_{A} - J_{Ca} - J_{BK} - J_{SK} - J_{HCN} - J_{Leak} + I_{inj}(t)/A,$$

where $C$ is the membrane capacitance, $V$ is the membrane potential, $I_{inj}(t)$ is the injected current protocol (Fig. a), $A$ is the surface area of the soma, and $J_{NaT},\ldots,J_{Leak}$ are the ionic current densities across the cell membrane. The equations describing individual ionic currents are given in Table . These currents depend on maximum ionic conductances ($g_{NaT}$, $g_{K}$, $g_{HCN}$, …), reversal potentials ($E_{Na}$, $E_{K}$, $E_{HCN}$, …), and gating variables ($m$, $h$, $n$, $p$, …). The kinetics of each ionic gate is described by a first order equation, and each gate activates or inactivates according to a sigmoidal function of the membrane voltage. The equations for each ion channel are as follows:

Sodium channels

The activation gate variables of the NaT and NaP channels were respectively:

2 $$m_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{m}}{\delta V_{m}}\right)\right],$$

3 $$p_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{p}}{\delta V_{p}}\right)\right],$$

where $V_{m}$, $V_{p}$ are the activation thresholds and $\delta V_{m}$, $\delta V_{p}$ are the widths of the gate transition from the open to the closed state. The activation time of NaT and NaP being very rapid (~0.1 ms) compared to other channels, we have assumed it to be instantaneous. This simplification reduces model complexity and improves parameter identifiability in DA. The kinetics of the NaT inactivation gate is given by:

4 $$\frac{dh(V,t)}{dt} = \frac{h_{\infty}(V) - h(V,t)}{\tau_{h}(V)},$$

where the steady-state inactivation curve is:

5 $$h_{\infty}(V) = 0.5\left[1 + \tanh\!\left(\frac{V - V_{h}}{\delta V_{h}}\right)\right],$$

and the recovery time depends on the membrane voltage as:

6 $$\tau_{h}(V) = t_{h} + \epsilon_{h}\left[1 - \tanh^{2}\!\left(\frac{V - V_{h}}{\delta V_{\tau h}}\right)\right].$$

$V_{h}$ is the inactivation threshold and $\delta V_{h}$ the width of the open-to-closed transition of the inactivation gate. $t_{h}$ is the recovery time away from the depolarization threshold and $t_{h} + \epsilon_{h}$ the recovery time at the depolarization threshold. $\delta V_{\tau h}$ is the width of the peak at half maximum.

Potassium channels

The non-inactivating delayed-rectifier current (K) and the rapidly inactivating A-type potassium current (A) have the form given in Table .
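As an illustration of the structure of Eq. 1 and the tanh-form gating functions of Eqs. 2–6, here is a deliberately reduced sketch containing only a transient sodium current, a delayed-rectifier potassium current, and a leak. The gating exponents, parameter values, and the forward-Euler step are generic Hodgkin–Huxley-style placeholders chosen for the example, not the fitted values or current forms of the paper's Table.

```python
import numpy as np

def x_inf(v, v_half, dv):
    """Steady-state (in)activation curve, 0.5*[1 + tanh((V - V_half)/dV)]."""
    return 0.5 * (1.0 + np.tanh((v - v_half) / dv))

def tau_h(v, t_h, eps_h, v_half, dv_tau):
    """Voltage-dependent recovery time of the NaT inactivation gate (cf. Eq. 6)."""
    return t_h + eps_h * (1.0 - np.tanh((v - v_half) / dv_tau) ** 2)

def rhs(state, i_inj, p):
    """Right-hand side of a reduced Eq. 1 with NaT, K and leak currents only."""
    v, h, n = state
    m = x_inf(v, p["Vm"], p["dVm"])                 # instantaneous NaT activation
    j_nat = p["gNaT"] * m**3 * h * (v - p["ENa"])   # transient sodium (illustrative form)
    j_k = p["gK"] * n**4 * (v - p["EK"])            # delayed rectifier (illustrative form)
    j_leak = p["gL"] * (v - p["EL"])                # leak
    dv = (-j_nat - j_k - j_leak + i_inj / p["A"]) / p["C"]
    dh = (x_inf(v, p["Vh"], p["dVh"]) - h) / tau_h(v, p["th"], p["eps_h"], p["Vh"], p["dVtau_h"])
    dn = (x_inf(v, p["Vn"], p["dVn"]) - n) / p["tau_n"]
    return np.array([dv, dh, dn])

# Illustrative placeholder parameters and one forward-Euler step.
params = {"C": 1.0, "A": 1.0, "gNaT": 120.0, "ENa": 50.0, "gK": 36.0, "EK": -77.0,
          "gL": 0.3, "EL": -65.0, "Vm": -35.0, "dVm": 9.0, "Vh": -60.0, "dVh": -7.0,
          "th": 0.5, "eps_h": 7.0, "dVtau_h": 15.0, "Vn": -30.0, "dVn": 10.0, "tau_n": 5.0}
state = np.array([-65.0, 0.6, 0.3])
state = state + 0.01 * rhs(state, i_inj=100.0, p=params)
print(state)
```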
Accumulating the key proteomic signatures associated with delirium: Evidence from systematic review | c6b68716-cd2a-4103-81ce-3531d0416ed8 | 11658594 | Biochemistry[mh] | Delirium is regarded as a multifactorial medical condition, and its underlying pathologies might be caused by trauma, stress, or inflammation. Delirium is a severe but treatable medical disorder that has been known for more than 2500 years. More than 30 terms have been used to describe it, including disturbance in attention and consciousness which tends to oscillate for a short term . Delirium is often poorly diagnosed and remained largely unrecognized among hospitalized patients, particularly in intensive care units (ICU) . Delirium in the elderly is becoming more common, affecting up to 50% of adult hospitalized patients . Delirium has a significant impact on a patient’s recovery and increases complications in hospital settings, which extend hospital stays, raise overall costs, and increases mortality . The three main hypotheses for delirium development and its progression include the alteration of neurotransmitter systems, the activity of inflammatory cytokines leading to permeabilization of the blood-brain barrier, and disruption of the hypothalamic-pituitary axis in response to severe trauma . Yet, delirious patients in ICU/hospital settings may benefit from additional biological, molecular, and pathophysiological insights provided by molecular biomarkers associated with delirium incidence . Genetic biomarkers are mainly classified into three basic groups: risk markers, disease markers, and end-products. The related biomarkers of delirium have been identified by several systematic reviews, and these biomarkers include distinct cerebrospinal fluid, amino acids, proteins, genes, regulatory molecules, genetic variation (i.e., SNIP), and other molecules as well . Despite some discrepancies in the results, the identified biomarkers thus so far are internally linked by known functional interactions and molecular pathways . Over the past few decades, the complexity of the molecular network-based biological functions and pathomechanisms influencing delirium development and its severity have been identified. There remains a knowledge gap about genetic factors, their regulatory elements, functional and molecular pathways, and the pathomechanisms of delirium origin and progression. It has been observed that the pathophysiology of delirium and its complications in medical settings remain unknown based on the body of existing literature . The molecular investigation is one of the effective and modern techniques that may assist with diagnosis, evaluation, and treatment while also shedding light on its mysterious pathogenesis . In this aspect, the proteomic biomarkers efficiently indicate the severity, risk, onset, and recovery of the disease and disease motion. They can be treated as a potential therapeutic target for drug development . Even though delirium has been linked to certain biomarkers, research has revealed conflicting results, leaving no clear biomarkers for delirium. Yet, only a few studies have been conducted to accumulate the molecular proteomic biomarkers of delirium, indicating a lack of knowledge about this critical medical condition. Therefore, this study focused on accumulating and identifying the key common proteomic signatures associated with delirium that have been studied so far. The review also justified the proteomic functional diversity of the common proteins associated with delirium. 
In addition, we have provided a comprehensive summary of the current state of knowledge on the proteomic signatures of delirium, which may form the basis of future in-depth molecular research and ultimately help with the development of more effective and potent drugs for delirium treatment.
Systematic review

We conducted a systematic analysis of the literature to identify research on delirium-associated proteomic biomarkers. The entire procedure was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards and the associated PRISMA flowchart. The search strategy and the inclusion and exclusion criteria followed the PEO framework described below:

Population: the study population solely included confirmed cases of delirium in humans.

Exposure: the delirium-associated proteins/gene-encoded proteins, i.e., the biomarker proteins that are significantly associated with delirium.

Outcome: delirium, and the identification of significant proteomic biomarkers associated with delirium.

Study design: all types of observational and experimental studies were considered.

This review was registered on PROSPERO (registration number: CRD42024566515).

Search strategy

A comprehensive electronic literature search was conducted on the selected electronic bibliographic databases (PubMed, Scopus, and EBSCOhost (CINAHL, MEDLINE)) using MeSH terms, keywords, and subject headings. Only studies published in journals between 1 January 2000 and 31 December 2023 were considered for screening. The primary keywords were "delirium" and "biomarkers", used along with a combination of other associated keywords, including "markers", "genetics", "genes" and "proteins", to search the studies. The Boolean operators "AND" and "OR" were applied to combine the search keywords. In addition, the review was complemented by a thorough manual search of related studies. Further, studies were identified through citation searches of included studies and manual searches of professional web sources and key journals in these fields of research. The details of the search sentences used in the different databases and their search outcomes are provided in .

Eligibility criteria

Eligible studies were included if they i) were original research studies that focused and reported on genes/proteins showing a statistically significant relationship between delirium and those genes/proteins in human cases; ii) assessed and confirmed delirium using established delirium assessment methods; and iii) were published between January 1, 2000, and December 31, 2023, in English. Editorials, letters, perspectives, commentaries, reports, reviews and meta-analyses, study protocols, publications in other languages, and studies with insufficient related data were excluded.

Study screening and selection process

The eligibility of studies was determined following a three-stage screening process. The first stage involved screening studies by title to eliminate duplicates. The second stage required reading abstracts to determine their relevance to our study. Finally, the third stage necessitated reading the full texts of the retained studies, and those that met the set criteria were kept. After screening the titles and abstracts, MPM and RAM carried out the full-text screening to select the articles. During this process, we discussed and reached a consensus with the other authors (KA and JG) to resolve any discrepancies.

Quality assessment

Quality assessment of the 78 included studies was conducted because of the heterogeneity among the study designs of the included studies.
In this systematic review, cohort, case-control, cross-sectional, randomized controlled trial, and longitudinal study designs were found among the included studies. The Joanna Briggs Institute (JBI) critical appraisal tools were utilized in this study for quality assessment. The JBI quality appraisal tools are widely used in academic studies to assess the risk of bias (graded as high, moderate, or low), where higher quality scores indicate greater confidence and vice versa. The JBI appraisal tools were used to evaluate the 53 cohort studies, 18 case-control studies, three randomized controlled trials, two cross-sectional studies, and two longitudinal studies included in our review. The overall quality appraisal scores are summarized in .

Data extraction

The data were extracted from Mendeley libraries by one researcher (MPM) with the direct help and guidance of RAM, who subsequently reviewed the results. Discrepancies in the data were addressed and resolved by consensus, and in cases where the two researchers could not reach a consensus, other researchers (KA and JG) were consulted for adjudication. Studies reporting genes and proteins significantly related to delirium were the main focus of this qualitative synthesis. At the data extraction stage, we recorded the first author, publication year, age, gender, data collection time, method of detecting proteins/genes, country of study, study design, method of delirium assessment, and the reported significant proteins associated with delirium. Any missing information was kept blank or noted as "NA" in the data extraction table. The entire procedure was guided and completed using the systematic literature review tool Covidence ( https://app.covidence.org ).
Description of included studies

Our search identified 1746 records. After removing 381 duplicates, 1365 records were screened at the title and abstract stage, of which 1232 were excluded (for example, review papers, editorials, protocols, and records with irrelevant or insufficient information). Finally, 133 full-text studies were assessed for eligibility, and 78 studies were included in the systematic literature review. The PRISMA flowchart is shown in , and the PRISMA checklist is provided in . The included studies differed significantly in terms of patients, research methodology and settings, and the biomarkers investigated. In terms of context, participants were from either a medical or a surgical setting. In the delirium-only trials, authors either did not include patients with comorbidities or did not assess neurocognition to ascertain whether comorbidities were present. Studies with additional comorbidities did not consistently account for the existence of these variables. Pre-existing cognitive impairment and Alzheimer's disease were considered as dementia in this study and hence excluded. All 78 studies utilized different well-established delirium assessment methods to identify delirium either in the preoperative or postoperative stage for critically ill patients. Twelve different delirium assessment methods were used by the included studies. Among them, most of the studies used the Confusion Assessment Method (CAM; n = 35 studies) or its adaptation for the ICU (CAM-ICU; n = 25 studies) for delirium screening. Other studies utilized the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV; n = 4 studies), while the Delirium Observation Scale (DOS), the Delirium Rating Scale-Revised-98 (DRSR-98), and the Nursing Delirium Screening Scale (NuDesc) were each used by a single study. A few studies also utilized multiple assessment methods to diagnose delirium. Among the selected studies, most were conducted in the USA (n = 22 studies), followed by China (n = 17 studies), the Netherlands (n = 10 studies), Germany (n = 6 studies), Poland (n = 5 studies), India (n = 2 studies) and Norway (n = 2 studies), and 11 other countries. The included studies covered a diversity of study designs, including cohort studies (n = 53, 68%) and case-control studies (n = 18, 23%), as well as other designs ( and ). The patients' demographic and study characteristics, including age, gender, sample size, number of delirious cases, and methods used to identify proteins/genes, are documented in . The screened full-text studies, with decisions and the entire data matrix, are recorded in . The authors' names and publication year, country of the study, type of study, delirium assessment method and the reported proteins/gene-encoded proteins are summarized in .

Quality of the included studies

The JBI quality appraisal checklists provided the quality scores for the included studies. The majority of the included studies were of medium/moderate quality (n = 55/78, 70.5%) and the rest of high quality (n = 23/78, 29.5%), indicating the robustness of the included studies. For instance, among the cohort studies (n = 53), 47 studies were assessed as medium quality on the JBI scale while six were of high quality. Based on the quality appraisal checklists, 11 case-control studies were of high quality and seven of medium quality; among the RCTs, two studies were of high and one study was of medium quality.
The two cross-sectional and two longitudinal studies were of high quality. In this review, no study was disqualified because of receiving a low-quality rating.

Delirium-associated important biomolecules

The included studies reported a total of 313 delirium-associated gene-encoded proteins, of which 189 were unique proteins ( and ). A few proteins were examined repeatedly across a substantial number of studies. The 13 most studied of the reported proteins have been highlighted in this review as the key common proteins associated with delirium. They are Interleukin-6 (IL-6), C-reactive protein (CRP), Interleukin-8 (IL-8), S100B calcium-binding protein, Interleukin-10 (IL-10), Tumor necrosis factor-a (TNF-a), Interleukin-1b (IL-1b), Cortisol, Monocyte chemoattractant protein 1 (MCP-1), Glial fibrillary acidic protein (GFAP), Insulin-like growth factor-1 (IGF-1), Interleukin-1 receptor antagonist (IL-1ra), and Neurofilament light polypeptide (NFL). Among them, IL-6 was the most frequently reported (n = 29 studies), followed by CRP (n = 16 studies); IL-8 (n = 11 studies); S100B (n = 10 studies); IL-10 (n = 8 studies); TNF-a (n = 7 studies); IL-1b (n = 6 studies); Cortisol (n = 5 studies); MCP-1 (n = 5 studies); and the other proteins subsequently presented in . Based on the distribution of the proteins, we found the top 13 key proteins that were reported in a minimum of four studies. The basic functionality and clinical justification of the most reported proteins are described in . From the distribution of the functionality of the reported proteins, it is clear that most of them are associated with cytokine and inflammatory functions in the human body. Some of them function as neurotrophic factors and growth factors. Based on their clinical justification and linkages, they are implicated in a variety of functional pathways, including the inflammatory response, the immune response, neurodegenerative disorders, growth, and brain damage, among others.
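The tally behind the "most reported" list can be illustrated with a short sketch that counts, for each protein, the number of included studies reporting it and keeps those mentioned in at least four studies. The miniature dataset below is invented purely for demonstration; the actual study-by-protein lists are those recorded in the extraction tables.

```python
from collections import Counter

# Each inner list stands for the significant proteins reported by one study
# (toy data for illustration only; the review's extraction tables hold the real lists).
studies = [
    ["IL-6", "CRP", "IL-8"],
    ["IL-6", "S100B"],
    ["IL-6", "CRP", "TNF-a"],
    ["IL-6", "IL-10", "CRP"],
    ["IL-8", "IL-6", "CRP"],
]

# Count each protein once per study in which it appears.
counts = Counter(protein for study in studies for protein in set(study))

min_studies = 4
key_proteins = {p: n for p, n in counts.most_common() if n >= min_studies}
print(key_proteins)   # {'IL-6': 5, 'CRP': 4} for this toy dataset
```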
The review showed that most of the proteins significantly associated with delirium belong to the cytokine and inflammatory functional groups. Delirium is a multifactorial condition, with contributing factors including older age, alcohol or drug use, severe comorbidities, and anesthesia; nevertheless, a study conducted by Liu et al. (2018) on cytokines and inflammatory proteins also revealed a strong association of several of them with delirium. The included studies each reported one or more delirium-associated biomarker proteins/genes; however, there is not enough evidence to justify the use of any single molecule as the sole risk or disease biomarker for delirium. This may be due to methodological variations, distinct analytical procedures, varied patient populations, differing diagnostic criteria for identifying delirium, and the presence of complicating comorbidities. It was also difficult to determine the strength of the identified relationships between the genes/proteins and delirium.

Role of cytokines and inflammation in delirium

The included studies revealed a total of 189 unique gene-encoded proteins, some of which occurred frequently, such as the proinflammatory cytokines IL-6, IL-8, CRP, S100B, TNF-a, IL-1b, MCP-1, and IL-1ra, and the anti-inflammatory cytokine IL-10. The studies suggest that pro-inflammatory cytokines (PICs), including IL-6 and TNF-a, and anti-inflammatory cytokines, including IL-10, are significant subgroups of inflammatory processes and are crucial in the development of pain sensitivity. Since our included studies reported significant relationships (either positive or negative) between cytokines and delirium development, this indicates a close interconnection between cytokines, inflammation, and delirium. The onset of delirium is linked to early signs of systemic inflammation, which indicates the involvement of cytokine proteins. In particular, regarding its potential to serve as a delirium marker, IL-6 is one of the cytokines that has been examined the most among the included studies. The other cytokines, namely IL-8, MCP-1, IL-1ra, and the anti-inflammatory IL-10, show significant stimulation of the major immune response pathways as well as of monocyte chemoattraction. Plasma IL-6 levels are correlated with delirium severity and duration in critically ill individuals, indicating that systemic inflammation plays a role in the onset and progression of delirium. Our included studies suggested that the expression levels of cytokine proteins changed significantly between delirious and non-delirious patients, or between the preoperative and postoperative stages. In this respect, the altered cytokine patterns point to an immune reaction that includes B-cell and T-cell stimulation, immunoglobulin production, and concurrent initiation of anti-inflammatory processes. CRP plays a novel role in the pathophysiology of delirium, as it is associated with the stress response and inflammation and has a role in neurotransmitter activities. Among the investigated studies, CRP was reported in 10 individual studies, which indicates its significance. Ritchie et al. (2014) reported that CRP has a noticeable association with delirious patients having "musculoskeletal" problems. Although there exists some inconsistency about the role of CRP in delirium as a potential biomarker, future in-depth research could clarify its role more broadly.
As a pro-inflammatory cytokine, IL-1b is highly associated with delirium development and has a part in the etiology of early delirium . It also plays a vital role in cholinergic activity, a route believed to be responsible for the pathophysiology of delirium . Given some contradictory findings, IL-1b cannot serve as a potential individual biomarker for delirium . S100B is a calcium-binding protein that is involved in astrocytes within the central nervous system (CNS) and is associated with delirium . The presence of S100B in cerebrospinal fluid (CSF) indicates early symptoms of Alzheimer's dementia, which is one of the crucial adverse events for delirious patients. TNF-a, a pleiotropic cytokine, is associated with several functional pathways including inflammation, necrosis, apoptosis, and delirium, and shows a strong association with cognitive deterioration . Because of this strong association with cognitive decline (as in Alzheimer's disease), it is difficult to designate TNF-a as a potential biomarker of delirium, which demands further research to clarify its specific role in delirium development. Other proteins were also among the most studied proteins in our review, namely IGF-1, GFAP, and NFL. All of these are associated with delirium development in preoperative or postoperative stages. IGF-1 is known as a neuroprotective and growth factor that is involved in neurogenesis and may also inhibit cytotoxic cytokines that lead to pro-inflammation . In our review, three studies reported a negative association of IGF-1 with delirium and one study reported a positive relationship . Due to its linkage with the pathophysiology of Alzheimer's disease , IGF-1 is still considered an inconsistent biomarker of delirium. One study revealed that the likelihood of delirium recovery may be influenced by lower levels of IGF-1 and the lack of the APOE-e4 genotype among female patients . Three studies in our review identified an increased level of GFAP protein that was significantly associated with delirium. NFL is associated with ongoing axonal complications and is considered a novel biomarker of Alzheimer's disease and a variety of neurological disorders . The above discussion indicates the importance of the reported significant biomolecules for delirium development. Several cytokines and inflammatory proteins are frequently reported in association with delirium. Future research and deeper molecular investigation should focus on cytokines and inflammation-related proteins and their associated signaling pathways to decipher the pathophysiology of delirium. Implications This review concludes that cytokines and inflammatory proteins play a crucial role in delirium development. Delirium's pathophysiology is multifactorial, and diverse sampling types should be considered for molecular studies . Further studies involving gene expression analysis could be a reliable way to identify the differentially expressed genes and proteins associated with delirium. Gene expression data analysis may help to clarify the pathogenesis of delirium as well as the functional pathways involved. Future in-depth research on epigenetic analysis and genome-wide association studies may also help to identify potential biomarkers, which will eventually support delirium diagnosis and therapeutics. In this regard, this study can serve as a basis for further proteomic research in delirium.
Study limitations This review focused on delirium in humans, whether identified in ICU or other hospital settings. Studies addressing Alzheimer's disease and dementia were not included, to keep the study rigorous and focused solely on delirium. The search for relevant studies may have been limited by the databases used, which might result in the omission of potentially relevant studies. The current study covered the timeframe between January 2000 and December 2023; therefore, studies published before 2000 or after 2023 have not been included. In addition, this study retrieved published articles from the PubMed, Scopus, and EBSCOhost (CINAHL, Medline) databases, so any potentially relevant studies published outside of these databases may be missing from this review. The current study focused on accumulating proteomic biomarkers only, and therefore considered proteins and gene-encoded proteins for investigation. No specific diagnostic or prognostic proteomic biomarker could be identified through this study. Only the presence of a reported relationship between the proteins and delirium was considered, while the direction of the relationship (positive or negative) was ignored. Therefore, the upregulation or downregulation of proteins could not be described in this review, which demands further studies to identify the differentially expressed genes/proteins. Moreover, confounding variables, variation in estimation procedures, the lack of random allocation, and differences in study settings were not considered and could not be completely ruled out.
This study consolidates the significant information regarding delirium-associated proteomic biomarkers. We have summarized the 13 most studied proteins (IL-6, CRP, IL-8, S100B, IL-10, TNF-a, IL-1b, Cortisol, MCP-1, GFAP, IGF-1, IL-1ra and NFL) in relation to delirium. Notably, this study found that cytokine and inflammatory proteomic factors are the most crucial influencers of delirium development and its ultimate stage. Inconsistency among the proteomic biomarkers and the lack of knowledge about the entire pathophysiological process of delirium demand more in-depth molecular studies to decipher the molecular functionality underlying delirium. More studies need to be conducted to identify exclusive causal genomic and proteomic biomarkers of delirium that can be investigated as prognostic, diagnostic, and therapeutic target biomolecules. The summary of the current information on delirium-associated proteins provided in this study may serve as a guide for further research and in-depth investigation of delirium.
S1 File The search sentences used in the different databases and their outcomes. (PDF) S2 File The overall quality appraisal scores. (XLSX) S3 File The PRISMA checklist. (DOCX) S4 File Delirium-associated 189 gene-encoded unique proteins. (XLSX) S1 Table The patient's demographic and study characteristics table. (DOCX) S2 Table The screened studies with decisions, entire data matrix. (XLSX)
Subtle changes in topsoil microbial communities of drained forested peatlands after prolonged drought | 7189539a-bcad-47aa-827c-977ad3501919 | 11544035 | Microbiology[mh] | Climate change is predicted to increase the risk of summer droughts and other extreme weather conditions. These changes may alter the composition, structure and function of soil microbial community and enzyme production, and ultimately affect soil C and nutrient cycling (Bogati & Walczak, ; Li et al., ; Venäläinen et al., ; Xu et al., ). The impact of drought varies with the soil type, vegetation, depth, season and microbe characteristics (Cordero et al., ; de Souza et al., ; Lamit et al., ; Peltoniemi et al., ; Veach et al., ; Wang, Meister, et al., ; Xu et al., ; Yang et al., ). The severity of drought is affected by evaporation, transpiration and interception, which in turn depend on the forest stand characteristics such as stand volume and leaf mass (Launiainen et al., ). In peatlands, different layers of peat and surface vegetation can impact the microbial responses to drought significantly. The shift in communities caused by these factors is more visible in the top layers of the peat than in the deep layers (Lamit et al., ). Lowering of the water table (WT) has been shown to benefit fungi and increase bacterial and fungal biomass (Andersen et al., ; Jaatinen et al., ), resulting in more efficient aerobic decomposition. Bacteria and fungi are responsible for organic matter decomposition, but fungi have stronger links to plants through mycorrhiza, whereas the relationships between plants and bacteria are less tight (Baldrian, ; Mundra et al., ). In controlled laboratory conditions, drought has caused changes in the community composition of microbes in peat (Potter et al., ). The study by Peltoniemi et al. included drained peatland sites, where drought was noticed to have changes in communities, especially in fungi and Actinobacteria. However, the overall effects of drought on microbial communities are poorly understood in boreal‐drained forested peatlands. Boreal forests and peatlands are globally significant reservoirs of carbon (C) (Bradshaw & Warkentin, ; Wieder et al., ). This function is affected by weather conditions, forest management and climate change (Charman et al., ; Harenda et al., ). Weather conditions and forest harvesting induce changes in soil temperature and moisture, WT, oxygen availability, soil pH and the amount and quality of organic matter input (Baldrian, ; Briones et al., ; Jaatinen et al., ; Keiluweit et al., ; Laiho, ; Peltoniemi et al., ; Peltomaa et al., ). In Finland, nearly half of the original peatland area (10 M ha) has been drained for forestry, majority of which was drained for the first time 50–60 years ago. Drainage improves tree growth, increases peat decomposition, alters the amount and species composition of ground vegetation and onsets the formation of raw humus layer over the original peat (Kaunisto & Moilanen, ; Laiho, ). The raw humus layer contains a significant nutrient pool (Kaunisto & Moilanen, ) and the majority of the fine roots of trees are concentrated in this layer (Lampela et al., ; Wei et al., ). The physical and chemical characteristics of the raw humus layer are similar to those of mor, an organic soil layer found in upland mineral soils (Laurén, ). Microbial communities in pristine peat and upland mor layers have been studied (Andersen et al., ; Kitson & Bell, ), but studies on microbial communities in raw humus of drained peatlands are scarce. 
The large amount of horizontally oriented macropores in the humus layer restricts the capillary rise of water from the underlying WT to the humus layer. This maintains good aeration in the root layer during wet periods. However, it also makes the humus layer vulnerable to drought during dry periods, which can disrupt microbe‐mediated nutrient cycling. Therefore, it is essential to study the drought‐induced changes in microbial communities, as the supply of nutrients from decomposing organic material is the most important factor regulating forest growth in drained peatlands (Laurén et al., ). Peatland forest management, such as drainage and harvesting, induces differences in the vegetation, which in turn affects the microbial communities due to changes in organic matter input (Laiho, ; Peltoniemi et al., ). Forest stand characteristics, including tree species, leaf mass and basal area, can affect the interception capacity (Launiainen et al., ) and evapotranspiration and therefore also affect WT and drought intensity. Mature spruce stands with high leaf mass are particularly vulnerable to drought (Netherer et al., ). Activities like harvesting and drainage can affect the WT, oxygen availability, organic matter quality and chemical properties of soil (Peltomaa et al., ). Furthermore, new forest management methods such as continuous cover forestry (CCF) are becoming more common. The CCF preserves part of stand and ground vegetation, maintaining more stable soil moisture and temperature conditions and maintains a higher potential for C accumulation than clear‐cut or uncut forests. Therefore, CCF can be expected to cause fewer changes in the microbial community than clear‐cutting (Kim et al., ; Roth et al., ). The rRNA transcripts have been used to identify active populations in mixed microbial communities (Blazewicz et al., ; Salgar‐Chaparro & Machuca, ). By adapting these methods, we can obtain a snapshot of the microbial functional groups in the soil. However, using rRNA transcripts to determine the microbial community does not fully capture the active community, as rRNA represents the potential for the activity, rather than the activity itself (Blazewicz et al., ). Furthermore, when studying only the active community, the samples often contain 16S rRNA genes from dead or dormant cells (Li et al., ). As a part of the ribosome, rRNA is involved in cell physiology and changes, and therefore can be linked to the community members that are or have recently been active (Gourse et al., ; Kerkhof & Ward, ; Poulsen et al., ). This study aimed to examine the effects of drought, seasonal changes and forest harvesting intensity on bacterial and fungal communities in the topsoil layer consisting of raw humus and surface peat of dried peatland forest over a growing season. During the experiment, there was a prolonged drought period in the summer. Our Norway spruce‐dominated study sites were treated with clear‐cut, selective cutting (CCF), whereas one of them was left uncut providing us with a set of different moisture regimes within the same study area. We used bacterial 16S and fungal ITS2 rRNA analysis to discover the present active community members. We hypothesized that (1) the microbial communities vary seasonally and respond to drought, (2) the bacterial and fungal communities show different responses to drought and (3) their responses are dependent on forest stand characteristics. 
Site description and sampling Soil samples were collected from a nutrient‐rich drained peatland forest dominated by Norway spruce ( Picea abies (L.) Karst.) in Paroninkorpi (61.01° N, 24.75° E) in Southern Finland. The ditch network was established at the beginning of the 1960s and underwent ditch cleaning in 2018. The current ditch spacing is 50 m, the depth is ca. 0.6 m, and the peat deposit is >1.5 m deep and formed of a thin raw humus layer (5–10 cm) overlaying moderately decomposed Carex ‐wood peat. The site was divided into plots (40 × 40 m) representing three harvesting intensities: (1) clear‐cut (all trees removed, basal area 0 m 2 ha −1 ), (2) CCF (basal area 12 m 2 ha −1 ) and (3) uncut forest (the basal area 25 m 2 ha −1 ). The harvesting was conducted when the thick snow cover protected the ground vegetation and soil in February 2017. The ground vegetation was formed by dwarf shrubs ( Vaccinium myrtillus L., V. vitis‐idaea L.), mosses (mainly Pleurozium schreberi Brid., Hylocomium splendens (Hedw.). Schimp. and some Sphagnum sp.), and ferns in the CCF and uncut plots. In the clear‐cut plots, thick patches of raspberry ( Rubus idaeus L.) and birch ( Betula sp.) had taken over the site, along with young Norway spruce seedlings planted in 2018. The pH, C and N content and C:N ratio of the peat in management plots are shown in Table to provide background information on the area's characteristics. More detailed descriptions of the study site can be found in Palviainen et al. and Peltomaa et al. . Soil samples for the soil microbial community analysis were collected from the surface soil including raw humus and surface peat (0–10 cm) in the spring (May), summer (July), and autumn (September) of 2021. The samples with three replicates were taken 1 m apart from each other in the middle of each plot, ca. 20 m from the ditches. The sampling placement (80 m between the plots) was selected to avoid edge effects from different directions (at least 2 × height of the trees) and for standardization of drainage effect (distance from the ditches). The ground vegetation was removed prior to sampling. Soil cores were collected using a cylinder sampler (diameter 3 cm). The soil was homogenized in a sterilized container by shaking, and then transferred to a sterilized 50 mL plastic tube, preserved with DNA/RNA Shield (Zymo Research, CA, USA), and placed on ice. The tubes were stored at −20°C before RNA extraction in the laboratory. Soil temperature in the field was measured at a depth of 10 cm using a digital stick thermometer (Orthex Group, Finland). The WT was monitored from groundwater tubes installed down to 1 m depth (see Palviainen et al., for further details) located within a 1–2 m distance from the soil sampling points. Weather data was collected ca. 20 km from the study site (Lammi Pappila weather station), by the Finnish Meteorological Institute. RNA extraction and sequencing RNA was extracted from the soil samples using the RNA PowerSoil® Total RNA Isolation Kit (Qiagen, Ireland) following the manufacturer's instructions. RNA concentration was verified using the Qubit RNA High Sensitivity RNA Assay Kit (Invitrogen, Life Technologies, CA, USA) and Qubit 2.0 fluorometer (Invitrogen). Complementary DNA (cDNA) was synthesized using the Quantinova Reverse Transcription Kit (Qiagen). The samples were sequenced by Novogene Company Ltd. (UK) using a sequencing depth of 100 K raw tags (a recommendation for complex data such as soil samples). 
For the amplicon generation, the bacterial V3‐V4 region of the 16S rRNA gene was amplified using the primer pair 341F/806R (Table ; Herlemann et al., ), whereas the primer pair ITS3‐2024F/ITS4‐2409R (Table ; Bellemain et al., ) was used for the fungal internal transcribed spacer 2 (ITS2) region. The PCR products were selected by 2% agarose gel electrophoresis, end‐repaired, A‐tailed and further ligated with Illumina adapters. The libraries were sequenced on a paired‐end Illumina platform to generate 250‐bp paired‐end raw reads. Samples were further processed by using QIIME2 (Version 2024.5; Bolyen et al., ). The primer sequences (Table ) were discarded by using Cutadapt (Martin, ). DADA2 (Callahan et al., ) pipeline was used to detect and correct Illumina amplicon sequence data, filter any phiX reads and chimeric sequences and merge paired reads. It should be noted that whereas bacterial sequences were run with forward and reverse sequences, fungal sequences were only run with forward sequences. This was done due to the low quality of the autumn samples. The spring samples were good quality. The bacterial sequences were truncated at 170 bp (forward) and 200 (reverse). Fungal sequences were truncated at 220 bp. The truncation was decided based on the quality of the samples. The tree for phylogenetic diversity analyses was created by using QIIME2 phylogeny plugin. Taxonomic analysis was done using pre‐trained classifiers. For bacteria, SILVA 138 SSU database and classifier were used (Robeson et al., ; https://www.arb-silva.de ) and for fungi UNITE v.10.0 (Version 04.04.2024; Abarenkov et al., ; https://unite.ut.ee/ ). The classification was done using scikit‐learn (Pedregosa et al., ) and the sequences were assigned to ASVs (amplicon sequencing variants). Statistical analyses All statistical testing was performed with R software (version 4.3.2). Statistical testing, data normalization, rarefaction, diversity and richness analyses and examination of community composition were performed using Phyloseq (version 1.46.0). In statistical testing, p ‐values <0.05 were considered significant. Data was rarefied (minimum sequencing depth reduced by 10%) and alpha diversity indices, observed species, species richness (Chao1; Chao, ) abundance‐based coverage estimator (ACE; Chao & Lee, ) and diversity (Shannon index; Shannon, and Simpson index; Simpson, ) were calculated. The normality of the data was checked using Shapiro–Wilk test (Shapiro & Wilk, ) and Q‐Q plots. Differences in alpha diversity between seasons and harvest intensity were then examined using the Kruskal‐Wallis test (Kruskal & Wallis, ), as data was not normally distributed and included multiple groups. Differences were then visualized using a boxplot. Lastly, the differences were further examined using the Wilcoxon rank‐sum test (Mann & Whitney, ). Principle coordinate analysis (PCoA) plots were created using Bray–Curtis dissimilarity matrix to visualize community‐level sample dissimilarity (beta diversity). Dominant taxa were calculated for bacterial and fungal phyla. The relative abundance of bacterial and fungal phyla was visualized using barplots. The hierarchical clustering of the relative abundance of the most common 35 bacterial and fungal genera was examined using a hierarchical clustering heatmap. 
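To make the diversity workflow concrete, the following is a minimal R sketch of the rarefaction, alpha-diversity testing and Bray-Curtis PCoA steps described above. It assumes a phyloseq object named ps built from the ASV table, taxonomy and sample metadata, and the metadata column names season and harvest are illustrative placeholders rather than the authors' actual variable names; this is not the study's original analysis script.

```r
# Minimal sketch of the alpha- and beta-diversity steps (assumptions noted above).
library(phyloseq)

# Rarefy to ~90% of the smallest library size (cf. "minimum sequencing depth
# reduced by 10%"); rngseed makes the subsampling reproducible.
ps_rar <- rarefy_even_depth(ps,
                            sample.size = floor(0.9 * min(sample_sums(ps))),
                            rngseed = 42)

# Alpha diversity: observed ASVs, Chao1, ACE, Shannon and Simpson indices.
alpha <- estimate_richness(ps_rar,
                           measures = c("Observed", "Chao1", "ACE",
                                        "Shannon", "Simpson"))
alpha$season <- sample_data(ps_rar)$season   # hypothetical metadata column

# Normality check, then non-parametric tests between seasons.
shapiro.test(alpha$Shannon)
kruskal.test(Shannon ~ season, data = alpha)
pairwise.wilcox.test(alpha$Shannon, alpha$season, p.adjust.method = "BH")

# Beta diversity: PCoA ordination on Bray-Curtis dissimilarities.
ord <- ordinate(ps_rar, method = "PCoA", distance = "bray")
plot_ordination(ps_rar, ord, color = "season", shape = "harvest")
```

The same rarefied object can be reused for the compositional barplots and heatmaps mentioned above.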
Statistical differences in beta diversity and relative abundances on a phylum and genus level in relation to multiple environmental factors (season, harvest intensity, WT, pH and soil temperature) were examined using the non-parametric permutational multivariate analysis of variance (PERMANOVA) 'adonis2' function of the vegan package (version 2.6-6.1). Relationships between the samples were examined and visualized using the UpSet plot.
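A hedged sketch of how these multivariate steps could look in R follows; the rarefied phyloseq object ps_rar is carried over from the previous sketch, and the metadata column names (season, harvest, wt, ph, soil_temp) are assumptions for illustration, not the study's actual variable names.

```r
# Sketch of PERMANOVA on Bray-Curtis dissimilarities and an UpSet plot of shared ASVs.
library(phyloseq)
library(vegan)
library(UpSetR)

bray <- phyloseq::distance(ps_rar, method = "bray")
meta <- as(sample_data(ps_rar), "data.frame")

# Non-parametric PERMANOVA of community dissimilarity against environmental factors.
adonis2(bray ~ season + harvest + wt + ph + soil_temp,
        data = meta, permutations = 999)

# UpSet plot: one set of detected ASV names per sample group (here, per season).
asv_sets <- lapply(split(sample_names(ps_rar), meta$season), function(s) {
  sub <- prune_samples(s, ps_rar)
  taxa_names(prune_taxa(taxa_sums(sub) > 0, sub))
})
upset(fromList(asv_sets), nsets = length(asv_sets))
```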
Weather and WT The average air temperature of the preceding month before samplings was 3.69°C in spring, 20.0°C in summer and 14.0°C in autumn. The precipitation sums for the corresponding times were 27.5, 34.3, and 115.5 mm, respectively. The daily weather data is presented in Figure . The low summertime precipitation was reflected in the WT, which was significantly lower ( p <0.05) during the summer sampling than during the spring sampling (Table ). Additionally, the soil temperatures were higher in summer than in spring in all plots ( p <0.05) (Table ). According to the Finnish Meteorological Institute, the summer of 2021 was the second warmest for the whole country since records began at the beginning of the 20th century. From June to August, the average number of hot weather days (>25°C) for the whole country was 50, while the normal average for the same period is 33 days. The average temperature during this period in the study area was 21°C. From the last days of June to mid-July there was almost no rainfall at all in the study area: this 18-day period included only one day with precipitation (2.2 mm) and 11 consecutive days without precipitation, and the highest daily average temperature during it was 24.5°C and the lowest 18°C. The average precipitation during summer was quite typical for the whole country. However, the precipitation did not spread evenly over the summer, and there were periods with almost no rainfall (June and July) and heavy rain (August). 16S-rRNA and ITS2-rRNA gene sequencing results Amplicon sequencing of bacterial 16S rRNA sequences resulted in, on average, 73,467 effective and 71,386 annotated tags (Figure , Table ). However, due to the low amount of bacterial RNA, the 16S sequencing could not be performed for one clear-cut sample in the spring, one CCF sample in the summer and two uncut forest samples in the autumn. The amplicon sequencing of fungal ITS2 rRNA sequences resulted, on average, in 83,751 effective and 60,518 annotated tags (Figure , Table ). However, none of the summer samples contained enough high-quality fungal RNA for sequencing. Like the bacterial RNA, the fungal RNA was low for one clear-cut and one CCF sample in the spring, and two clear-cut samples in the autumn, resulting in no ITS2 sequencing. Further analysis produced 10,317 bacterial and 5660 fungal ASVs. Good's coverage ranged from 95% to 99% (Table ), indicating that most of the bacterial and fungal types have been detected in the samples. ASV classification resulted in 53 phyla, 137 classes, 296 orders, 477 families and 789 genera for bacteria, and 12 phyla, 23 classes, 35 orders, 42 families and 39 genera for fungi.
Bacterial and fungal community richness and diversity Overall, species richness and diversity were highest in spring compared to summer and autumn. Examining observed species, alpha diversity (Shannon and Simpson) and species richness (Chao1 and ACE) revealed no statistical differences in bacterial or fungal richness and diversity between harvest intensities. Observed species, diversity and richness were highest in spring for bacteria and in autumn for fungi (Table ; Figure ). Additionally, statistical differences ( p <0.05) could be detected between seasons. For bacteria, comparing spring to summer and autumn resulted in statistically significant differences, but there were no statistical differences between autumn and summer, nor, in the Simpson analysis, between spring and summer (Figure ). For fungi, the difference in species richness and diversity was also statistically significant between spring and autumn (Figure ). PCoA analysis (beta diversity) showed dissimilarity in microbial communities. There was a distinct structure in bacteria between the spring samples and the autumn and summer samples (Figure ). Statistical testing revealed a highly significant difference ( p <0.01) between seasons in bacterial beta diversity. Likewise, PCoA analysis revealed a distinct structure between spring and autumn in fungal beta diversity (Figure ). Fungal communities also showed significant differences in beta diversity ( p <0.05) between seasons, but not as strongly as bacteria. Bacterial and fungal community composition The most common phyla for bacteria and fungi according to relative abundance in different samples are presented in Figure . For fungi, unassigned phyla were largely present in the samples. The most abundant bacteria at the phylum level were Acidobacteriota, Proteobacteria and Actinobacteria. The most abundant fungal phyla were Basidiomycota and Ascomycota. In spring, Acidobacteriota accounted for 77.8% of the dominant taxa, followed by Proteobacteria (22%). During summer, Proteobacteria dominated all the samples and accounted for 100% of the dominant taxa. Proteobacteria was also the most dominant in autumn (62%) but was accompanied by Acidobacteriota and Actinobacteria with a 12.5% share each. The fungal communities were dominated by Basidiomycota and Ascomycota. In spring, Basidiomycota covered 57.1% of the dominant taxa, whereas Ascomycota covered 28.6%. However, during autumn, Basidiomycota covered only 14.3% and Ascomycota was not among the dominant taxa. Unassigned phyla were significant in both seasons, accounting for 14.3% in spring and 85.7% in autumn. The most dominant bacterial phyla in all harvest intensities were Proteobacteria and Acidobacteriota. In the clear-cut, Proteobacteria and Acidobacteriota covered 62.5% and 25% of the dominant taxa, respectively; in the CCF the equivalent percentages were 50% and 38%, and in the uncut site 57.1% and 42.9%. The dominant fungal communities were Ascomycota in the clear-cut (66.7%), Basidiomycota in the CCF (40%) and Basidiomycota in the uncut forest (50%). Additionally, unknown phyla were dominant in all harvest intensities (33% in the clear-cut, 60% in the CCF and 50% in the uncut forest). Permutational multivariate analysis of variance did not reveal statistically significant effects of season or harvesting intensity at the phylum level. Only one bacterial phylum, Myxococcota, showed statistically significant variation with harvest intensity ( p <0.05). The most common bacterial and fungal genera, according to relative abundance in different samples, are presented in Figure .
The share of unassigned genera was high, especially in the fungal samples in autumn. The bacterial genus Pseudomonas (Pseudomonadota) varied significantly with soil temperature ( p <0.05). Additionally, the genus Roseiarcus (Pseudomonadota) varied significantly with season ( p <0.05), and the uncultured eubacterium WD260 varied significantly with WT. When examining the hierarchical clustering of the samples in the genus-level heatmaps (Figure ), bacterial and fungal samples clustered according to season and harvest intensity. Spring samples, in particular, clustered together, whereas there was slightly more variation in the clustering of the summer and autumn samples in both bacterial and fungal relative abundance. Examining the UpSet plot (Figure ) revealed that for bacteria the spring samples shared the greatest number of ASVs. Regardless of season and harvest intensity, the summer and autumn samples shared far fewer ASVs, and the fewest ASVs were shared by the samples taken from the clear-cut in summer and autumn. For fungi (Figure ), the numbers of shared ASVs were more moderate. However, the autumn samples (regardless of harvest intensity) shared the greatest number of ASVs. The spring and autumn samples taken from the uncut forest, the clear-cut samples in spring and the CCF samples in spring also all shared a high number of ASVs.
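For illustration, a genus-level clustered heatmap of the kind referred to here could be produced along the following lines in R; the object ps_rar is the rarefied phyloseq object assumed in the earlier sketches, and the clustering and scaling options are illustrative choices, not the authors' plotting code.

```r
# Sketch of a hierarchical-clustering heatmap for the 35 most abundant genera.
library(phyloseq)
library(pheatmap)

ps_gen <- tax_glom(ps_rar, taxrank = "Genus")                 # aggregate ASVs to genus
ps_rel <- transform_sample_counts(ps_gen, function(x) x / sum(x))
top35  <- names(sort(taxa_sums(ps_rel), decreasing = TRUE))[1:35]
mat    <- as(otu_table(prune_taxa(top35, ps_rel)), "matrix")
if (!taxa_are_rows(ps_rel)) mat <- t(mat)                     # genera as rows

# Cluster both genera and samples; row labels could be mapped to genus names
# via tax_table(ps_rel) if desired.
pheatmap(mat, clustering_method = "complete", scale = "row")
```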
We examined seasonal changes, the effects of the prolonged drought period during summer, and harvesting intensity on bacterial and fungal communities in drained forested peatlands over one growing season. We hypothesized seasonal changes in microbial communities. While the bacterial and fungal species richness and diversity did vary seasonally, there were only subtle changes in the relative abundances of the bacterial communities. It has been found that diversity and community structure can be significantly different at the beginning of the growing season compared to the end (Santalahti et al., ; Shigyo et al., ; Wan et al., ). Additionally, in Wang et al. , eukaryotic microbiomes, including fungi, showed significant shifts in community structure in rewetted peatlands, whereas prokaryotic microbiomes were less prone to change. Although no seasonal variation was detected in the fungal relative abundances in our study, the drought period during the summer likely affected the fungal samples. This leaves open the question of whether the fungal communities are more vulnerable to drought. During the summer drought, the WT dropped 50 cm in the clear-cut, 48 cm in the CCF and 44 cm in the uncut forest. After the precipitation increased, the WT rose by 30 cm in the clear-cut, 21 cm in the CCF and 40 cm in the uncut forest. Seasonal droughts have already been shown to decrease forest growth in boreal areas (Aakala & Kuuluvainen, ; Brecka et al., ; Keiluweit et al., ), but their effects on microbes and soil C processes are less studied (Potter et al., ). In our study, the drought potentially affected fungi and prevented obtaining high-quality RNA for fungal sequencing in all experimental plots in the summer and autumn.
Fungi have been reported to be more vulnerable to drought than bacteria (Allison & Treseder, ; Jaatinen et al., ; Krivtsov et al., ; Xue et al., ). Thus, increased drought may negatively affect fungi-driven organic matter decomposition and tree growth. Furthermore, microbes have been shown to react to drought, leading to changes in the soil C cycle and decomposition in other ecosystems and soil types such as grasslands (Metze et al., ) and silt-clay loam soil (Xie et al., ). Our results might potentially indicate that drought may significantly alter microbially mediated C and nutrient cycling, because fungi are the primary decomposers in boreal forest soils. However, the question of whether fungal functions and community structure are affected by drought remains open. Wang, Wang, et al. showed that in rewetted peatlands fungal and bacterial communities change significantly during wet and dry periods, indicating that microbial communities are susceptible to extreme weather conditions. Our samples included a high proportion of unassigned fungal ASVs, especially in autumn, which makes drawing further conclusions difficult. We argue that the matter should be further investigated in the future in forested drained peatlands, as the drought evidently affected the sampling and further analysis, with summer samples missing for fungi. We found clear seasonal changes in microbial richness and diversity, but no significant differences between summer and autumn. The diversity and richness values were highest in spring for bacteria and in autumn for fungi. Spring samples also showed more similarity for bacteria, whereas bacterial samples taken in summer and autumn were not as tightly related to each other. Fungal samples showed a more systematic separation into spring and autumn. Seasonal changes in microbial diversity, richness and community composition have been observed in various environments, and combined with other environmental factors such as soil chemical properties and habitat characteristics they can affect microbial community richness and diversity (Luo et al., ; Shen et al., ; Solanki et al., ; Wan et al., ; Yu et al., ). Our results indicated that in drained peatland forests, both bacterial and fungal species richness and diversity are affected by seasonal changes: while bacterial diversity and richness are higher in spring, fungal richness and diversity are higher in autumn. The common phyla found among fungi and bacteria are abundant in both peatland forests and peatlands overall (Generó, ; Kalam et al., ; Santalahti et al., ). Although Basidiomycota can be found especially in the upper layers of the peat (Lusa & Bomberg, ), they do not usually dominate the fungal community in peatlands (Thromann, ), whereas Basidiomycota are important and commonly found ectomycorrhizal fungi in boreal forests (Santalahti et al., ). In our study, the CCF and control communities were dominated by Basidiomycota in spring. However, the relative abundance of both Basidiomycota and Ascomycota declined on our sites in autumn. This might be due to the poorer quality of the autumn samples, leading to a lower proportion of assigned ASVs in autumn. Additionally, we did not find a statistically significant association between seasons, harvest intensity or any other environmental factor studied, such as WT. In Peltoniemi et al. , Basidiomycota responded to the WT drawdown in peat soil and Ascomycota became the dominant phylum after the treatment.
Most soil fungi belong to Ascomycota and Basidiomycota, which form mutualistic relationships with plants and decompose recalcitrant organic C, including cellulose and polyphenolic compounds (Lynd et al., ). In bacteria Proteobacteria, Actinobacteriota and Acidobacteriota are all common phyla in boreal peatlands and soils (Aislabie & Deslippe, ; Generó, ; Kolton et al., ; Lewin et al., ; Sun et al., ; Zhang et al., ). They can be acidophilic or aciduric bacteria (Curtis et al., ; Kalam et al., ), and they respond differently to WT drawdown (Kitson & Bell, ). Actinobacteriota have been noticed to potentially respond negatively to WT drawdown in wet and nutrient‐rich sites, but benefit in nutrient‐poor sites (Jaatinen et al., ). Proteobacteria can respond positively or negatively to drought (Potter et al., ), but the abundance increases during rewetting (He et al., ). It has been noticed that Acidobacteriota benefits from drainage in peat soil as they become the dominant group (Urbanová & Bárta, ). In our samples, the relative abundance of Acidobacteriota was quite stable but shifted in summer and autumn, becoming more abundant in CCF (summer) and uncut (autumn). Furthermore, the relative abundance of Proteobacteria increased across the harvest intensities, whereas the relative abundance of Actinobacteriota stayed quite stable or increased slightly towards summer and autumn. Additionally, we found that the relative abundance of phylum Myxococcota varied significantly between harvest intensity. Forest harvesting affects stand and ground vegetation (Kim et al., ), microclimate (Chroňáková et al., ), and WT which is controlled by interception and evapotranspiration (Sarkkola et al., ). The changes in the vegetation affect microbes through the soil–plant‐microbial interactions (Mundra et al., ; Schulp et al., ; Tedersoo et al., ; Wardle et al., ), and the removal of trees alters the light, temperature and moisture conditions. Mycococcota are unusual bacteria, capable of predation and fruiting body formation (Thiery & Kaimer, ; Wielgoss et al., ) and potentially photosynthesis (Li et al., ). Myxococcota can be found in various aerobic environments and their unique ecology allows them to exist in several types of environments (Reichenbach, ), which could explain our observations. The relative abundance of bacteria was quite stable across the seasons and harvest intensities. The Acidobacteriae Subgroup_2, related to phosphorus ‘mining’ in nutrient‐poor soils (Jones et al., ; Mason et al., ), was abundant in spring, potentially indicating a higher springtime soluble phosphorus demand of plants. The genera related to the decomposition of recalcitrant C, such as cellulose ( Acidothermus ; Talia et al., ) and aromatic compounds ( Roseiarcus ; Man et al., ), were more abundant in spring than in autumn in the CCF and the uncut forest potentially reflecting the changes in the pool of labile substrates (Kirschbaum, ). Additionally, Roseiarcus showed significant differences in relative abundance when compared to season. The observations reflect the soil acidity and the ground vegetation of the management plots since most of these genera are concerned as acidophilic (e.g., Acinetobacter , Chloroflexi , AD3) and often regarded as members of mosses' or shrubs' microbiome associated with N cycling in low N environments (e.g., Candidatus S olibacter , candidate phylum WPS‐2) (Holland‐Moritz et al., ; Huber et al., ; Jenkins et al., ; Köhler et al., ; Kolton et al., ; Rodriguez‐Mena et al., ; Tian et al., ). 
The relative abundance of genus Pseudomonas was significantly different when compared with soil temperature, and the relative abundance of a genus of uncultured eubacterium WD260 differed significantly when compared with WT. Fungal genera did not differ significantly when compared with environmental factors. However, the relative abundance of ectomycorrhizal genus Asterostroma was high in all harvest intensities in spring but declined in the autumn. To conclude, this study investigated the microbial communities and drought in the topsoil layer of drained forested boreal peatlands. Despite the limited research on this topic, there is a growing demand for information due to new forestry practices and the impacts of climate change. We found some strong indications of seasonal changes in bacterial and fungal community diversity and species richness. Furthermore, some differences were observed in bacterial phyla and genera based on harvesting intensity, soil temperature, season and WT. We found some potential indications of the effects of drought, but due to the low quality of fungal samples in summer and autumn we could not draw further conclusions about whether the fungal abundance was affected by the drought. Additionally, there were no indications that the drought affected the bacterial relative abundance. Since our study was carried out only over one growing season, we suggest that similar longer‐term studies in varying weather conditions should be conducted. As the seasonal droughts are predicted to increase in boreal areas due to climate change‐promoted increase in temperature and evapotranspiration, as well as due to more irregular precipitation patterns (Diffenbaugh & Field, ; Donat et al., ; Gauthier et al., ; Ge et al., ; Reyer et al., ), further examination of the effects of prolonged drought periods on the microbial communities is essential. Oona Hillgén: Investigation; conceptualization; writing – review and editing; visualization; formal analysis; data curation; writing – original draft. Marjo Palviainen: Writing – review and editing; writing – original draft; validation; supervision. Annamari Laurén: Writing – review and editing; writing – original draft; validation; supervision. Mari Könönen: Investigation; conceptualization; writing – review and editing; formal analysis; validation. Anne Ojala: Conceptualization; writing – review and editing; funding acquisition; methodology; validation; supervision. Jukka Pumpanen: Conceptualization; writing – review and editing; funding acquisition; project administration; methodology; validation; supervision. Elina Peltomaa: Investigation; conceptualization; funding acquisition; writing – original draft; writing – review and editing; visualization; project administration; formal analysis; methodology; validation; supervision; data curation. The authors declare no conflict of interest. Figure S1. Daily precipitation (blue; left axis) and air temperature (red; right axis) during the growing season of 2021. The sampling times for soil microbial community analysis are marked with black arrows. Figure S2. The Y1‐axis titled ‘Tags Number’ means the number of tags; ‘Total tags’ (red bars) is the number of effective tags; ‘Taxon Tags’ (blue bars) is the number of annotated tags; ‘Unclassified Tags’ (green bars) is the number of unannotated tags; ‘Unique Tags’ (orange bars) is the number of tags with a frequency of 1 and only occurs in one sample. 
The Y2‐axis titled ‘OTUs Numbers’ shows the number of OTUs, which are displayed as ‘OTUs’ (purple bars) to identify the numbers of OTUs in different samples. Panel A is for bacterial (16S) samples and panel B for fungal (ITS2) samples. Table S1. The amplicon was sequenced on an Illumina paired‐end platform to generate 250‐bp paired‐end raw reads (raw PE), which were then merged and pretreated to obtain Clean Tags. The chimeric sequences in the Clean Tags were detected and removed to obtain the Effective Tags, which can be used for subsequent analysis. The summaries obtained in each step of data processing are shown in the table. Table S2. Good's coverage and alpha diversity and richness indices per sample for (A) bacteria and (B) fungi. CC, clear cut; CCF, continuous cover forestry. A, B and C mark the sample replicates.
Intermediate soil acidification induces highest nitrous oxide emissions
Since the denitrification process is modular with varying genetic capacities for the different reductive steps in the denitrification pathway among denitrifying microorganisms, the composition of the denitrifying community will control N 2 O emissions. Of special concern is the proportion of the denitrifying community harboring the nosZ gene coding for the N 2 O reductase that converts N 2 O to N 2 as it is the only known sink for N 2 O in the biosphere (Supplementary Fig. ). There are two phylogenetically distinct clades in the nosZ phylogeny: nosZI and the recently described nosZII , . Not all denitrifiers carry this gene and therefore terminate denitrification with N 2 O, but there are also non-denitrifying N 2 O reducers which often possess nosZII . The ratio of denitrification genes, especially nirK and nirS encoding the known nitrite reductases involved in denitrification, to the nosZ gene abundance is often used as an indication of soil N 2 O emissions , , , but its relationship with soil pH remains largely unexplored. There is a lack of a unifying, conceptual framework of soil pH impacts on denitrifying microorganisms and N 2 O EFs, which critically limits our capacity to predict and mitigate N 2 O emissions. Here, we address this knowledge gap with two comprehensive, global meta-analyses of N 2 O emission fluxes and EFs in 539 fertilization experiments and of the relationships between soil pH, denitrification gene abundance estimates, and N 2 O flux data based on 289 field studies. In addition, three field experiments with acid additions were analyzed to further evaluate the effects of manipulating soil acidity to identify relationships between soil pH and N 2 O EFs and disentangle the linkages among soil pH, community composition, and activities of denitrifying microorganisms, and N 2 O EFs.
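As a simple illustration of the gene-ratio indicator introduced above, the sketch below computes the ( nirK + nirS )/ nosZ ratio from qPCR-derived gene copy numbers. The function name and the copy numbers are hypothetical and are not taken from the datasets analysed here.

```python
def nir_to_nosZ_ratio(nirK, nirS, nosZI):
    """Ratio of nitrite-reductase gene copies (nirK + nirS) to N2O-reductase
    gene copies (nosZ clade I); all values are copies per g dry soil."""
    if nosZI <= 0:
        raise ValueError("nosZI copy number must be positive")
    return (nirK + nirS) / nosZI

# Hypothetical qPCR results (gene copies per g dry soil).
print(nir_to_nosZ_ratio(nirK=2.1e7, nirS=1.4e7, nosZI=8.0e6))  # ~4.4
```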
Global synthesis of N input and soil pH effects on N 2 O emission factors We first investigated how soil N 2 O EFs related to soil pH and the quantity of N input via fertilization by conducting a meta-analysis based on 539 field fertilization experiments, including 5438 observations of N 2 O emission fluxes and 3786 EF records (Fig. ; Supplementary Data ). Data were collected from experiments distributed among croplands, grasslands, and forests across the globe, published between 1980 and 2019. The field sites cover soil pH (herein, all pH values refer to pH (H 2 O) ) ranging from 2.8 to 9.7, with ca. 58% having a pH of 5.5–7.5 (Fig. ; Supplementary Fig. ). The highest N 2 O EFs mainly occurred in weakly to moderately acidic soils (pH of 5.6–6.5), with an average EF of 1.2% (Fig. b, ). While there was a weak but statistically significant linear relationship between pH and N 2 O EFs, this regression only explained 2.0% of the variation in EFs (Supplementary Fig. ; see Supplementary Table for the model selection). Soil N 2 O EFs had a hump-shaped relationship with soil pH, which reached its maximum at pH 5.6 (Fig. ; Supplementary Table ) and explained 4.0% of the variation in N 2 O EFs. However, once N 2 O EFs were averaged across soil pH in increments (0.1 each), the hump-shaped relationship became markedly more apparent, reached its maximum at pH 6.0 and explained 56% of the variation (Fig. ; Supplementary Fig. ; Supplementary Table ). These results suggest that interactions between EF and pH diverge around a pH threshold of 5.6–6.0. By contrast, there was no significant linear relationship between N 2 O EFs and the quantity of N input (Fig. ; Supplementary Table ). Indeed, the averaged EFs gradually increased with N input and reached their highest around 500–600 kg N ha −1 (EF = 1.4%; Fig. ; Supplementary Fig. ). However, the average EFs decreased and remained relatively low in studies with an N input over 600 kg N ha −1 (EF = 1.0%; Fig. ; Supplementary Fig. ). These results are inconsistent with the common belief that high N input or soil N content induces high EFs and reconfirm that N quantity alone cannot sufficiently predict N 2 O EFs , , . Further, the N 2 O EFs were significantly higher in acidic tropical soils (pH = 5.5; EFs = 1.1%) than in neutral subtropical (pH = 6.7; EFs = 0.9%) and temperate (pH = 6.9; EFs = 0.8%) soils (Fig. a, ), despite significantly lower N input in tropical (170 kg N ha −1 ) than subtropical (223 kg N ha −1 ) and temperate (207 kg N ha −1 ) soils (Fig. ). Nevertheless, in tea plantations, all on acidic soils and with high N input (mean = 401 kg N ha −1 ), N 2 O EFs positively correlated with both soil pH (Fig. ) and the quantity of N input (Fig. ), indicating that high acidity reduces N 2 O emissions. Additionally, our regression analysis showed that soil organic carbon (SOC) content was negatively correlated with soil pH (Supplementary Fig. ; R 2 = 0.11; P < 0.001), but SOC itself was not significantly related to N 2 O EFs (Supplementary Fig. ), suggesting that SOC may only indirectly affect N 2 O EFs via soil pH. Moreover, although N 2 O EFs significantly correlated with mean annual precipitation (MAP), total soil nitrogen (TN), and sand and clay contents, these correlations only explained a low percentage (1–3%) of the variation in N 2 O EFs (Supplementary Fig. ). Unlike the hump-shaped relationships observed between soil pH and EFs, our further analyses did not find any significant non-linear relations between N 2 O EFs and MAP, or sand and clay contents (Supplementary Fig. ; Supplementary Table ). There was a hump-shaped relationship between N 2 O EFs and TN, but it only explained 2% of the variation of N 2 O EFs (Supplementary Fig. ; Supplementary Table ). Taken together, these results indicate that, although adequate N levels are required for N 2 O production by either nitrification or denitrification and multiple soil and climatic factors may affect N 2 O emissions, soil pH exerts a dominant, non-linear control over N 2 O EFs.
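The binning-and-averaging approach described above, in which EFs are first averaged within 0.1-unit pH increments and a hump-shaped (quadratic) curve is then fitted to the binned means, can be sketched as follows. The observations are synthetic, plain (unweighted) bin means are used for simplicity instead of the replicate-weighted means of the Methods, and the fit is only meant to illustrate the procedure rather than reproduce the statistical models of this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations: soil pH, control emission E_O, fertilized emission E_N,
# and N application rate (emissions and rates in kg N ha-1).
n_obs = 500
ph = rng.uniform(3.0, 9.5, n_obs)
n_rate = rng.uniform(50, 400, n_obs)
ef_true = np.clip(1.4 - 0.35 * (ph - 6.0) ** 2, 0.05, None)   # hump around pH 6
e_o = rng.uniform(0.2, 1.0, n_obs)
e_n = e_o + n_rate * ef_true / 100 + rng.normal(0, 0.3, n_obs)

# Emission factor (%) per observation, as in Eq. (1) of the Methods.
ef = 100 * (e_n - e_o) / n_rate

# Average EFs within 0.1-unit pH increments.
bins = np.round(np.arange(3.0, 9.6, 0.1), 1)
idx = np.digitize(ph, bins)
bin_ph, bin_ef = [], []
for i in range(1, len(bins)):
    mask = idx == i
    if mask.any():
        bin_ph.append(bins[i - 1] + 0.05)   # bin midpoint
        bin_ef.append(ef[mask].mean())
bin_ph, bin_ef = np.array(bin_ph), np.array(bin_ef)

# Quadratic ("hump-shaped") fit on the binned means; the vertex is the pH of maximum EF.
a, b, c = np.polyfit(bin_ph, bin_ef, 2)
print(f"fitted EF maximum at pH = {-b / (2 * a):.2f}")
```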
Soil acidification effects on soil N-cycling microorganisms and N 2 O To disentangle the potential microbial mechanisms governing effects of soil pH per se on N 2 O EFs, we conducted three field experiments in unfertilized grasslands in which acidity was manipulated (Supplementary Fig. ). Since none of the experimental sites had received any significant reactive N input (neither N deposition nor N fertilizers) – , the selection pressure of human-derived N on soil N-cycling microorganisms was negligible. We examined how changes in soil pH (i.e., soil acidification) influenced soil available N, abundance of nitrifier and denitrifier functional groups, and soil N 2 O emission potential. These experiments were located in three grassland sites with different initial soil pH: a Tibetan alpine meadow (pH = 6.0) near Maqu County, Gansu Province, and a Mongolian steppe (pH = 7.3) in the Xilin River Basin of Inner Mongolia, North China, and a Yellow Loess semi-arid grassland (pH = 8.0) near Guyuan, Ningxia in West China (Supplementary Fig. ). Each site had a no-acid control (A0) and four levels of acid additions (A1, A2, A3 and A4). Acid addition consistently reduced soil pH, effectively generating a pH gradient at each site: from 6.0 to 4.7 in the Tibetan alpine soil (Supplementary Fig. ), from 7.3 to 4.7 in the Mongolian steppe soil (Supplementary Fig. ), and from 8.0 to 7.0 in the Loess soil (Supplementary Fig. ). Soil NH 4 + -N (Supplementary Fig. ) decreased, but NO 3 − -N (Supplementary Fig. ) increased with increasing soil pH. The abundances of AOA and AOB also increased with increasing soil pH (Supplementary Fig. ) across the three sites, indicating that soil acidification inhibited AOA and AOB, and nitrification. Similar to AOA and AOB, abundances of nirK -, nirS - and nosZI -type denitrifiers generally increased with soil pH at all three sites, although they were lower in the sandy, low-C Mongolian soil than other two sites (Supplementary Figs. and ). The nosZI -denitrifiers were relatively less sensitive to low soil pH than those with nirS or nirK , but were more abundant under high soil pH, particularly in the alkaline Loess soil (Supplementary Fig. ). Soil pH significantly impacted N 2 O emissions, which were highest in weakly to moderately acidic soils (pH = 5.6–6.3; Fig. ). Across the pH gradients at the three sites, we observed hump-shaped relationships between soil pH and the ( nirK + nirS )/ nosZI ratio, and N 2 O emissions, which both peaked at pH = 6.0 (Fig. ). We further quantified the potential denitrification activity in the grassland soils under non-limited N- or C-conditions. Incubations with and without addition of acetylene to block the conversion of N 2 O to N 2 by N 2 O reductase allowed us to assess the potential N 2 O emission and the direct effect of soil pH on N 2 O reduction. Acid additions in the field experiments reduced the denitrification potential in acidic soils but increased it in alkaline soils, leading to the highest denitrification rates in neutral soils (Fig. ; Supplementary Fig. ). As expected, the N 2 O/(N 2 O + N 2 ) product ratio of denitrification decreased as soil pH increased (Fig. ; Supplementary Fig. ) , . Similar to the relationship between soil pH and the denitrifier community composition, and N 2 O emissions (Fig. d, ), we observed a hump-shaped relationship between soil pH and potential denitrification (Fig. ). However, the pH optimum for potential denitrification (pH = 6.7; Fig. 
) was higher than that detected for N 2 O emissions (pH = 6.0; Fig. ). As denitrification rates are often higher under neutral to weak alkaline conditions , this difference suggests that decreased pH may have contributed to relatively higher net N 2 O emissions by weakening the N 2 O sink strength. Collectively, results from the three field experiments provide direct evidence that soil pH modulates the strength of the soil as a N 2 O source or sink, mainly because weak to moderate soil acidity promoted N 2 O emissions through favoring N 2 O-producing over N 2 O-consuming denitrifiers, as well as suppressing reduction of N 2 O to N 2 . Global relationship between soil pH and denitrifying microorganisms To further examine the generality of the relationship between soil pH and the relative composition of the denitrifying microorganisms identified in our acidity manipulation experiments, we conducted a second global meta-analysis to examine the relationship between soil pH and the abundance of denitrification genes in 289 field studies (Fig. ). Our dataset covers 3899 gene abundance estimates paired with N 2 O flux data in croplands (796 for nirK , 754 for nirS , 784 for nosZI ), grasslands (317 for nirK , 330 for nirS , 309 for nosZI ), and forests (234 for nirK , 181 for nirS , 194 for nosZI ) (Fig. ; see Supplementary Data for detail). Since we only found nine studies with data on nosZ clade II combined with N 2 O emission data from field experiments, only nosZ clade I was considered in the following analyses. A positive relationship between soil N 2 O emissions and the ( nirK + nirS )/ nosZI ratio across the 289 studies was observed (Supplementary Fig. ), underscoring the importance of the relationship between microbial sources and sinks for net N 2 O emissions. The meta-analysis largely supported our manipulation experiments by showing a hump-shaped (unimodal) relationship between soil pH and the abundances of nirK - and nirS -type denitrifiers, which reached their maximum at pH = 6.0–6.3 (Fig. b, ) and pH = 6.3–6.8 (Fig. d, ), respectively. However, soil pH was not significantly correlated with either the coarse (Fig. ) or averaged (Fig. ) abundance of nosZI . Consequently, the ( nirK + nirS )/ nosZI ratio also showed a hump-shaped relationship with soil pH, reaching its maximum at pH of 6.0–6.1 (Fig. h, ). These results illustrate that weak to moderately acidic soils generally favor N 2 O-producing over N 2 O-consuming denitrifiers and induce high N 2 O emissions across the global scale. A new conceptual framework of soil pH effects on N 2 O EFs and emissions Based on the results from the two global meta-analyses and our pH manipulation experiment, we propose that differential effects of soil pH on the denitrification product ratio (i.e., N 2 O/(N 2 O + N 2 )) and overall denitrification potential jointly control the non-linear responses of EFs to N fertilization (Fig. ). Thus, the net N 2 O emission from denitrification depends on both (i) the N 2 O/(N 2 O + N 2 ) product ratio of denitrification and (ii) the overall rate of denitrification , , and quantitatively, net N 2 O emission equals the product of these two parameters. However, both parameters vary distinctly in relation to soil pH (Figs. and ). In highly acidic soils (pH <5.5), the conversion of N 2 O to N 2 is typically restrained by inhibiting the activity or, as previously hypothesized, the assembly of the N 2 O reductase , , resulting in high N 2 O/(N 2 O + N 2 ) product ratio of denitrification , . 
However, low pH often suppresses growth and activity of both nitrifiers and denitrifiers , , , , thereby limiting the magnitude of N 2 O production and leading to low N 2 O EFs and N 2 O emission despite a high N 2 O/(N 2 O + N 2 ) product ratio of denitrification (Fig. ). Neutral (pH = 6.6–7.3) and slightly alkaline soils (pH = 7.4–7.8) are optimal for nitrification and denitrification , , but the activity of the N 2 O reductase is also at its maximum in this pH range, promoting reduction of N 2 O into N 2 , . By contrast, in moderately to weakly acidic soils (pH = 5.6–6.5), both nitrification and denitrification occur at intermediate levels , , and a high ( nirK + nirS )/ nosZI ratio allows high N 2 O production but low N 2 O consumption, leading to high N 2 O EFs (Fig. ). Overall, these differential effects of soil pH on N 2 O-producing and consuming microorganisms, and on N 2 O reduction result in the highest N 2 O EFs and emissions in moderately acidic soils. Our findings that soil pH controls non-linear responses of N 2 O emissions to N input challenge the prevailing understanding of what regulates N 2 O EFs. First, soil acidity as the primary determinant of EFs presents a new mechanistic understanding of the recent acceleration of global N 2 O emissions . Emerging evidence has recently shown that this acceleration was primarily related to high N 2 O EFs in China and Brazil , , although the underlying mechanisms or causes remained largely unresolved. Our results suggest that high N fertilization rates and its associated soil acidification, especially in China , may have jointly contributed to the increased N 2 O EFs . The high EF in Brazil remains unexplained because average N application rates there are significantly lower than the global average , . However, one unique, but overlooked, factor is that croplands in Brazil are strongly acidic , and liming is frequently applied to raise soil pH to ca. 6.0 for optimal crop growth , which might, as our results suggest, have induced high N 2 O EFs. Second, our findings showing the highest EFs in moderately acidic soils (pH = 5.6–6.0) indicate that the current calculations using the default IPCC EF 1% at pH 6.76 critically underestimate current soil N 2 O emissions. In general, soil acidification has occurred in a large proportion of agricultural soils in China, US, and Europe because of long-term N fertilization , , . However, the degree of acidification varies locally, which can have different effects on soil N 2 O emissions. According to our results, N fertilization will induce increased acidification and N 2 O EFs in soils with weak acidity (pH = 6.0–6.7). Moreover, in several Chinese regions, a considerable proportion of agricultural soils are already highly acidic (4.5 < pH < 5.5), where low pH may indeed inhibit N 2 O emissions (Fig. ). However, the high acidity is suppressive to the growth of crop plants, and farmers therefore often increase soil pH through liming, which may increase N 2 O emissions . For neutral or alkaline soils (pH > 6.7), particularly those soils with high buffering capacity, N 2 O emissions are likely less affected because N fertilization may not significantly reduce soil pH over the short term. This is relevant in light of the expected increase in the world population, especially in tropical and subtropical countries where the major population increase will occur, but current N application rates are low , . 
Soils in these regions are typically characterized by low soil fertility and they are moderately to strongly acidic . Increasing plant-available soil N in these regions will therefore be required to ensure crop productivity and economic profits but will inevitably increase N 2 O EFs and N 2 O emissions. To conclude, our results indicate that soils with high N 2 O EFs (Figs. b and ) significantly overlap in their pH range with pH optima for most crops (pH = 5.5–6.5) . This overlap presents a daunting challenge for N 2 O mitigation through manipulating soil pH, highlighting the need for alternative approaches to reduce N 2 O emissions. Liming is a common practice in agriculture to reduce toxicity of soil acidity on crop plants . As low soil pH induces high N 2 O emission product ratio (N 2 O:N 2 ) of denitrification , , raising soil pH to ca. 6.5 has been proposed as a management tool to reduce N 2 O emissions – . However, liming is often economically costly, and farmers tend to only raise soil pH to 5.5–6.0 , , which may, based on our results (Figs. and ), enhance N 2 O emissions. Liming also increases soil CO 2 emission , , offsetting its impact on N 2 O emissions. Our results highlight the urgency to identify alternative approaches that are practically feasible and conducive to lowering N 2 O emissions and suggest that manipulation of the community composition and activities of N 2 O-producing and N 2 O-consuming microbes may provide a promising approach for N 2 O mitigation. Several unique microbial guilds that dominantly control the N 2 O sink strength have recently been identified, which may be targeted to reduce the denitrification product ratio . For example, some N 2 O reductase-carrying bacteria have adapted to highly acidic soils with pH as low as 3.7 and it may be possible to introduce these bacteria into soil to mitigate N 2 O emissions in highly acidic soils. However, whether those N 2 O reductase-carrying bacteria can be introduced into slightly acidic soils to effectively mitigate N 2 O emissions warrants further assessment. In addition, manipulation of N 2 O-reducing microorganisms might be achieved through crop breeding or cover crop selection because some plants produce root exudates and/or plant metabolites inhibiting nitrifying , and denitrifying microorganisms. Further, reducing access of nitrifiers to ammonium through manipulating N sources (e.g., slow-releasing fertilizers) , supporting nitrate ammonifiers reducing nitrate to ammonium , , and enhancing plant N uptake, and/or inhibiting nitrifiers (e.g., nitrification inhibitors) can decrease N 2 O emissions from both nitrification and denitrification . Overall, our study provides compelling evidence illustrating that there is a hump-shape relationship between soil pH and N 2 O EF, leading to highest N 2 O emissions under moderate soil acidity. These findings suggest that raising pH through liming has limited capacity for N 2 O mitigation due to multiple biological and economic constraints, and that direct manipulation of N 2 O-producing and N 2 O-consuming microbes may provide novel approaches for N 2 O mitigation under future reactive N input scenarios.
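As a purely illustrative rendering of the framework summarized above, the toy calculation below combines a total denitrification rate that increases towards neutral pH with an N 2 O/(N 2 O + N 2 ) product ratio that declines as pH rises; their product, the relative net N 2 O emission, then peaks at intermediate acidity. The functional forms and parameter values are hypothetical and are not fitted to the data of this study.

```python
import numpy as np

ph = np.linspace(4.0, 8.0, 81)

# Hypothetical response curves (not derived from the study's data):
# the total denitrification rate rises towards neutral/slightly alkaline pH ...
denit_rate = 1.0 / (1.0 + np.exp(-2.0 * (ph - 6.8)))      # relative rate, 0-1
# ... while the N2O/(N2O + N2) product ratio declines as pH increases.
product_ratio = 1.0 / (1.0 + np.exp(2.5 * (ph - 5.8)))    # fraction emitted as N2O

# Net N2O emission is the product of the two terms.
net_n2o = denit_rate * product_ratio
print(f"toy model: relative net N2O peaks at pH {ph[np.argmax(net_n2o)]:.2f}")
```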
Meta-analysis 1 of global synthesis of N input and soil pH effects on N 2 O emission factors (N 2 O EFs) The data collection and analysis followed the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines (see Supplementary Fig. for further information). We conducted an extensive search for studies of N fertilization and soil N 2 O emissions published between 1980 and 2019 through the Web of Science, Google Scholar, and the China Knowledge Resource Integrated Database ( http://www.cnki.net/ ). The following keywords were used: (i) “nitrogen addition” OR “nitrogen deposition” OR “nitrogen amendment” OR “nitrogen fertilization”; (ii) “soil” OR “terrestrial”; and (iii) “N 2 O” OR “nitrous oxide”. We also extracted data and re-evaluated all studies from the databases published by Stehfest and Bouwman , Liu and Greaver , Shcherbak et al. , Liu et al. , Wang et al. , Charles et al. , Deng et al. , Maaz et al. , Cui et al. , and Hergoualc’h et al. . In order to avoid selection bias, we extracted peer-reviewed publications according to the following criteria: (a) only field studies in which the control and N fertilization treatment sites were located under the same climate, vegetation and soil conditions were included; (b) only chamber-based field experiments conducted in croplands, forests and grasslands were included; (c) studies using nitrification inhibitors were excluded. This yielded a dataset of 5438 observations of N 2 O emission fluxes from 539 field studies that spanned 42 countries and 570 sites (Fig. ; please see Supplementary Data ). Experiments were grouped into three regions based on absolute latitude: tropical (23.4 °S–23.4 °N), subtropical (23.4–35.0 °S or °N), and temperate (>35.0 °S or °N). For each study, soil properties (i.e., pH, clay, silt and sand content, organic carbon, and total nitrogen) and climate (i.e., mean annual precipitation (MAP) and temperature (MAT)) were directly obtained either from texts and/or tables or extracted from figures using the GetData Graph Digitizer software (ver. 2.22, http://www.getdata-graph-digitizer.com ). Nitrogen fertilization rates and soil N 2 O emissions obtained from the literature were both converted into kg N ha − 1 . Fertilizer-induced N 2 O emission was then calculated as the difference in soil N 2 O emission between the fertilization treatment (E N ) and the unfertilized control (E O ). The emission factor (EF) of each fertilization treatment was then calculated as the percentage of the fertilizer-induced N 2 O emission relative to the N application rate (see Eq. 1). This yielded a dataset of 3786 N 2 O EF values (please see Supplementary Data ).
$$EF\,(\%)=100\times \frac{{E}_{N}-{E}_{O}}{N}\qquad (1)$$ where E N and E O are as defined above and N is the N application rate (kg N ha − 1 ). To determine the impact of soil pH on N 2 O EF, pH was divided into 58 groups in increments of 0.1 unit (pH: 2.8–9.7). Soil pH was measured in water in most studies, but it was measured in CaCl 2 or KCl solution in a small number of experiments. We converted soil pH values measured in CaCl 2 or KCl into water-based soil pH values, following the methods described by Henderson and Bui and Kabala et al. , respectively. A few studies did not specifically state the reagent used, and we assumed that water was used there. Notably, soil acidity or alkalinity was classified as: ultra-acidic (pH < 3.5), extremely acidic (pH = 3.5–4.4), very strongly acidic (pH = 4.5–5.0), strongly acidic (pH = 5.1–5.5), moderately acidic (pH = 5.6–6.0), slightly acidic (pH = 6.1–6.5), neutral (pH = 6.6–7.3), slightly alkaline (pH = 7.4–7.8), moderately alkaline (pH = 7.9–8.4), and strongly alkaline (pH = 8.5–9.0), following the Soil Science Division Staff (2017) . One major issue with the method using the coarse EFs is that pH increments with more data points are given higher weight than pH increments with fewer data points. Consequently, the statistical analysis is highly skewed towards the pH increments with a large number of field experiments and measurements. However, this does not provide a fair assessment of the pH effect on N 2 O EFs. Therefore, we adopted an averaging method, in which all N 2 O EFs at each pH increment were averaged to obtain the mean EF and all pH increments were then given equal weight. We followed the method used by Linquist et al. and Feng et al. to evaluate the mean EF for the different pH groups (Eqs. 2 and 3).
$$M=\frac{\sum ({Y}_{i}\times {W}_{i})}{\sum ({W}_{i})}\qquad (2)$$
$${W}_{i}=\frac{n}{o}\qquad (3)$$ We used Eq. (2) to calculate the weighted mean values for each pH unit group. In Eq. (2), M is the mean value of EF and Y i is the observed EF in the ith pH unit group. W i is the weight for the observations from the ith pH unit group and was calculated with Eq. (3), in which n is the number of replicates in each field experiment for each study, and o is the total number of observations from the ith pH unit group. At a given pH increment, this weighting approach assigned more weight to well-replicated field measurements, which report more precise EF estimates , . Field experiments of soil pH manipulations and their effects on denitrifiers and their activities Reactive N input affects N-cycling microbes and N 2 O emissions directly by increasing N availability for nitrification and denitrification and indirectly by inducing soil acidification. In order to determine the direct impact of soil acidification, we manipulated soil pH by adding diluted acids to create a pH gradient in three grassland experiments on the Tibetan Plateau, the Inner Mongolian Plateau, and the Yellow Loess Plateau in China. We chose grasslands for three reasons. First, we wanted to assess the effect of soil pH without confounding effects of N fertilization. Unlike most Chinese croplands, which have received high amounts of N fertilization , these grasslands are located in remote areas with low ambient N deposition and no N fertilization, assuring minimal impact of human-derived N on soil N-cycling microbes – . Second, since none of the experimental grasslands had received any significant reactive N input (N deposition or N fertilizers), the selection pressure of human-derived N on soil N-cycling microorganisms was negligible. Third, we wanted to have field experiments on acidic, neutral, and alkaline soils that also contain adequate amounts of available soil N. Available soil N (particularly NO 3 − ) in other unfertilized soils, such as forest soils, is very low and likely constrains N-cycling microbes . Moreover, grasslands potentially contribute 20% of the total N 2 O flux to the atmosphere at the global scale , . A considerable proportion of global grasslands are under moderate to intensive management, and it is expected that more grasslands will be fertilized, likely increasing N 2 O emissions . The three acid addition experiments were established in three grasslands with distinct climatic and soil conditions (Supplementary Table ; see Supplementary Data ). The first experiment was set up in an alpine meadow at the Gansu Gannan Grassland Ecosystem National Observation and Research Station (33°59′N, 102°00′E, ca. 3538 m a.s.l.) in Maqu county, Gannan Prefecture, Gansu Province, China. Over the last forty years, the MAP and MAT at this site were 620 mm and 1.2 °C, respectively. The soil was categorized as a Cambisol (FAO taxonomy) and was moderately acidic, with a pH value of ca. 6.0 and moderate pH buffering capacity . The second experiment took advantage of an existing study on a steppe ecosystem at the Inner Mongolia Grassland Ecosystem Research Station of the Chinese Academy of Sciences (43°38′N, 116°42′E, 1250 m a.s.l.) near Xilin city, Inner Mongolia, China. The MAT at this site was 0.3 °C, with the lowest monthly mean in January (−21.6 °C) and the highest in July (19.0 °C). It has had a MAP of 346.1 mm, with the majority (ca. 80%) occurring in summer (June to August). It had a dark chestnut soil (Calcic Chernozem according to ISSS Working Group RB, 1998) with a nearly neutral pH value (ca. 7.3) and with high sand content and low pH buffering capacity .
The third experiment was in a semi-arid grassland at the Yunwu Mountains Natural Preserve (36°10′−36°17′N, 106°21′−106°27′E, 1800–2100 m a.s.l.) on the Loess Plateau, Guyuan, Ningxia, Northwest China. This site has a typical semiarid climate, and the mean annual rainfall was about 425 mm, with about two-thirds (60–75%) falling in July–September. Over the last three decades, this site had a MAT of 7.0 °C (the lowest in January at −14 °C and the highest in July at 22.8 °C). The soil was a montane gray-cinnamon type, classified as a Calci-Orthic Aridisol or a Haplic Calcisol in the Chinese and FAO classifications, and alkaline with a pH of 8.0 and high pH buffering capacity . At each site, a single-factor acid (sulfuric acid) addition experiment was designed. To minimize any potential direct acid damage to living plants and soil organisms, the specific dose of concentrated sulfuric acid (98%) needed for each plot was first diluted into 60 L of tap water and then sprayed onto each plot. Equal amounts of water only were added to the no-acid controls (A0). At the Gannan alpine site, the acid addition experiment was established in 2016 with five levels of acid addition : 0 (the control, A0), 1.32 (A1), 5.29 (A2), 9.25 (A3), and 14.53 (A4) mol H + m − 2 yr − 1 . Twenty plots (2 m × 2 m each) were then arranged in a randomized block design including four replicate blocks separated by 1 m buffer zones. Diluted sulfuric acid solution was applied twice each year (half of the designed dosage each time) in early June and late September of 2016, late April and late September of 2017, and late April 2018. At the Inner Mongolia steppe site, the acid experiment was initiated in 2009 with seven levels of acid addition : 0, 2.76, 5.52, 8.28, 11.04, 13.80, and 16.56 mol H + m − 2 yr − 1 . The experiment was randomly positioned in a block design with 5 replicate blocks, leading to a total of 35 field plots (2 m × 2 m each). Diluted acid solution at the designed concentration was added to each plot in early September 2009, early June 2010, and early September 2010. Soil pH in all treatments had stabilized, and no additional acid has been added since 2010 . For this study, we randomly chose four replicate field plots of five treatments, 0 (the control, A0), 2.76 (A1), 5.52 (A2), 11.04 (A3), and 16.56 (A4) mol H + m − 2 yr − 1 , to investigate the impact of soil acidification on soil nitrifiers, denitrifiers and denitrification. The acid experiment at the Guyuan site was established in 2016 with 30 plots (2 m × 2 m each) using a randomized block design . It had five levels of acid addition with six replicate blocks separated by 1 m walkways. The five levels of acid addition were: 0 (the control, A0), 0.44 (A1), 1.10 (A2), 7.04 (A3), and 17.61 (A4) mol H + m − 2 yr − 1 , respectively. Diluted acid solution was applied twice each year (half each time) in early June and late September of 2016, late April and late September of 2017, and early May 2018. In mid-August 2018, when plant biomass peaked, three soil cores (5.0 cm dia.) were collected at 0–10 cm depth from each plot at both the Gannan and Guyuan sites, and then mixed to form a composite sample per plot. For the Inner Mongolia site, soil samples were collected in the same way in early September 2020. Composited soil samples collected in the field were placed on ice in coolers and sent by express mail to the laboratory in Nanjing, China.
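For orientation, the sketch below converts an annual proton addition rate (mol H + m − 2 yr − 1 ) into the approximate mass and volume of concentrated (98%) sulfuric acid required per 2 m × 2 m plot before dilution into 60 L of water. It is a stoichiometric illustration based on standard constants (molar mass 98.08 g mol − 1 , density ca. 1.84 g mL − 1 ), not the authors' dosing protocol, and the helper function is hypothetical.

```python
# Stoichiometric sketch of the acid dose per plot (hypothetical helper).
MOLAR_MASS_H2SO4 = 98.08   # g mol-1
DENSITY_CONC_ACID = 1.84   # g mL-1 for ~98% H2SO4
PURITY = 0.98              # mass fraction of H2SO4 in the concentrated acid

def acid_per_plot(h_plus_rate, plot_area_m2=4.0):
    """Mass (g) and volume (mL) of 98% H2SO4 delivering `h_plus_rate`
    mol H+ m-2 yr-1 to a plot of `plot_area_m2`; each H2SO4 supplies 2 H+."""
    mol_h_plus = h_plus_rate * plot_area_m2
    mol_h2so4 = mol_h_plus / 2.0
    mass_pure = mol_h2so4 * MOLAR_MASS_H2SO4
    mass_conc = mass_pure / PURITY
    volume_conc = mass_conc / DENSITY_CONC_ACID
    return mass_conc, volume_conc

for rate in (1.32, 5.29, 9.25, 14.53):   # A1-A4 levels at the alpine site
    mass_g, vol_ml = acid_per_plot(rate)
    print(f"{rate:5.2f} mol H+ m-2 yr-1 -> {mass_g:7.1f} g (~{vol_ml:6.1f} mL) 98% H2SO4 per plot per year")
```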
All soil samples were first sieved through a 2 mm mesh to remove rocks and dead plant material. A small subsample (ca. 50 g) of each field soil sample was immediately stored at −20 °C for molecular analyses, and the remainder was kept at 4 °C for later chemical and microbial analyses, which were all initiated within 2 weeks. Soil pH in a soil-to-water (1:5, w/w) slurry was measured on an Ultramete-2 pH meter (Myron L. Company, Carlsbad, CA, USA). Inorganic NH 4 + -N and NO 3 − -N were extracted with 0.5 M K 2 SO 4 , and their concentrations in the extracts were quantified on a continuous flow injection auto-analyzer (Skalar SAN Plus, Skalar Inc., The Netherlands) . For each soil sample, 0.3 g (dry soil equivalent) of frozen soil was used to extract total genomic DNA with PowerSoil DNA kits (MoBio Laboratories, Carlsbad, CA, USA). The DNA quantity and quality were determined with a Nanodrop spectrophotometer (Thermo Scientific, Wilmington, DE, USA). The copy numbers of the AOA- amoA , AOB- amoA , nirK , nirS , and nosZI genes were determined using a real-time quantitative PCR system (Applied Biosystems, Foster City, CA, USA). The primer sets crenamoA 23F/ crenamoA 616r (ATGGTCTGGCTWAGACG/GCCATCCATCTGTATGTCCA) , amoA -1F/ amoA -2R (GGGGTTTCTACTGGTGGT/CCCCTCGGAAAGCCTTCTTC) , nirK 876/ nirK 1040 (ATYGGCGGVAYGGCGA/GCCTCGATCAGRTTRTGGTT) , nirS Cd3aF/ nirS R3cd (AACGYSAAGGARACSGG/GASTTCGGRTGSGTCTTSAYGAA) , and norZ 1f/ norZ 1R (WCSYTGTTCMTCGAGCCAG/ATGTCGATCARCTGVKCRTTYTC) were used for the amplification of the AOA- amoA , AOB- amoA , nirK , nirS , and nosZI genes, respectively. Each qPCR reaction (20 µL volume) was performed with 10 µL SYBR Premix Ex Taq™ (Takara, Dalian, China), 1 µL template DNA corresponding to 8–12 ng, 0.5 µL of each primer, 0.5 µL bovine serum albumin (BSA, 5 mg mL − 1 ) and 7.5 µL distilled deionized H 2 O (ddH 2 O). The standard curve for determining the gene copy number was developed using standard plasmids at different dilutions as the template. The standard plasmids were generated from positive clones of the 5 target genes, which were derived from the amplification of the soil sample . The amplification efficiency of the qPCR assays ranged from 90 to 100%, with R 2 > 0.99 for the standard curves. We checked for potential qPCR reaction inhibition via the amplification of a known amount of the pGEM-T plasmid (Promega) with T7 and SP6 primers, added to the DNA extracts or to water. No inhibition of the amplification reactions was detected in the samples. We did not directly monitor soil N 2 O fluxes in the field, mainly because the field sites were remote. Instead, microcosm incubation experiments were conducted to determine potential soil N 2 O emissions. For each soil sample, field soil (20.0 g dry mass equivalent) was placed into a 125-mL dark bottle, and deionized water was added to adjust soil moisture to ca. 70% water-filled pore space (WFPS), creating a moisture condition conducive to denitrifiers and denitrification , . The high soil moisture content favored anaerobic processes, since O 2 diffusion into the soil was restricted, so effects of oxygen should be negligible. All bottles were loosely covered with fitting lids and incubated in a dark incubator at 20 °C. It is worth mentioning that both nitrification and denitrification produce N 2 O, but optimum N 2 O emissions from denitrification often occur at 70–80% WFPS , . Also, our results showed that soil pH had a linear relationship with soil nitrifiers (Supplementary Fig. ) and the high soil moisture suppressed nitrification.
We did not directly monitor soil N2O fluxes in the field, mainly because the field sites were remote. Instead, microcosm incubation experiments were conducted to determine potential soil N2O emissions. For each soil sample, field soil (20.0 g dry mass equivalent) was placed into a 125-mL dark bottle, and deionized water was added to adjust soil moisture to ca. 70% water-filled pore space (WFPS), creating a moisture condition conducive for denitrifiers and denitrification , . The high soil moisture content favored anaerobic processes, since O2 diffusion into the soil was restricted and the effects of oxygen should be negligible. All bottles were loosely covered with fitting lids and incubated in a dark incubator at 20 °C. It is worth mentioning that both nitrification and denitrification produce N2O, but optimum N2O emissions from denitrification often occur at 70–80% WFPS , . Also, our results showed that soil pH had a linear relationship with soil nitrifiers (Supplementary Fig. ) and that the high soil moisture suppressed nitrification. Thus, the design of the incubation experiments targeted N2O from anaerobic processes like denitrification, and N2O emissions from nitrification or other aerobic processes were not considered . To determine the N2O emissions, gas samples were taken from the headspaces of the incubation bottles as described by Zhang et al. . More specifically, all incubation bottles were flushed with fresh air (2 min each) prior to gas sampling, then immediately sealed and incubated for 6 h in the dark. A gas sample of 15 mL was taken from the headspace of each incubation bottle and immediately transferred into a vial for gas chromatograph (GC) measurement. After gas sampling, all incubation bottles were loosely covered until the next gas sampling to ensure minimum water loss. Gas sampling was conducted 5 times, at 12, 24, 48, 72, and 96 h after the start of the incubation. N2O concentrations in the sampling vials were determined within 24 h of collection on a GC equipped with an electron capture detector (ECD) (GC-7890B, Agilent, Santa Clara, CA, USA). The N2O fluxes were calculated using Eq. (4):

$$F = \frac{\rho \times V \times \Delta C \times 273}{(273 + T) \times W} \qquad (4)$$

where F is the soil N2O flux rate (µg N kg−1 soil h−1), ρ is the standard-state gas density (kg m−3), V is the bottle volume (L), ΔC is the difference in N2O concentration (ppm) between two samples (0 and 6 h), T is the incubation temperature (20 °C), and W is the dry weight of soil (kg).
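A direct implementation of Eq. (4) is sketched below; the headspace concentration change, bottle volume and soil mass are placeholder values for illustration, not measurements from the study, and the conversion of the 6-h concentration difference to an hourly rate is an assumption on our part.

```python
def n2o_flux_ug_n_per_kg_h(delta_c_ppm_per_h, temp_c=20.0, volume_l=0.125,
                           soil_dry_kg=0.020, rho_n_kg_m3=1.25):
    """Eq. (4): soil N2O flux in ug N kg-1 dry soil h-1.

    delta_c_ppm_per_h : hourly change in headspace N2O mixing ratio (ppm h-1); the 6-h
                        concentration difference is assumed here to be expressed per hour.
    rho_n_kg_m3       : density of N2O-N at standard state (28 g N per 22.4 L ~= 1.25 kg m-3).
    Because ppm is 1e-6 and rho*V is in grams, the result comes out directly in micrograms.
    """
    return (rho_n_kg_m3 * volume_l * delta_c_ppm_per_h * 273.0) / ((273.0 + temp_c) * soil_dry_kg)

# Illustrative value only: a 0.5 ppm h-1 increase in a 125-mL bottle containing 20 g dry soil
print(round(n2o_flux_ug_n_per_kg_h(0.5), 2), "ug N kg-1 h-1")
```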
We further determined soil potential denitrification activities (PDA) using the modified acetylene (C2H2) inhibition technique , . For each field soil sample, two sub-samples (each 5.0 g dry soil equivalent) were placed into two 100-mL sterile serum bottles. Then, 8 mL of an N- and C-containing solution (KNO3 at 50 mg NO3−-N g−1 dry soil, and glucose and glutamic acid, each at 0.5 mg C g−1 dry soil) was added to create a soil slurry conducive for denitrification. To measure the PDA, 10% C2H2 was injected into one bottle to inhibit N2O reductase activity so that the N2O produced was not reduced to N2. In the other bottle, no C2H2 was added, so that all enzymes of denitrification remained active and the N2O detected was the net difference between the production and consumption of N2O . All serum bottles were incubated in the dark at 25 °C with agitation at 180 rpm. Gas samples (10 mL) were taken from the headspace at 2, 4 and 6 h after the beginning of the incubation for determination of N2O concentrations on a GC (GC-7890B, Agilent, Santa Clara, CA, USA).
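The paired +C2H2/−C2H2 bottles allow an estimate of total denitrification and of the fraction of it escaping as N2O. A minimal sketch of that calculation is given below, with made-up accumulation values rather than measured data.

```python
import numpy as np

# Hypothetical headspace N2O accumulation (ug N per bottle) at 2, 4 and 6 h
hours          = np.array([2.0, 4.0, 6.0])
n2o_with_c2h2  = np.array([1.1, 2.3, 3.4])    # +C2H2: N2O reductase blocked -> proxy for N2O + N2
n2o_without    = np.array([0.6, 1.2, 1.9])    # -C2H2: net N2O only

pda   = np.polyfit(hours, n2o_with_c2h2, 1)[0]   # potential denitrification rate (ug N bottle-1 h-1)
n2o   = np.polyfit(hours, n2o_without, 1)[0]     # net N2O production rate
ratio = n2o / pda                                # N2O / (N2O + N2) product ratio

print(f"PDA = {pda:.2f} ug N h-1, N2O/(N2O+N2) = {ratio:.2f}")
```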
Meta-analysis 2 of relationships between soil pH and N2O-producing or N2O-consuming denitrifying microorganisms

Similar to Meta-analysis 1, the data collection and analysis were carried out according to the PRISMA guidelines (Supplementary Fig. ). We conducted an extensive search in Web of Science and Google Scholar for studies in which nirK, nirS and nosZ (clade I and II) had been quantified, using two sets of search terms: (1) nirK, nirS or nosZ gene, and (2) soil or terrestrial. In total, the search resulted in ca. 1539 article hits in December 2021. All articles were carefully read through to select those based on field studies, whereas those based on microcosm studies were excluded. There were 286 published papers that met our criteria. We also included the data from the three field acid addition experiments described above. Special attention was directed towards checking whether nosZ clade I, nosZ clade II or both were quantified. Only 26 published studies quantified nosZII and, among these, only nine also reported soil N2O emissions in the field (see Supplementary Data for detail). Therefore, the gene nosZ in the dataset of this study refers only to nosZ clade I. Thus, the final dataset contained data from 501 sites reported by 289 studies, and included 1347, 1265, and 1287 abundance estimates of the nirK, nirS and nosZI genes, respectively (see Supplementary Data for detail). We extracted data either from tables, texts or from figures using the GetData Graph Digitizer software (ver. 2.22; http://getdata-graph-digitizer.com ). For each article, we extracted the following information for our analysis: the abundance of nirK, nirS, nosZI and nosZII genes (copy numbers per g soil), soil pH, and the depth of the collected soils. Latitude, altitude, MAP and MAT of the experimental sites were also recorded. All information on N2O emissions (N2O emission rates and/or cumulative N2O emissions) was extracted. Because various publications reported N2O emissions in different units, we converted all N2O emission rates into the unit of μg N m−2 h−1. Data were log-transformed to meet statistical test assumptions (where necessary). Because most gene abundance data in the literature were presented as log-transformed numbers, we first transformed them back to real numbers, obtained the average gene abundances for each pH increment, and then log-transformed the averages again. Similar to Meta-analysis 1, we examined the relationships between soil pH and the abundances of denitrifying microorganisms, using both the coarse abundance and the averaged abundance of each functional group of denitrifiers at each pH increment.

Statistical analyses

In Meta-analysis 1, we examined potential linear or quadratic relationships between N2O EFs and soil pH, MAP, MAT, soil sand, silt and clay content, SOC or TN. In Meta-analysis 2, we examined potential linear and quadratic relationships between soil pH and the abundance of nirK-, nirS- and nosZI-type denitrifiers, or the (nirK + nirS)/nosZI ratio. Model goodness of fit was evaluated with the Akaike information criterion (AICc), where a lower AICc value represents a model with a better fit , . In general, differences in AICc higher than 2 indicate that models are substantially different . Information on the AICc index was obtained using the package MuMIn in R . Given the large number of samples included in the meta-analyses, we interpreted the statistical significance of individual predictors using a conservative α of 0.001 following model selection by AICc. In Meta-analysis 1, we used the non-parametric Kruskal–Wallis test together with pairwise Wilcoxon tests to determine the differences in soil pH, N2O EFs, and N fertilization rate among climate zones. For the field experiments, we used linear mixed-effects (LME) models to determine the effects of acid addition on the response variables at each site, treating the acid treatments as fixed effects and block as a random effect. One-way analysis of variance (ANOVA) followed by Duncan's multiple-range tests was used to compare means among acid addition levels across all response variables. We then examined the relationships between soil pH and N2O emissions, PDA, the N2O/(N2O + N2) ratio or the (nirK + nirS)/nosZI ratio across all three field sites, using linear or quadratic regression. We used the Akaike information criterion (AICc) to evaluate the models' goodness of fit . All analyses were conducted in R (version 4.1.1) .
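To illustrate the model-selection step described above (linear versus quadratic fits compared by AICc), a small self-contained sketch is shown below. It uses synthetic data and a hand-rolled AICc for Gaussian-error least squares, rather than the MuMIn-based R workflow used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
ph = rng.uniform(3.5, 8.5, 200)                                  # synthetic soil pH values
ef = 2.0 - 0.25 * (ph - 4.0) + rng.normal(0, 0.3, ph.size)       # synthetic N2O EF (%)

def aicc(y, y_hat, k):
    """Corrected AIC for a Gaussian-error least-squares fit with k regression coefficients
    (the error variance is counted as one additional estimated parameter)."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    p = k + 1
    aic = -2 * log_lik + 2 * p
    return aic + (2 * p * (p + 1)) / (n - p - 1)

linear    = np.polyval(np.polyfit(ph, ef, 1), ph)
quadratic = np.polyval(np.polyfit(ph, ef, 2), ph)

print("AICc linear   :", round(aicc(ef, linear, 2), 1))
print("AICc quadratic:", round(aicc(ef, quadratic, 3), 1))
# A difference larger than ~2 is taken to indicate substantially different model support.
```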
Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.

Meta-analysis 1 of relationships between soil pH and N2O emission factors (N2O EFs)

The data collection and analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see Supplementary Fig. for further information). We conducted an extensive search for studies of N fertilization and soil N2O emissions published between 1980 and 2019 through the Web of Science, Google Scholar, and the China Knowledge Resource Integrated Database ( http://www.cnki.net/ ). The following keywords were used: (i) "nitrogen addition" OR "nitrogen deposition" OR "nitrogen amendment" OR "nitrogen fertilization"; (ii) "soil" OR "terrestrial"; and (iii) "N2O" OR "nitrous oxide". We also extracted data and re-evaluated all studies from the databases published by Stehfest and Bouwman , Liu and Greaver , Shcherbak et al. , Liu et al. , Wang et al. , Charles et al. , Deng et al. , Maaz et al. , Cui et al. , and Hergoualc'h et al. . In order to avoid selection bias, we extracted peer-reviewed publications with the following criteria: (a) only field studies in which the control and N fertilization treatment sites were located under the same climate, vegetation and soil conditions were included; (b) only chamber-based field experiments conducted in croplands, forests and grasslands were included; and (c) studies using nitrification inhibitors were excluded. This yielded a dataset of 5438 observations of N2O emission fluxes from 539 field studies that spanned 42 countries and 570 sites (Fig. ; please see Supplementary Data ). Experiments were grouped into three regions based on absolute latitude: tropical (23.4°S–23.4°N), subtropical (23.4–35.0°S or °N), and temperate (>35.0°S or °N). For each study, soil properties (i.e., pH, clay, silt and sand content, organic carbon, and total nitrogen) and climate (i.e., mean annual precipitation (MAP) and temperature (MAT)) were obtained directly from texts and/or tables, or extracted from figures using the GetData Graph Digitizer software (ver. 2.22, http://www.getdata-graph-digitizer.com ). Nitrogen fertilization rates and soil N2O emissions obtained from the literature were converted into the unit of kg N ha−1. Fertilizer-induced N2O emission was calculated as the difference in soil N2O emission between the fertilization treatment (E_N) and the no-fertilization control (E_O). The emission factor (EF) of each fertilization treatment was then calculated as the percentage of N2O emission relative to the N fertilization rate (Eq. 1). This yielded a dataset of 3786 N2O EF values (please see Supplementary Data ).

$$EF(\%) = 100 \times \frac{E_N - E_O}{N} \qquad (1)$$

To determine the impact of soil pH on N2O EF, pH was divided into 58 groups of 0.1 unit (pH 2.8–9.7). Soil pH was measured in water in most studies, but it was measured in CaCl2 or KCl solution in a small number of experiments. We converted soil pH values measured in CaCl2 or KCl into water-based soil pH values, following the methods described by Henderson and Bui and by Kabala et al. , respectively. A few studies did not specifically state the reagent used, and we assumed that water was used there. Notably, soil acidity or alkalinity was divided into: ultra-acidic (pH < 3.5), extremely acidic (pH 3.5–4.4), very strongly acidic (pH 4.5–5.0), strongly acidic (pH 5.1–5.5), moderately acidic (pH 5.6–6.0), slightly acidic (pH 6.1–6.5), neutral (pH 6.6–7.3), slightly alkaline (pH 7.4–7.8), moderately alkaline (pH 7.9–8.4), and strongly alkaline (pH 8.5–9.0), following the Soil Science Division Staff (2017) . One major issue with the method using the coarse EFs is that pH increments with more data points are given higher weight than pH increments with fewer data points. Consequently, the statistical analysis is highly skewed towards the pH increments with a large number of field experiments and measurements, which does not provide a fair assessment of the pH effect on N2O EFs. Therefore, we adopted the average method, averaging all the N2O EFs at each pH increment to obtain the mean EF and then giving all pH increments equal weight. We followed the method used by Linquist et al. and Feng et al. to evaluate the mean EF for the different pH groups (Eqs. 2 and 3):

$$M = \frac{\sum (Y_i \times W_i)}{\sum W_i} \qquad (2)$$

$$W_i = \frac{n}{o} \qquad (3)$$

We used Eq. 2 to calculate the weighted mean values for each pH unit group. In Eq. 2, M is the mean value of EF and Y_i is the observation of EF at the ith pH unit group. W_i is the weight for the observations from the ith pH unit group and was calculated with Eq. 3, in which n is the number of replicates in each field experiment and o is the total number of observations from the ith pH unit group. At a given pH increment, this approach of weighting assigned more weight to well-replicated field measurements reporting more precise EF estimates , .
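The binning-and-weighting scheme of Eqs. (1)–(3) can be expressed compactly as below; the observation table is synthetic and the column names are placeholders, not the study's actual data files.

```python
import numpy as np
import pandas as pd

# Synthetic observations: emission under fertilization (e_n) and control (e_o) in kg N ha-1,
# fertilizer rate (n_rate, kg N ha-1), and field replicates per experiment (n_rep)
rng = np.random.default_rng(1)
obs = pd.DataFrame({
    "ph":     rng.uniform(3.0, 9.5, 500).round(2),
    "e_n":    rng.uniform(1.0, 6.0, 500),
    "e_o":    rng.uniform(0.5, 2.0, 500),
    "n_rate": rng.uniform(50, 300, 500),
    "n_rep":  rng.integers(3, 6, 500),
})

obs["ef"] = 100 * (obs["e_n"] - obs["e_o"]) / obs["n_rate"]      # Eq. (1)
obs["ph_bin"] = (obs["ph"] * 10).round() / 10                    # 0.1-unit pH increments

def weighted_mean_ef(group):
    w = group["n_rep"] / len(group)                              # Eq. (3): W_i = n / o
    return (group["ef"] * w).sum() / w.sum()                     # Eq. (2)

mean_ef_per_bin = obs.groupby("ph_bin").apply(weighted_mean_ef)
print(mean_ef_per_bin.head())
```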
Supplementary Information: Peer Review File; Description of Additional Supplementary Files; Supplementary Data 1; Supplementary Data 2; Supplementary Data 3; Reporting Summary.
Blood Transfusion and Lung Surgeries in Pediatric Age Group: A Single Center Retrospective Study

Blood transfusion is a mainstay and standard therapeutic option for blood loss and for severely anemic patients when maximal medical strategies fail. Blood transfusion is not without harm, however; recent studies suggest a correlation between transfusion and poor outcomes in critically ill patients. Although blood is prescribed for many reasons based on the firm belief that it improves oxygen-carrying capacity, it carries many hazards. Importantly, lung surgeries are counted as moderate- to high-risk operations and carry a significant risk of blood loss. The amount of blood loss can vary significantly depending on the pathology of the disease and the nature of the surgery. Evidence from many studies indicates that the incidence of re-exploration for bleeding in thoracic surgery ranges from 1% to 3.7% and that the rate of blood transfusion ranges from 20% to 52%. Few published studies have identified the requirements for blood transfusion during different types of lung surgery, and most come from Western experience. Transfusion probability (%T) is defined as the number of patients transfused divided by the number of patients cross-matched, multiplied by 100; according to Mead's criteria, a value of 30% or more is indicative of efficient blood usage. Apart from the risk of transmission of infection (including both existing and emerging pathogens), the outcome data for blood transfusion therapy have not always been favorable, particularly with regard to postoperative infection, systemic inflammatory response syndrome (SIRS), multi-organ failure and mortality. This study aims to reveal the association between blood transfusion and poor clinical outcomes and to characterize the epidemiology of blood transfusion after pediatric chest surgery.
This is a retrospective cohort study covering 3 years, from January 2015 to December 2017, at the Cardio-thoracic Surgery Department, Tanta University Hospitals. The medical records of all patients were reviewed for the 3-year period, and 248 patients who underwent open thoracotomy and major lung surgery and were aged 18 years or younger were included. Patients undergoing emergency surgery, redo surgery, minor procedures such as biopsy, or thoracotomy for non-pulmonary operations were excluded. Patient charts were identified by screening a database into which data had been entered prospectively. Demographic variables (e.g., age, sex, weight), comorbid conditions, diagnosis, nature of the disease (tuberculosis or not), surgery performed, baseline hemoglobin (Hb), final Hb at the end of the operation, and the number of blood units cross-matched and transfused were recorded. Information concerning blood products included the use of allogenic whole blood, red cells, platelets, and plasma, either intraoperative or postoperative. Three units of whole blood or red cell concentrate (PRBCs) are routinely cross-matched, reserved and ordered, and in addition two units of fresh frozen plasma (FFP) and two units of platelets are reserved for each patient. Intraoperative transfusion was at the discretion of the anesthetist in charge of the case. Transfusion probability (%T) and the need for postoperative blood transfusion (determined by the intercostal drainage and by a postoperative Hb <8 g/dl) were reviewed and analyzed. Postoperative variables such as duration of analgesia, duration of antibiotics, persistent postoperative fever, allergic reactions, need for re-operation, in-hospital mortality, time to remove chest drains, intensive care unit (ICU) stay, hospital stay and rate of infection were also reviewed and analyzed.

Technique

Under general anesthesia, the standard surgical approach was lateral thoracotomy (anterior, mid or posterior). Types of surgical procedure were as shown in . At the end of the procedure, routine hemostasis was performed, and all bleeding points were secured. The thoracotomy was closed in layers, and two chest tubes were inserted for drainage and connected to an underwater seal system. Chest tubes were removed sequentially if there was no bleeding, no effusion, no fever, no air leakage, a totally expanded lung on serial chest X-ray, and pleural drainage <100 cc/day.

Statistical analysis

Our primary outcome was the incidence of pneumonia, and the secondary outcomes were time to remove chest drains, ICU stay, and hospital stay. The sample size was calculated to be at least 114 in each group at a power of 95% and an α error of 0.05, with a relative risk of 3.7 in transfused patients and an expected incidence in non-transfused patients of 0.06 derived from a previous study. The data were analyzed using SPSS v25 (IBM, Armonk, NY, USA). Parametric variables were expressed as mean ± SD and compared by Student's t-test. Non-parametric variables were expressed as median and interquartile range and compared by the Mann-Whitney U test. Categorical variables were expressed as frequency of occurrence and percentage and compared by Chi-square. A P value ≤ 0.05 was considered significant.
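As a rough check on the reported sample size, the two-proportion power calculation below yields a figure close to 114 per group under the stated assumptions (control incidence 0.06, relative risk 3.7, α = 0.05, power 95%). The exact formula the authors used is not stated, so this is only an approximation.

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p_control, relative_risk, alpha=0.05, power=0.95):
    """Approximate per-group sample size for comparing two proportions (normal approximation)."""
    p1, p2 = p_control, p_control * relative_risk
    p_bar = (p1 + p2) / 2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

print(round(n_per_group(0.06, 3.7)))   # ~117 per group, close to the reported minimum of 114
```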
A total of 248 patients were included in the final analysis. These patients were classified into two main groups according to the need for blood transfusion: Group I (non-transfused group, 130 patients) and Group II (transfused group, 118 patients). %T ranged between 42.8% and 50% according to the type of surgery. The demographic details, comorbid conditions, tuberculosis status and Hb were comparable in both groups . Surgical categories in both groups and %T are tabulated in . Sixty-six of the 118 patients (55.9%) in Group II received blood or blood products intraoperatively. Less than 5% of the patients received platelets only, 11% received FFP only, and almost one-third of the patients received more than one component . With regard to the postoperative variables, there were no significant differences between Group I and Group II in duration of analgesia, allergic reactions, need for re-operation or in-hospital mortality. However, the transfused group showed a significant increase in duration of antibiotics, persistent postoperative fever, time to remove chest drains, ICU stay, hospital stay and infection (pneumonia) . The incidence of pneumonia had a relative risk of 1.82 in the transfused compared with the non-transfused group (95% confidence interval: 1.364-2.43). Most infections (9 cases, 75%) occurred with transfusion of more than one component.
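The two summary statistics used above, transfusion probability (%T) and relative risk with a 95% confidence interval, can be computed as sketched below. The pneumonia counts in the example are hypothetical, chosen only to show the calculation, and are not the study's actual case counts.

```python
from math import exp, log, sqrt

def transfusion_probability(transfused, cross_matched):
    """Mead's %T: patients transfused / patients cross-matched x 100 (>= 30% = efficient usage)."""
    return 100 * transfused / cross_matched

def relative_risk(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Relative risk with an approximate 95% CI (log-normal approximation)."""
    rr = (events_exposed / n_exposed) / (events_control / n_control)
    se = sqrt(1 / events_exposed - 1 / n_exposed + 1 / events_control - 1 / n_control)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

print(f"%T = {transfusion_probability(118, 248):.1f}%")      # cohort-level example (118 of 248 transfused)
print("RR (95% CI):", relative_risk(9, 118, 3, 130))          # hypothetical pneumonia counts per group
```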
Although postoperative blood transfusion is not supported by a high level of evidence, the Society of Thoracic Surgeons recommends transfusion in all patients with postoperative Hb <7 gm/dL, a recommendation that applies to both adult and pediatric patients. Transfusion may frequently lead to immune- and non-immune-mediated reactions such as febrile non-hemolytic transfusion reaction, hemolytic reaction, allergies, microcirculatory changes, transfusion-associated circulatory overload and infections. Lung surgeries are technically demanding procedures that are associated with a higher volume of intraoperative blood loss due to accidental venous, arterial or oozing-type bleeding related to dense adhesions among the lung lobes, mediastinum and chest wall. In the present study, 47.6% of our patients required blood product transfusion. Several studies are in agreement with our results and reported that the incidence of blood transfusion was between 20% and 52%. Others found that 58.6% of lung surgery patients required blood. In our work, we found that %T varied between 42.8% and 50% according to the type of surgery (a value of 30% or more indicates that the number of units cross-matched is appropriate). Our findings are not entirely in line with some of the results published by other authors, who showed a %T of 47.7% for lobectomy and pneumonectomy and 15.9% for local or segmental resection. Moreover, only 20% of patients undergoing lobectomy were transfused in another study. This could be explained by the fact that blood transfusion can vary widely across surgical procedures within the same specialty. Growing evidence indicates that blood transfusion causes adverse effects and is associated with poor outcomes, especially in critically ill children. Our study demonstrated that time to remove the chest drain, ICU stay, hospital stay and incidence of pneumonia after surgery were all significantly higher in transfused compared with non-transfused patients. Recent data support an association between RBC transfusion and morbidity and adverse outcomes in children undergoing cardiac surgery. In a series by Costello and colleagues, postoperative exposure to three or more RBC transfusions was associated with an eightfold increase in the risk of infection. Salvin et al. studied 802 postoperative admissions to a cardiac ICU and found that RBC transfusion in younger and acutely ill patients was associated with a prolonged hospital stay. Many studies have also documented the risk factors associated with blood transfusion in lung surgery patients. Harpole Jr et al. reported that intraoperative blood loss and intraoperative RBC transfusion are independent predictors of 30-day mortality and morbidity after lung resection procedures. Weber et al. concluded that blood transfusions prolong hospital stay and increase mortality after lung transplantation. Moreover, some studies have confirmed that postoperative complications such as pneumonia, wound infections, sepsis, systemic inflammatory response, renal complications, and operative mortality are more frequent in transfused than in non-transfused patients. Several mechanisms have been proposed to explain these controversial findings. The effect of blood transfusion may be in part related to low-level bacterial contamination from the phlebotomy site, blood handling procedures and storage.
Indeed, infection itself may explain the direct relationship between blood transfusion and prolonged hospital stay and, hence, other adverse outcomes. However, this association between blood transfusion and undesired results was not observed in several studies. Ali and colleagues, for instance, did not find such a relation and proposed that clinicians should reassess withholding blood transfusion after cardiac surgery out of concern about susceptibility to infection. Furthermore, Vamvakas and Moore re-evaluated the evidence reported up to 1994 and concluded that a causal pathway had not been established and that multiple confounders could make blood transfusion merely a surrogate marker for infection and other adverse outcomes. Examining the other variables, we noticed that the figures were higher in Group II (transfused patients) than in Group I. Although there were no statistically significant differences between the groups, we recognized that there may be some correlation between blood transfusion and outcomes in terms of duration of analgesia, duration of antibiotics, persistent fever, allergic reaction, re-operation, and hospital mortality. The harm of blood transfusion in the pediatric age group is ultimately the same as the risk for adults; it might even be costlier over the long term, because infants and young patients are critically ill and may live longer with persistent illness originating from a blood transfusion. The main limitations of this study are its retrospective nature and single-center design; hence, there are unknown factors that may have affected the study outcomes and were not captured in our data collection. Other limitations were the lack of more detailed data on the exact type, amount and frequency of blood transfusion in relation to blood loss in the various lung surgery procedures, and the absence of a comparison between the numbers of transfused units and the numbers of ordered and cross-matched units. Such details may influence the efficiency of our overall transfusion strategy and help avoid overburdening the blood bank in the future. A prospective large-scale study is warranted, with particular emphasis on pre-operative serum creatinine, duration of surgery, duration of blood storage, and other postoperative variables such as renal complications and respiratory disease after blood transfusion in these young patients. Blood and blood components given in our center are typically non-leukoreduced; hence, further work should examine the outcomes of patients receiving leuko-depleted blood transfusion in comparison with non-transfused patients.
In our study, the transfused group of pediatric patients undergoing lung surgery was more prone to adverse outcomes such as pneumonia, delayed removal of chest drains, prolonged ICU stay, and prolonged hospital stay.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Human REM sleep recalibrates neural activity in support of memory formation

Contemporary theories of sleep function propose that the overnight regulation of excitability constitutes a physiologic mechanism underlying neural network plasticity, facilitating memory consolidation during sleep. In vivo animal studies have revealed that wakefulness and learning lead to a progressive excitability increase. It has been proposed that sleep renormalizes excitability and eliminates synapses [termed "down-scaling" or "pruning"]. Thus, sleep may restore the optimal neurobiological milieu for learning and strengthen memory representations. That such mechanisms take place in the human brain remains largely speculative. Specifically, the concept of a cellular or network-level process in the human brain, involving the recalibration of neural activity during sleep, has remained untested. In addition, the potential benefits of such a mechanism for memory retention remain unexplored. One reason for this paucity of knowledge, relative to animal models, is the lack of electrophysiological markers that link cellular properties, such as excitability, to whole-brain network dynamics amenable to electroencephalography (EEG). To date, the majority of the evidence for sleep-dependent cellular and network homeostasis suggests that slow oscillations (SOs; <1.25 Hz) during NREM (non-rapid eye movement) sleep may mediate the regulation of neural excitability. Considerably less evidence exists regarding a similar role for REM sleep, with limited data in rodents suggesting that theta activity (4 to 10 Hz) may offer similar functional benefits. Theta oscillations during REM sleep are prominent only in rodents, whereas human REM sleep is characterized by desynchronized EEG activity without prominent oscillations. This leads to the currently unexamined possibility that traditional oscillation-based analyses might insufficiently capture functionally measurable processes of overnight recalibration of neural activity in human REM sleep. Several computational models indicate that desynchronized, non-oscillatory brain activity (also termed aperiodic activity, for its lack of a defining temporal scale) correlates with population excitation-to-inhibition balance [as defined by the activity ratio between excitatory and inhibitory neurons; E-I ratio], thus constituting a promising proxy EEG marker of neural excitability. Aperiodic activity is typically quantified by the spectral slope x of the 1/f^x decay function of the electrophysiological power spectrum [or power spectral density (PSD) function] in log-log space. Thus, increased aperiodic neural activity encompasses a flattening of the PSD and an increase in the spectral slope, while decreased aperiodic activity is reflected in a steepening of the PSD and a decreased slope. While direct experimental evidence for this hypothesis remains scarce, aperiodic activity provides a possible theoretical framework to link the sleep-dependent regulation of neural activity in humans to overnight memory consolidation. Here, we assessed neural excitability at the population level using two different approaches. First, we examined calcium activity of pyramidal cells and the ratio of activity between excitatory pyramidal cells and interneurons, using two-photon imaging.
While calcium activity is an indicator of calcium entry into cells upon neural firing, it does not directly quantify firing, excitation, or inhibition; hence, we approximated excitability at the population level by means of pyramidal cell activity and their activity ratio with interneurons. Second, we used the EEG spectral slope to quantify aperiodic activity. Subsequently, we directly tested the relationship between these meso- and macroscale surrogate markers of neural excitability. Since theoretical accounts suggested that synaptic downscaling could benefit sleep-dependent memory consolidation, the key hypothesis was that aperiodic activity, quantified by the spectral slope as an EEG-based proxy of neural excitability, increases during the day and decreases after sleep, thus constituting a down-regulation of a putative EEG-based metric of population excitability. Conversely, sleep loss should abate such sleep-dependent regulation. We hypothesized that human REM sleep, which has previously been shown to exhibit the strongest spectral slope reduction, might mediate the overnight modulation of aperiodic activity. If down-regulation of aperiodic activity is functionally relevant, then the degree of its modulation should predict an individual's memory retention. Two definitions of population excitability and E-I (excitatory-inhibitory) ratio are used, depending on the spatial scale. At the cellular/mesoscale level in rodent experiments, we evaluated the activity of pyramidal neurons, which we refer to as "excitability," along with their ratio of activity with interneurons, termed the "E-I ratio," thereby approximating the E-I definition used in computational models. Subsequently, we examined whether the observed neuronal activity was reflected in the dynamics of aperiodic brain activity recorded from the rodent scalp EEG, quantified by the spectral slope. At the network level in human recordings, we then investigated whether the dynamics of aperiodic brain activity could serve as a surrogate marker of overnight excitability recalibration. To test the relationship between population dynamics and aperiodic activity, we first analyzed a previously published dataset that included simultaneous cortical in vivo two-photon calcium imaging and scalp electrophysiology during rodent sleep. We tested whether aperiodic EEG activity captures mesoscale neural excitability (as defined by pyramidal cell calcium activity) and the activity ratio between excitatory pyramidal cells and inhibitory interneurons (E-I ratio) of the underlying neural population (study 1; N = 8 animals, 1486 cells). We further included a control study with simultaneous hippocampal and scalp recordings in sleeping rodents to assess regional specificity (study S1; N = 5 animals; fig. S1). Next, we acquired both invasive and noninvasive electrophysiological recordings in humans to test whether an overnight regulation of aperiodic activity supported memory consolidation, by combining an episodic memory task with resting-state scalp EEG recordings before, during, and after habitual sleep (study 2; N = 40 participants), as well as after sleep deprivation (study 3; N = 12 participants). Furthermore, we examined aperiodic activity in overnight sleep recordings with simultaneous scalp and intracranial EEG (study 4; N = 15 participants; 498 bipolar contacts) in patients with pharmacoresistant epilepsy who underwent invasive monitoring.
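As a simplified illustration of how an aperiodic spectral slope can be estimated, the sketch below fits a line to log power versus log frequency over a fixed high-frequency range. The synthetic signal, the 30–45 Hz fitting range and the Welch settings are illustrative choices and do not reproduce the exact spectral parameterization pipeline used in these studies.

```python
import numpy as np
from scipy.signal import welch

fs = 500                                   # sampling rate (Hz), illustrative
rng = np.random.default_rng(2)

# Synthetic 1/f-like signal: shape white noise in the frequency domain with a -3 power-law slope
n = fs * 120
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs_full = np.fft.rfftfreq(n, 1 / fs)
spectrum[1:] *= freqs_full[1:] ** (-3 / 2)          # power ~ f^-3  =>  amplitude ~ f^-1.5
signal = np.fft.irfft(spectrum, n)

# Estimate the PSD and fit the aperiodic slope in log-log space (30-45 Hz band)
freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)
band = (freqs >= 30) & (freqs <= 45)
slope, intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"estimated spectral slope: {slope:.2f}")      # ~ -3 for this synthetic signal
```

A flatter (less negative) slope would correspond to increased aperiodic activity, and a steeper (more negative) slope to decreased aperiodic activity, as defined above.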
Aperiodic activity reflects neural population activity in rodents Computational models posit aperiodic activity captures population activity of excitatory and inhibitory interneurons , but these assumptions lack empirical evidence. To determine whether aperiodic EEG activity captures population dynamics during sleep, we combined scalp electrophysiology with in vivo two-photon calcium imaging in mouse cortex ( ; 14 recordings in eight animals) of pyramidal cells ( N = 1242) and interneurons [parvalbumin-positive (PV + ) interneurons, N = 132 cells; and somatostatin-positive (SOM + ) interneurons, N = 112 cells]. Excitability was defined as the overall calcium activity in pyramidal cells (quantified as active frames, see below). Cell type–specific activity was strongly modulated by different sleep stages. Overall pyramidal cell calcium activity was lower during sleep than wakefulness { ; Pyr: P = 0.0150, t 40 = −2.51, 95% confidence interval (CI 95 ) = [−5 × 10 −3 −6 × 10 −4 ]; PV + : P = 0.0330, t 19 = 2.30, CI 95 = [8 × 10 −4 1.9 × 10 −2 ]; SOM + : P < 0.0001, t 19 = −7.93, CI 95 = [−10 −2 −7 × 10 −3 ]; linear mixed effect (LME) models}. Layer 2/3 pyramidal cell activity was lowest during REM sleep. A similar pattern was evident for SOM + interneurons, while PV + interneurons exhibited an activity increase during REM sleep. At the mesoscale, these findings may reflect an excitability decrease during REM sleep . Spectral parametrization of simultaneously recorded frontal EEG activity revealed a sleep-stage–specific modulation of aperiodic background activity ( and fig. S1; P < 0.0001, F 1.74,12.15 = 23.46; repeated measures analysis of variance (RM-ANOVA), averaged across sessions; wake: −2.84 ± 0.05; NREM: −3.35 ± 0.04; REM: −2.97 ± 0.10; mean ± SEM), which largely captured hippocampal contributions (fig. S1). We extracted calcium transients (active frames) from the continuous fluorescence signal to determine whether the cell-specific calcium activity predicted aperiodic EEG activity. Putative excitatory (pyramidal) cell activity and inhibitory (interneuron) activity were strongly correlated (Spearman rho = 0.76, P < 0.0001). Moreover, their relationship was systematically biased toward interneuron activity in states of high overall activity ; thus, confirming and extending previous electrophysiological findings . Specifically, an increase in pyramidal cell calcium activity was counterbalanced by a net increase in inhibitory interneuron activity ( , regression slope 1.22 ± 0.05, mean ± SEM; P = 0.0041, t 7 = 4.18, d = 1.48; two-tailed t test against 1). In addition, this same relationship was also evident when overall activity was contrasted against the population E-I ratio (defined as the difference between the average pyramidal cell and interneuron activity; balanced at 0, bounded at ±1; , regression slope −0.35 ± 0.05; mean ± SEM; P < 0.0001, t 7 = −22.16, d = 2.10; two-tailed t test against 0). To directly test the relationship of population dynamics and aperiodic EEG activity (illustrated in ), we first discretized the calcium activity into four quartiles relative to either the current excitatory pyramidal cell activity or the momentary balance between pyramidal cell and interneuron activities and assessed EEG aperiodic activity as a function of the quartiles (Q; ranging from 1 to 4 = low to high). 
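A minimal version of that quartile analysis is sketched below: epoch-wise calcium activity is split into quartiles and the mean spectral slope is compared across them. Both variables are simulated here purely to show the bookkeeping, not to reproduce the recorded data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_epochs = 400
pyr_activity = rng.gamma(shape=2.0, scale=1.0, size=n_epochs)                     # simulated calcium activity
slope = -2.9 + 0.15 * stats.zscore(pyr_activity) + rng.normal(0, 0.1, n_epochs)   # simulated EEG spectral slope

# Assign each epoch to an activity quartile (1 = lowest, 4 = highest)
quartile = np.digitize(pyr_activity, np.quantile(pyr_activity, [0.25, 0.5, 0.75])) + 1

mean_slope_per_q = [slope[quartile == q].mean() for q in (1, 2, 3, 4)]
rho, p = stats.spearmanr(pyr_activity, slope)
print("mean slope per quartile:", np.round(mean_slope_per_q, 2))
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")   # flatter (less negative) slope with higher activity
```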
In line with the model predictions , the spectral slope increased (flattening of the PSD) as a function of pyramidal cell activity ( , right; P < 0.0001; t 54 = 5.67, CI 95 = [0.12 0.25], LME; slope Q1 = −2.94 ± 0.04; Q4 = −2.39 ± 0.14; mean ± SEM). Moreover, the spectral slope decreased (steepened) when the E-I ratio shifted toward increased pyramidal cell activity ( , right; P = 0.0002, t 54 = −3.96, CI 95 = [−0.24 −0.08], LME; slope Q1 = −2.53 ± 0.17; Q4 = −2.96 ± 0.04; mean ± SEM). While the spectral slope also covaried with the activity (putative E-I) ratio between pyramidal cells and interneurons (cf. ), this relationship was pronounced when pyramidal cell activity was low ( , bottom row, Q1 versus Q4: Cohen’s d = 0.88) as compared to when activity was high (top row Q1 versus Q4: d = 0.04). The observed pattern remained highly comparable when only a single state (e.g., wakefulness) was considered: The spectral slope increased (flattened) when pyramidal cell calcium activity was high ( P < 0.0001; t 54 = 7.35, CI 95 = [0.15 0.27], LME; slope Q1 = −2.86 ± 0.04; Q4 = −2.22 ± 0.13; mean ± SEM). The spectral slope decreased (steepened) when pyramidal cell activity was higher than interneuron activity (E-I quartiles; P < 0.0001, t 54 = −6.30, CI 95 = [−0.26 −0.13], LME; slope Q1 = −2.32 ± 0.14; Q4 = −2.86 ± 0.05; mean ± SEM). Again, the spectral slope covaried with the population E-I ratio, and this relationship was pronounced when pyramidal cell activity was low (Q1 versus Q4: Cohen’s d = 1.62) as compared to when activity was high (top row Q1 versus Q4: d = −0.11). This observation is the direct result of the mutual dependence and recurrent interactions between excitatory pyramidal cells and inhibitory interneurons in neural circuits (cf. ), while the model assumed a linear independent summation . In addition to the robust relationship across the entire night, aperiodic activity also tracked mesoscale properties on the time scale of single REM sleep epochs (fig. S2). Collectively, this set of findings indicates that the EEG aperiodic activity as quantified by the spectral slope indexes excitability dynamics at the mesoscale using calcium imaging . Aperiodic activity is modulated during sleep in humans We next sought to test whether a modulation of aperiodic activity similarly occurred during human sleep. Here, we used resting state scalp EEG (19-channel, 10–20 layout) recordings in three cognitive states (cognitive engagement during backward counting, rest eyes closed, and fixation) before and after a night of habitual sleep ( N = 40; fig. S3). Spectral analysis revealed a broadband power decrease after sleep in all frequencies above 11 Hz and across the majority of EEG sensors ( ; averaged across all conditions, cluster test; P = 0.0020, d = 0.86). This broadband modulation was driven by changes of non-oscillatory aperiodic brain activity (fig. S3, A and B ). The spectral slope was more negative after habitual sleep, with the peak effect over frontal EEG sensors (inset ; cluster test; P = 0.0180, d = 0.32; electrode Fz: PM: −2.76 ± 0.03, AM: −3.04 ± 0.03; mean ± SEM). These findings demonstrate that aperiodic activity undergoes an overnight modulation. 
Overnight modulation of aperiodic activity predicts successful memory retention in humans Having established an overnight modulation of aperiodic activity across sleep in humans, we next investigated whether this modulation was functional (rather than epiphenomenal), specifically examining whether such modulation predicted overnight memory retention. Participants performed a validated sleep-dependent episodic memory test [ ; 36 subjects completed behavioral testing; ]. After encoding, participants were trained to criterion before initial recognition testing in the evening (PM) . After 8 hours of sleep starting at their habitual bedtime, they performed the second recognition test the next morning (AM). Participants who exhibited a stronger modulation of aperiodic activity (decrease of the spectral slope; slope modulation = AM minus PM) demonstrated better memory retention ( ; cluster test; P = 0.0011; mean rho = −0.36; fig. S3D). This effect was not confounded by electromyogram (EMG) activity or age (fig. S3, B to D; Spearman partial correlation at Fz: EMG: rho = −0.45, P = 0.0069; age: rho = −0.46, P = 0.0050) and was most pronounced for the frequency range above 20 Hz (fig. S3, E to G). Collectively, this set of findings demonstrated that down-regulation of aperiodic brain activity across the night predicts the overnight consolidation of episodic memory that determines the next-day retention. Sleep deprivation attenuates overnight regulation of aperiodic activity in humans Having characterized the spatiotemporal extent of the overnight regulation, we next assessed the causal role for sleep in the modulation of aperiodic activity across the night in an independent cohort that was sleep deprived ( N = 12; eyes open, central fixation; 64 channel, equidistant layout, centered on electrode Cz). Sleep deprivation resulted in a broadband power increase over central sensors compared to posthabitual sleep (inset ; cluster test; P = 0.0030, d = 0.81), evident as a flattening of the spectral slope ( ; P = 0.023, d = 0.90; cluster test; posthabitual sleep: −3.03 ± 0.23; sleep deprivation: −2.71 ± 0.23). Statistical comparison to presleep resting states (PM; cf. ; between-subject design) revealed a broadband power decrease after sleep, directly replicating the results from study 2 ( ; cluster test; P = 0.0070, d = 0.81; cluster-corrected unpaired t tests). Sleep deprivation attenuated the modulation of aperiodic activity ( ; d = 0.27 instead of d = 0.81; cf. ) and led to an increase of low frequency activity [cluster test; 1 to 9 Hz; P = 0.0160, d = 1.23, a finding in line with observations of enhanced slow waves after prolonged wakefulness ]. Together, these findings establish that sleep deprivation attenuates the down-regulation of aperiodic brain activity. REM sleep predicts modulation of aperiodic activity Next, we tested the hypothesis that REM sleep mediates the observed down-regulation of aperiodic activity in humans. Previously, REM sleep theta oscillations have been associated with reorganizing neural excitability (as defined by the overall firing rate) in rodents . However, theta oscillations are less prevalent during human REM sleep (figs. S1 and S4, A and B), which is characterized by desynchronized EEG activity. Therefore, we tested whether a non-oscillatory mechanism during REM predicted the modulation from one NREM epoch to the next one. Consistent with previous findings in humans [; cf. fig. 
S1], the spectral slope was more negative during REM sleep ( ; one-way ANOVA: F 2.9,75.3 = 61.78, P < 0.0001; wake: −3.18 ± 0.15; NREM: −3.41 ± 0.07; REM: −4.41 ± 0.15; electrode Fz) compared to NREM (post hoc paired t test: t 39 = 8.20, P < 0.0001, d = 1.30) and wakefulness ( t 39 = 11.58, P < 0.0001, d = 1.83). This observation confirms that REM sleep is associated with the most profound reduction of aperiodic activity during human sleep , particularly over frontal EEG sensors. Note that this observation reflects a dissociation between human (REM slope < NREM slope) and rodent REM sleep (NREM slope < REM slope). However, this apparent discrepancy mainly reflects a technical issue given the strong hippocampal contribution to the frontal scalp EEG in rodents (fig. S1), while hippocampal dynamics were highly comparable across both species. When contrasting the first and last NREM segments of the night, a broadband spectral power modulation was evident ( ; P < 0.001, d = 0.80; cluster-corrected permutation test based on paired t tests) with a similar spatial extent as the effect across the night (cf. and fig. S4C) and encompassed the canonical delta band (<4 Hz). The broadband modulation was the result of a steepening of the spectral slope ( t 39 = 2.40; P = 0.0214, d = 0.38; paired two-tailed t test). When directly contrasting the first and last REM episodes of the night, modulations were band-limited ( ; cluster test; cluster 1 to 23 Hz, P = 0.0090, d = 0.55; cluster 28 to 40 Hz, P = 0.0380, d = 0.39) and were not driven by a change in aperiodic activity ( t 39 = −0.65, P = 0.2574, d = −0.10; paired two-tailed t test). To determine whether REM sleep mediates the modulation of aperiodic activity in subsequent NREM epochs, time-normalized triplets of NREM-REM-NREM sleep were extracted [ and fig. S5 for a complementary time normalization strategy analogous to ]. State-specific oscillatory patterns ( , middle) were only apparent after subtraction of aperiodic activity from broadband power spectra ( , top). Aperiodic activity, quantified as the spectral activity slope, was strongly modulated over the course of the triplet ( , bottom, cluster test; P = 0.001, d = 0.97). Consistent with a modulatory influence of REM sleep on NREM, a more negative spectral slope (i.e., a stronger down-regulation of aperiodic activity) was observed in NREM epochs after a REM episode compared to before ( ; paired two-tailed t test; t 39 = 4.04, P = 0.0002, d = 0.64). This effect was most pronounced in the first third of the respective NREM epoch (fig. S5G). On an individual level, a more negative spectral slope in REM sleep predicted a stronger modulation across the triplet ( ; cluster test; P = 0.001, mean rho = 0.44; peak correlation at Fz rho = 0.65). This relationship between REM slope and NREM slope modulation remained unchanged after accounting for theta power (partial correlation: rho = 0.59, P < 0.0001) and was also apparent when the REM slope was correlated against the individual difference between first and last NREM segments of the night ( ; cluster test; P = 0.0470, mean rho = 0.36; cf. ). Moreover, this effect was not confounded by SO power (fig. S4D; Spearman partial correlation; rho = 0.37, P = 0.0189) or REM theta power (partial correlation; rho = 0.34, P = 0.0349). We also tested whether slow wave activity (duration, amplitude, and quantity) predicted the down-regulation of aperiodic activity across the night but did not find consistent evidence for this (fig. S6). 
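The sketch below illustrates, under stated assumptions, how NREM-REM-NREM triplets could be extracted from a hypnogram and time-normalized so that triplets of different durations can be averaged; a minimal Spearman partial-correlation helper of the kind used for the confound analyses above (e.g., controlling for SO or theta power) is included. Stage codes, the bin count, and all names are assumptions rather than the exact implementation.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def contiguous_bouts(hypnogram, stage):
    """(start, stop) index pairs (stop exclusive) of contiguous epochs of one stage."""
    mask = np.concatenate(([0], (np.asarray(hypnogram) == stage).astype(int), [0]))
    edges = np.flatnonzero(np.diff(mask))
    return list(zip(edges[::2], edges[1::2]))

def time_normalize(values, n_bins=20):
    """Linearly resample a 1-D time course to a fixed number of bins."""
    values = np.asarray(values, dtype=float)
    return np.interp(np.linspace(0, 1, n_bins), np.linspace(0, 1, len(values)), values)

def extract_triplets(hypnogram, slope_per_epoch, nrem=2, rem=5, n_bins=20):
    """Time-normalized NREM-REM-NREM triplets of the spectral slope."""
    slope_per_epoch = np.asarray(slope_per_epoch, dtype=float)
    nrem_bouts = contiguous_bouts(hypnogram, nrem)
    triplets = []
    for r0, r1 in contiguous_bouts(hypnogram, rem):
        pre = [b for b in nrem_bouts if b[1] <= r0]
        post = [b for b in nrem_bouts if b[0] >= r1]
        if pre and post:
            segments = [slope_per_epoch[s:e] for s, e in (pre[-1], (r0, r1), post[0])]
            triplets.append(np.concatenate([time_normalize(s, n_bins) for s in segments]))
    return np.array(triplets)  # shape: (n_triplets, 3 * n_bins)

def partial_spearman(x, y, covariates):
    """Spearman correlation of x and y after regressing ranked covariates out of both.

    covariates: a single covariate vector or an array of shape (n_covariates, n_subjects).
    """
    xr, yr = rankdata(x), rankdata(y)
    cov = np.atleast_2d(np.asarray(covariates, dtype=float))
    design = np.column_stack([np.ones(len(xr))] + [rankdata(c) for c in cov])
    res_x = xr - design @ np.linalg.lstsq(design, xr, rcond=None)[0]
    res_y = yr - design @ np.linalg.lstsq(design, yr, rcond=None)[0]
    return pearsonr(res_x, res_y)
```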
The overnight NREM slope modulation reliably predicted individual memory performance ( ; cluster test; P = 0.049, mean rho = −0.34) and became even more robust after accounting for theta power (partial correlation; rho = −0.49, P = 0.0025). Collectively, these observations indicate that REM, in concert with NREM sleep, is associated with a sleep-dependent reduction of aperiodic activity that, in turn, predicts memory retention.
Distinct aperiodic activity regimes govern REM sleep in MTL and PFC in humans
Regional population activity was assessed in intracranial recordings ( N = 15 participants, 498 bipolar contacts) in two key nodes of the human memory network, the prefrontal cortex (PFC) and the medial temporal lobe (MTL). Contemporary theoretical frameworks posit that long-term memory consolidation is associated with human PFC plasticity . Hence, we tested whether the modulation of aperiodic activity differentially affects PFC and MTL. Analogous to previous work , a set of parameters was obtained in the MTL and PFC . Across the night (comparison of the first and last third of the entire aggregated NREM sleep; ), the count of SOs (one-way RM-ANOVA across all thirds; F 1.6,22.2 = 8.01, P = 0.0041, d = 0.90), spindles ( F 1.7,24.4 = 8.76, P = 0.0020, d = 1.10), and the slope ( F 1.8,25.6 , P = 0.0160, d = 0.57; all other markers P > 0.5) showed statistically significant decreases in the PFC. Subsequently, NREM-REM-NREM triplets for all subjects were extracted separately for MTL and PFC ( and fig. S7). A state- and region-specific modulation of the spectral slope was observed with a prominent functional dissociation between the MTL and PFC [ ; two-way RM-ANOVA; region of interest (ROI): F 1,14 = 9.38, P = 0.0084; state: F 1.2,17.1 = 0.64, P = 0.4672; interaction: F 1.2,17.7 = 21.95, P = 0.0001]. This analysis revealed a steepening of the PFC power spectrum during REM sleep (cf. ; paired two-tailed t test; t 14 = 3.44, P = 0.0039, d = 0.89) as well as a net decrease in aperiodic activity in NREM sleep after a REM episode (replication of ; t 14 = 2.26, P = 0.0403, d = 0.58). Critically, the pattern was reversed in the MTL (increase of the spectral slope during REM; t 14 = −4.68, P = 0.0004, d = 1.21), and no REM-mediated modulation was observed ( t 14 = −0.16, P = 0.8772, d = 0.04). Moreover, these results from human hippocampus mirrored the pattern that was observed in rodent hippocampus. In contrast, frontal scalp EEG activity differed between the two species, which likely reflects the contribution of hippocampal activity to scalp EEG in rodents (fig. S1). In sum, these results reveal a double dissociation between the PFC and MTL during REM sleep, with lower aperiodic activity (steeper spectral slope) observed during neocortical REM sleep, possibly providing the optimal neurophysiological milieu to enable neuroplasticity in support of long-term memory retention . Furthermore, our findings indicate that REM sleep resulted in a decrease in the spectral slope during subsequent NREM sleep, in line with our observations on the scalp level (cf. ). In contrast to the neocortical reduction of the aperiodic activity during REM sleep, a switch to a higher level of aperiodic activity (flattening of the spectral slope) was evident in the MTL, where no REM-mediated down-regulation of aperiodic activity in adjacent NREM epochs was observed. Next, we analyzed high-frequency band (HFB) activity, a surrogate marker of multi-unit firing and dendritic synaptic potentials .
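As an illustration of how HFB activity can be derived from an intracranial channel, the sketch below band-pass filters the signal and takes its amplitude envelope. The 70 to 150 Hz band, the filter settings, and the z-score normalization are assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def hfb_activity(signal, fs, band=(70.0, 150.0), order=4):
    """Z-scored high-frequency band amplitude envelope of a single channel."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, signal)))
    return (envelope - envelope.mean()) / envelope.std()
```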
Mean HFB activity only changed in MTL but not PFC over the course of the triplet (MTL: F 1.3,18.9 = 15.94, P = 0.0003; PFC: F 1.5,21.7 = 0.39, P = 0.6279) with no overnight modulation (both P values > 0.5) in both regions. However, we found a dispersion of activity patterns across all recording sites ( , last row). Population vector analysis revealed a regionally specific modulation of the multidimensional distance across the triplet (two-way RM-ANOVA; ROI: F 1,14 = 42.07, P < 0.0001, state: F 1.2,16.6 = 3.77, P = 0.0634; interaction: F 1.5,21.3 = 1.84, P = 0.1878), which reflects a more heterogeneous and less synchronized population response. We again observed a modulation of population activity in NREM following REM sleep in PFC ( t 14 = 2.50, P = 0.0253, d = 0.65) but not in the MTL ( t 14 = 1.24, P = 0.2354, d = 0.32). To further quantify the REM-mediated modulation, we separately correlated the overall REM spectral slope (analogous to ) with different sleep signatures. Collectively, this set of findings supports the hypothesis that REM-mediated aperiodic downmodulation preferentially occurs in the neocortex, a key node for human long-term memory retention . Steeper spectral slopes (indexing decreased aperiodic activity) during REM sleep predicted increased overnight hippocampal ripple activity (Spearman rho = 0.75, P = 0.0018), HFB activity (rho = −0.70, P = 0.0046), and active periods (rho = −0.64, P = 0.0129; see Materials and Methods). This was observable on the individual subject level and predicted the steepening of the spectral slope across the full night ( ; rho = 0.68, P = 0.0073; replicating ). The same relationship between REM slope and the overnight steepening of the spectral slope was observed in PFC ( ; rho = 0.12, P pseudo-population = 0.0277, P lme < 0.0001; t 345 = 6.72; CI 95 = [0.06 0.11]). In addition, the expression of spindles changed as a function of the REM slope (rho = 0.11, P pseudo-population = 0.0380, P lme = 0.0599; t 345 = −1.89; CI 95 = [−0.009 0.0002]), while the relationship to prefrontal SOs was less consistent (rho = −0.19, P pseudo-population = 0.0004, P lme = 0.1012; t 345 = 1.64; CI 95 = [−0.001 0.012]; fig. S8). The modulation of the spindle count by the REM slope exhibited an opposite pattern between medial and lateral frontal cortex, with a decrease in medial and an increase in lateral prefrontal regions ( ; P lme = 0.0157; t 345 = 2.43; CI 95 = [5 × 10 −5 5 × 10 −4 ]). Last, we tested whether brief oscillatory beta/gamma bursts might explain the observed effect on the spectral slope, but we did not find any evidence for this consideration (fig. S9). Together, these results reveal that aperiodic activity during REM sleep predicts the overnight modulation of aperiodic activity, an EEG-based proxy of excitability during sleep. Critically, the post-REM modulation of successive NREM sleep was confined to the neocortex, indicating that REM-mediated aperiodic down-regulation preferentially affects neocortical regions to support long-term memory retention.
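A minimal sketch of the population vector analysis described above: each triplet state is summarized as a vector of mean HFB activity across recording sites, and the multidimensional distance between state vectors is computed. The choice of Euclidean distance and all names are assumptions; other metrics (e.g., correlation distance) would follow the same logic.

```python
import numpy as np

def population_vectors(hfb_by_site, state_labels, states=("pre_nrem", "rem", "post_nrem")):
    """Mean HFB activity per recording site for each triplet state.

    hfb_by_site: array of shape (n_sites, n_epochs); state_labels: length n_epochs.
    """
    labels = np.asarray(state_labels)
    return {s: hfb_by_site[:, labels == s].mean(axis=1) for s in states}

def pairwise_state_distances(vectors):
    """Euclidean distance between every pair of state population vectors."""
    states = list(vectors)
    return {(a, b): float(np.linalg.norm(vectors[a] - vectors[b]))
            for i, a in enumerate(states) for b in states[i + 1:]}
```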
Together, our results across five independent studies demonstrate that REM sleep mediates an overnight down-regulation of aperiodic activity as quantified by the spectral slope. This REM sleep mechanism provided functional benefits, such that it predicted the success of subsequent overnight long-term memory retention, suggesting a possible mechanistic pathway that contributes to the recognized role of sleep in cementing human memories. These results reveal that aperiodic activity during sleep indexes mesoscale population activity and reflects an inherent characteristic of the functional organization of the sleeping brain. Aperiodic activity operates in concert with sleep oscillations [and provides nonredundant information to SOs; cf. fig. S6; ] to mediate overnight memory consolidation. Our present simultaneous two-photon calcium imaging and electrophysiology experiments in rodents and humans provide evidence for the idea that aperiodic activity tracks mesoscale population dynamics as quantified by calcium activity. An important feature of aperiodic activity is that it can be estimated from the scalp or intracranial EEG for every state including wakefulness, providing an electrophysiological marker enabling a direct comparison of activity across different neural and behavioral states. Sleep deprivation, as a perturbation approach of the assumed physiologic modulation, resulted in an attenuated down-regulation of aperiodic activity.
Moreover, aperiodic activity in REM sleep led to a pronounced functional and anatomical dissociation between two key brain regions of the memory network, the MTL and neocortex. Specifically, the MTL switched from a stable state of low aperiodic activity during NREM sleep to a transient state of increased aperiodic activity (flattening of the PSD) during REM sleep, while the neocortex transitioned from high aperiodic activity during NREM to a state of decreased aperiodic activity (steeper slope) during REM sleep. In addition, aperiodic activity during REM sleep correlated with the overnight modulation of oscillatory NREM sleep signatures in a spatially specific manner, with aperiodic activity in the MTL indexing the modulation of hippocampal ripples, while neocortical aperiodic activity predicted spindle modulation. These findings indicate an important interaction between sleep stages, such that the expression of NREM sleep oscillations is governed by the preceding REM sleep episode. Aperiodic activity tracks population dynamics during sleep How does the sleeping brain regulate neural homeostasis to meet the demands of optimal function, including that required for information processing and memory retention? A possible hypothesis is that new synapses might be formed, existing connections strengthened and overall neural firing increased during wakefulness and learning . This activity increase might be particularly pronounced during early development and for highly active cells . Sleep has been proposed to counteract this progressive activity build-up to maintain healthy neural functioning, with sleep deprivation attenuating such a modulation and impairing cognitive processes and memory formation . On the cellular level, sleep reduces neural firing and promotes synapse elimination . Electrophysiological recordings suggested that synaptic activity is strongly attenuated during “down-states,” which may manifest as SOs in meso- and macroscale recordings . Hence, the seminal synaptic homeostasis hypothesis posits that SO-mediated postsynaptic depression might restore the optimal neural milieu for learning and memory , but it remains poorly understood how the regulation as observed on the cellular level relates to macroscale EEG activity as recorded from the human brain. Computational models have proposed a missing link between cellular and macroscale signals . The present studies tested the predictions of these models that aperiodic activity indexes neural E/I balance and hence might be modulated during sleep. We observed that the spectral slope, as a measure of aperiodic activity, captured in vivo mesoscale dynamics. Specifically, higher calcium activity, which indexes neural population activity , predicted increased aperiodic activity (a flattening of the EEG spectral slope; ), while lower calcium activity decreased aperiodic activity (steepening of the EEG spectral slope). The spectral slope also indexed the momentary ratio between pyramidal and interneuron activity ; this dependence was mainly observed when calcium activity was low. In contrast to the model predictions, a surplus of pyramidal cell activity (fourth E/I quartile; ) was accompanied by a steepening of the spectral slope. This deviation from the model predictions might be explained by the absence of recurrent connections between excitatory and inhibitory cells in the original model by Gao et al. , which constitutes a hallmark of neocortical circuits in vivo . 
Moreover, all calcium recordings were obtained from cell somata in cortical layers 2/3; hence, future experiments have to determine whether these results generalize to the synaptic level or other cortical layers . Likewise, the contribution of dendritic potentials needs to be considered in future experiments. Since the current findings were obtained using calcium activity as a surrogate of neural activity , the present results need to be extended using direct electrophysiological unit recordings. Collectively, this set of findings demonstrates that the spectral slope, as an index of aperiodic brain activity, captures neural excitability at the mesoscale (defined as overall pyramidal calcium activity) and only indirectly the underlying balance between excitatory pyramidal cell and inhibitory interneuron activity. To date, the relative contributions of synaptic currents and neural firing to the generation of the EEG remain incompletely understood . Future computational models accounting for recurrent connections might be able to separate the relative contributions of neural firing and momentary E/I ratio. In the same vein, future studies need to determine how other factors, such as cerebral blood, cerebrospinal, or interstitial fluid flow, glymphatic flow, or the effects of neuro-modulatory systems, affect aperiodic activity. While sleep decreased aperiodic activity (steeper slope), sleep deprivation attenuated this down-regulation and increased aperiodic activity (flatter slope). The strongest decrease of neocortical aperiodic activity (and cortical pyramidal cell activity in rodents; ) was observed during REM sleep. This observation raises the intriguing question of whether REM sleep mediates the overnight recalibration of EEG-based markers of neural excitability in humans.
REM sleep recalibrates neural activity dynamics during sleep
While SOs during NREM sleep have typically been linked to neural quiescence , mounting evidence suggests that such NREM sleep consequences are nuanced and that NREM sleep also reflects a brain state of considerable activity . For example, NREM sleep may increase synaptic efficiency , especially for small synaptic boutons , and neural firing (pronounced for low firing neurons) at the cellular level . At the population level, the cardinal oscillations of NREM sleep actively coordinate the hippocampal-neocortical dialogue to enable information reactivation, transfer, and consolidation . NREM sleep oscillations, including sharp-wave ripples, which are typically nested in SOs or spindles , have been suggested to mediate neuroplasticity through repetitive replay of firing sequences and the memory-specific up-regulation of synapse formation , thus reflecting a potential state of increased net excitation, in addition to co-occurring benefits of synaptic downscaling . In contrast, emerging evidence in animal models indicates a role for a neuronal inhibitory state in REM sleep . At a cellular level, REM sleep promotes global synapse elimination . Moreover, two-photon calcium imaging (cf. ) and in vivo electrophysiology studies report a global reduction of neural firing with an increase of interneuron activity during REM sleep. This is in accord with macroscale findings that demonstrated a reduction of aperiodic activity, possibly reflecting decreased population excitability in scalp EEG recordings . The present results provide direct in vivo evidence corroborating this proposal in human cortex, showing that REM-mediated activity modulates neural dynamics of the brain during this sleep state .
This modulation was both region-specific (MTL versus PFC; and figs. S1 and S7) and species-specific (human versus rodent; fig. S1). In both species, we observed a flattening of spectral slope during REM sleep in the hippocampus, highlighting that hippocampal brain state–dependent dynamics might be evolutionarily conserved . In contrast, the strongest REM-mediated aperiodic modulation was observed in human frontal cortex. This effect was not evident in rodents, where frontal EEG activity also encompasses the contribution of hippocampal activity (fig. S1), which directly accounts for the apparent inconsistency between both species. Moreover, previous work in rodent , cat , and human visual cortex observed that NREM sleep potentiates neural excitability and increases E/I balance in V1 (defined by magnetic resonance spectroscopy as the glutamate to γ-aminobutyric acid ratio) to possibly promote neural plasticity. Critically, no sleep-mediated downscaling was observed in visual cortex . These observations are in line with the present findings where the most pronounced and behaviorally relevant modulation of aperiodic activity occurred over frontocentral areas ( and ), while the aperiodic modulation over occipital sensors was negligible .
REM sleep–mediated recalibration of aperiodic activity predicts memory retention
Is the change in aperiodic activity during REM sleep epiphenomenal or functional, specifically regarding sleep-dependent overnight memory processing ? At the cellular level, consolidation of mnemonic representations requires a selective, activity-dependent elimination of synapses . As this down-scaling occurs primarily in sleep, prolonged wakefulness is proposed to result in synapse saturation leading to impaired memory function . Consistent with this proposition, when interneurons in hippocampus are optogenetically inactivated during REM sleep in rodents, memory formation is impaired . Conversely, REM sleep deprivation in rodents reduced synaptic plasticity . This set of findings suggests a role for REM sleep in adjusting neural activity overnight in support of memory retention. This recalibration had a functional benefit, predicting successful next-day memory retention. This association with memory enhancement was specific to the PFC, in line with the idea that neocortical areas house long-term mnemonic storage . This effect was not confounded by the simultaneous influence of slow wave activity on behavior (fig. S6), suggesting that aperiodic and slow wave activity constitute complementary mechanisms. Collectively, our study provides compelling evidence that aperiodic electrical brain activity within the human and rodent brain serves as a reliable indicator of neural population dynamics. Hence, aperiodic activity represents an essential and previously unrecognized functional characteristic of the sleeping brain. These findings shed light on the pivotal role of human REM sleep in recalibrating neural dynamics at the population level. Our results illustrate that the recalibration of population-based excitability markers facilitated by REM sleep not only supports but also potentially stems from experience-dependent plasticity throughout the waking hours. In sum, REM-mediated recalibration of neural dynamics might be critical for the overnight consolidation of memories into stable engrams within the brain.
Materials and Methods
Participants
Study 1: Two different strains of transgenic mice, PV-Cre mice (RRID:IMSR_JAX:008069; n = 4) and SOM-Cre mice (RRID: IMSR_JAX:013044; n = 4) were used. All mice were housed in groups of up to five animals under temperature-controlled and humidity-controlled conditions (22° ± 2°C; 45 to 65%) and a 12-hour/12-hour light/dark cycle. All recordings started during the first hour of the light phase, and only male mice older than 8 weeks were recorded. Procedures and data were the same as described previously . All experiments were approved by the local institutions in charge of animal welfare (CIN4/11. Regierungspräsidium Tübingen, State of Baden-Wuerttemberg, Germany). Study S1: The recordings were performed in five male Long Evans rats (Janvier, Le Genest-Saint-Isle, France, 280 to 340 g, 14 to 18 weeks old). Animals were kept on a 12-hour/12-hour light/dark cycle with lights off at 19:00 hours. Water and food were available ad libitum. All experiments were approved by the local institutions in charge of animal welfare (MPV3/13, Regierungspräsidium Tübingen, State of Baden-Wuerttemberg, Germany). Procedures and data were the same as described previously .
Study 2: Fourteen younger (20.6 ± 2.2 years; mean ± SD) and 26 healthy older adults (73.0 ± 5.4 years; mean ± SD) participated in the study. Neurobehavioral correlations were highly comparable (see fig. S3). All participants provided written informed consent according to the local ethics committee (Berkeley Committee for Protection of Human Subjects Protocol Number 2010-01-595) and the Sixth Declaration of Helsinki. Here, we report a subset of participants from a larger cohort that also completed three resting state recordings in addition to overnight sleep recordings, which were unavailable for remainder of the participants . Study 3: Twelve young healthy controls (mean age: 23.2 ± 1.1 years; seven men, five women) participated in the study. All participants provided written informed consent according to the local ethics committee at the University of Mannheim (protocol number 2010-311 N-MA) and the Sixth Declaration of Helsinki. The resting state data were acquired in the context of a larger study investigating the effects of sleep deprivation on habituation but have not been reported previously . Study 4: We obtained intracranial recordings from 15 pharmacoresistant epilepsy patients (35.0 ± 11.1 years; mean ± SD; nine females) who underwent presurgical monitoring with implanted depth electrodes (Ad-Tech), which were placed stereo-tactically to localize the seizure onset zone. All patients were recruited from the University of California Irvine Medical Center, USA. Electrode placement was exclusively dictated by clinical considerations, and all patients provided written informed consent to participate in the study. Patients selection was based on magnetic resonance imaging (MRI)–confirmed electrode placement in the MTL and PFC from a larger cohort of 21 subjects . We only included patients where one seizure free night was available and a sufficient amount of REM sleep was recorded (see inclusion criteria below; two subjects did not exhibit simultaneous MTL and PFC coverage; four subjects did not exhibit sufficient REM sleep). The study was not preregistered. All procedures were approved by the Institutional Review Board at the University of California, Irvine (protocol number: 2014-1522) and conducted in accordance with the Sixth Declaration of Helsinki. Experimental design and procedure Study 1: All animals were anesthetized with ketamine (0.1 mg/g) and xylazine (0.008 mg/g) with a supplement of isoflurane. For topical anesthesia, lidocaine was applied. Afterward, the animals were mounted on a stereotaxic frame. Body temperature was continuously monitored and maintained at 37°C. A custom-made headpost was glued to the skull and subsequently cemented with dental acrylic (Kulzer Palapress). Virus injection and the implantation of the imaging window followed headpost implantation. To this end, a craniotomy above the sensorimotor cortex (1.1 mm caudal and 1 to 1.3 mm lateral from the bregma) with a size of 1.2 × 2 mm was made. Afterward, two viruses (AAV2/1-syn-GCaMP6f 2.96 × 10 12 genomes/ml and AAV2/1-Flex- tdTomato 1.48 × 10 11 genomes/ml) were injected into multiple sites of the area of craniotomy (10 to 20 nl per site; 3 to 5 min per injection). The injection depth was between 130 and 300 mm. Virus injection was followed by the implantation of the imaging window (1 × 1.5 mm). The space between the skull and the imaging window was filled with agarose (1.5 to 2%), and then the imaging window was cemented with dental acrylic. 
EEG electrodes were implanted on the cortical surface of the contralateral hemisphere relative to the imaging window (−2.5 mm, lateral +2.5 mm from bregma). The reference electrodes were implanted on the brain surface 1 mm relative to lambda. Two wire electrodes were implanted into the neck muscle for EMG recordings (Science Products). After the surgery, all animals were brought back to their home cage and were single-housed for the rest of the experiments. They had at least 10 days of recovery from surgery before imaging sessions started. After handling the animals 10 min/day for 1 week, the animals were habituated to head fixation. Habituation consisted of four sessions per day for 1 week with increasing fixation durations (30 s, 3 min, 10 min, and 30 min) interleaved by 10-min rest intervals. Habituation was conducted until 24 hours before the first imaging session during the early light phase.
Study S1: Animals were anesthetized with an intraperitoneal injection of fentanyl (0.005 mg/kg of body weight), midazolam (2.0 mg/kg), and medetomidine (0.15 mg/kg). They were placed into a stereotaxic frame and were supplemented with isoflurane (0.5%) if necessary. The scalp was exposed, and five holes were drilled into the skull. Three EEG screw electrodes were implanted: one frontal electrode [anteroposterior (AP): +2.6 mm, mediolateral (ML): −1.5 mm, with reference to bregma], one parietal electrode (AP: −2.0 mm, ML: −2.5 mm), and one occipital reference electrode (AP: −10.0 mm, ML: 0.0 mm). In addition, a platinum electrode was implanted into the right dorsal hippocampus [AP: −3.1 mm, ML: +3.0 mm, dorsoventral (DV): −3.6 mm]. Electrode positions were confirmed by histological analysis. One stainless steel wire electrode was implanted in the neck muscle for EMG recordings. Electrodes were connected to an electrode pedestal (PlasticsOne, USA) and fixed with cold polymerizing dental resin, and the wound was sutured. Rats had at least 5 days of recovery.
Study 2: All participants were trained on the episodic word-pair task in the evening and performed a short recognition test after 10 min. Then, participants were offered an 8-hour sleep opportunity, starting at their habitual bedtime (table S1). Resting state recordings were obtained directly before and after sleep. Polysomnography was collected continuously. Participants performed a long version of the recognition test approximately 2 hours after awakening. Subsequently, we obtained structural MRI scans from all participants. Two older adults did not complete behavioral testing, and two young adults failed to achieve criterion at encoding. Thus, these four subjects were excluded from behavioral analyses but were included in all electrophysiological analyses.
Study 3: In the 3 days before the experiment, sleep was monitored using an Actiwatch device (Philips Respironics, Amsterdam). Participants were randomly assigned to start in either the sleep deprivation or the habitual sleep group. In the experimental night, participants were either allowed to sleep and monitored using the Actiwatch device or kept awake and engaged by an experimenter. Recordings were obtained in the late morning or around noon.
Study 4: We recorded a full night of sleep for every participant. Recordings typically started around 8:00 to 10:00 p.m. and lasted for ~10 to 12 hours (table S2). Only nights that were seizure-free were included in the analysis. Polysomnography was collected continuously.
Behavioral task
Study 2: We used a previously established sleep-dependent episodic memory task, in which subjects had to learn word-nonsense word pairs. Briefly, words were 3 to 8 letters in length and drawn from a normative set of English words, while nonsense words were 6 to 14 letters in length and derived from groups of common phonemes. During encoding, subjects learned 120 word-nonsense pairs. Each pair was presented for 5 s. Participants performed the criterion training immediately after encoding. The word was presented along with the previously learned nonsense word and two new nonsense words. Subjects had to choose the correctly associated nonsense word and received feedback afterward. Incorrect trials were repeated after a variable interval and were presented with two additional new nonsense words to avoid repetition of incorrect nonsense words. Criterion training continued until correct responses were observed for all trials. During recognition, a probe word or a new (foil) probe word was presented along with four options: (i) the originally paired nonsense word, (ii) a previously displayed nonsense word, which was linked to a different probe (lure), (iii) a new nonsense word, or (iv) an option to indicate that the probe is new. During the recognition test after a short delay (10 min), 30 probe and 15 foil trials were presented. At the long delay (10 hours), 90 probe and 45 foil trials were tested. All probe words were presented only once during recognition testing, either during short or long delay testing.
Sleep monitoring and EEG data acquisition
Study 1: Sleep stages were identified on the basis of EEG and EMG recordings during the imaging sessions. EEG and EMG signals were amplified, filtered (EEG: 0.01 to 300 Hz; EMG: 30 to 300 Hz), and sampled at a rate of 1000 Hz (Grass Technologies amplifier, model 15A54). On the basis of EEG/EMG signals for successive 10-s epochs, the brain state of the mouse was classified into wake, slow-wave sleep, and REM sleep stages. Sleep stages were determined with the software SleepSign for animals (Kissei Comtech).
Study S1: Rats were habituated to the recording box [dark gray polyvinyl chloride (PVC), 30 cm by 30 cm, height: 40 cm] for 2 days, 12 hours/day. On the third day, animals were recorded for 12 hours during the light phase, starting at 7:00 hours. The animal's behavior was continuously tracked using a video camera mounted on the recording box. EEG, local field potential (LFP), and EMG signals were continuously recorded and digitized using a CED Power 1401 converter and Spike2 software (Cambridge Electronic Design). During the recordings, the electrodes were connected through a swiveling commutator to an amplifier (Model 15A54, Grass Technologies). The screw electrode in the occipital skull served as reference for all EEG, LFP, and EMG recordings. The EEG was filtered between 0.1 and 300 Hz, LFP signals were high-pass filtered at 0.1 Hz, and the EMG was filtered between 30 and 300 Hz; all signals were sampled at 1 kHz.
Study 2: Polysomnography sleep monitoring was recorded on a Grass Technologies Comet XL system (Astro-Med), including 19-channel EEG placed using the standard 10-20 system as well as electromyography (EMG). Electrooculogram (EOG) was recorded from the right and left outer canthi. EEG recordings were referenced to bilateral linked mastoids and digitized at 400 Hz. Sleep scoring was performed according to standard criteria by Rechtschaffen and Kales in 30-s epochs. NREM sleep was defined as NREM stages 2 to 4.
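As a minimal illustration of how such a 30-s epoch hypnogram can be handled programmatically, the Python sketch below recodes Rechtschaffen and Kales stages into wake/NREM/REM labels and finds contiguous stage episodes. The stage codes, function names, and example data are illustrative assumptions and are not part of the original MATLAB/SleepSign pipeline.

```python
import numpy as np

def recode_hypnogram(stages):
    """Collapse a 30-s epoch hypnogram with Rechtschaffen & Kales codes
    ('W', '1', '2', '3', '4', 'R'; illustrative coding) into
    'wake', 'NREM1', 'NREM' (stages 2 to 4), and 'REM'."""
    mapping = {"W": "wake", "1": "NREM1", "2": "NREM",
               "3": "NREM", "4": "NREM", "R": "REM"}
    return np.array([mapping.get(s, "unknown") for s in stages])

def contiguous_runs(labels, target):
    """Return (start, stop) epoch indices of contiguous runs of `target`."""
    runs, start = [], None
    for i, lab in enumerate(labels):
        if lab == target and start is None:
            start = i
        elif lab != target and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(labels)))
    return runs

# example: the first/last 5 min of a stage correspond to the first/last
# 10 epochs of its first and last contiguous run, respectively
labels = recode_hypnogram(["W", "1", "2", "3", "R", "R", "R", "2", "R", "W"])
print(contiguous_runs(labels, "REM"))   # [(4, 7), (8, 9)]
```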
First and last NREM and REM epochs were defined as the first and last 5 min of the respective stages in the hypnogram.
Study 3: Resting state EEG recordings were obtained using a 64-channel BrainAmp amplifier (Brain Products GmbH) EEG system with equidistant Ag-AgCl electrode positions (EasyCap, Herrsching, Germany). The central electrode of this layout corresponded to electrode Cz (10–20 layout) and was therefore used for between-group comparisons.
Study 4: We recorded from all available intracranial electrodes. To facilitate sleep staging based on established criteria, we also recorded scalp EEG, which typically included recordings from electrodes Fz, Cz, C3, C4, and Oz according to the international 10-20 system. EOG was recorded from four electrodes, which were placed around the right and left outer canthi. All electrophysiological data were acquired using a 256-channel Nihon Kohden recording system (model JE120A), analog-filtered at 0.01 Hz, and digitally sampled at 5000 Hz. All available artifact-free scalp electrodes were low-pass–filtered at 50 Hz, demeaned and detrended, down-sampled to 400 Hz, and referenced against the average of all clean scalp electrodes. EOGs were typically bipolar referenced to obtain one signal per eye. A surrogate EMG signal was derived from electrodes in immediate proximity to neck or skeletal muscles, by high-pass filtering either the ECG or EEG channels above 40 Hz. Sleep staging was carried out according to Rechtschaffen and Kales guidelines by trained personnel in 30-s segments as reported previously. The same conventions as in study 1 were used.
Two-photon calcium imaging data acquisition
Study 1: In vivo imaging was performed using a two-photon microscope based on the MOM system (Sutter) controlled by ScanImage software. The light source was a pulsed Ti:sapphire laser (λ = 980 nm; Chameleon; Coherent). Red and green fluorescence photons were collected with an objective lens (Nikon; 16×; 0.80 numerical aperture), separated by a 565-nm dichroic mirror (Chroma; 565dcxr) and barrier filters (green: ET525/70m-2p; red: ET605/70m-2p), and measured using photomultiplier tubes (Hamamatsu Photonics; H10770PA-40). Imaging frames were visually inspected to exclude cross-talk between green and red channels. The imaging frame consisted of 1024 × 256 pixels, and the frame rate was 5.92 Hz (169 ms per frame). Images were collected in layer 2/3 at a depth of 150 to 250 μm.
CT and MRI data acquisition
Study 4: We obtained anonymized postoperative computed tomography (CT) scans and presurgical MRI scans, which were routinely acquired during clinical care. MRI scans were typically 1 mm isotropic.
Quantification and statistical analysis
Behavioral data analysis
Study 2: Memory recognition was calculated by subtracting both the false alarm rate (the proportion of foil words that subjects reported as previously encountered) and the lure rate (the proportion of words that were paired with a familiar but incorrect nonsense word) from the hit rate (correctly paired word-nonsense word pairs). Memory retention was subsequently calculated as the difference in recognition between the long and the short delay.
Two-photon data
Preprocessing and data analysis
Image analysis: Lateral motion was corrected in two steps. A cross-correlation–based image alignment (TurboReg) was performed, followed by a line-by-line correction using an algorithm based on a hidden Markov model.
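The motion-correction step can be illustrated with a minimal NumPy sketch of a cross-correlation–based rigid alignment. This is not the TurboReg/hidden Markov model pipeline used for the actual analyses; it only conveys the underlying idea, and all function and variable names are illustrative.

```python
import numpy as np

def estimate_rigid_shift(frame, template):
    """Estimate the integer (dy, dx) shift of `frame` relative to `template`
    from the peak of their FFT-based cross-correlation."""
    f = np.fft.fft2(frame - frame.mean())
    t = np.fft.fft2(template - template.mean())
    xcorr = np.fft.ifft2(f * np.conj(t)).real
    peak = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    dims = np.array(frame.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to signed shifts
    return int(peak[0]), int(peak[1])

def align_movie(movie):
    """Rigidly align every frame of a (time, y, x) movie to its mean image."""
    template = movie.mean(axis=0)
    aligned = np.empty_like(movie)
    for i, frame in enumerate(movie):
        dy, dx = estimate_rigid_shift(frame, template)
        aligned[i] = np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return aligned
```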
ROIs containing individual neurons were drawn manually, and the pixel values within each ROI were summed to estimate the fluorescence of this neuron. PV+ and SOM+ cells were manually identified based on the red fluorescence signal expressed by AAV2/1-Flex-tdTomato. The individual cell traces were calculated as the average pixel intensity within the ROIs for each frame. The cell traces were transformed into the percent signal change (ΔF/F), in which the baseline for each cell was defined as the 20th percentile value of all frames within a ±3-min interval. We then extracted active frames ("calcium spikes"), which were defined as frames with ΔF/F signals two SDs above the mean in a sliding time window of ±3 min. To confirm that the neuropil signal did not affect our results and to compensate for background noise, we performed a standard neuropil subtraction for each cell's fluorescence trace. The neuropil signal was estimated for each ROI as the average pixel value within two pixels around the ROI (excluding adjacent cells). The true signal was estimated as F(t) = F_inROI(t) − r × F_aroundROI(t), where r = 0.7.
Immunohistochemistry
After finishing the experiments, mice were deeply anesthetized [ketamine (0.3 mg/g) and xylazine (0.024 mg/g), i.p.] and intracardially perfused with 4% paraformaldehyde (PFA) in 0.1 M phosphate-buffered saline (PBS). Then, the brains were postfixed in 4% PFA at 4°C overnight and rinsed three times with 0.1 M PBS. Coronal slices (thickness, 65 μm) were blocked in 10% normal goat serum (NGS; Jackson ImmunoResearch) and 0.3% Triton X-100 (Sigma-Aldrich) in 0.1 M PBS for 1.5 hours at room temperature. Slices were incubated with anti-PV rabbit primary antibody (1:1000; #24428, Immunostar, RRID: AB_572259) or anti-SOM rabbit primary antibody (1:1000; #T-4547, Peninsula Laboratories, RRID: AB_518618) in carrier solution (2% NGS and 0.3% Triton X-100 in PBS) for 48 hours at 4°C. Following 4× 10-min rinses with 0.1 M PBS, the slices were incubated in goat anti-rabbit immunoglobulin G antibodies conjugated either with Alexa Fluor 405 (for PV+ staining, AB_221605) or Alexa Fluor 633 (for SOM+ staining, AB_2535732; both from Thermo Fisher Scientific; 1:1000) in carrier solution for 3 hours at room temperature on a shaker. Images were acquired on a confocal microscope (LSM 710, Carl Zeiss). Overall, the fraction of cells expressing only Alexa Fluor but not tdTomato and GCaMP6f was below 2% for both PV+ and SOM+ cells.
EEG data
Preprocessing
Study 1: EEG data from a frontal and a parietal electrode were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Epochs containing artifacts were labeled semi-automatically when a threshold of 6 SD was exceeded in the concurrently acquired EMG signal. Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1).
Study S1: EEG data from a frontal and a parietal electrode as well as a hippocampal electrode were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1).
Study 2/3—Resting state: EEG data were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, high-pass–filtered at 1 Hz, common average referenced, and epoched into 3-s-long segments with 50% overlap.
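For illustration only, these generic preprocessing steps (demeaning, detrending, high-pass filtering, common average referencing, and overlapping epoching) can be sketched in Python with NumPy/SciPy as below; the actual analyses were performed with the FieldTrip toolbox in MATLAB, and the function name and defaults here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, detrend, sosfiltfilt

def preprocess_resting_eeg(data, sfreq, epoch_len=3.0, overlap=0.5, hp_freq=1.0):
    """data: (n_channels, n_samples). Returns (n_epochs, n_channels, n_per_epoch)."""
    data = detrend(data, axis=-1, type="linear")            # demean and detrend
    sos = butter(4, hp_freq, btype="highpass", fs=sfreq, output="sos")
    data = sosfiltfilt(sos, data, axis=-1)                   # zero-phase high-pass
    data = data - data.mean(axis=0, keepdims=True)           # common average reference
    n_per_epoch = int(epoch_len * sfreq)
    step = int(n_per_epoch * (1 - overlap))                  # 50% overlap by default
    starts = range(0, data.shape[-1] - n_per_epoch + 1, step)
    return np.stack([data[:, s:s + n_per_epoch] for s in starts])
```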
Artifact detection was done semi-automatically for EOG, jump, and muscle artifacts and visually confirmed.
Study 2—Sleep: EEG data were imported into FieldTrip, then demeaned, detrended, common average referenced, and epoched into non-overlapping 30-s segments. Artifact detection was done manually in 5-s segments.
Study 4: Scalp EEG was demeaned, detrended, and locally referenced against the mean of all available artifact-free scalp electrodes. We applied a 50-Hz low-pass filter and down-sampled the data to 500 Hz. All scalp EEG analyses were done on electrode Fz. In a subset of subjects, Fz was not available and Cz was used instead.
Intracranial EEG: In every subject, we selected all available electrodes in the MTL, which were then demeaned, detrended, notch-filtered at 60 Hz and its harmonics, bipolar referenced to their immediate lateral neighboring electrode, and lastly down-sampled to 500 Hz. We retained all MTL channels but discarded noisy PFC channels. We adopted a previously introduced approach in which we first detected interictal epileptic discharges (IEDs) using automated detectors (see below), which were then excluded from further analysis. Last, for each participant we selected the MTL electrode with the lowest number of overall detections. For PFC analyses, all available contacts in these regions were included, and the same preprocessing steps were applied. Then, all resulting traces were manually inspected, and noisy, epileptic, and artifact-contaminated PFC channels were excluded.
Extraction of REM epochs and time normalization procedure
Study 1: Sleep data were manually staged. REM epochs were detected on the basis of the emergence of a prominent theta rhythm (4 to 10 Hz) and a reduction of EMG activity. Given that an NREM-REM-NREM triplet analysis was not feasible (see fig. S2), we selected continuous REM episodes that spanned at least three 10-s epochs (N epochs) and included ±N adjacent epochs (termed pre-REM, mostly NREM, and post-REM, mostly wake). This ensured that an equal amount of data was included to assess the relationship of population dynamics and aperiodic activity. The values within every epoch were then averaged into one composite value for calcium and EEG activity.
Study 3/4: REM epochs were detected on the basis of the manually staged hypnogram according to established Rechtschaffen and Kales guidelines. We first detected all REM epochs and then selected artifact-free episodes that spanned at least three consecutive epochs (90 s) and required that the majority of adjacent periods within a time window of ±9 min were staged as NREM sleep (9 min were chosen to match the 9 min of resting state data reported previously, as well as to match the average, artifact- and interruption-free duration of individual NREM epochs: study 3: 10 ± 13.9 min; study 4: 7 ± 16.7 min; median ± SD). We also repeated the entire analysis using more liberal criteria (fig.
S5; inclusion of brief epochs of NREM1 or microarousals as well as episodes for which staging was uncertain) as outlined by Watson et al. Here, the preceding and following NREM epochs were also time-normalized (in contrast to taking a fixed window) into 100 overlapping epochs and subjected to multitaper spectral analysis. In addition, we extracted time-normalized NREM epochs, in which continuous NREM epochs were likewise epoched into 100 overlapping segments (figs. S5G and S7C).
Spectral analysis
Scalp EEG (studies 1, S1, and 2 to 4): Resting state spectral estimates were obtained through multitaper spectral analyses based on discrete prolate spheroidal (Slepian) sequences. Spectral estimates were obtained between 1 and 50 Hz in 1-Hz steps. We adapted the number of tapers to obtain a frequency smoothing of ±2 Hz. For studies 1 and S1, we used an upper cutoff of 35 Hz given a broad hardware notch filter artifact from 40 to 60 Hz.
Intracranial EEG (study 4): Spectral estimates were obtained by means of multitaper spectral analyses based on discrete prolate spheroidal sequences in 153 logarithmically spaced bins between 0.25 and 181 Hz. We adjusted the temporal and spectral smoothing to approximately match a ±2-Hz frequency smoothing.
Estimation of aperiodic background activity
Aperiodic activity was estimated from three parameters of the electrophysiological power spectrum: the spectral slope χ (the negative exponent of the 1/f^χ decay function), the y intercept, and the population time constant (the frequency where a bend/"knee" occurs in the 1/f spectrum). Note that the slope and y intercept provided redundant information (correlated at rho = −0.98, P < 0.0001; Spearman correlation); thus, analyses focused on the spectral slope.
FOOOF fitting: To obtain estimates of aperiodic background activity, we first used the FOOOF algorithm. EEG spectra were fitted in the range from 1 to 45 Hz. Aperiodic background activity was defined by its slope parameter χ, the y intercept c, and a constant k (reflecting the knee parameter):

aperiodic fit = 10^c × 1/(k + f^χ)

The relationship between the knee parameter and the knee frequency is given by

knee frequency = k^(1/χ)

If a knee parameter could not be determined, then we refitted the spectrum in the fixed mode, which is equivalent to a linear fit with k = 0.
Polynomial fitting: To estimate the spectral slope in different frequency bands, we also used first-degree polynomial fitting, thus yielding an instantaneous spectral exponent (slope, χ) and offset (y-axis intercept, c) for a given fitting range. EEG spectra were fitted using variable upper limits (from 1 Hz up to 5 to 45 Hz, in 5-Hz steps), variable lower limits (from 5 to 40 Hz, in 5-Hz steps, up to 45 Hz), a fixed bandwidth with varying center frequencies (5 to 45 Hz; ±5 Hz), or comparable ranges (e.g., 20 to 45 Hz; correlation to FOOOF estimates rho = 0.99, P < 0.0001; Spearman correlation). Typically, we report the spectral slope as obtained from the FOOOF model when fitted up to 45 Hz. In several instances, this approach was complemented by first-degree polynomial fitting, for example, to avoid high-frequency artifacts (in studies 1 and S1, artifacts from ~40 Hz; hence, we restricted the fitting up to 35 Hz), to accommodate the presence of a variable spectral knee (bend of the power spectrum), or to highlight a specific frequency range in intracranial EEG, where the spectrum was estimated up to 180 Hz, rendering a direct comparison of FOOOF iEEG and EEG estimates impractical.
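For illustration, the simpler fixed variant (k = 0), which reduces to a straight-line fit of log power against log frequency, can be sketched in a few lines of Python/NumPy. The full analyses used the FOOOF algorithm (with a knee parameter) applied to the multitaper spectra described above; the function name, fitting range, and toy spectrum below are illustrative assumptions only.

```python
import numpy as np

def spectral_slope(freqs, psd, fit_range=(25.0, 45.0)):
    """First-degree polynomial fit of log10(power) against log10(frequency).

    Returns (slope, offset); for a 1/f**chi spectrum the fitted slope equals
    -chi, i.e., it is negative for typical EEG spectra."""
    freqs = np.asarray(freqs, dtype=float)
    psd = np.asarray(psd, dtype=float)
    mask = (freqs >= fit_range[0]) & (freqs <= fit_range[1])
    slope, offset = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), deg=1)
    return slope, offset

# toy check: a synthetic 1/f**2 spectrum yields a slope of about -2
f = np.arange(1.0, 46.0)
print(spectral_slope(f, 10.0 / f**2, fit_range=(20, 45)))  # ~(-2.0, 1.0)
```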
After the initial principled approach, we empirically determined the range with the highest correlation to behavior (fig. S3E; 25 to 45 Hz) and consequently used this range for all subsequent analyses.
Event detection
SOs: Event detection was performed for every channel separately based on previously established algorithms. We first filtered the continuous signal between 0.16 and 1.25 Hz and detected all zero crossings. Then, events were selected on the basis of time (0.8- to 2-s duration) and amplitude (75th percentile) criteria. Last, we extracted 5-s-long segments (±2.5 s centered on the trough) from the raw signal and discarded all events that occurred during an IED.
Sleep spindles: On the basis of established algorithms, we filtered the signal between 12 and 16 Hz and extracted the analytical amplitude after applying a Hilbert transform. We smoothed the amplitude with a 200-ms moving average. Then, the amplitude was thresholded at the 75th percentile (amplitude criterion), and only events that exceeded the threshold for 0.5 to 3 s (time criterion) were accepted. Events were defined as sleep spindle peak-locked 5-s-long epochs (±2.5 s centered on the spindle peak).
Ripples: The signal was first filtered in the range from 80 to 120 Hz, and the analytical amplitude was extracted from a Hilbert transform in accordance with previously reported detection algorithms. The analytical signal was smoothed with a 100-ms window and z-scored. Candidate events were identified as epochs exceeding a z-score of 2 for at least 25 ms and at most 200 ms, and events had to be spaced by at least 500 ms. We determined the instantaneous ripple frequency by detecting all peaks within the identified segment. The identified events were time-locked to the ripple trough in a time window of ±0.5 s. Overlapping epochs were merged. Epochs that contained IEDs or sharp transients were discarded.
Beta/gamma burst detection: For fig. S9, we detected individual bursts in the range from 25 to 45 Hz, where the spectral slope was estimated, using a previously outlined procedure. Briefly, we segmented the continuous LFP signal into 30-s trials and obtained single-trial spectral estimates between 1 and 50 Hz in 0.5-Hz steps with a frequency smoothing of 4 Hz. Oscillatory bursts were identified per trial by thresholding (mean ± 2 SD) the average z-normalized spectral power in the frequency band of interest (25 to 45 Hz) relative to the mean and SD over a reference period of 10 trials (the current trial plus the subsequent nine). Only bursts with a minimum duration of three oscillatory cycles of the mean frequency of interest were considered. A two-dimensional Gaussian was subsequently fitted to the time-frequency map. Burst duration was determined as the time during which the average power in the frequency band of interest exceeded half of the local maximum as determined by the Gaussian fit. Burst frequency was determined by the peak of the Gaussian fit. Oscillatory bursts that coincided with interictal epileptiform discharges (within ±1 s of the burst peak) were omitted. Subsequently, we obtained a burst rate per 30-s segment for every participant and channel separately. On the individual subject and channel levels, we calculated the correlation coefficient between the PSD slope and the burst rate across the entire night. We used a random block-swap procedure (1000 iterations; random breakpoint and block swap of the slope vector) to obtain a surrogate distribution.
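A minimal NumPy sketch of this block-swap surrogate procedure is given below; the variable and function names are illustrative, and the z-normalization described in the following sentence is included for completeness.

```python
import numpy as np

def block_swap_z(slope, burst_rate, n_perm=1000, seed=0):
    """Correlate the spectral slope with the burst rate (one value per 30-s
    segment) and normalize the observed coefficient against a surrogate
    distribution obtained by cutting the slope vector at a random breakpoint
    and swapping the two blocks."""
    rng = np.random.default_rng(seed)
    slope = np.asarray(slope, dtype=float)
    burst_rate = np.asarray(burst_rate, dtype=float)
    observed = np.corrcoef(slope, burst_rate)[0, 1]
    surrogate = np.empty(n_perm)
    for i in range(n_perm):
        cut = rng.integers(1, slope.size)            # random breakpoint
        swapped = np.r_[slope[cut:], slope[:cut]]    # block swap
        surrogate[i] = np.corrcoef(swapped, burst_rate)[0, 1]
    return (observed - surrogate.mean()) / surrogate.std()
```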
Subsequently, we normalized the observed correlation coefficient relative to the surrogate distribution to obtain a z-value.
IED detection: We detected IEDs using automated algorithms on all channels located in the MTL. All cutoffs were chosen in accordance with recently published findings and were confirmed by a neurologist who visually verified the detected events. The continuous signal was filtered forward and backward between 25 and 80 Hz, and the analytical amplitude was extracted from the Hilbert transform and then z-scored. Events were detected when this signal was 3 SD above the mean for more than 20 ms and less than 100 ms.
HFB, population activity, and active periods analysis: HFB activity is typically defined from 70 to 180 Hz. To avoid confounding true HFB activity with ripple-band activity (upper cutoff, ~120 Hz), we defined HFB activity as the average power in the frequency range from 120 to 180 Hz. The multitaper spectral estimates were averaged into a single trace per electrode. The dynamics of the population activity were expressed as a population vector. At every time point, HFB activity was represented as a point P in an n-dimensional coordinate system, where n reflects the number of electrodes. The population vector was then constructed by taking the Euclidean distance d between adjacent time points within a given ROI, hence providing a single time course per ROI:

MDD = d(P(t), P(t+1))

Active periods were defined as epochs in which the smoothed (100-ms window) HFB signal exceeded a z-score of 1 for at least 50 ms. Functional connectivity was calculated by means of the absolute value of the imaginary coherency to control for spurious coupling arising from volume conduction effects. Before connectivity analysis, time-domain data were re-referenced to pairs that did not share a common reference (hippocampal contacts to an occipital bone/scalp electrode versus a bipolar scalp pair, e.g., Fz-Cz). To avoid biased connectivity estimates, 1-s segments were randomly subsampled and stratified across the different states (wake, NREM, and REM) to equate trial numbers before connectivity analysis.
Statistical analysis
Unless stated otherwise, we used cluster-based permutation tests to correct for multiple comparisons as implemented in FieldTrip (Monte Carlo method; 1000 iterations). Clusters were formed in time/frequency or space by thresholding two-tailed, dependent t tests or linear correlations at P < 0.05. Correlation values were transformed into t values using the formula

t = r × sqrt((N − 2) / (1 − r^2))

A permutation distribution was then created by randomly shuffling condition labels (paired t tests) or subject labels (correlation). The permutation P value was obtained by comparing the cluster statistic to the random permutation distribution. Clusters were considered significant at P < 0.05 (two-sided). Effect sizes were quantified by means of Cohen's d or the correlation coefficient rho. To obtain effect sizes for cluster tests, we calculated the effect size separately for all channel, frequency, and/or time points and averaged across all data points in the cluster. Repeated-measures ANOVAs were Greenhouse-Geisser–corrected. For rodent data and for intracranial EEG, we either averaged multiple observations per participant into one composite metric, which was then subjected to regular t tests, ANOVAs, or correlation analyses, or we used linear mixed-effects (LME) models with subjects as random intercepts.
P values were calculated on the pseudo-population and confirmed using LME models with subjects as random intercepts. Study 1: Two different strains of transgenic mice, PV-Cre mice (RRID:IMSR_JAX:008069; n = 4) and SOM-Cre mice (RRID: IMSR_JAX:013044; n = 4) were used. All mice were housed in groups of up to five animals under temperature-controlled and humidity-controlled conditions (22° ± 2°C; 45 to 65%) and a 12-hour/12-hour light/dark cycle. All recordings started during the first hour of the light phase, and only male mice older than 8 weeks were recorded. Procedures and data were the same as described previously . All experiments were approved by the local institutions in charge of animal welfare (CIN4/11. Regierungspräsidium Tübingen, State of Baden-Wuerttemberg, Germany). Study S1: The recordings were performed in five male Long Evans rats (Janvier, Le Genest-Saint-Isle, France, 280 to 340 g, 14 to 18 weeks old). Animals were kept on a 12-hour/12-hour light/dark cycle with lights off at 19:00 hours. Water and food were available ad libitum. All experiments were approved by the local institutions in charge of animal welfare (MPV3/13, Regierungspräsidium Tübingen, State of Baden-Wuerttemberg, Germany). Procedures and data were the same as described previously . Study 2: Fourteen younger (20.6 ± 2.2 years; mean ± SD) and 26 healthy older adults (73.0 ± 5.4 years; mean ± SD) participated in the study. Neurobehavioral correlations were highly comparable (see fig. S3). All participants provided written informed consent according to the local ethics committee (Berkeley Committee for Protection of Human Subjects Protocol Number 2010-01-595) and the Sixth Declaration of Helsinki. Here, we report a subset of participants from a larger cohort that also completed three resting state recordings in addition to overnight sleep recordings, which were unavailable for remainder of the participants . Study 3: Twelve young healthy controls (mean age: 23.2 ± 1.1 years; seven men, five women) participated in the study. All participants provided written informed consent according to the local ethics committee at the University of Mannheim (protocol number 2010-311 N-MA) and the Sixth Declaration of Helsinki. The resting state data were acquired in the context of a larger study investigating the effects of sleep deprivation on habituation but have not been reported previously . Study 4: We obtained intracranial recordings from 15 pharmacoresistant epilepsy patients (35.0 ± 11.1 years; mean ± SD; nine females) who underwent presurgical monitoring with implanted depth electrodes (Ad-Tech), which were placed stereo-tactically to localize the seizure onset zone. All patients were recruited from the University of California Irvine Medical Center, USA. Electrode placement was exclusively dictated by clinical considerations, and all patients provided written informed consent to participate in the study. Patients selection was based on magnetic resonance imaging (MRI)–confirmed electrode placement in the MTL and PFC from a larger cohort of 21 subjects . We only included patients where one seizure free night was available and a sufficient amount of REM sleep was recorded (see inclusion criteria below; two subjects did not exhibit simultaneous MTL and PFC coverage; four subjects did not exhibit sufficient REM sleep). The study was not preregistered. 
All procedures were approved by the Institutional Review Board at the University of California, Irvine (protocol number: 2014-1522) and conducted in accordance with the Sixth Declaration of Helsinki. Study 1: All animals were anesthetized with ketamine (0.1 mg/g) and xylazine (0.008 mg/g) with a supplement of isoflurane. For topical anesthesia, lidocaine was applied. Afterward, the animals were mounted on a stereotaxic frame. Body temperature was continuously monitored and maintained at 37°C. A custom-made headpost was glued to the skull and subsequently cemented with dental acrylic (Kulzer Palapress). Virus injection and the implantation of the imaging window followed headpost implantation. To this end, a craniotomy above the sensorimotor cortex (1.1 mm caudal and 1 to 1.3 mm lateral from the bregma) with a size of 1.2 × 2 mm was made. Afterward, two viruses (AAV2/1-syn-GCaMP6f 2.96 × 10 12 genomes/ml and AAV2/1-Flex- tdTomato 1.48 × 10 11 genomes/ml) were injected into multiple sites of the area of craniotomy (10 to 20 nl per site; 3 to 5 min per injection). The injection depth was between 130 and 300 mm. Virus injection was followed by the implantation of the imaging window (1 × 1.5 mm). The space between the skull and the imaging window was filled with agarose (1.5 to 2%), and then the imaging window was cemented with dental acrylic. EEG electrodes were implanted on the cortical surface of the contralateral hemisphere relative to the imaging window (−2.5 mm, lateral +2.5 mm from bregma). The reference electrodes were implanted on the brain surface 1 mm relative to lambda. Two wire electrodes were implanted into the neck muscle for EMG recordings (Science Products). After the surgery, all animals were brought back to their home cage and were single-housed for the rest of the experiments. They had at least 10 days of recovery from surgery before imaging sessions started. After handling the animals 10 min/day for 1 week, the animal was habituated to the head fixation. Habituation consisted of four sessions per day for 1 week with increasing fixation durations (30 s, 3 min, 10 min, and 30 min) interleaved by 10-min rest intervals. Habituation was conducted until 24 hours before the first imaging session during the early light phase. Study S1: Animals were anesthetized with an intraperitoneal injection of fentanyl (0.005 mg/kg of body weight), midazolam (2.0 mg/kg), and medetomidine (0.15 mg/kg). They were placed into a stereotaxic frame and were supplemented with isoflurane (0.5%) if necessary. The scalp was exposed and five holes were drilled into the skull. Three EEG screw electrodes were implanted: one frontal electrode [anteroposterior (AP): +2.6 mm, mediolateral (ML): −1.5 mm, with reference to bregma], one parietal electrode (AP: −2.0 mm, ML: −2.5 mm), and one occipital reference electrode (AP: −10.0 mm, ML: 0.0 mm). In addition, a platinum electrode was implanted into the right dorsal hippocampus [AP: −3.1 mm, ML: +3.0 mm, dorsoventral (DV): −3.6 mm]. Electrode positions were confirmed by histological analysis. One stainless steel wire electrode was implanted in the neck muscle for EMG recordings. Electrodes were connected to an electrode pedestal (PlasticsOne, USA) and fixed with cold polymerizing dental resin, and the wound was sutured. Rats had at least 5 days for recovery. Study 2: All participants were trained on the episodic word-pair task in the evening and performed a short recognition test after 10 min. 
Then, participants were offered an 8-hour sleep opportunity, starting at their habitual bedtime (table S1). Resting state recordings were obtained directly before and after sleep. Polysomnography was collected continuously. Participants performed a long version of the recognition test approximately 2 hours after awakening. Subsequently, we obtained structural MRI scans from all participants. Two older adults did not complete behavioral testing, and two young adults failed to achieve criterion at encoding. Thus, these four subjects were excluded from behavioral analyses but were included in all electrophysiological analyses. Study 3: In the 3 days before the experiment, sleep was monitored using an Actiwatch Device (Philips Respironics, Amsterdam). Participants were randomly assigned to either start in the sleep deprivation or habitual sleep group. In the experimental night, participants were either allowed to sleep and monitored using the Actiwatch device or kept awake and engaged by an experimenter. Recordings were obtained in the late AM or around noon. Study 4: We recorded a full night of sleep for every participant. Recordings typically started around 8:00 to 10:00 p.m. and lasted for ~10 to 12 hours (table S2). Only nights that were seizure-free were included in the analysis. Polysomnography was collected continuously. Study 2: We used a previously established sleep-dependent episodic memory task , where subjects had to learn word-nonsense word pairs . Briefly, words were 3 to 8 letters in length and drawn from a normative set of English words, while nonsense words were 6 to 14 letters in length and derived from groups of common phonemes. During encoding, subjects learned 120 word-nonsense pairs. Each pair was presented for 5 s. Participants performed the criterion training immediately after encoding. The word was presented along with the previously learned nonsense word and two new nonsense words. Subjects had to choose the correctly associated nonsense words and received feedback afterward. Incorrect trials were repeated after a variable interval and were presented with two additional new nonsense words to avoid repetition of incorrect nonsense words. Criterion training continued until correct responses were observed for all trials. During recognition, a probe word or a new (foil) probe word was presented along with four options: (i) the originally paired nonsense word, (ii) a previously displayed nonsense word, which was linked to a different probe (lure), (iii) a new nonsense word, or (iv) an option to indicate that the probe is new. During the recognition test after a short delay (10 min), 30 probe and 15 foil trials were presented. At the long delay (10 hours), 90 probe and 45 foil trials were tested. All probe words were presented only once during recognition testing, either during short or long delay testing. Study 1: Sleep stages were identified on the basis of EEG and EMG recordings during the imaging sessions. EEG and EMG signals were amplified, filtered (EEG: 0.01 to 300 Hz; EMG: 30 to 300 Hz), and sampled at a rate of 1000 Hz (Grass Technologies amplifier, model 15A54). On the basis of EEG/EMG signals for succeeding 10-s epochs, the brain state of the mouse was classified into wake, slow-wave sleep, and REM sleep stages. Sleep stages were determined with the software SleepSign for animals (Kissei Comtech). Study S1: Rats were habituated to the recording box [dark gray polyvinyl chloride (PVC), 30 cm by 30 cm, height: 40 cm] for 2 days, 12 hours/day. 
On the third day, animals were recorded for 12 hours, during the light phase, starting at 7:00 hours. The animal’s behavior was continuously tracked using a video camera mounted on the recording box. EEG, local field potential (LFP), and EMG signals were continuously recorded and digitalized using a CED Power 1401 converter and Spike2 software (Cambridge Electronic Design). During the recordings, the electrodes were connected through a swiveling commutator to an amplifier (Model 15A54, Grass Technologies). The screw electrode in the occipital skull served as reference for all EEG, LFP, and EMG recordings. Filtering was for the EEG between 0.1 and 300 Hz; for LFP signals, a high-pass filter of 0.1 Hz was applied; and for the EMG between 30 and 300 Hz, signals were sampled at 1 kHz. Study 2: Polysomnography sleep monitoring was recorded on a Grass Technologies Comet XL system (Astro-Med), including 19-channel EEG placed using the standard 10-20 system as well as electromyography (EMG). Electrooculogram (EOG) was recorded the right and left outer canthi. EEG recordings were referenced to bilateral linked mastoids and digitized at 400 Hz. Sleep scoring was performed according to standard criteria by Rechtschaffen and Kales in 30-s epochs . NREM sleep was defined as NREM stages 2 to 4. First and last NREM and REM epochs were defined as the first and last 5 min of the respective stages in the hypnogram. Study 3: Resting state EEG recordings were obtained using a 64-channel BrainAmp amplifier (Brain Products GmbH) EEG system with equidistant Ag-AgCl electrode positions (EasyCap, Herrsching, Germany). The central electrode of this layout corresponded to electrode Cz (10–20 layout) and was therefore used for between group comparisons. Study 4: We recorded from all available intracranial electrodes. To facilitate sleep staging based on established criteria, we also recorded scalp EEG, which typically included recordings from electrodes Fz, Cz, C3, C4, and Oz according to the international 10-20 system. EOG was recorded from four electrodes, which were placed around the right and left outer canthi. All electrophysiological data were acquired using a 256-channel Nihon Kohden recording system (model JE120A), analog-filtered at 0.01 Hz, and digitally sampled at 5000 Hz. All available artifact-free scalp electrodes were low-pass–filtered at 50 Hz, demeaned and detrended, down-sampled to 400 Hz, and referenced against the average of all clean scalp electrodes. EOGs were typically bipolar referenced to obtain one signal per eye. A surrogate EMG signal was derived from electrodes in immediate proximity to neck or skeletal muscles, by high-pass filtering either the ECG or EEG channels above 40 Hz. Sleep staging was carried out according to Rechtschaffen and Kales guidelines by trained personnel in 30-s segments as reported previously . Same conventions as in study 1 were used. Study 1: In vivo imaging was performed using a two-photon microscope based on the MOM system (Sutter) controlled by ScanImage software . The light source was a pulsed Ti:sapphire laser ( l = 980 nm; Chameleon; Coherent). Red and green fluorescence photons were collected with an objective lens (Nikon; 16×; 0.80 numerical aperture), separated by a 565-nm dichroic mirror (Chroma; 565dcxr) and barrier filters (green: ET525/70 m-2p; red: ET605/70 m- 2p), and measured using photomultiplier tubes (Hamamatsu Photonics; H10770PA-40). Imaging frames were visually inspected to exclude cross-talk between green and red channels. 
The imaging frame consisted of 1024 × 256 pixels, and the frame rate was 5.92 Hz (169 ms per frame). Images were collected in layer 2/3 at a depth of 150 to 250 mm. Study 4: We obtained anonymized postoperative computed tomography (CT) scans and presurgical MRI scans, which were routinely acquired during clinical care. MRI scans were typically 1 mm isotropic. Behavioral data analysis Study 2: Memory recognition was calculated by subtracting both the false alarm rate (proportion of foil words, which subjects reported as previously encountered) and the lure rate (proportion of words that were paired with a familiar but incorrect nonsense word) from the hit rate (correctly paired word-nonsense word pairs). Memory retention was subsequently calculated as the difference between recognition at long minus short delays. Study 2: Memory recognition was calculated by subtracting both the false alarm rate (proportion of foil words, which subjects reported as previously encountered) and the lure rate (proportion of words that were paired with a familiar but incorrect nonsense word) from the hit rate (correctly paired word-nonsense word pairs). Memory retention was subsequently calculated as the difference between recognition at long minus short delays. Preprocessing and data analysis Image analysis: Lateral motion was corrected in two steps. A cross-correlation–based image alignment (Turboreg) was performed, followed by a line-by-line correction using an algorithm based on a hidden Markov model . ROIs containing individual neurons were drawn manually, and the pixel values within each ROI were summed to estimate the fluorescence of this neuron. PV + and SOM + were manually detected by red fluorescence signal expressed by AAV2/1-Flex-tdtomato. The individual cell traces were calculated as the average pixel intensity within the ROIs for each frame. The cell traces were transformed into the percent signal change (Δ F / F ), in which the baseline for each cell was defined as the 20th percentile value of all frames within a ±3-min interval. We then extracted active frames (“calcium spikes”), which were defined as frames with Δ F / F signals two SDs above the mean in a sliding time window of ±3 min. To confirm that the neuropil signal did not affect our results and to compensate for background noise, we performed a standard neuropil subtraction for each cell’s fluorescence trace. The neuropil signal was estimated for each ROI as the average pixel value within two pixels around the ROI (excluding adjacent cells). The true signal was estimated as F ( t ) = FinROI − r × FaroundROI, where r = 0.7. Immunohistochemistry After finishing the experiments, mice were deeply anesthetized [ketamine (0.3 mg/g) and xylazine (0.024 mg/g), i.p.] and with 4% paraformaldehyde (PFA) in 0.1 M phosphate-buffered saline (PBS) intracardially perfused. Then, the brains were postfixed in 4% PFA at 4°C overnight and rinsed three times with 0.1 M PBS. Coronal slices (thickness, 65 mm) were blocked in 10% normal goat serum (NGS; Jackson ImmnunoResearch) and 0.3% Triton X-100 (Sigma-Aldrich) in 0.1 M PBS for 1.5 hours at room temperature. Slices were incubated with anti-PV rabbit primary antibody (1:1000; #24428, Immunostar, RRID: AB_572259) or anti-SOM rabbit primary antibody (1:1000; #T-4547, Peninsula Laboratories, RRID: AB_518618) in carrier solution (2% NGS and 0.3% Triton X-100 in PBS) for 48 hours at 4°C. 
Following 4× 10-min rinses with 0.1 M PBS, the slices were incubated in goat anti-rabbit immunoglobulin G antibodies conjugated either with Alexa Fluor 405 (for PV + staining, AB_221605) or Alexa Fluor 633 (for SOM + staining, AB2535732; both from Thermo Fisher Scientific; 1:1000) in carrier solution for 3 hours at room temperature on the shaker. Images were acquired on a confocal microscope (LSM 710, Carl Zeiss). Overall, the fraction of cells only expressing Alexa Fluor but not tdtomato and GCamp6f for PV + and SOM + was each below 2%. Image analysis: Lateral motion was corrected in two steps. A cross-correlation–based image alignment (Turboreg) was performed, followed by a line-by-line correction using an algorithm based on a hidden Markov model . ROIs containing individual neurons were drawn manually, and the pixel values within each ROI were summed to estimate the fluorescence of this neuron. PV + and SOM + were manually detected by red fluorescence signal expressed by AAV2/1-Flex-tdtomato. The individual cell traces were calculated as the average pixel intensity within the ROIs for each frame. The cell traces were transformed into the percent signal change (Δ F / F ), in which the baseline for each cell was defined as the 20th percentile value of all frames within a ±3-min interval. We then extracted active frames (“calcium spikes”), which were defined as frames with Δ F / F signals two SDs above the mean in a sliding time window of ±3 min. To confirm that the neuropil signal did not affect our results and to compensate for background noise, we performed a standard neuropil subtraction for each cell’s fluorescence trace. The neuropil signal was estimated for each ROI as the average pixel value within two pixels around the ROI (excluding adjacent cells). The true signal was estimated as F ( t ) = FinROI − r × FaroundROI, where r = 0.7. After finishing the experiments, mice were deeply anesthetized [ketamine (0.3 mg/g) and xylazine (0.024 mg/g), i.p.] and with 4% paraformaldehyde (PFA) in 0.1 M phosphate-buffered saline (PBS) intracardially perfused. Then, the brains were postfixed in 4% PFA at 4°C overnight and rinsed three times with 0.1 M PBS. Coronal slices (thickness, 65 mm) were blocked in 10% normal goat serum (NGS; Jackson ImmnunoResearch) and 0.3% Triton X-100 (Sigma-Aldrich) in 0.1 M PBS for 1.5 hours at room temperature. Slices were incubated with anti-PV rabbit primary antibody (1:1000; #24428, Immunostar, RRID: AB_572259) or anti-SOM rabbit primary antibody (1:1000; #T-4547, Peninsula Laboratories, RRID: AB_518618) in carrier solution (2% NGS and 0.3% Triton X-100 in PBS) for 48 hours at 4°C. Following 4× 10-min rinses with 0.1 M PBS, the slices were incubated in goat anti-rabbit immunoglobulin G antibodies conjugated either with Alexa Fluor 405 (for PV + staining, AB_221605) or Alexa Fluor 633 (for SOM + staining, AB2535732; both from Thermo Fisher Scientific; 1:1000) in carrier solution for 3 hours at room temperature on the shaker. Images were acquired on a confocal microscope (LSM 710, Carl Zeiss). Overall, the fraction of cells only expressing Alexa Fluor but not tdtomato and GCamp6f for PV + and SOM + was each below 2%. Preprocessing Study 1: EEG data from a frontal and parietal electrode were imported into MATLAB analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Epochs containing artifacts were labeled semi-automatically when a threshold of 6 SD was exceeded in the concurrently acquired EMG signal. 
Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1). Study S1: EEG data from a frontal and parietal electrode as well as a hippocampal electrode were imported into MATLAB analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1). Study 2/3—Resting state: EEG data were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, high-pass–filtered at 1 Hz, common average referenced, and epoched into 3-s-long segments with 50% overlap. Artifact detection was done semi-automatically for EOG, jump, and muscle artifacts and visually confirmed . Study 2—Sleep: EEG data were imported into FieldTrip, then demeaned, detrended, common average referenced, and epoched into non-overlapping 30-s segments. Artifact detection was done manually in 5-s segments . Study 4: Scalp EEG was demeaned, detrended, and locally referenced against the mean of all available artifact-free scalp electrodes. We applied a 50-Hz low-pass filter and down-sampled the data to 500 Hz. All scalp EEG analyses were done on electrode Fz. In a subset of subjects, Fz was not available and Cz was used instead of Fz. Intracranial EEG: In every subject, we selected all available electrodes in the MTL, which were then demeaned, detrended, notch-filtered at 60 Hz and its harmonics, bipolar referenced to its immediate lateral neighboring electrode, and lastly down-sampled to 500 Hz. We retained all MTL channels but discarded noisy PFC channels. We adopted a previously introduced approach where we first detected interictal epileptic discharges (IEDs) using automated detectors (see below), which were then excluded from further analysis. Last, we selected one MTL electrode per participant with the lowest number of overall detections. For PFC analyses, all available contacts in these regions were included, and the same preprocessing steps were applied. Then, all resulting traces were manually inspected, and noisy, epileptic, and artifact-contaminated PFC channels were excluded. Extraction of REM epochs and time normalization procedure Study 1: Sleep data were manually staged. REM epochs were detected on the basis of the emergence of a prominent theta rhythm (4 to 10 Hz) and reduction of EMG activity. Given that a NREM-REM-NREM triplet analysis was not feasible (see fig. S2), we selected N continuous REM epoch that spanned at least three 10-s epochs and included ± N adjacent epochs (termed pre-REM, mostly NREM and post-REM, mostly wake). This ensured that an equal amount of data was included to assess the relationship of population dynamics and aperiodic activity. The values within every epoch were then averaged into one composite value for calcium and EEG activity. Study3/4: REM epochs were detected on the basis of the manually staged hypnogram according to established Rechtschaffen and Kales guidelines . We first detected all REM epochs and then selected artifact-free epochs that spanned at least three consecutive epochs (90 s) and required that the majority of adjacent periods within a time window ±9 min were staged as NREM sleep (9 min were chosen to match the 9 min of resting state data reported in as well as to match the average, artifact- and interruption-free duration of individual NREM epochs: study 3: 10 ± 13.9 min; study 4: 7 ± 16.7 min; median ± SD). 
Subsequently, the identified REM epochs were extracted as continuous time-domain signals and then epoched into 100 overlapping epochs and subjected to multitaper spectral analysis as outlined below. Similarly, the adjacent NREM data were epoched into 10-s-long segments with 70% overlap. The spectral estimates were then concatenated to form the final time-normalized triplet in the frequency domain. For statistical testing, we omitted the transition states and selected one third of the time-normalized epoch (beginning, center, and end of the triplet, respectively) for subsequent testing. We also repeated the entire analysis on more liberal criteria (fig. S5; inclusion of brief epochs of NREM1 or microarousals as well as episodes that were staging was uncertain) as outlined by Watson et al. . Here, the preceding and following NREM epochs were also time-normalized (in contrast to taking a fixed window) into 100 overlapping epochs and subjected to multitaper spectral analysis. In addition, we extracted time-normalized NREM epochs where continuous NREM epochs were equally epoched into 100 overlapping segments (figs. S5G and S7C). Spectral analysis Scalp EEG (studies 1, S1, and 2 to 4): Resting state spectral estimates were obtained through multitaper spectral analyses , based on discrete prolate slepian sequences. Spectral estimates were obtained between 1 and 50 Hz in 1-Hz steps. We adapted the number of tapers to obtain a frequency smoothing of ±2 Hz. For studies 1 and S1, we used an upper cutoff of 35 Hz given a broad hardware notch filter artifact from 40 to 60 Hz. Intracranial EEG (study 4): Spectral estimates were by means of multitaper spectral analyses based on discrete prolate spheroidal sequences in 153 logarithmically spaced bins between 0.25 and 181 Hz . We adjusted the temporal and spectral smoothing to approximately match a ±2-Hz frequency smoothing. Estimation of aperiodic background activity Aperiodic activity was estimated from three parameters of the electrophysiological power spectrum: spectral slope x (the negative exponent of the 1/ f x decay function), y intercept, and the population time constant (the frequency where a bend/“knee” occurs in the 1/ f spectrum). Note that the slope and y intercept provided redundant information (correlated at rho = −0.98, P < 0.0001; Spearman correlation), thus, analyses focused on the spectral slope. FOOOF fitting: To obtain estimates of aperiodic background activity, we first used the FOOOF algorithm . EEG spectra were fitted in the range from 1 to 45 Hz. Aperiodic background activity was defined by its slope parameter χ, the y intercept c , and a constant k (reflecting the knee parameter). aperiodic fit = 10 c ∗ 1 ( k + f 1 X ) The relationship of the knee parameter and the knee frequency is given by knee frequency = k 1 χ If a knee parameter could not be determined, then we refitted the spectrum in the fixed mode, which is equivalent to a linear fit where k = 0. Polynomial fitting: To estimate the spectral slope in different frequency bands, we also used first-degree polynomial fitting , thus yielding an instantaneous spectral exponent (slope, χ) and offset ( y -axis intercept, c ), for a given fitting range. 
EEG spectra were fitted using variable endpoints (from 1 to 5 to 45 Hz, 5-Hz steps), variable starting points (to 45 Hz, from 5 to 40 Hz, 5-Hz steps), a fixed bandwidth with varying center frequencies (5 to 45 Hz; ± 5 Hz), or in comparable ranges (e.g., 20 to 45 Hz; correlation to FOOOF estimates rho = 0.99, P < 0.0001; Spearman correlation). Typically, we report the spectral slope as obtained from the FOOOF model when fitted up to 45 Hz. In several instances, this approach was complemented by first-degree polynomial fitting to avoid high-frequency artifacts (e.g., in studies 1 and S1 from ~40 Hz; hence, we restricted the fitting up to 35 Hz), the presence of a variable spectral knee (bend of the power spectrum) or to highlight a specific frequency range in intracranial EEG, where the spectrum was estimated up to 180 Hz; hence, rendering a direct comparison of FOOOF iEEG and EEG estimates impractical. After the initial principled approach, we empirically determined the range with the highest correlation to behavior (fig. S3E; 25 to 45 Hz) and consequently used this range for all subsequent analyses. Event detection SOs: Event detection was performed for every channel separately based on previously established algorithms . We first filtered the continuous signal between 0.16 and 1.25 Hz and detected all the zero crossings. Then, events were selected on the basis of time (0.8- to 2-s duration) and amplitude (75% percentile) criteria. Last, we extracted 5-s-long segments (±2.5 s centered on the trough) from the raw signal and discarded all events that occurred during an IED. Sleep spindles: On the basis of established algorithms , we filtered the signal between 12 and 16 Hz and extracted the analytical amplitude after applying a Hilbert transform. We smoothed the amplitude with a 200-ms moving average. Then, the amplitude was thresholded at the 75% percentile (amplitude criterion), and only events that exceeded the threshold for 0.5 to 3 s (time criterion) were accepted. Events were defined as sleep spindle peak-locked 5-s-long epochs (±2.5 s centered on the spindle peak). Ripples: The signal was first filtered in the range from 80 to 120 Hz, and the analytical amplitude was extracted from a Hilbert transform in accordance with previously reported detection algorithms . The analytical signal was smoothed with a 100-ms window and z -scored. Candidate events were identified as epochs exceeding a z -score of 2 for at least 25 ms and a maximum of 200 ms and had to be spaced by at least 500 ms. We determined the instantaneous ripple frequency by detecting all peaks within the identified segment. The identified events were time-locked to the ripple trough in a time window of ±0.5 s. Overlapping epochs were merged. Epochs that contained IEDs or sharp transients were discarded. Beta/Gamma burst detection: For fig. S9, we detected individual bursts in the range from 25 to 45 Hz, where the spectral slope was estimated, using the procedure outlined here . Briefly, we segmented the continuous LFP signal into 30-s trials and obtained single-trial spectral estimates between 1 and 50 Hz in 0.5-Hz steps with a frequency smoothing of 4 Hz. Oscillatory bursts were identified per trial by thresholding (mean ± 2 SD) the average, z -normalized spectral power for the frequency band of interest (25–45 Hz) relative to the mean, and SD over a reference period of 10 trials (current trial plus subsequent nine). 
Beta/Gamma burst detection: For fig. S9, we detected individual bursts in the range from 25 to 45 Hz, where the spectral slope was estimated, using the procedure outlined here . Briefly, we segmented the continuous LFP signal into 30-s trials and obtained single-trial spectral estimates between 1 and 50 Hz in 0.5-Hz steps with a frequency smoothing of 4 Hz. Oscillatory bursts were identified per trial by thresholding the average, z-normalized spectral power in the frequency band of interest (25 to 45 Hz) at mean ± 2 SD, where the mean and SD were computed over a reference period of 10 trials (the current trial plus the subsequent nine). Only bursts with a minimum duration of three oscillatory cycles of the mean frequency of interest were considered. A two-dimensional Gaussian was subsequently fitted to the time-frequency map. Burst duration was defined as the time during which the average power in the frequency band of interest exceeded half of the local maximum, as determined by the local Gaussian fit. Burst frequency was defined as the peak of the Gaussian fit. Oscillatory bursts that coincided with interictal epileptiform discharges (within ±1 s of the burst peak) were omitted. Subsequently, we obtained a burst rate per 30-s segment for every participant and channel separately. At the individual subject and channel level, we calculated the correlation coefficient between the PSD slope and the burst rate across the entire night. We used a random block-swap procedure (1000 times; random breakpoint and block swap of the slope vector) to obtain a surrogate distribution and then normalized the observed correlation coefficient relative to this surrogate distribution to obtain a z-value. IED detection: We detected IEDs using automated algorithms on all channels located in the MTL. All cutoffs were chosen in accordance with recently published findings and were confirmed by a neurologist, who visually verified the detected events. The continuous signal was filtered forward and backward between 25 and 80 Hz, and the analytical amplitude was extracted from the Hilbert transform and then z-scored. Events were detected when this signal was 3 SD above the mean for more than 20 ms and less than 100 ms. HFB, population activity, and active periods analysis: HFB activity is typically defined from 70 to 180 Hz . To avoid confounding true HFB activity with ripple-band activity (upper cutoff, ~120 Hz), we defined HFB activity as the average power in the range from 120 to 180 Hz. The multitaper spectral estimates were averaged into a single trace per electrode. The dynamics of the population activity were expressed as a population vector . At every time point, HFB activity was represented as a point P in an n-dimensional coordinate system, where n reflects the number of electrodes. The population vector was then constructed by taking the Euclidean distance d between adjacent time points within a given ROI, hence providing a single time course per ROI:
MDD = d(P_t^n, P_(t+1)^n)
Active periods were defined as epochs where the smoothed (100-ms window) HFB signal exceeded a z-score of 1 for at least 50 ms . Functional connectivity was calculated by means of the absolute value of the imaginary coherency to control for spurious coupling arising from volume conduction effects. Before connectivity analysis, time-domain data were re-referenced to pairs that did not share a common reference (hippocampal contacts referenced to an occipital bone/scalp electrode versus a bipolar scalp pair, e.g., Fz-Cz). To avoid biased connectivity estimates, 1-s segments were randomly subsampled and stratified across the different states (wake, NREM, and REM) to equate trial numbers before connectivity analysis. Study 1: EEG data from a frontal and a parietal electrode were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Epochs containing artifacts were labeled semi-automatically when a threshold of 6 SD was exceeded in the concurrently acquired EMG signal. Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1).
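The population-vector measure and the active-period criterion defined earlier in this section can be sketched in a few lines; the following Python/NumPy example uses hypothetical array shapes and a made-up sampling rate and is not the authors' implementation.

# Sketch of the population-vector measure defined above, MDD(t) = d(P_t, P_(t+1)),
# and of an active-period criterion on a smoothed, z-scored trace. Shapes are hypothetical.
import numpy as np

def population_mdd(hfb):
    """hfb: array of shape (n_electrodes, n_timepoints) of HFB power."""
    diffs = np.diff(hfb, axis=1)                     # P_(t+1) - P_t per electrode
    return np.linalg.norm(diffs, axis=0)             # Euclidean distance per time step

def active_periods(signal, fs, z_thresh=1.0, min_ms=50, smooth_ms=100):
    """Epochs where the smoothed, z-scored trace exceeds a z-score threshold."""
    win = max(1, int(smooth_ms / 1000 * fs))
    smoothed = np.convolve(signal, np.ones(win) / win, mode="same")
    z = (smoothed - smoothed.mean()) / smoothed.std()
    above = np.r_[z > z_thresh, False]
    periods, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs * 1000 >= min_ms:
                periods.append((start / fs, i / fs))
            start = None
    return periods

hfb = np.random.default_rng(2).random((8, 1000))      # 8 electrodes, toy data at 100 Hz
mdd = population_mdd(hfb)
hfb_mean = hfb.mean(axis=0)
print(mdd.shape, len(active_periods(hfb_mean, fs=100.0)))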
Study S1: EEG data from a frontal and a parietal electrode, as well as a hippocampal electrode, were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, and epoched into 10-s segments. Data were referenced to a bipolar pair (frontal-parietal) for selected analyses (e.g., fig. S1). Study 2/3—Resting state: EEG data were imported into MATLAB and analyzed using the FieldTrip toolbox. Raw recordings were demeaned, detrended, high-pass-filtered at 1 Hz, common-average referenced, and epoched into 3-s-long segments with 50% overlap. Artifact detection was done semi-automatically for EOG, jump, and muscle artifacts and visually confirmed . Study 2—Sleep: EEG data were imported into FieldTrip, then demeaned, detrended, common-average referenced, and epoched into non-overlapping 30-s segments. Artifact detection was done manually in 5-s segments . Study 4: Scalp EEG was demeaned, detrended, and locally referenced against the mean of all available artifact-free scalp electrodes. We applied a 50-Hz low-pass filter and down-sampled the data to 500 Hz. All scalp EEG analyses were done on electrode Fz. In a subset of subjects, Fz was not available and Cz was used instead. Intracranial EEG: In every subject, we selected all available electrodes in the MTL, which were then demeaned, detrended, notch-filtered at 60 Hz and its harmonics, bipolar-referenced to the immediately adjacent lateral electrode, and finally down-sampled to 500 Hz. We retained all MTL channels but discarded noisy PFC channels. We adopted a previously introduced approach in which we first detected interictal epileptic discharges (IEDs) using automated detectors (see below), which were then excluded from further analysis. Last, we selected the one MTL electrode per participant with the lowest number of overall detections. For PFC analyses, all available contacts in these regions were included, and the same preprocessing steps were applied. All resulting traces were then manually inspected, and noisy, epileptic, and artifact-contaminated PFC channels were excluded. Study 1: Sleep data were manually staged. REM epochs were detected on the basis of the emergence of a prominent theta rhythm (4 to 10 Hz) and a reduction of EMG activity. Given that an NREM-REM-NREM triplet analysis was not feasible (see fig. S2), we selected continuous REM episodes (N epochs) that spanned at least three 10-s epochs and included the ±N adjacent epochs (termed pre-REM, mostly NREM, and post-REM, mostly wake). This ensured that an equal amount of data was included to assess the relationship between population dynamics and aperiodic activity. The values within every epoch were then averaged into one composite value for calcium and EEG activity. Study 3/4: REM epochs were detected on the basis of the manually staged hypnogram according to established Rechtschaffen and Kales guidelines . We first detected all REM epochs and then selected artifact-free episodes that spanned at least three consecutive epochs (90 s) and required that the majority of adjacent periods within a time window of ±9 min were staged as NREM sleep (9 min were chosen to match the 9 min of resting-state data reported in , as well as the average, artifact- and interruption-free duration of individual NREM epochs: study 3: 10 ± 13.9 min; study 4: 7 ± 16.7 min; median ± SD).
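A hedged sketch of the time-normalization step referred to above (cutting a variable-length episode into 100 overlapping segments so that episodes of different durations share a common time base) is given below; the window length and overlap scheme are assumptions chosen only for illustration.

# Hedged sketch of time-normalization: a variable-length episode is cut into a fixed
# number (100) of equally spaced, overlapping windows. The window-length heuristic is
# an illustrative assumption, not the published scheme.
import numpy as np

def time_normalize(signal, n_segments=100, overlap=0.5):
    n = len(signal)
    seg_len = max(1, int(n / (n_segments * (1 - overlap) + overlap)))
    starts = np.linspace(0, n - seg_len, n_segments).astype(int)
    return np.stack([signal[s:s + seg_len] for s in starts])     # shape (100, seg_len)

rem = np.random.default_rng(3).standard_normal(45_000)           # e.g., 90 s at 500 Hz
segments = time_normalize(rem)
print(segments.shape)   # one row per normalized time bin; spectra would be computed per row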
Unless stated otherwise, we used cluster-based permutation tests to correct for multiple comparisons, as implemented in FieldTrip (Monte Carlo method; 1000 iterations). Clusters were formed in time/frequency (e.g., ) or space (e.g., ) by thresholding two-tailed, dependent t tests or linear correlations at P < 0.05. Correlation values were transformed into t values using the following formula:
t = r * sqrt((N − 2)/(1 − r^2))
A permutation distribution was then created by randomly shuffling condition labels (paired t tests) or subject labels (correlation). The permutation P value was obtained by comparing the cluster statistic to the random permutation distribution. Clusters were considered significant at P < 0.05 (two-sided). Effect sizes were quantified by means of Cohen's d or the correlation coefficient rho. To obtain effect sizes for cluster tests, we calculated the effect size separately for all channel, frequency, and/or time points and averaged across all data points in the cluster.
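The two ingredients of this procedure, the r-to-t conversion and the construction of a permutation distribution by shuffling labels, can be illustrated with a toy Python example. This is not the FieldTrip cluster-based routine itself, and the data are synthetic.

# Toy illustration: t = r * sqrt((N - 2) / (1 - r^2)) and a simple label-shuffling
# permutation distribution. Not the cluster-forming step, only its ingredients.
import numpy as np

def r_to_t(r, n):
    return r * np.sqrt((n - 2) / (1 - r ** 2))

rng = np.random.default_rng(4)
n = 20
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)               # toy paired measurements

r_obs = np.corrcoef(x, y)[0, 1]
t_obs = r_to_t(r_obs, n)

# Permutation distribution: shuffle one variable to break the pairing
perm_t = np.array([
    r_to_t(np.corrcoef(x, rng.permutation(y))[0, 1], n) for _ in range(1000)
])
p_value = (np.sum(np.abs(perm_t) >= np.abs(t_obs)) + 1) / (len(perm_t) + 1)
print(f"r = {r_obs:.2f}, t = {t_obs:.2f}, permutation p = {p_value:.3f}")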
Repeated-measures ANOVAs were Greenhouse-Geisser-corrected. For rodent data and for intracranial EEG , we either averaged multiple observations per participant into one composite metric, which was then subjected to regular t tests, ANOVAs, or correlation analyses, or we used LME models with subjects as random intercepts. P values were calculated on the pseudo-population and confirmed using LME models with subjects as random intercepts.
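As an illustration of an LME model with subjects as random intercepts, the following Python sketch uses statsmodels on a made-up data frame; the column names (subject, slope, burst_rate) are hypothetical placeholders, and the original analyses were not performed with this code.

# Sketch of a linear mixed-effects model with subjects as random intercepts,
# fitted on simulated data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
subjects = np.repeat(np.arange(10), 30)                       # 10 subjects x 30 epochs
subject_offset = rng.standard_normal(10)[subjects]            # per-subject random intercepts
slope = rng.standard_normal(subjects.size)
burst_rate = 0.4 * slope + subject_offset + rng.standard_normal(subjects.size)

df = pd.DataFrame({"subject": subjects, "slope": slope, "burst_rate": burst_rate})
model = smf.mixedlm("burst_rate ~ slope", df, groups=df["subject"]).fit()
print(model.summary())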
Neuroonkologie: Herausforderungen und Perspektiven | 88ca0ec6-fe1c-4275-9b42-ad4f9fb31ff7 | 10850005 | Internal Medicine[mh] | |
Barriers to diffusion and implementation of pediatric minimally invasive surgery in Brazil | 520cd04c-2f85-4d63-a202-a7d17e510984 | 11342547 | Pediatrics[mh] | The literature defines the benefits of minimally invasive surgery (MIS), particularly in reducing surgical trauma, infection, and operative stress . In addition to benefiting patients, MIS may significantly reduce hospital costs . Despite the established evidence, minimally invasive techniques are not commonly the first choice for the pediatric and neonatal population, especially in low- and middle-income countries (LMICs) . The main technological, technical, and epistemological barriers impose great difficulty in the transition of MIS in pediatric surgery, along with the rarity of pathologies . These barriers are presumed to have a greater impact on the implementation of MIS in LMICs , although there are limited data on whether there are limiting factors in pediatric surgery scenarios. This study aims to gather data on the potential aspects limiting the implementation and dissemination of pediatric MIS and to examine the current training of pediatric surgeons in Brazil and the healthcare they provide. A cross-sectional survey was conducted nationwide in Brazil from January 2022 to July 2022. The samples were taken by convenience from the population of pediatric surgeons. The inclusion criterion for participants was that they be a graduated and active pediatric surgeon. No exclusion criteria were formally defined. Data were collected via an online questionnaire distributed by the Brazilian Pediatric Surgery Association to all registered associates. Written informed consent was obtained from all participants upon completing the questionnaire. The questionnaire was developed via Google ® Forms and distributed electronically. Face equivalence and content validity were assessed by local experts in pediatric surgery and the MIS. The data collected were divided into three sections to evaluate the technological, technical, and epistemological limitations concerning MIS among the participants. It included demographic information, current MIS procedures, previous training, and a section including subjective perspectives. To eliminate subjective interpretation when assessing MIS procedures performed, respondents were required to provide answers to each specified procedure, which were then grouped into major, intermediate, or minor categories after data collection. Most of the questions used a 5-point Likert scale to reduce potential response bias. In the “Limitations” section, a consensus was reached regarding the interpretation of the 5-point Likert scale questions. The response “half of the time” was designated as the threshold for considering a factor statistically limiting the application of minimally invasive methods (median ≤ 3). The participants were divided into three groups on the basis of the length of their previous MIS training: no previous training, short-term training (weekend or premeeting courses), and long-term training (extension, postgraduate or fellowship). No personal information was recorded, ensuring that the questionnaire was anonymous. Most questions were mandatory, minimizing missing data for the final analysis. The data were retained via Microsoft Excel ® , and statistical analysis was performed via IBM SPSS Statistics v26 ® . Likert scale questions were treated as interval quantitative variables, and the median was used. The internal consistency of the sections was measured by Cronbach’s alpha when appropriate. 
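Two of the analyses named in this methods description, internal consistency via Cronbach's alpha and chi-square testing of answer frequencies, can be illustrated with a short Python sketch. The data below are made up, the function is a textbook implementation rather than the SPSS routine the authors used, and the contingency table is purely hypothetical.

# Hypothetical illustration only: Cronbach's alpha for a block of Likert items and a
# chi-square test on a made-up 2x2 contingency table (e.g., training length vs.
# whether a factor was rated as limiting).
import numpy as np
from scipy.stats import chi2_contingency

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items) of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(6)
likert_block = rng.integers(1, 6, size=(187, 5))              # 187 respondents, 5 items, scores 1-5
print(f"Cronbach's alpha: {cronbach_alpha(likert_block):.2f}")

table = np.array([[53, 44], [41, 56]])                        # hypothetical counts
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")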
The chi-square test was used to analyze percentages. The remaining variables were subjected to descriptive analysis. The original questionnaire (in Portuguese) and the translated version (English) are available as . This study was approved by the Hospital de Clínicas de Porto Alegre Ethics Committee on April 8th, 2021 (#40938120.7.0000.5327). A total of 187 surgeons participated in the study, representing approximately 12% of all trained surgeons in Brazil. There were no missing data, and no participants were excluded. The states with the highest representation were São Paulo ( n = 46; 24.6%), Rio Grande do Sul ( n = 38; 20.3%), Minas Gerais ( n = 18; 9.6%), Rio de Janeiro ( n = 17; 9.1%), Paraná ( n = 15; 8%), Santa Catarina ( n = 13; 7%), Bahia ( n = 6; 3.2%), and Pernambuco ( n = 6; 3.2%). Other states represented less than 3%, and five states had no representation in the study. Each state has a different number of active pediatric surgeons, so even states with fewer participants can still accurately reflect their unique circumstances. This study revealed a balanced representation, with 33.5% of pediatric surgeons represented by the state. Almost all participants ( n = 178; 95%) worked in referral centers, either to the city or region ( n = 86; 46%), the state ( n = 62; 33%), or the country ( n = 30; 16%). Analysis of the technological barriers The largest portion of the sample ( n = 114; 61%) worked in state capital cities, whereas 39% ( n = 73) worked in inner cities. Among the participants, 142 (76%) worked in both public health system (SUS – Sistema Único de Saúde ) and private health system (PH) hospitals, with 42% ( n = 82) working preferentially in the SUS and 40% ( n = 71) providing equal assistance to both health systems. When asked about the difference in the care they provided to the SUS and PH, over half of the sample ( n = 95; 51%) reported providing a better quality of care to the PH, with 30% ( n = 56) considering it slightly better and 21% ( n = 39) significantly better. Only 6% ( n = 11) reported providing better assistance to the SUS, whereas 41% ( n = 76) believed that they provided balanced assistance to both. The fellowship graduation years of the participants ranged from 1970 to 2022, with the largest portion ( n = 101; 54%) graduating between 1991 and 2010. With respect to specialization, 70.6% ( n = 132) worked as general pediatric surgeons, 18.2% ( n = 43) as pediatric urologists, 7% ( n = 13) as pediatric surgical oncologists, and 4% ( n = 8) as pediatric thoracic surgeons. In terms of previous training, 33% ( n = 62) had no experience with laparoscopic surgery during their fellowship. Among the remainder, 43% ( n = 80) were exposed to basic procedures, and 24% ( n = 45) were exposed to advanced procedures. 15% ( n = 28) of the sample had not performed any additional training in MIS. Among these, 64.3% ( n = 18) cited a lack of financial support or encouragement from their department as the reason. Difficulty in arranging time away from professional activities was the reason for 28.5% ( n = 8), and only 7.2% ( n = 2) reported that it was their own decision. Among the surgeons who undertook some type of training, 83.6% ( n = 133/159) used live animal models, 69.8% (111/159) used white box trainers, 37% (59/159) used virtual reality, and 6.3% ( n = 10/159) used specific experimental models. Most participants ( n = 122; 65%) worked as staff for general surgery residents or pediatric surgery fellows. 
Of these, only 47% ( n = 57) reported providing dedicated training space available to their fellows. Additionally, 52% ( n = 64) agreed that their trainees received sufficient hands-on practice in MIS during their program, whereas 33% ( n = 40) disagreed. Furthermore, 70% ( n = 85) of the participants reported that their ability to train fellows had remained the same since the advent of MIS, and 14% ( n = 17) reported that it had declined (Fig. ). When assessing the limitations of this barrier, 64.7% ( n = 121) of the respondents considered the “lack of basic instruments” (median 2) to be a significant limiting factor. The “lack of infrastructure” was not statistically significant for the application of MIS, with only 32% ( n = 60) considering it a limitation (median 4). Additionally, there was no significant difference between surgeons working in capital cities and those working in other cities regarding the “lack of basic instruments” (Fig. ). Analysis of the technical barriers Over half the sample ( n = 94; 50,2%) of the participants considered their “own lack of training” (median 3) a limiting factor. However, the “need for suturing and intracorporeal knot-tying” ( n = 82; 43,8%, median 4) and their own “lack of knowledge update” ( n = 67; 35,8%, median 4) were not found to be statistically significant limiting factors for the application of pediatric MIS. This section of questions, along with “lack of instruments” and “lack of infrastructure”, resulted in a Cronbach’s alpha of 0.75 (Fig. ). 85% ( n = 159) of the sample had undergone some sort of extra training in MIS. Of these, 61% ( n = 97/159) took long courses, whereas 39% ( n = 62/159) took short courses. Among those who identified their own “lack of training” as a limiting factor, 58.9% ( n = 53) had either no training or only short courses, whereas 42.3% ( n = 41) had taken long courses ( p = 0.017). The factors “my own lack of training” and “needs for suturing and intracorporeal knot-tying” were considered limiting, with a statistically significant difference observed when those who took long courses were compared to those who had no previous training ( n = 41; 42.3% vs. n = 18; 64.3%, p = 0.033 and n = 37; 38.1% vs. n = 17 60.7%, p = 0.029; respectively). This significant difference was not observed when comparing short- to long-term training or no training to short training. In categorizing surgical assistance, MIS procedures were classified as “major,” “intermediate,” or “minor” on the basis of their technical complexity or their magnitude. When the group of major procedures ( n = 1692 answers) was analyzed, 14% ( n = 244) of the surgeons reported performing most procedures through MIS. In comparison, 31% ( n = 525) considered themselves performing only selected cases, and 55% ( n = 923) reported performing these procedures solely through an open approach (Fig. ). In terms of intermediate procedures ( n = 846 answers), 57% ( n = 486) of the surgeons considered themselves performing most of them through MIS, 26% ( n = 217) performed MIS only for selected cases, and 17% ( n = 143) performed these procedures exclusively via the open approach (Fig. ). Among the minor procedures, 63% ( n = 573) of the surgeons considered performing most of them through MIS, 19% ( n = 177) reported performing MIS only for selected cases, and 17% ( n = 159) reported performing MIS only through the open approach (Fig. ). 
When asked whether they perform endoscopic procedures, 56% ( n = 105) of the surgeons reported performing at least some type, with some performing more than one. Among these, 69% ( n = 85/124) performed endoscopic examination of the urinary tract, 27% ( n = 27/124) performed endoscopic evaluation of the airway, and 10% ( n = 12/124) performed endoscopic evaluation of the digestive tract.
Analysis of the epistemological barriers
When asked whether MIS should be reserved for centers with highly trained staff, 72% ( n = 134) of the participants disagreed, 16% ( n = 31) agreed, and 12% ( n = 22) remained neutral. When asked if they felt more satisfied performing MIS procedures than open surgery, 70% ( n = 131) agreed, 10% ( n = 19) disagreed, and 20% ( n = 37) remained neutral (Fig. ). Most surgeons disagreed ( n = 133; 72%) with the statement that incisions in pediatric surgery are already so small that MIS does not offer as much benefit as in adult surgery, whereas 10% ( n = 18) agreed, and 19% ( n = 36) remained neutral (Fig. ). The majority ( n = 178; 95%) agreed that extra training in simulation is either necessary or indispensable. When questioned about video and robotic surgery, 25% ( n = 47) believed that robotic surgery would completely replace video surgery, whereas 51% ( n = 95) disagreed (Fig. ). Among all the participants, 95% ( n = 178) reported not performing robotic surgery, whereas 4% ( n = 8) performed it in pediatric general surgery and 1% ( n = 1) in pediatric urology. Additionally, 57% ( n = 107) of the surgeons worked in hospitals without a robotic platform. Another 34% ( n = 64) reported having the platform available only in private hospitals, 5% ( n = 9) in private and public hospitals, and 4% ( n = 7) in public hospitals. Finally, 36% ( n = 67) of the participants (median 4) considered their own "lack of knowledge update" to be a limiting factor for performing MIS in children, although this finding was not statistically significant (Fig. ). Moreover, 57% ( n = 105) of the participants stated that they would definitely attend future courses in pediatric MIS, 31% ( n = 56) believed they would, and 8% ( n = 14) reported no interest.
MIS is considered one of the greatest advances in recent medical practice. Since the first application of laparoscopy in general surgery in 1985 , it has gained popularity worldwide. However, only in the mid-1990s, after technological advances and improvements in surgeons' technical skills, could MIS be introduced for the pediatric and neonatal population. This progress has allowed more complex procedures to be performed on smaller patients over time . Despite evidence that MIS has grown over the past 30 years , there is still resistance to its wide adoption , particularly in LMICs . In the field of pediatric surgery, several barriers can be identified.
These barriers can be grouped into three main categories, each representing a series of specific limitations that individually or collectively contribute to the difficulty in implementing MIS for the pediatric population in Brazil. Technological barriers Technological barriers are related primarily to the initial costs of investments and the availability of adequate equipment, instrumentation, and staff training . This barrier is particularly significant in LMICs, given the existence of many public hospitals and the bureaucratic hurdles in generating funds for improvements that require any type of financial investment . Furthermore, the high initial costs must be justified by the presence of qualified teams of trained surgeons and anesthesiologists . The participants in the study represented most of Brazil’s states and regions, and 61% were located in state capitals. Most surgeons identified the lack of basic instruments as a limiting factor, likely due to the high number of surgeons assisting patients originating from the SUS. Slightly more than half of the participants believed that they provided superior-quality care to patients in the public health sector. In LMICs, MIS programs in public institutions often become restricted or even extinguished due to resource limitations. This can result from a lack of technical support or difficult access to equipment maintenance, especially when relying on donations . Consequently, surgeons may turn to PH operators to provide more up-to-date care to their patients, contributing to a scarcity of health professionals in the SUS. This resource limitation is also reflected in the lower percentage of current fellows (47%) with dedicated facilities or models for training during their programs. Additionally, most surgeons without training considered the lack of financial support from their department to be the main reason for this, highlighting the absence of a standard training curriculum . This situation ultimately prompts surgeons and fellows to seek specific MIS training driven by personal interest and self-funding. The importance of a structured training method is also endorsed by the fact that pediatric surgery fellows have limited exposure to neonatal MIS during their program . This lack of exposure results from a dilution of the case volume, mainly due to the high ratio of diverse conditions to their low incidence. This impacts the learning curve and prolongs the time needed to reach proficiency , which is directly correlated with the technical aspects of MIS. Technical barriers Technical barriers are related to the surgeon and the surgical procedure . Only 14% of surgeons reported performing major procedures via MIS, with the majority still using the conventional open approach (Fig. ). This proportion gradually increased as participants were asked about intermediate and minor procedures (Figs. and ). In a recent study, 15% of esophageal atresia cases from 2016 to 2019 were repaired thoracoscopically . In this study, approximately 7% of the surgeons reported performing most cases via thoracoscopy. However, these rates are indirect measurements and are likely overestimated. These findings are likely due to the technical demands of reconstructive MIS procedures. These types of procedures usually require intracorporeal suturing (ICS) and knot-tying (KT) and are known to be major barriers to wider adoption of MIS . In pediatric surgery, especially in neonates, the surgeon’s skills need to be further developed and refined. 
Instrument manipulation and the need for ICS KT are complicated by the restricted workspace and the fragility of the visceral structures . The development of these skills requires extensive training and should be planned progressively and ideally, not first on the patient . Regarding previous training, 85% of the participants reported having some form of additional training, with a balance between long (52.4%) and short courses (47.6%). There was a significant difference in the perception of insufficient training and the need for ICS as limiting factors for performing MIS between those with no training or short-term courses and those with long-term courses (38.1% vs. 60.7%, p = 0.029). These findings align with the literature, reinforcing not only that pediatric MIS has a steep learning curve but also that maintaining these skills is crucial, as they seem to fade over time without ongoing training . These findings highlight the need for deliberate practice and the crucial role of simulation-based education for pediatric surgeons and fellows seeking to perform advanced procedures safely . This method allows professionals to develop the necessary skills in a controlled, stress-free environment that fundamentally allows errors without jeopardizing patient safety. Some authors have shown that even low-cost and simple simulators offered in short-term training courses can improve surgeons’ technical skills , which is especially important in LMICs . Epistemological barriers Epistemological barriers in medical practice refer to long-established beliefs and sociocultural aspects that hinder clinical progress, often driven by skepticism and resistance to change . In this research, the term describes philosophical obstacles to adopting minimally invasive approaches, such as reluctance to accept changes to established techniques, concerns about the outcomes of new methods, uncertainty when confronted with innovative technologies, unwillingness to perform more complex procedures, and hesitation to practice outside the operating room. Most participants disagreed that MIS should be reserved for centers with highly trained staff and reported feeling more satisfied when performing MIS than when performing open surgeries (Fig. ). Remarkably, nearly every surgeon agreed that extra training in simulation was either necessary or indispensable, with more than half certain that they would attend future courses. This evaluation revealed that, despite some traditional convictions, most participants recognized the benefits of MIS in pediatric surgery and the importance of adequate training. However, this barrier is aggravated by the lack of stimulus to update knowledge . Underestimating the need for MIS training makes surgeons experience more difficulties during technically demanding procedures and can lead to the abandonment of the technique. This is supported by the fact that pediatric surgery fellowship programs in Brazil do not include a standardized curriculum or mandatory training in MIS and that only approximately half of the participants reported providing a training space to their fellows, resulting in uneven technical skills among trainees. This study has limitations, primarily because it is an indirect measure of the gathered information. Selection bias is also possible since surgeons with greater interest in MIS may have been more likely to respond to the questionnaire, potentially leading to overestimated results. 
Despite these limitations, this study provides valuable data, not only for Brazil but also by presenting a broad overview of the challenges related to the application of pediatric MIS. The challenging implementation of pediatric MIS is multifactorial and characterized by the interweaving of technological, technical, and epistemological barriers. Despite most participants having some form of prior training, the lack of adequate training remains a significant limiting factor, especially among those who take only short-term courses. This finding, combined with the low percentage of participants performing advanced procedures, highlights the impact of the technical barrier in the adoption of pediatric MIS. The significant difference between surgeons undertaking long courses and those with no previous training, particularly regarding the need for intracorporeal suturing, reinforces these statements. Furthermore, the limitations imposed by the lack of basic instruments, institutional support, and proper training curricula—along with the low availability of training facilities for current fellows—demonstrate a gap between knowledge, practice, and education. Despite its limitations, this study should be used as a guide for future and more specific analyses. On the basis of these findings, strategies can be planned individually to address internal deficits or collectively by societies to improve education and training, ensuring the highest standard of care for all children. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 |
Dental anxiety and dental care - a comparison between Albania and Germany | 753e06ec-54a5-4e56-8f19-da540bda6875 | 11421110 | Dentistry[mh] | Despite constant medical advances, dental anxiety remains a condition that is reversible through desensitization of the general population. Nearly 80% of adults in industrialized countries experience discomfort before dental treatment, with 20% expressing a genuine fear of dental procedures, and 5% actively avoiding dental care altogether . The prevalence of dental anxiety is evident across all age groups, with even young children exhibiting avoidance behaviour towards dental treatment, often influenced by parental attitudes . An extreme manifestation of dental anxiety is termed dental phobia, which can be classified using the International Classification of Diseases (ICD). According to the ICD-10 Chapter V, F40.0, a phobia falls under the category of anxiety disorders. It is characterized as an irrational fear of a specific, generally non-threatening situation that is either completely avoided or endured with significant distress . This classification is distinct from phobias related to specific stimuli encountered during dental treatment, such as injections . One way to differentiate between anxiety and phobia stages is by assessing their impact on the individual’s daily routine and life. If it disrupts social life, occupation, and normal functioning, it may be considered a specific (dental) phobia . Dental anxiety proves stressful for both the patient and the dentist, leading to reduced cooperation, prolonged treatment times, and an uncomfortable treatment environment . This can lead to inaccurate diagnosis and inappropriate treatment, including the evaluation of tooth vitality . Patients who entirely avoid dental care often suffer from poor dental and periodontal health . Such individuals typically seek dental attention only when the pain becomes unbearable, requiring complex interventions like root canal therapy or extractions. This perpetuates a negative cycle that undermines the development of a healthy dentist-patient relationship . Factors such as the dental clinic environment, stress experienced during procedures, cognitive capacities of the individual, and cultural practices are known to influence dental fear and anxiety (DFA) . DFA poses daily challenges for dentists treating both children and adults. In pediatric dentistry, with a prevalence of 9%, DFA significantly complicates patient management . In adults, dental fear and anxiety often reflects past negative dental experiences from childhood or adolescence. Dental fear is an acute, distressing response to perceived threats . Studies from various countries have reported dental fear and anxiety prevalence rates of 12.5% in Canada , 12.6% in Russia , 13.5% in France , 16.1% in Australia , and 30% in China . Research in Saudi Arabia indicates DFA rates among adults range from 27 to 51% , among children, the rates range from 43,1% to 47,6% . DFA can hinder the use of dental services, impacting early disease detection and management. Among children in Eastern Europe, significant levels of anxiety were reported, with varying rates across countries. A total of 12.5% of children from Croatia, 26.67% from Macedonia, 10.94% from Bosnia and Herzegovina, 20.31% from Montenegro, 23.08% from Slovenia and 16.10% from Serbia showed a high level of anxiety . 
An observational study in Albania involving 180 participants aged 15 to 55 found that 70% displayed high dental fear regarding orthodontic treatments and fillings, 59% towards dental implants, and 74% exhibited extreme fear of extractions . Of the surveyed participants, 64% reported having gingivitis and 61% indicated they suffered from dental caries, while 53% had undergone tooth extractions. The data analysis revealed that tooth extractions and dental caries were significantly associated with high blood pressure (P < 0.0001) . Taheri et al. explored the relationship between dental pain perception and pain anxiety, dental anxiety, and mental pain, finding significant correlations (p = .001) between pain perception and dental anxiety (r = .38), pain anxiety (r = .45), and mental pain (r = .25) . The approach to a dentist's office significantly influences dental fear scores. Patients often view surgical and restorative procedures as unpleasant and intimidating, and past negative experiences in the dental office can exacerbate fears during subsequent visits . Dental anxiety (DA) is more intense and irrational compared with general fear . This type of anxiety leads patients to avoid treatment, reflecting a shortfall in modern dentistry's evolution toward minimal invasiveness. Although modern anesthetics can minimize pain, the fear of pain often exceeds the actual sensation of pain. Anxiety disorders are widespread, with 25% of general practitioners recognizing symptoms in their patients . Mental health is crucial, recognized by the WHO as a state where individuals achieve their potential, handle life's normal stresses, work productively, and contribute to their community . Recently, mental disorders and psychosocial disabilities have gained recognition as significant global development issues . The WHO estimates that one in four individuals will encounter a mental health condition in their lifetime, with around 600 million people worldwide disabled due to mental health issues . The public health significance of mental illnesses underscores their multifaceted causes, primarily rooted in social issues. In Albania, seeking and receiving psychological support often faces significant prejudice. Albanian culture, characterized by extremes, tends either to accept or to reject such services outright; in this regard, there is a notable lack of empathy, emphasizing the need for an improved attitude among Albanians toward psychological services and mental health. As the stigma surrounding mental health continues to decrease, more individuals are seeking professional help for their mental health issues, a trend that is driving the growth of therapy and counseling services in the country . Compared with the German system, dental treatment in Albania is guaranteed only for specific groups of citizens and only through public dental health services, excluding all private clinics . The Ministry of Health has approved free fluoridation for all children up to the age of 18, although this service is underutilized. In Albania, health insurance covers only dental emergencies, primarily focusing on tooth extractions . According to a survey conducted by the European Commission on the quality of life of Albanians, 41% of Albanians consistently postpone or entirely avoid visiting the doctor in order to save money .
Consequently, dental care is not easily accessible, as evidenced by half of those surveyed in Albania stating that they either never visit the dentist or only seek dental care when the pain becomes unbearable . A study conducted in Kosovo among a total of 2,556 school children found a caries prevalence of 94.4% in 7- to 14-year-old school children . The healthcare system in Germany offers a variety of options for dental care. As with all medical services, the statutory health insurance covers the cost of treatments only if the patient consults a dentist who is accredited to provide contracted dental care . Dental services in Germany include: (1) an annual check-up; (2) dental care for children and adolescents from six months to 17 years old; (3) oral health services for individuals with disabilities or those in need of nursing care; (4) general dental treatments, primarily including the removal of tartar, fillings, root canal treatments, oral surgery, periodontal services, and treatments for oral mucosal diseases, which are generally free of co-payments for the insured; (5) orthodontic treatments until the age of 18; and (6) costs for dental prostheses . In 2021, the German Dental Association (BZÄK) outlined the oral health goals for Germany's health system for 2030, based on robust epidemiological evidence . The 2030 agenda includes both disease-oriented and health-promotion goals. Key targets are achieving a caries-free rate of 90% among 3-year-olds and 12-year-olds, reducing the prevalence of severe periodontal disease to below 10% in middle-aged adults (35–44 years old), and enhancing oral health-related behaviors . Behavioral objectives aim to increase the frequency of twice-daily toothbrushing to 87.5% among children, 85.3% among adults, and 89.1% among seniors. Additionally, the agenda seeks to increase the proportion of individuals who attend regular dental check-ups annually to 86.9% for children, 75% for adults, and 94.6% for seniors . This is the first Albanian scientific study at the intersection of dentistry and psychosocial medicine; no prior research had explored dental anxiety in the country. Consequently, we initiated a study to address this gap in the existing literature. The objective of this study is to investigate potential differences in dental anxiety between individuals from Albania, categorized as a "third country", and Germany, classified as an "industrialized country". Additionally, the study aims to compare the dental care systems of both countries. Special emphasis is placed on assessing the anxiety levels of dental patients during a single visit to a clinic in Germany and Albania, with the overarching goal of identifying and comparing preventive behaviors and oral health status among these groups.
The research group, consisting of dentists from both countries, collected data in Plauen, Germany, and Tirana, Albania, over the course of eight months (December 2019–July 2020). The questionnaires, which included the Dental Anxiety Scale (DAS) , the Brief Symptom Inventory-18 , and a set of descriptive questions gathering information about preventive behavior and oral health status, were handed out before treatment to a total of N = 263 patients: 133 patients from a private dental clinic in Plauen, Saxony (Germany), and 130 patients from the dentistry university clinic in Tirana (Albania). The age range of participants varied from 14 to 80 years. All patients took part in this study voluntarily. The study was divided into two groups: Albanian and German patients. They were selected based on their explicit admission, made at the reception, of being afraid of the dentist. They were required to complete our questionnaires before treatment in the waiting room of the dental clinic. The questionnaires were administered by the dentists, who distributed them to the patients. All questions were designed to be easily understandable and free of medical jargon to avoid misunderstandings. The method of questioning was consistent and systematically applied to all participants; a structured and constant procedure ensured that all respondents were treated the same way, which is crucial for the validity and reliability of the results. In Germany, there were four refusals to participate in the study for various reasons, in contrast to Albania, where there were no dropouts. The questionnaires were then examined in 2020 by our research group. Other inclusion criteria included having sufficient knowledge of the German and Albanian languages, possessing the physical and mental ability to complete the questionnaires, being oriented in terms of time and place, and displaying no psychiatric symptoms. This study did not involve a specific screening for psychological problems, and dental phobia or a high level of dental anxiety were not considered exclusion criteria. All patients provided written informed consent, and only patients who gave written informed consent were included as study participants. For individuals younger than 18 years, consent to participate was obtained from their parents or legal guardians. For Albanian patients, the validated German versions of the scales were used and, where necessary, translated into Albanian by translators. Statistical procedure All questionnaires underwent analysis using the statistical program 'Statistical Package for the Social Sciences' (SPSS). Mean total values were calculated and subsequently analyzed using an independent sample t-test. Chi-squared tests were employed to ascertain significance between questionnaire categories and sample characteristics. The level of significance was set at p < .05. The required sample size was determined using G*Power 3.1.3 . For comparing two groups with t-tests (two independent means, two-tailed), with a significance level of α = 0.05, an effect size of Cohen's d = 0.5, and a power of 95% (1 − β = 0.95), a sample size of at least N = 105 per group (total N = 210) was necessary. Dental anxiety scale The Dental Anxiety Scale (DAS) was initially introduced in 1969 by Corah and is widely utilized for assessing dental fear in patients . The total dental anxiety score is calculated by summing up the scores from the four questions.
The scores range from 4 to 20, and the patient’s level of anxiety is quantified as follows: a total score of 4 indicates “no fear”, a score between 5 and 8 corresponds to “low fear”, a score between 9 and 14 indicates “moderate fear”, and a score between 15 and 20 corresponds to “high fear” . These scores help evaluate the level of dental anxiety experienced by the patient. The reliability of the Dental Anxiety Scale was found to be rtt = 0.86 . In this study, Cronbach’s alpha was calculated as 0.76 (N = 263). The questionnaire was chosen to assess dental anxiety in this study due to its brevity and scientifically proven reliability. Brief symptom Inventory-18 The Brief Symptom Inventory-18 (BSI-18) was introduced in 2000 by Derogatis as a further condensed version of the BSI, which originally comprised 53 items from the Symptom-Checklist 90-R. Developed to assess the state of psychological stress with only 18 items , the BSI-18 has been applied in various contexts, including with cancer patients, victims of terrorist attacks, individuals with posttraumatic stress, those dealing with alcohol addiction, and other populations. The three scales—depression, anxiety, and somatization—each consist of six items and contribute to the Global Severity Index (GSI). Scores can range from 0 to 90, with each of the 18 items reflecting the respondent’s experiences over the last seven days on a scale offering four choices from ‘Not at all’ to ‘Extremely.’ The reliability of the three scales was assessed in 2010 on a sample of 638 psychotherapeutic patients: somatization α = 0.79, depression α = 0.84, anxiety α = 0.84, and GSI α = 0.91 . In our study, the reliability of the different BSI-18 scales was as follows: somatization α = 0.78, depression α = 0.72, anxiety α = 0.81, and GSI α = 0.90. Oral health In this study, patients were asked to provide answers to questions regarding their assessment of oral health and dental care. Questions followed: How many times a day do you brush your teeth? (Never, 1x/day, ≥2x/day) How often do you go to the dentist? (For example, for prophylaxis). (Never, 1x/year, ≥2x/year) How often do you have tartar removed? (Never, 1x/year, ≥2x/day) How often do you have a professional teeth cleaning appointment? (Never, 1x/year, ≥2x/day) How much do you think you can do to maintain the health of your teeth? (Nothing at all, little, some, much, very much)
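To make the scoring rules above concrete, here is a minimal sketch (not part of the original study): it sums the four DAS items, maps the total to the fear categories just described, and computes BSI-18 subscale sums and the GSI. The 1-5 range assumed for each DAS item and the item-to-subscale mapping are illustrative assumptions, not the official scoring key.

```python
def das_total(items):
    """Sum the four DAS items (each assumed to be scored 1-5, so totals span 4-20)."""
    assert len(items) == 4
    return sum(items)

def das_category(total):
    """Map a DAS total to the fear categories used in the text."""
    if total == 4:
        return "no fear"
    if 5 <= total <= 8:
        return "low fear"
    if 9 <= total <= 14:
        return "moderate fear"
    if 15 <= total <= 20:
        return "high fear"
    raise ValueError("DAS total outside the 4-20 range")

def bsi18_scores(items):
    """Return BSI-18 subscale sums and the Global Severity Index (GSI).

    `items` maps item numbers 1-18 to the respondent's rating; the assignment
    of items to subscales below is a placeholder, not the official key.
    """
    subscale_items = {
        "somatization": [1, 4, 7, 10, 13, 16],
        "depression":   [2, 5, 8, 11, 14, 17],
        "anxiety":      [3, 6, 9, 12, 15, 18],
    }
    scores = {name: sum(items[i] for i in nums) for name, nums in subscale_items.items()}
    scores["GSI"] = sum(items.values())   # GSI is the sum over all 18 items
    return scores

# A patient answering 3, 4, 3, 4 on the DAS (total 14) is classified as moderate fear.
print(das_category(das_total([3, 4, 3, 4])))
```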
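The sample-size calculation reported in the statistical procedure (two-tailed t-test for two independent means, α = 0.05, Cohen's d = 0.5, power 0.95) can be reproduced with standard software. The sketch below uses Python's statsmodels as a stand-in for G*Power 3.1.3 and is purely illustrative.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Two independent means, two-sided test: alpha = 0.05, effect size d = 0.5, power = 0.95
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.95, ratio=1.0, alternative="two-sided"
)
print(math.ceil(n_per_group))  # about 105 patients per group, i.e. roughly 210 in total
```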
The mean score for the patients' current subjective overall health was 2.49 (SD 1.18). The patients' Dental Anxiety Scale (DAS) scores averaged 13.10 (SD 2.74). The psychological distress of the patients, as assessed by the BSI-18, showed mean values of 3.45 (SD 3.95) for the anxiety scale, 2.10 (SD 3.00) for the depression scale, and 2.56 (SD 3.31) for the somatization scale. The global trait score GSI had a mean of 8.11 (SD 9.13). Table presents a comparison of patient groups interviewed in Albania and Germany concerning their psychological well-being. The t-test results indicate a significant difference between the two patient populations across all measures, with effect sizes (Cohen's d) falling within the medium to high range. Statistical analysis revealed that Albanian patients rated their overall health worse than German patients. Additionally, significant differences emerged between the two groups in responses to the Dental Anxiety Scale (DAS), with Germans reporting higher levels of dental anxiety. Furthermore, it became evident that German patients experience significantly more psychological distress, as observed across the depression, somatization, and anxiety subscales. Table provides a comparison of the oral health status and preventive behaviour between the two patient groups. Patients in the Albanian group reported brushing their teeth significantly less often than their German counterparts. Correspondingly, German patients also visited the dentist significantly more frequently than Albanians. In terms of tartar removal and professional teeth cleaning, there was a descriptive difference between German and Albanian participants, with Germans undergoing these treatments more frequently, although this result did not reach statistical significance. Additionally, a significant difference was observed in the perceptions of the two groups regarding their contribution to the health and maintenance of their own teeth. The majority of German subjects (75.9%) believed they could contribute a lot or very much to their own oral health, whereas in the Albanian group, only 40.8% thought the same, indicating a significant difference.
This is the first study to investigate the prevalence of dental anxiety and mental health problems in Albania and compare it with Germany. Due to the sample size and the study’s restricted scope to one city in Germany (Plauen/Saxony) and the capital of Albania (Tirana), it’s important to note that the data, including the study results, may not be fully representative of the entire populations of Germany and Albania. In this study the mean value for the Dental Anxiety Scale (DAS) in the patient collective was 13.10 (2.74). However, when comparing the expression of dental treatment anxiety, significant differences between the patient groups specifically, the German and Albanian subjects were observed. The average DAS value was higher for Germans and slightly lower for Albanian patients. Notably, the DAS values of the German group exceeded the German average value established by Kunzelmann and Dünninger (1990) . Thus, the study participants, in terms of the expression of their dental treatment anxiety, fall within the German average, with a significant value. The German findings align with those of other industrialized nations: in France, an estimated 13.5% of people suffer from moderate to severe dental anxiety , in Europe , in North America , and in Australia (10–18%) but significantly lower than in countries like China, where the rate is 30% . This study emphasizes the need for preventive measures against dental anxiety. Since dental anxiety often begins in childhood, young patients should be the primary focus of prevention efforts . Early education has been shown to positively impact dental anxiety, leading to better long-term dental care . Despite the strong correlation between dental anxiety and general state anxiety , patients frequently describe dental anxiety as an iatrogenic outcome of dental treatment . This highlights the responsibility of the dental profession and individual practitioners. Additionally, this study could advocate for the establishment of access centers for individuals with dental fear, particularly in Albania. Addressing dental fear requires a multidisciplinary team and is time-intensive. Training and rehabilitation are feasible in a supportive environment . In Northern Europe , specialized units with multidisciplinary skills and defined protocols provide prevention and treatment for anxious patients. However, Albania lacks such teams, although there are developments in behavior management and sedation techniques. Furthermore, dental anxiety is often viewed as an inevitability rather than a treatable condition, despite classifications based on DSM-IV psychiatric criteria . Consequently, there is little motivation to develop specialized services. For patients who do access the limited centers addressing both dental fear and dental disease, the costs are not covered by social security, exacerbating oral health inequalities for those with dental anxiety in Albania. Nevertheless, this study revealed a generally higher level of psychological distress using the Brief Symptom Inventory-18 (BSI-18). In terms of psychological distress, significant differences were observed between the two groups on all subscales as well as the Global Severity Index (GSI), with Germans reporting higher levels of psychological distress. In a sample of patients with anxiety disorders most comparable to ours, the following values for Cronbach’s alpha were found for the BSI subscales: somatization = 0.79, depressiveness = 0.87, and anxiety = 0.81 . 
In our study, the corresponding values were 0.78, 0.72, and 0.81, respectively, suggesting that the reliability of the BSI-18 in our sample was nearly identical. However, the average score for both patient groups on the "Somatization" subscale was higher than the average values reported by Spitzer for a group of psychologically healthy individuals. On the "Anxiety" subscale, only the average score of the German patients was higher than these values, while the score on the "Depression" subscale for Albanian patients was lower than that of German patients . The reasons for these findings in the German population are detailed in the following sections: data from the 2015 Health Monitoring of the Robert Koch Institute (RKI) show that in Germany at that time, nearly one in four men (22.0%) and nearly one in three women (33.3%) between the ages of 18 and 79 had experienced fully developed mental disorders at some point. The most common mental disorders were anxiety disorders (15.3%) and depressive disorders (7.7%), followed by somatic disorders (3.5%) . Despite the increasing demand for psychological services, there remains a stigma towards mental illness among Albanians. As a result, individuals often seek the assistance of a psychologist only when the problem has become very serious and the issues, after various doctors have been consulted, appear to be uncontrollable. A comparison of the preventive care behavior of the two patient populations revealed that Albanian patients had a significantly lower preventive care score than German patients. This discrepancy is particularly evident in the frequency of tooth brushing and dentist visits, as well as in the frequency of tartar removal and professional dental cleaning. In the Kosovo study, the most common reason that school children visited the dentist was a toothache; regular recall and check-up visits were rarely reported. Usually, the children were accompanied by their parents. Their first comments regarding the dental visit were "my child had a terrible toothache all night" and "we couldn't sleep at all." The children with toothaches had bad experiences at the dentist and thus refused future visits. Even though there were dental offices in some of the schools in that study, they were often dysfunctional and poorly equipped, and there were often no dentists specializing in pedodontics . The Kosovo study also showed that the mean DMFT (5.8) of school children in Kosovo was higher in comparison with school children of the following developed countries: Netherlands (1.1), Finland (1.2), Denmark (1.3), USA (1.4), United Kingdom (1.4), Sweden (1.5), Norway (2.1), Ireland (2.1), Germany (2.6) and Croatia (2.6) (16). The mean DMFT of Kosovo's children (age 12) was similar to the mean values in Latvia (7.7), Poland (5.1) and a group of 12- to 14-year-olds in Sarajevo, Bosnia, and Albania (7.18) . Surveys of schoolchildren and teachers revealed a lack of knowledge about oral health, making teachers ineffective as an educational tool on the subject . Nevertheless, it is essential to note that the two patient groups differed significantly from each other. Distinct differences between German and Albanian patients were identified, with 42.9% of German patients never having undergone professional teeth cleaning, compared with a higher figure of 55.4% for Albanian patients. Similar to other dental treatments, professional teeth cleaning can evoke anxiety in certain patients as it involves the removal of impurities and tartar from the tooth surface.
For many patients, the use of dental tools automatically triggers fear of associated pain. Another contributing factor could be that professional tooth cleaning is often considered a private service, not fully covered by statutory health insurance in Germany. The situation is even more challenging in Albania, where statutory health insurance funds do not contribute to oral health. In Albania, patients are required to bear the full cost of dental services themselves. Meanwhile, an increasing number of statutory health insurance companies in Germany have acknowledged the significance of prophylactic services and offer support through subsidies, such as bonus programs. However, additional initiatives should be implemented, particularly in the realm of education and information dissemination about the importance of prophylactic treatments and dental cleanings. This is crucial for preventing periodontal diseases and arresting their progression, given that the development and progression of caries are strongly influenced by individual behavior . Contrary to the hypothesis that individuals interviewed outside ‘developed’ countries might exhibit higher levels of anxiety and psychological distress due to potential avoidance of dental visits, this study did not confirm such a trend. The positive finding in the oral health-related survey, where the majority of both patient groups expressed confidence in their ability to maintain healthy teeth, is a significant step forward. Importance of the study The study revealed that patients outside German dental practices did not exhibit increased anxiety levels. However, it underscores the continued relevance of dental anxiety in those settings. Given the potential for dental avoidance behavior leading to severe dental issues, there is a recommendation for heightened awareness of dental anxiety among Albanian dentists. It is advised that Albanian dentists familiarize themselves with their patients’ oral health, promptly identify and address dental phobias. Essential to this is comprehensive healthcare and risk assessment by both general practitioners and dentists to effectively inform and advise individuals about the risks associated with neglecting dental treatment and prophylaxis. Implications for the research To conduct a more comprehensive investigation into dental treatment anxiety, additional studies should be undertaken with participants from non-European countries. It is also advisable to include the recording of DMF-T/S values and PSI for the involved patients. Given that dental anxiety frequently emerges in early childhood, conducting an extra survey focusing on dental anxiety among children and adolescents could be beneficial and pertinent for future research. Limitations Individuals aged 18 and older autonomously completed all the questionnaires in this study, while those below the legal age were included solely with explicit parental consent obtained through signed declarations. This introduces the possibility that some patients may not have been entirely candid in their responses, potentially downplaying the seriousness of their answers to avoid being identified as having dental anxiety. It’s important to recognize that the dataset might not comprehensively reflect the prevalence of dental anxiety in the population, particularly as it could exclude severely phobic patients actively avoiding dental treatment. Furthermore, the questionnaires did not inquire about the type of treatment participants anticipated post-survey. 
Those in acute pain might already be psychologically vulnerable, expecting more discomfort, and consequently, exhibiting greater apprehension toward treatment compared to those anticipating routine dental check-ups.
The study's conclusion is that individuals interviewed in Albania tend to avoid visiting the dentist not due to anxiety or other psychological distress but because they underestimate the importance of oral health. In comparison, German patients exhibit higher levels of dental anxiety and other psychological distress, possibly because they visit the dentist more frequently and, consequently, have had more negative experiences. Nonetheless, both Albanian and German dentists should heighten their awareness of the topic of 'dental anxiety' to be better equipped to deal appropriately with patients experiencing increased anxiety. Further studies are needed to reveal other factors related to dental anxiety and psychological distress. The findings of the present study call for the early implementation of preventive dentistry elements and oral health education, especially in Albanian curricula.
Cross-sectoral genomic surveillance reveals a lack of insight in sources of human infections with Shiga toxin-producing | 84c0e80c-d679-48d2-86c5-a9be1bf11bda | 11650479 | Microbiology[mh] | Shiga toxin-producing Escherichia coli (STEC) is a zoonotic pathogen associated with illness ranging from mild diarrhoea to haemolytic uremic syndrome (HUS) or even death . As ruminants (especially cattle, goat and sheep) are the main reservoir, consumption of contaminated food of bovine, caprine and ovine origin, and contact with these animals or their faeces, are known transmission routes . Nevertheless, the infection is often acquired abroad, and person-to-person transmission is also reported regularly . Besides the potentially severe clinical symptoms, STEC is a public health concern given its recognised potential to cause food-borne outbreaks. Therefore, many countries apply farm animal and food monitoring programmes as well as surveillance of human cases in order to assess risks, monitor circulating strains and detect as well as investigate outbreaks. With the introduction of whole genome sequencing (WGS), a tool with excellent discriminatory power and robustness became available to serve risk assessment, monitoring, surveillance and outbreak investigations . Combining WGS data from animal, food and human cases in a One Health surveillance context could provide a head start for outbreak investigation. Indeed, this has been very effective in source identification of Listeria monocytogenes and Salmonella enterica outbreaks . Genomic relationships were also found between STEC isolates obtained from food samples and human cases, for example in two outbreaks in Belgium . Nevertheless, the success rate of cross-sectoral WGS-based surveillance in matching human patients with animal or food isolates appears to be lower for STEC than for L. monocytogenes and Salmonella , at least in the Netherlands. This raises questions about potential under-recognised sources responsible for STEC infections. In this perspective we evaluate the cross-sectoral Dutch WGS database for its ability to match patients to sources and suggest potential improvements for the STEC surveillance, with emphasis on the use of WGS within a One Health concept. In the Netherlands, laboratories and medical doctors have to notify laboratory-confirmed STEC cases to the regional public health service by law. The public health service subsequently reports cases to the National Institute for Public Health and the Environment (RIVM). Since July 2016, notification criteria have been restricted to acute STEC infections with at least diarrhoea, vomiting, blood in stool or HUS . Furthermore, laboratories are requested to send, if available, an isolate from the confirmed STEC cases to the RIVM for further typing. The number of notified cases varied between 436 and 731 over the years 2017 to 2023. In 2017, an isolate for typing was available for 54% of the cases, but that proportion decreased to 26–37% in the period 2018 to 2023. The main reason for this decline is the progressive replacement of culturing practices with PCR in the Dutch medical laboratories. The Netherlands Food and Consumer Product Safety Authority (NVWA) is responsible for the implementation of farm animal and food monitoring programmes, and Wageningen Food Safety Research (WFSR) analyses the samples taken. 
The monitoring programme is based on risk and therefore focused on the most relevant pathogens (mainly those for which legal criteria exist) at different levels of the production chain such as farms, slaughterhouses, industry, wholesale, retail, and imported products at border control points. Besides manure, samples are mainly food products with a focus on meat (fresh and prepared/ready-to-eat), but also herbs, shellfish and vegetables. The number of samples taken for each level of production (in 2023, in total 3,515 food samples were analysed for STEC) can be found in the yearly reports of the Multi-Annual National Control Plan . The STEC WGS data are mutually shared in a near real-time manner between RIVM and WFSR, with the aim of increasing the speed and success rate of source finding in case of (active) clusters. Clusters, consisting of sequences of two or more STEC isolates, are defined based on single linkage hierarchical clustering. Since STEC is genomically more diverse than Listeria or Salmonella , where it is common to use three to five alleles as a threshold for primary surveillance, we used seven allelic differences both in the surveillance and for the current article. The shared database contained a total of 3,345 sequenced isolates, from January 2017 up to November 2023. Of these, 1,873 were collected from human cases and 1,472 originated from non-human sources. The three most common serotypes among human cases were O157:H7 (25%), O26:H11 (12%) and O146 (O146:H21 and O146:H28; 9%), accounting for 45% (844/1,873) of the isolates. In comparison, O146 (O146:H21 and O146:H28; 10%), O113 (O113:H4 and O113:H21; 6%), and O55:H12 (5%) were the three most common serotypes found in the non-human isolates, representing 21% (303/1,472). In total, 140 serotypes were present in the database, with 81 occurring in both patients and non-human sources, while 37 serotypes only occurred in patients and 22 serotypes only in non-human sources. Most common was the presence of only stx2 , both in human (37%; 697/1,873) and non-human (43%; 626/1,472) isolates, while stx1 was seen in 26% (482/1,873; human) and 35% (510/1,472; non-human), and the combination of stx1 and stx2 in 29% (534/1,873; human) and 20% (297/1,472; non-human). The stx2 gene variant stx2f was mainly seen in isolates from patients (8%; 157/1,873) and hardly in food (0.3%; 4/1,472). The stx profile was unknown for three human (0.2%) and 35 non-human (2%) isolates. The attaching and effacing gene was detected in 64% (1,191/1,873) of the human isolates compared with 14% (197/1,472) of the non-human isolates. Only 15 WGS clusters (15/285; 5%) comprising both human and non-human isolates were identified . Nine of these clusters consisted of one human and one non-human isolate, while the maximum size of a mixed cluster was five isolates . The non-human isolates in these mixed clusters originated from beef products, manure of calves or sheep, carcasses of calves, and lamb. In most clusters, more than 6 months had passed between the sampling dates of non-human vs human isolates. The detection of these clusters did not lead to the start of an investigation because of the small size of the clusters and the long period between the sampling dates. Several potential reasons underlying the limited overlap between patient and non-human (animal and food) isolates can be postulated. Below we provide an overview of these mutually non-exclusive reasons and their differing contributions to the phenomenon.
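Before turning to those reasons, the cluster definition used above (single-linkage clustering of allelic profiles, cut at seven allelic differences) can be made concrete with a small sketch. This is not the actual RIVM/WFSR pipeline; the toy cgMLST profiles, the handling of missing loci and the function names are assumptions for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def allelic_distance(a, b):
    """Count differing alleles; loci coded 0 (missing) in either profile are ignored."""
    called = (a > 0) & (b > 0)
    return float(np.sum(a[called] != b[called]))

# Toy cgMLST profiles: one row per isolate, one column per locus (allele numbers)
profiles = np.array([
    [1, 5, 2, 7, 1, 3, 3, 2, 6, 4],   # human isolate
    [1, 5, 2, 7, 3, 3, 3, 2, 6, 4],   # food isolate, 1 allele away from the first
    [4, 9, 8, 2, 6, 1, 7, 5, 1, 9],   # unrelated isolate, differs at all 10 loci
])

distances = pdist(profiles, metric=allelic_distance)
tree = linkage(distances, method="single")               # single-linkage clustering
clusters = fcluster(tree, t=7, criterion="distance")     # cut the tree at 7 allelic differences
print(clusters)  # the first two isolates share a cluster; the third stands alone
```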
Travel-related and secondary transmission A substantial part of the human infections are acquired abroad. With the exception of the years of the COVID-19 pandemic, 2020 and 2021, around a quarter of the notified STEC infections in the Netherlands are related to international travel, and thus will not be related to Dutch animals or food obtained in the Netherlands. It is therefore important that information on travelling in the week before illness onset is reported. Around 15% of the cases mention other persons with illness in their surroundings, although in most of those cases, this could not be confirmed microbiologically because the other persons were not tested, and thus not notified, or because isolates for comparison were not available. A meta-analysis indicated person-to-person spread as the second most important STEC transmission pathway after consumption of raw or undercooked meat , and a recent case–control study in the United Kingdom showed childcare occupations as a risk factor of infection for all STEC serotypes . In addition, there are several reports about sexual transmission of STEC . Possible non-zoonotic STEC It is increasingly recognised that Stx-producing human-adapted E. coli hybrid pathotypes (incl. enteroaggregative, extraintestinal, enteropathogenic and enterotoxigenic pathogenic E. coli ) circulate and cause disease in humans and have hardly been detected in animals . A notable example is the Stx-producing enteroaggregative O104:H4 strain causing an outbreak that started in Germany in 2011, which to date can sporadically be detected and very likely has a human origin . Similarly, extensive comparative genomics of stx 2f -carrying STEC from human infections, which largely were Stx-producing typical enteropathogenic E. coli (tEPEC), revealed that these isolates are most likely to have a human reservoir . Under-recognised animal reservoirs and food transmission pathways Although ruminants, particularly cattle, are regarded as the main reservoir for STEC, there is evidence that dogs, fish, horses and pigs and some (wild) bird species including poultry are relevant spill-over hosts . These animals are susceptible to colonisation by STEC, but in contrast to reservoir animals, they do not maintain the bacteria in the absence of continuous exposure. Implicitly, this means that there may be other epidemiologically relevant sources of human STEC infection beyond ruminants which are seldomly monitored or sampled. Vegetables (including sprouts) and fruit have also been related to STEC infections and large outbreaks . Until now, outbreaks due to fresh produce have hardly been seen in the Netherlands. Nevertheless, contamination of fresh produce via nearby livestock farms is a realistic risk and should be kept in mind. Two other potential transmission pathways that have come to attention more recently are flour and raw pet food . Although the main reservoirs and sources are included in the Dutch monitoring programmes, adding fish, flour, poultry and raw pet food could enrich our surveillance system. Monitoring and sampling schemes Potentially, there is a substantial impact of the design of monitoring and sampling schemes on the final collection of animal and food isolates that are compared with those of human infections. A lack of understanding of the diversity of relevant STEC sources (see previous paragraph) limits accurate risk-based sampling. 
In addition, the nature of STEC being a genomically extremely diverse pathogen reduces the likelihood of detecting identical patient and source strains. This applies especially in combination with relatively long farm-to-fork chains of bovine meat products, where ample opportunities exist for diversification of the STEC population present at different points in the production chain. An examination of whether the focus in the current monitoring programmes has an influence on whether STEC types are detected, and which ones, could lead to more insight. Also, investigating the diversity of STEC types that can be found on farms, in individual animals and in food would provide useful insight. STEC detection and isolation procedures It is well known that STEC detection and isolation is cumbersome and does not always succeed in obtaining isolates for typing. In human diagnostics, the increasing trend towards molecular diagnostics without (attempts at) isolation creates the problem of receiving fewer STEC for typing, which subsequently results in loss of effective cluster and outbreak detection. To overcome this drawback, RIVM will start performing STEC isolation from patient faeces that tested positive in molecular diagnostics at medical laboratories, after which isolates will be further typed using WGS techniques. Another potential confounding effect might be the occurrence of mixed contamination of sources, complicated by potential horizontal exchange of virulence genes with commensal E. coli in the gut, because in general, only one isolate is retrieved and typed while in reality, multiple STEC types could be present.
The absence of overlap of STEC strains from human and non-human sources in the Netherlands can have multiple reasons and causes. However, quantifying the impact of each possible reason and taking measures to reduce the impact is difficult. The most obvious factors are the substantial contribution of international travel and person-to-person spread in the STEC epidemiology.
More efforts can be directed to comparing national sequences with international databases in order to infer the magnitude of relationships with other geographical regions in relation to travel and/or imported food. More attention should be paid to identifying hybrid STEC strains and their epidemiology. In addition, some animals and food products may be under-recognised as potential sources of human infections. More effort in investigating the role of sources beyond the well-known ones can provide a better understanding of STEC ecology in general, improve surveillance and source attribution, and ultimately provide better guidance for monitoring and source finding. All this also implies that sufficient attention must be paid to having good diagnostics in place and isolates available for typing.
A retrospective analysis of preemptive pharmacogenomic testing in 22,918 individuals from China | 9fc29ab0-9b27-4017-a13b-5ac63951f849 | 10098050 | Pharmacology[mh] | INTRODUCTION Individuals have different genetic makeups, which may influence the risk of disease development as well as responses to drugs and environmental factors. Variants of genes involved in drug metabolism, drug transport, and target binding are linked to interindividual differences in both the efficacy and toxicity of many medications. Indeed, hundreds of genes affecting medication metabolism have been reported, and the availability of genomic data is leading to the discovery of new interactions. , , The findings of these studies are compiled through curation efforts such as PharmGKB ( https://www.pharmgkb.org/ ). Precision medicine has the goal of exactly matching a therapeutic intervention with the patient's molecular profile. Pharmacogenomics (PGx) focuses on the involvement of genomics and genetics in drug responses by integrating pharmacological effects and genotype, , and by offering personalized drug selection and dosage based on an individual's genetics, PGx may revolutionize patient care. Overall, the practical value of PGx testing has increased as high‐impact haplotypes have been discovered and characterized. The Clinical Pharmacogenetics Implementation Consortium (CPIC; cpicpgx.org ) and other organizations assign a clinical function to star alleles based on published experimental research and create peer‐reviewed and evidence‐based clinical practice guidelines , to aid physicians in implementing pharmacogenetics into clinical practice. Pharmacogenomics testing can be preemptive before prescription, or reactive in response to treatment failure or an adverse drug reaction. Preemptive PGx testing is the availability of information before the time of prescription, allowing this to be personalised to the patients when needed. A preemptive, panel‐based approach is increasingly playing an important role in supporting the use of genotype‐guided prescribing in clinical practice and the genetic information can be coupled to the patient's medical record to inform future drug therapy. Consequently, there are many programs around the world to support this research. In a previous study of five drug genomes in over 10,000 patients, the race/ethnicity of the majority of the cohort was European American, and a multiplexed test revealed an actionable variant in 91% of genotyped patients. Additionally, a Danish study involving 77,684 individuals with 42 clinically relevant variants and CYP2D6 gene deletion and duplication showed that almost all individuals carried at least one genetic variant (>99.9%), with 87% harboring three or more. PGx testing data of 1141 samples by exome sequencing in Hong Kong China revealed that 99.6% of subjects carried at least one such variant. The China Metabolic Analytics Project (ChinaMAP) was designed to comprehensively characterize the diverse genetic architectures of Han Chinese and other major ethnic minorities across different geographical areas and investigate their contribution to metabolic diseases as well as a broad spectrum of biomedically relevant quantitative traits. This project also studied the genetic diversity of some important PGx genes, such as genes related to the dosage of warfarin and clopidogrel. Furthermore, a comparison in European countries showed that race influences dose changes associated with genetic factors. 
Nevertheless, there has not been a large-sample study of preemptive, panel-based PGx testing in mainland China, and the broad impact of the diverse geographic distribution on PGx testing is not well understood in China. In this study, we retrospectively analyzed preemptive PGx testing data of 22,918 participants from 20 provinces of China. The PGx testing was performed with a 52-gene targeted next-generation sequencing (NGS) PGx panel, which covered 100 SNPs of 52 genes and a full gene deletion of CYP2D6 (Table ). Of the 52 genes targeted by the panel, 15 genes were involved in CPIC guidelines for 31 drugs, including CYP2C9, SLCO1B1, CYP2C19, CYP2D6, VKORC1, CYP4F2, G6PD, NUDT15, CYP3A5, IFNL4, TPMT, HLA-A, HLA-B, UGT1A1, and MT-RNR1 (Table ). We utilized the sequencing results of these 15 genes to assess the opportunity for pharmacogenomic-guided prescribing for 31 drugs according to CPIC guidelines. The other 37 genes, whose sequencing results were not interpreted by CPIC guidelines, were used for allele frequency analysis and quality control. Using this panel, we performed preemptive PGx testing for 22,918 subjects from 20 provinces in China. The results could provide evidence to evaluate the value of preemptive PGx testing and to optimize clinical practice in China. MATERIALS AND METHODS 2.1 Study subjects We retrospectively analyzed preemptive PGx testing data of subjects from 20 provinces in China from May 2019 to April 2022. These subjects were referred to preemptive PGx testing by physicians as part of their health care. They (or their guardians, for subjects under 18 years) agreed to receive the preemptive PGx testing and signed informed consent after consulting physicians. Blood samples were collected from 23,199 consecutive, unrelated individuals and transported to CapitalBio Medical Laboratory for PGx testing. Two hundred eighty-one low-quality samples were filtered out after quality control. Thus, 22,918 qualified samples from 22,918 subjects were included in this study. Of these subjects, 13,805 (60.29%) were <18 years, 6789 (29.57%) were >=18 years, and 2324 (10.14%) were without age information (Table ). Over 90% (12,782/13,805) of those under 18 years were newborns, and they received preemptive PGx testing during neonatal screening. In this study, our main objective was to analyze the genetic information; thus, the age distribution does not affect our conclusions. The data were deidentified prior to further analysis. This study was approved by the ethics committee of People's Hospital of Yangjiang (No. 20210047). 2.2 PGx panel MagPure tissue and blood DNA LQ kit (Magen Biotechnology) was used for DNA extraction from blood samples. Target region amplification and sequencing library construction were then performed by a multiplex-PCR method using a library construction kit for PGx (CapitalBio Genomics) following the manufacturer's protocol. The process is briefly described as follows. First, the sequences of the target regions were amplified using gene-specific primers. Second, library construction was carried out using library construction primers that added sequencing adapters to both ends of the product. The sequencing adapter contains a barcode sequence of 8–10 bp for distinguishing different samples. Third, the sequencing library was purified by AMPure XP beads (Beckman Coulter) and quantified by Qubit (Thermo Fisher Scientific).
At last, all libraries were sequenced according to the standard 200‐bp single‐end sequencing procedure of the BioelectronSeq 4000 sequencing system (National Medical Products Administration registration permit NO. 20203220502), which utilizes the same sequencing principle as the Ion Proton sequencer (Thermo Fisher Scientific). Each sequencing run is able to process 120 samples. The CYP2D6 full gene deletion was detected by a long‐PCR method as described by Hersberger et al. 2.3 Sequencing data analysis Raw data were filtered using a homemade pipeline to exclude reads shorter than 70 bp or with more than 50% low‐quality bases (quality score < 20), providing high‐quality clean reads. TMAP ( https://github.com/iontorrent/TAMP , version 5.4.11) was used to map the clean reads to the hg38 version of the human reference genome with “mapall” and “map4” parameters and to obtain bam files. The bam files were compressed, sorted and indexed using samtools ( http://samtools.sourceforge.net/ , version 1.2). For sorted bam files, Torrent Variant Caller ( https://github.com/LeeBergstrand/Torrent_Variant_Caller , version 4.4.2.1) was used for SNP/Indel calling, and the “hotspot‐vcf” parameter was selected to detect variants for targeted loci. 2.4 Validation of PGx panel by Sanger sequencing We designed amplification primers for all 100 SNPs detected by PGx panel (Table ) using Primer Premier V5.0. The length of amplicons was limited to 200~800 bp. The target sites were away from forward/reverse primers>60 bp. The reaction was carried out in 50 μL volume containing DNA template X μL (X ≤ 19 and 30 ng < total DNA quality<100 ng), forward primer 2 μL, reverse primer 2 μL, Phanta Mix (Vazyme Biotech, including Phanta Max Super‐Fidelity DNA Polymerase, Phanta Max Buffer, and dNTP) 27 μL, nuclease‐free water (Thermo Fisher Scientific) (19‐X) μL. The following PCR conditions were used: initial denaturation at 95°C for 3 min, followed by 35 cycles consisting of denaturation (95°C for 15 s), annealing (65°C for 15 s, decreased 0.5°C per cycle before 55°C) and extension (72°C for 1 min) and a final step at 72°C for 5 min. The PCR products were sequenced by a 3730XL DNA analyzer (Thermo Fisher Scientific) following the manufacturer's protocol. 2.5 Star allele analysis and phenotype prediction Star allele analysis of CYP2D6 , CYP2C19 , CYP2C9 , CYP3A5 , UGT1A1 , NUDT15 , and TPMT was performed using the tag SNP method. First, a diploid combination of all detected alleles was constructed, after which the allele combination of the sample based on SNP test results was determined. For CYP2D6 , six tag SNPs were used to detect six important star alleles: rs1135840, rs16947 for *2; rs3892097 for *4; rs1135840, rs1065852 for *10; rs1135840, rs16947, rs5030865 for *14; rs1135840, rs28371725, rs16947 for *41 and wild‐type for *1. CYP2D6 *5 allele were detected by a long‐PCR method as described in section 2.2. When the long‐PCR results indicated that one or two copies of CYP2D6 were missing, the genotypes were adjusted accordingly. For the other six genes, we have used 14 tag SNPs to detect 22 star alleles (Table ). The enzyme activity scoring table provided by CPIC was used for phenotype prediction. 2.6 CPIC recommendations Clinical Pharmacogenetics Implementation Consortium guidelines for 31 drugs covering 15 genes were applied to interpret the genetic data of each sample (Table ). 
2.6 CPIC recommendations Clinical Pharmacogenetics Implementation Consortium guidelines for 31 drugs covering 15 genes were applied to interpret the genetic data of each sample (Table ). For each gene, actionable genotypes were defined as genotypes that require a change in the medication strategy for at least one drug according to CPIC guidelines, including an alternative drug, a decreased dose or an increased dose (Table ). Among actionable genotypes, those that require an alternative drug according to CPIC guidelines were defined as high-risk genotypes (Table ). In addition, the high-risk ratio of a drug was calculated as the proportion of subjects who carry high-risk genotypes for that drug according to the CPIC guidelines. For example, 2737 subjects in all provinces carried at least one copy of either HLA-A*31:01 or HLA-B*15:02 and were recommended to use an alternative drug instead of carbamazepine, so the high-risk ratio of carbamazepine was 11.94% (2737/22,918). In order to study intra-country differences in high-risk ratios, the risk ratio (RR) of each drug in each province was calculated as follows: RR of a drug in a province = (high-risk ratio of the drug in that province) / (high-risk ratio of the drug in all provinces). For example, the high-risk ratio of carbamazepine in HAINAN province was 17.48% (18/103) and that in all provinces was 11.94% as mentioned above, so the RR of carbamazepine in HAINAN province was 1.46 (17.48%/11.94%). 2.7 Statistical method PLINK with the "--indep-pairwise 50 5 0.5 --file data/my-noweb" parameters was used to filter linked gene sites before performing clustering and principal component analysis (PCA). The pheatmap package in R was used for clustering with the "average" linkage method. The PCA implementation in the Python-based sklearn package was used for the analysis. Frequencies and ratios were compared by Fisher's exact test using Python v3.8.13 with the "scipy.stats" package, and a p value of <0.01 was considered statistically significant.
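The high-risk ratio and RR definitions above, together with the Fisher's exact test from Section 2.7, can be sketched in a few lines of Python. The counts reproduce the carbamazepine example; the 2x2 contingency table compares the province against the remaining subjects, which is one reasonable construction and not necessarily the exact one used in the original analysis.

# Sketch of the high-risk ratio and RR calculations, together with a
# Fisher's exact test as used in the statistical analysis. The counts
# reproduce the carbamazepine example from the Methods.
from scipy.stats import fisher_exact

# Nationwide: 2737 of 22,918 subjects carry HLA-A*31:01 or HLA-B*15:02.
overall_high_risk, overall_total = 2737, 22918
# HAINAN: 18 of 103 subjects.
prov_high_risk, prov_total = 18, 103

overall_ratio = overall_high_risk / overall_total       # ~0.1194 (11.94%)
prov_ratio = prov_high_risk / prov_total                 # ~0.1748 (17.48%)
rr = prov_ratio / overall_ratio
print(f"RR = {rr:.2f}")                                  # RR = 1.46

# 2x2 table: rows = [HAINAN, rest], columns = [high-risk, not high-risk].
rest_high_risk = overall_high_risk - prov_high_risk
rest_total = overall_total - prov_total
table = [[prov_high_risk, prov_total - prov_high_risk],
         [rest_high_risk, rest_total - rest_high_risk]]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}, significant = {p_value < 0.01}")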
RESULTS 3.1 Study subjects and geographic distribution A total of 22,918 subjects (11,455 males, 49.98% vs. 11,463 females, 50.02%) were included in this study, covering 20 provinces across China. After clustering analysis with the mutant allele frequencies (Table ), the 20 provinces were divided into three groups (Figure ), which are geographically distributed from north to south in China (Figure ). The north group (indicated in blue in Figure ) includes 12 provinces: JILIN, LIAONING, INNER MONGOLIA, BEIJING, HEBEI, SHANXI, GANSU, SHAANXI, SHANDONG, HENAN, ANHUI, and JIANGSU. The middle group (indicated in pink in Figure ) includes six provinces: HUBEI, SICHUAN, HUNAN, FUJIAN, GUANGDONG, and YUNNAN. The south group (indicated in yellow in Figure ) includes GUANGXI and HAINAN. 3.2 Quality control and validation A total of 100 variants in 52 genes (Table ) were detected by high-depth sequencing with a mean depth higher than 1000×. The lowest depth at any site in any sample was 30× (Figure ). In order to validate the PGx panel, we performed Sanger sequencing for samples with different genotypes at each targeted locus. In total, we performed 488 Sanger sequencing reactions to validate samples with PGx panel results, including 187 homozygous wild-types, 152 heterozygous variants, and 149 homozygous variants (Table ). All genotypes determined by Sanger sequencing were concordant with the PGx panel results.
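As a rough illustration of how the province grouping in Section 3.1 can be obtained, the sketch below clusters a province-by-SNP mutant-allele-frequency matrix with average linkage (analogous to the pheatmap call in Section 2.7). The frequency matrix is a random placeholder standing in for the real per-province values, and only a handful of provinces are listed.

# Rough sketch of the province grouping: hierarchical clustering of a
# province x SNP mutant-allele-frequency matrix with average linkage.
# The frequencies here are random placeholders, not the study data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

provinces = ["BEIJING", "HEBEI", "HUNAN", "GUANGDONG", "GUANGXI", "HAINAN"]
rng = np.random.default_rng(0)
freq = rng.uniform(0.0, 0.6, size=(len(provinces), 100))   # provinces x 100 SNPs

distances = pdist(freq)                        # pairwise Euclidean distances
tree = linkage(distances, method="average")    # average-linkage dendrogram
groups = fcluster(tree, t=3, criterion="maxclust")  # cut into three groups
print(dict(zip(provinces, groups)))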
3.3 PCA of the mutant allele frequencies Principal component analysis was performed with the mutant allele frequencies (Table ). We also added five datasets of the 1000 Genomes Project, including CHB (Han Chinese in Beijing, China), CHS (Han Chinese South, China), CDX (Chinese Dai in Xishuangbanna, China), KHV (Kinh in Ho Chi Minh City, Vietnam), and JPT (Japanese in Tokyo, Japan). Both the clustering and PCA results support dividing the 20 provinces into three groups according to their geographic distribution (Figure ). 3.4 Proportion of subjects carrying actionable genotypes Of the 22,918 subjects, 99.97% carried at least one actionable genotype in these 15 genes, as depicted in Figure (blue line). The number of genes with actionable genotypes per subject ranged from 0 to 10, with a median of 4. The distribution of the number of drugs with atypical dosage recommendations per subject is illustrated by the orange histogram in Figure , with a median of 8. That is, subjects carried actionable genotypes leading to atypical dosage recommendations for a median of 8 drugs according to CPIC guidelines. In addition, we evaluated the detection ratios of actionable genotypes for the 15 genes in each province and observed the highest ratio, over 99% for the VKORC1 gene, in almost all 20 provinces (Table ). 3.5 Frequency of star alleles and predicted phenotypes We analyzed the spectrum of common star alleles and predicted phenotypes for the seven important PGx genes CYP2D6, CYP2C19, CYP2C9, CYP3A5, UGT1A1, NUDT15, and TPMT. For CYP2D6, the star alleles included in this study were *1, *2, *4, *5, *10, *14, and *41. The most common star allele was *10, with a frequency of 46.60% in all samples. The populations in LIAONING and GUANGXI had the lowest (43.54%) and highest (66.53%) *10 allele frequencies, respectively (Table ). Further, we predicted the phenotype of each sample based on its genotype according to the enzyme activity scores in the CPIC guidelines. Three types of phenotypes were predicted for this gene: normal metaboliser (NM), intermediate metaboliser (IM) and poor metaboliser (PM). NMs, IMs and PMs accounted for 59.87%, 39.94% and 0.18% of all subjects, respectively (Table ). For CYP2C19, the alleles included in the analysis were *1, *2, *3, and *17. The *1 allele was the most common allele, with a nationwide frequency of 63.46%. Across provinces, the frequency of the *1 allele ranged from 55.50% to 73.73%; the highest frequency was observed in GUANGXI, while the lowest was in HUNAN (Table ). The predicted phenotypes of CYP2C19 included PM, IM, NM, rapid metaboliser (RM) and ultrarapid metaboliser (UM). For CYP2C9, over 95% of alleles were determined to be the *1 allele; thus, over 90% (20,814/22,918) of samples were predicted as NM. For CYP3A5, the frequencies of *1 and *3 were 28.20% and 71.80%, respectively. PMs and IMs of CYP3A5 accounted for 51.61% and 40.37% of all samples, respectively. For UGT1A1, the most frequent allele was *1, with a frequency of 67.48%. The proportions of IMs and NMs were similar across all samples (44.24% vs. 45.33%). For NUDT15 and TPMT, NM was the predominant predicted phenotype (72.45% and 96.37%, respectively) (Tables and ). 3.6 CPIC therapeutic recommendations for 31 drugs Clinical Pharmacogenetics Implementation Consortium therapeutic recommendations for 31 drugs based on the 15 genes included in this study are shown in Figure .
When considering only genetic factors, 99.33% of participants would be recommended a decreased warfarin dose according to CPIC guidelines. We defined the high-risk ratio of a drug as the proportion of participants who were recommended an alternative drug by the CPIC guidelines, as described in Materials and Methods. Of the 31 drugs with CPIC guidelines, 20 have recommendations for an alternative drug when subjects carry specific genotypes (i.e., high-risk genotypes). The high-risk ratios of these 20 drugs ranged from 0.18% to 58.25%. Clopidogrel had the highest high-risk ratio of 58.25%, which means only 41.75% of the subjects in the present study were recommended to use clopidogrel under normal risk (Figure ). 3.7 Distribution of high-risk ratios in different provinces Since the high-risk ratios of the 20 drugs differed by orders of magnitude (0.18%–58.25%), we used RRs to study intra-country differences. The RR of a drug in a province equals the high-risk ratio in that province divided by the average high-risk ratio in all 20 provinces, as described in Materials and Methods. Thus, we obtained the RRs of the abovementioned 20 drugs in each province and drew a heatmap, shown in Figure . The highest RR (23.44, 95% CI: 8.83–52.85) was that of rasburicase in GUANGXI, which means the high-risk ratio of rasburicase in GUANGXI was more than 23 times the average ratio in all provinces. We also performed Fisher's exact test between the high-risk ratio in GUANGXI and that in all provinces and found that the high-risk ratio of rasburicase in GUANGXI was significantly higher (p < 0.001). Similarly, the second highest RR (13.17, 95% CI: 4.06–33.22) was that of rasburicase in GUANGDONG, whose high-risk ratio was significantly higher than that in all provinces (p < 0.001). In addition, desipramine, paroxetine, and codeine had the same RR in HENAN (12.59, 95% CI: 2.52–41.24), and their high-risk ratios were significantly higher than those in all provinces (p < 0.01).
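The province-by-drug RR heatmap described above can be assembled as sketched below. Only the nationwide carbamazepine and clopidogrel ratios and the HAINAN carbamazepine ratio are taken from the text; every other value is a placeholder, and matplotlib is used here for illustration although the original figure may have been produced with different tools.

# Sketch of assembling the province x drug RR matrix and drawing a heatmap.
import numpy as np
import matplotlib.pyplot as plt

drugs = ["carbamazepine", "clopidogrel", "rasburicase"]
provinces = ["HAINAN", "GUANGXI", "HENAN"]

nationwide = np.array([0.1194, 0.5825, 0.0050])           # last value is a placeholder
per_province = np.array([[0.1748, 0.6000, 0.0040],        # HAINAN (first value from the text)
                         [0.1200, 0.5500, 0.1170],        # placeholders
                         [0.1100, 0.5800, 0.0030]])       # placeholders

rr = per_province / nationwide                             # element-wise RR per province/drug

fig, ax = plt.subplots()
image = ax.imshow(rr, cmap="Reds")
ax.set_xticks(range(len(drugs)))
ax.set_xticklabels(drugs, rotation=45, ha="right")
ax.set_yticks(range(len(provinces)))
ax.set_yticklabels(provinces)
fig.colorbar(image, ax=ax, label="RR")
fig.tight_layout()
plt.show()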
DISCUSSION In this study, the vast majority (99.97%) of 22,918 individuals had at least one actionable genotype among the 15 genes; in a previous study, 91% of individuals had at least one actionable genotype. Studies have also shown that almost all individuals have one or more actionable pharmacogenetic polymorphisms. Thus, evidence to date highlights the utility and potential benefit of panel-based genotyping for pharmacogenomic testing. The number of PGx genes with actionable genotypes per subject ranged from 0 to 10, with a mean of 4, in this study, which is consistent with a previous report. Furthermore, we found that, overall, the participants harbored pharmacogenetic alleles that lead to atypical therapeutic recommendations by CPIC for a median of 8 drugs, indicating the value of such analysis for individuals taking multiple medications. It may be expected that testing a large number of pharmacogenes and drugs will identify clinically important variants and atypical drug responses in many subjects, and that the degree of impact may otherwise be underestimated. PGx testing might optimize treatment and help avoid adverse drug events (ADEs) by utilizing an appropriate drug at the right dose and at the right time. Preemptive PGx testing may be a crucial component of precision medicine in the postgenomic era. These results underline the significance of pharmacogenomic-guided prescribing: alternatives could be prescribed according to CPIC guidelines for 20 drugs when subjects carry high-risk genotypes. Among these drugs, the high-risk ratio of clopidogrel reached 58.25% in this study.
This means that 58.25% of subjects should be recommended an alternative drug instead of clopidogrel. Gregory's latest study reported a high-risk ratio for clopidogrel of 29.6%, which was significantly lower than that in this study (29.6% vs. 58.25%, p < 0.001). Clopidogrel is a thienopyridine prodrug that requires hepatic biotransformation to form its active metabolite, and this conversion requires two sequential oxidation steps involving several CYP enzymes (e.g., CYP2C19). Only UMs (*1/*17, *17/*17) and EMs (*1/*1) of CYP2C19 are able to use clopidogrel at normal risk, whereas CPIC recommends alternative antiplatelet therapy for IMs (*1/*2, *1/*3, *17/*2, *17/*3) and PMs (*2/*2, *2/*3, *3/*3). UMs and EMs accounted for only a small percentage of the population in the present study in China. In contrast, Gregory's study included African, East Asian, European, and South Asian populations, and the proportion of UMs and EMs was much higher than in this study. Another study in America showed that the distribution of metabolism phenotypes was 4.5% for UMs, 27.9% for EMs, 38.9% for NMs, 26.8% for IMs and 1.8% for PMs, indicating alternative drug treatment for nearly 68% of subjects. In general, there is a large difference in CYP2C19 metabolism phenotypes among races. Warfarin, the most commonly used oral anticoagulant worldwide, is prescribed for the treatment and prevention of thromboembolic disorders. Warfarin dosing is notoriously challenging due to its narrow therapeutic index and wide interindividual variability in dose requirements. Warfarin dose variability is affected by common CYP2C9, VKORC1 and CYP4F2 genetic variants. CYP2C9 *1 leads to the "normal metabolizer" phenotype, CYP2C9 *2 and CYP2C9 *3 are the two most common decreased-function alleles, and CYP2C9 allele frequencies differ between racial/ethnic groups. In this study, almost all subjects (99.33%) would require a decreased dose if taking warfarin. ChinaMAP analysis of CYP4F2, VKORC1 and CYP2C9 indicated that almost all Chinese individuals should use a reduced dose of warfarin, while Gregory's study reported a need for a decreased dose in 48.8% of subjects. Another study also demonstrated that race influences warfarin dose changes associated with genetic factors and recommended that warfarin dosing algorithms be stratified by race. The medication recommendations for clopidogrel and warfarin differ between individuals of Chinese and European ancestry, which emphasizes that race influences dose changes associated with genetic factors. The ratio of high-risk genotypes also differed between provinces in China. The high-risk ratio of rasburicase in GUANGXI was much higher than that nationwide (RR = 23.44, 95% CI: 8.83–52.85, p < 0.001), as was similarly found in GUANGDONG (RR = 13.17, 95% CI: 4.06–33.22, p < 0.001). Rasburicase is used as prophylaxis and treatment for hyperuricemia during chemotherapy in adults and children with lymphoma, leukaemia, and solid tumours. Rasburicase is contraindicated for G6PD-deficient patients due to the risk of acute hemolytic anaemia and possibly methemoglobinemia, which can be fatal. The CPIC guideline for rasburicase therapy states that clinical units treating tumour lysis syndrome should assess G6PD status preemptively. In this study, we observed a much higher high-risk ratio of rasburicase in GUANGXI and GUANGDONG than in other provinces. G6PD deficiency is caused by pathogenic variants of the G6PD gene.
A Chinese national newborn screening programme for G6PD deficiency showed that the prevalence of G6PD deficiency in GUANGXI and GUANGDONG was higher than in other provinces, indicating a higher frequency of G6PD gene variants in these two provinces. The FDA-approved drug label states that individuals at higher risk for G6PD deficiency should be screened before starting rasburicase therapy. Thus, preemptive PGx testing in GUANGXI and GUANGDONG should not omit G6PD testing. In addition, the high-risk ratios of desipramine, paroxetine, and codeine in HENAN were higher than those in other provinces (p < 0.001), and the metabolism of these three drugs is affected by CYP2D6 polymorphisms. For escitalopram, sertraline, and citalopram, the high-risk ratio in SICHUAN was much lower than that in other provinces (p = 0.003), and the metabolism of these three drugs is affected by polymorphisms in CYP2C19. In general, specific genes deserve particular attention when conducting preemptive PGx testing in different geographical regions of China. To carry out comprehensive screening of important PGx genes at an affordable cost in large Chinese populations, a low-cost, high-throughput PGx panel was needed. In this study, we used a multiplex PCR method to achieve both the amplification of the target regions and the construction of the sequencing library in a single PCR reaction. In this way, both the cost and the time of sequencing library construction could be reduced. The PGx panel was designed to cover hotspot variants of 52 PGx genes and required only about 0.5 million raw reads per sample on average, so we could run at least 120 samples per chip on the semiconductor sequencing platform. The sequencing cost per sample was as low as a few US dollars, and one sequencer could process at least 360 samples per day. With this PGx panel, we completed the testing of 22,918 samples efficiently and cost-effectively. However, one limitation of this study is that we did not detect copy number variations (CNVs). For most PGx genes, CNVs are uncommon, with the exception of CYP2D6. CNVs involving functional alleles (mainly *1xN and *2xN) lead to the UM phenotype for CYP2D6. In our study, the copy number of a functional CYP2D6 allele (mainly *1 and *2) would be counted as one whether or not multiple copies of the gene exist, and therefore we would predict these phenotypes mainly as NM and rarely as IM. For example, a UM sample with the genotype *1/*1x2 would be predicted as an NM with genotype *1/*1. The total frequency of predicted NMs and IMs was 99.6%, slightly greater than the previously reported 99%, which could be a result of UM misinterpretation. In addition, for the detection of CYP2D6 PMs, we also performed a low-cost gap-PCR assay for the *5 full gene deletion allele, with an observed frequency of 6.61%. CYP2D6 is very complex, with more than 130 star alleles reported. Seven common star alleles of CYP2D6 were determined in this study, which are expected to account for more than 90% of CYP2D6 alleles in Chinese populations, as reported in a previous study. Increasing the target regions in our panel could improve the accuracy of CYP2D6 genotyping in future studies. To predict drug response and make safer, more effective therapeutic recommendations, PGx is gradually shifting from reactive single-gene testing to preemptive multi-gene testing. NGS detects common variants with high accuracy and cost-effectiveness and is widely used in clinical practice owing to its detection performance and low cost.
NGS-based PGx testing may therefore be widely adopted in China in the near future, which could provide more evidence about drug–gene interactions and benefit patients. In summary, we demonstrate that 99.97% of the study population carried at least one actionable PGx variant, suggesting a high prevalence of actionable variants in the general population in China. Hence, preemptive PGx genotyping may benefit most individuals, with particular value for those taking multiple medications. Additionally, comparison with research in other populations indicates that medication recommendations vary across racial/ethnic groups. Furthermore, the diversity we observed among the 20 provinces suggests that preemptive PGx screening in different geographical regions of China may need to pay more attention to specific genes. These results emphasize the importance of preemptive PGx testing and provide essential evidence for promoting its clinical implementation in China. Q.-f. H., T.-f. L., L.-y. Y., and M. H. conceived and designed the experiments. Q.-f. H., T. Y., T.-f. L., W. L., J.-x. W., Y. C., X.-k. Y., and K.-c. S. performed the data analysis. Q.-f. H., Y.-w. L., H.-f. L., T. Y., Q. L., K.-s. H., L.-f. J., X.-y. H., Y.-r. L., and L.-y. Y. collected clinical data and performed experimental verification. Q.-f. H., Y.-w. L., J.-x. W., W. L., T. Y., T.-f. L., L.-y. Y., and M. H. drafted and revised the manuscript. All authors provided important feedback on the analysis of the results and the revision of the article. This work was supported by the National Key Research and Development Program (No. 2017YFC0909303). The authors declare no conflicts of interest. Supplementary material: Figure S1; Tables S1–S9.
Developing the Next Generation of Augmented Reality Games for Pediatric Healthcare: An Open-Source Collaborative Framework Based on ARCore for Implementing Teaching, Training and Monitoring Applications | 9224adb0-3b8f-4661-9d2a-ad8d455b7f3e | 7962116 | Pediatrics[mh] | Augmented Reality (AR) has been drawing more and more interest in the last years both for industrial and entertainment purposes . There are an increasing number of AR solutions, but very few of them consider fast reaction times or provide shared experiences. Such features enhance dramatically human-to-machine interaction, since they enable building real-time interactive systems for fields like industrial automation, healthcare or gaming. In addition, shared experiences add the possibility to immerse multiple users in the same AR scenario in a way that they can interact simultaneously and with the same virtual elements. Specifically, the term AR refers to a technology that provides an environment in which virtual objects are combined with reality, thus integrating computer-generated objects in the real world . Therefore, AR combines real and virtual objects in a real environment, runs interactive applications in real time, and aligns real and virtual objects with each other . When AR is combined with the ability to interact with the real world through virtual objects, the term Mixed Reality (MR) is frequently used. Due to such capabilities, latency is a key factor in both AR and MR shared experiences, since it impacts user experience by desynchronizing the visualization and reactions of virtual elements, which is essential in a wide variety of fields, such as in healthcare (e.g., in therapy or rehabilitation processes ), teaching or in industrial environments . This article presents a novel collaborative framework that is able to synchronize shared experiences where multiple AR devices detect each other and interact through a Local Area Network (LAN) without depending on a remote server. Thus, the framework allows for deploying shared AR applications anywhere, even without an Internet connection, as long as the devices are connected to the same local network. In addition, this article proposes a practical use case where an AR mobile application is presented, describing the design and development of an innovative architecture for AR gaming experiences. Such an application is focused on helping pediatric patients and includes the necessary tools and accessories to monitor its use and the physical condition of the patients according to their application usage patterns. Since the proposed type of application should facilitate and motivate the physical activity of pediatric patients, the use of AR is essential, because it provides the ability to use the real world environment as a playground. In this way, it is possible to create three-dimensional activities that foster player physical activity in order to interact with the game. Moreover, the AR mobile gaming framework described in this article allows multiple players to engage in the same AR experience, so children can interact and collaborate among them sharing the same AR content, which is placed at the same positions. Furthermore, the proposed system includes a web data crowdsourcing platform for doctors and medical staff that is able to manage pediatric patient profiles and that facilitates the visualization of the data collected by the mobile application. 
Such a web tool also allows for configuring the different parameters of the AR system so as to restrict access or to enable the specific activities that each child can perform with the mobile application. Thus, the developed system provides a very useful tool for creating AR experiences focused on pediatric patients with long hospital stays. Specifically, this article includes three main contributions: (1) it provides a detailed description of how to develop a novel open-source collaborative AR framework aimed at implementing teaching, training and monitoring pediatric healthcare applications; (2) it details how to use the developed AR framework to implement from scratch a practical pediatric healthcare application, which does not need any previous configuration from the user, includes the possibility of creating shared AR experiences between different mobile devices, and enables the collection and visualization of custom usage data that can be remotely monitored and analyzed; and (3) the performance of the framework is evaluated in terms of latency and processing time in order to demonstrate that it provides a good user experience. In addition, it is worth noting that the presented framework is open-source (under the GPL-3.0 license), so it can be downloaded from GitHub and then used and/or modified by researchers and developers, who can also replicate the experiments and validate the provided results. The rest of this paper is structured as follows. The next section reviews the state of the art on augmented and mixed reality collaborative applications and analyzes some of the most relevant academic and commercial mobile gaming solutions for pediatric patients. The following sections detail the design and implementation of the proposed collaborative AR framework and illustrate with practical use cases how it can be used for developing applications for pediatric patients. Finally, the experiments carried out to evaluate the framework performance are described, and the conclusions are presented.
2.1. Collaborative Augmented and Mixed Reality Applications AR and MR are currently considered two of the most promising technologies for providing interfaces to visualize and explore information, and they present an opportunity to redefine the way in which people collaborate in fields like telemedicine or intelligent transportation . In the literature there are some recent preliminary works on collaborative AR/MR applications. The most promising developments use expensive smart glasses like Microsoft HoloLens . For example, Chusetthagarn et al. presented a preliminary Proof-of-Concept (PoC) for visualizing sensor data in disaster management applications. Such a work makes use of HoloLens spatial anchors through a built-in sharing prefab provided by Holotoolkits, a Unity package to create a collaborative AR/MR environment. Unfortunately, such an implementation is currently considered as deprecated. It is also worth mentioning that Microsoft has been working on a Microsoft HoloLens sharing framework, which, among other features, includes a discovery process through UDP. However, although the mentioned Microsoft development was probably the most promising collaborative framework solution, it has recently been announced that the framework will no longer be maintained . There are other recent works devoted to providing mobile collaborative AR experiences. For example, Zhang et al. proposed a client–server based collaborative AR system that integrates a map-recovery and fusion method with a vision-inertial Simultaneous Localization and Mapping (SLAM) approach. The authors validated the precision and completeness of their methods through a number of experiments. In addition, the launch of Google's ARCore and Apple's ARKit has simplified substantially the development of mobile AR applications . However, no collaborative applications based on such software have been found in the literature, although, with the prospect of 5G/6G networks, there has been a rise in the number of web-based mobile AR implementations that rely on efficient communication planning . The concept of collaboration can also be understood as a way of enabling the interaction between Internet of Things (IoT) and AR/MR devices . For instance, in , the authors presented a PoC of metal shelving that is monitored with strain gauges and that has a QR code attached. When the operator scans the QR code, certain identification data are sent to a cloud and a simulation model designed with Matlab provides a stress analysis that is visualized through a pair of F4 smart glasses. Other authors focused on enabling automatic discovery and relational localization to build contextual information on sensor data . In the case of the work detailed in , the researchers describe a scalable AR framework that acts as an extension to the deployed IoT infrastructure. In such a system, recognition and tracking information is distributed over and communicated by the objects themselves. The tracking method can be chosen depending on the context and is detected automatically by the IoT infrastructure. The target objects are filtered by their proximity to the user. It is also worth mentioning the work of Lee et al. , which proposed an architecture to integrate HoloLens through a RESTful API with Mobius, an open-source OneM2M IoT platform. However, in such a work the authors considered that further work will be needed in order to consider the various requirements defined by OneM2M. 2.2. Mobile Gaming for Pediatric Patients
2.2.1. Academic Developments A number of articles endorse the use of technologies like AR or Virtual Reality (VR) as a means through which the quality of life of pediatric patients can be improved. For example, Gómez et al. proposed the use of VR in order to alleviate the negative side effects of chronic diseases that lead to periodic hospital admissions. Some of these symptoms can be anxiety, fatigue, pain, boredom or even depression, among others. Corrêa et al. presented an AR application for conducting activities related to music therapy, which can be utilized in motor physical therapy. Burdea et al. and Pyk et al. used VR and video games for upper-limb physical therapy. For such a purpose, the authors of made use of a modified PlayStation 3 game console, 5DT sensory gloves and other hardware specifically designed for children. Martínez-García et al. developed "PainAPPle", a mobile application that allows for measuring pain levels in children. This latter initiative tries to overcome the difficulties that hospital staff face when assessing pain levels in young children that cannot talk or in patients that have problems expressing their feelings. Thus, in the application proposed by the authors, children can use images and shapes to express their mood. 2.2.2. Commercial Applications Different commercial applications are available for easing pediatric stays. For instance, there are projects such as "Nixi for children" or "Me van a hacer un transplante" ("I'm going to have a transplant") whose main objective is to inform and reassure pediatric patients before operations in order to reduce their fear and uncertainty. In the case of "Nixi for children" , the aim of the project is to reduce pre-operative anxiety in children. For this purpose, children use a VR device through which they can see immersive 360° videos where the procedures of the operation are explained, so that pediatric patients become familiar with the environment. Regarding "Me van a hacer un transplante" , it is an application that explains to children what a bone marrow transplant is and what steps it consists of, in a way that makes it easier for them to understand the procedure they will undergo. This application includes a tale, a video and three games that children can play with while learning. There are other applications that were devised with the idea of facilitating the communication between children and healthcare personnel through different games and activities. For example, "Dupi's magic room" is a project aimed at developing a mobile application with which children can express themselves and explain their feelings through interactive games and drawings. Thus, healthcare professionals can monitor patient mood over time and check their progression. This application utilizes psychopedagogical techniques that allow for analyzing drawings in order to determine the patient's emotional status, therefore helping children to express their feelings effortlessly. Regarding patient stress management, there are a few applications aimed at reducing pre-operative or post-operative anxiety, although they are mostly focused on adult patients. An example is "En calma en el quirófano" ("At ease in the operating room"), which is designed to guide patients through sounds so that the user is able to cope with the preparation prior to entering the operating room or to performing a medical test.
Similarly, the application "REM volver a casa" ("REM go home") provides the user with videos and sounds that allow the patient to follow a mindfulness training program. Finally, it is worth mentioning two applications designed specifically to entertain children during long hospital stays: "RH Kids" and "EntamAR" . Regarding "RH Kids", it provides educational material for children. It includes different interactive stories, educational games and working sheets in order to have a positive impact on children's happiness, which leads to more efficient treatment and to shorter hospital stays. "EntamAR" is an application that is similar to the one described in this article, since it is an AR mobile video game that aims to improve the quality of life of pediatric patients. Such an application makes use of the Onirix AR platform , through which the rooms can be scanned and used afterwards to build games. Thus, both the application and each of the activities included in it have to be developed ad hoc for each hospital, which requires that a technical team (usually volunteers or healthcare personnel) invest a significant amount of time and effort in the design and implementation of each application. In addition, if the physical space in which the application is executed changes significantly, the room would have to be re-scanned and each of the activities reassembled. 2.3. Analysis of the State of the Art After reviewing the state of the art, it can be concluded that there are recent works with the aim of providing AR/MR shared experiences. However, in contrast to the open-source collaborative framework presented in this article, the vast majority of previous works are very early developments with a relevant number of open issues that require further research. In addition, most of them rely on advanced network and communication capabilities (e.g., 5G), remote servers, or sophisticated and expensive AR/MR devices. Regarding pediatric healthcare use cases, all the solutions cited in and solve, either totally or partially, issues related to long pediatric stays. Such solutions can be classified into three categories: alternatives that facilitate the life of patients in health environments regardless of their age, solutions that help children outside of hospital environments (but that can be adapted to be used in such environments), and applications that currently exist to entertain children during long-term hospital stays. Among the analyzed applications, only a few target children and there is only one solution that makes use of AR. Regarding this aspect, "EntamAR" uses a platform which needs an ad hoc solution for each game and location where the game is to be played. This implies a prior process in which the room has to be scanned and the game has to be assembled using the mentioned AR platform. Therefore, after analyzing the previously mentioned alternatives, it was concluded that there is a lack of autonomy in existing AR systems when it comes to providing a complete User Experience (UX). Moreover, none of the analyzed alternatives offers a mechanism for monitoring patients in order to visualize their state or evolution. Due to the previous issues, the framework presented in this article is focused on enabling the development of AR applications that motivate the mobility of hospital patients and that need no previous configuration from the user.
In addition, the open-source framework provides a novel feature that has not been found in the state of the art: it allows pediatric patients to share the same AR experience in real time, so that they can collaborate with one another when playing the games. Furthermore, the developed system enables the collection and visualization of patient usage data. Such data can be really useful for doctors, who gain a dynamic and indirect way of collecting information on the mobility and mood of their patients.
AR and MR are currently considered two of the most promising technologies for providing interfaces to visualize and explore information and they present an opportunity to redefine the way in which people collaborate in fields like telemedicine or intelligent transportation . In the literature there are some recent preliminary works of collaborative AR/MR applications. The most promising developments use expensive smart glasses like Microsoft HoloLens . For example, Chusetthagarn et al. presented a preliminary Proof-of-Concept (PoC) for visualizing sensor data in disaster management applications. Such a work makes use of HoloLens spatial anchors through a built-in sharing prefab provided by Holotoolkits, a Unity package to create a collaborative AR/MR environment. Unfortunately, such an implementation is currently considered as deprecated. It is also worth mentioning that Microsoft has been working on a Microsoft HoloLens sharing framework, which, among other features, includes a discovery process through UDP. However, although the mentioned Microsoft’s development was probably the most promising collaborative framework solution, it has been recently notified that the framework will be no longer maintained . There are other recent works devoted to providing mobile collaborative AR experiences. For example, Zhang et al. proposed a client–server based collaborative AR system that integrates a map-recovery and fusion method with a vision-inertial Simultaneous Localization and Mapping (SLAM). The authors validate the precision and completeness of their methods through a number of experiments. In addition, the launch of Google’s ARCore and Apple’s ARKit have simplified substantially the development of mobile AR applications . However, no collaborative applications based on such software have been found in the literature, but, with the prospect of 5G/6G networks, it has been an uprise in the number of web-based mobile AR implementations that rely on efficient communication planning . The concept of collaboration can be also understood as a way of enabling the interaction between Internet of Things (IoT) and AR/MR devices . For instance, in the authors presented a PoC of metal shelving that is monitored with strain gauges and that has a QR code attached. When the operator scans the QR code, certain identification data are sent to a cloud and a simulation model designed with Matlab provides a stress analysis that is visualized through a pair of F4 smart glasses. Other authors focused on enabling automatic discovery and relational localization to build contextual information on sensor data . In the case of the work detailed in , the researchers describe a scalable AR framework that acts as an extension to the deployed IoT infrastructure. In such a system, recognition and tracking information is distributed over and communicated by the objects themselves. The tracking method can be chosen depending on the context and is detected automatically by the IoT infrastructure. The target objects are filtered by their proximity to the user. It is also worth mentioning the work of Lee et al. , which proposed an architecture to integrate HoloLens through a RESTful API with Mobius, an open-source OneM2M IoT platform. However, in such a work the authors considered that further work will be needed in order to consider the various requirements defined by OneM2M.
2.2.1. Academic Developments A number of articles endorse the use of technologies like AR or Virtual Reality (VR) as a mean through which the quality of life of pediatric patients can be improved. For example, Gómez et al. proposed the use of VR in order to alleviate the negative side effects of chronic diseases that lead to periodic hospital admissions. Some of these symptoms can be anxiety, fatigue, pain, boredom or even depression, among others. Corrêa et al. presented an AR application for conducting activities related to musical therapy, which can be utilized on motor physical therapy. Burdea et al. and Pyk et al. used VR and video games for upper-limb physical therapy. For such a purpose, the authors of made use of a modified PlayStation 3 game console, 5DT sensory gloves and other hardware specifically designed for children. Martínez-García et al. developed “PainAPPle”, a mobile application that allows for measuring pain levels on children. This latter initiative tries to overcome the difficulties that hospital staff face when assessing the pain levels on young children that cannot talk or patients that have problems for expressing their feelings. Thus, in the application proposed by the authors, children can use images and shapes to express their mood. 2.2.2. Commercial Applications Different commercial applications are available for easing pediatric stays. For instance, there are projects such as “Nixi for children” or “Me van a hacer un transplante” (“I’m going to have a transplant”) whose main objective is to inform and reassure pediatric patients before operations in order to reduce their fear and uncertainty. In the case of “Nixi for children” , the aim of the project is to reduce the pre-operative anxiety on children. For this purpose, children use a VR device through which they can see immersive 360 videos where the procedures on the operation are explained so that pediatric patients become familiar with the environment. Regarding “Me van a hacer un transplante” , it is an application where it is explained to children what a bone marrow transplant is and what steps it consists of, in a way that makes it easier for them to understand the procedure they will undergo. This application includes a tale, a video and three games that children can play with while learning. There are other applications that were devised with the idea of facilitating the communication between children and health-care personnel through different games and activities. For example, “Dupi’s magic room” is a project aimed at developing a mobile application with which children can express themselves and explain their feelings through interactive games and drawings. Thus, healthcare professionals can monitor patient mood over time and check their progression. This application utilizes psychopedagogical techniques that allow for analyzing drawings in order to determine the patient emotional status, therefore helping children to express their feelings effortlessly. Regarding patient stress management, there are a few applications aimed at reducing pre-operative or post-operative anxiety, although they are mostly focused on adult patients. An example is “En calma en el quirófano” (’At ease in the operating room’), which is designed to guide patients through sounds so that the user is able to cope with the preparation prior to entering the operating room or to performing a medical test. 
Similarly, the application “REM volver a casa” (REM go home) provides the user with videos and sounds that allow the patient to follow a mindfulness training program. Finally, it is worth mentioning two applications designed specifically to entertain children during long hospital stays: “RH Kids” and “EntamAR” . Regarding “RH Kids”, it provides educational material for children. It includes different interactive stories, educational games and working sheets in order to have a positive impact on children happiness, which leads to a more efficient treatment and to shorten the time at hospital. “EntamAR” is an application that is similar to the one described in this article, since it is an AR mobile video game that aims to improve the quality of life of pediatric patients. Such an application makes use of the Onirix AR platform , through which the rooms can be scanned and used afterwards to build games. Thus, both the application and each of the activities included in it have to be developed ad hoc for each hospital, which requires that a technical team (usually volunteers or healthcare personnel), invest a relevant amount of time and effort in the design and implementation of each application. In addition, if the physical space in which the application is executed changes significantly, the room would have to be re-scanned and each of the activities reassembled.
After reviewing the state of the art, it can be concluded that there are recent works aimed at providing AR/MR shared experiences. However, in contrast to the open-source collaborative framework presented in this article, the vast majority of previous works are very early developments with a relevant number of open issues that require further research. In addition, most of them rely on advanced network and communication capabilities (e.g., 5G), remote servers, or sophisticated and expensive AR/MR devices. Regarding pediatric healthcare use cases, all the previously cited solutions solve, either totally or partially, issues related to long pediatric stays. Such solutions can be classified into three categories: alternatives that facilitate the life of patients in health environments regardless of their age, solutions that help children outside of hospital environments (but that can be adapted to be used in such environments) and applications that currently exist to entertain children during long-term hospital stays. Among the analyzed applications, only a few target children and there is only one solution that makes use of AR. Regarding this aspect, “EntamAR” uses a platform that needs an ad hoc solution for each game and each location where the game is to be played. This fact involves the need for carrying out a previous process in which the room has to be scanned and the game has to be re-assembled using the mentioned AR platform. Therefore, after analyzing the previously mentioned alternatives, it was concluded that there is a lack of autonomy in AR systems when it comes to providing a complete User Experience (UX). Moreover, none of the analyzed alternatives offers a mechanism for monitoring patients in order to visualize their state or evolution. Due to the previous issues, the framework presented in this article is focused on enabling the development of AR applications that motivate the mobility of hospital patients and that need no previous configuration from the user. In addition, the open-source framework provides a novel feature that has not been found in the state of the art: it allows pediatric patients to share the same AR experience in real time, so they can collaborate with each other while playing the games. Furthermore, the developed system enables the collection and visualization of patient usage data. Such data can be really useful for doctors, who will have a dynamic and indirect way of collecting data on the mobility and mood of their patients.
The next subsections describe the internal components of the developed framework. Specifically, the communications architecture used by the AR applications that make use of the collaborative framework is first detailed and, next, the design and implementation of the framework are described thoroughly.

3.1. Communications Architecture

 shows an overview of the architecture of the developed framework, which is divided into three parts: the visualization subsystem, the mobile application and the backend server. The backend hosts a remote database and manages the requests made from the web and mobile applications through a Representational State Transfer (REST) Application Programming Interface (API). In this article it is assumed that a mobile application (implemented as an Android app, since such an operating system is used by nearly 80% of current mobile devices) is in charge of gathering and managing the data collected from the users and is responsible for storing such data on a local database on each device. Moreover, the mobile application is responsible for sending usage data to the backend server. As far as data storage is concerned, only the minimum necessary information is stored in the local database of the mobile device in order to guarantee the proper functioning of the application. For instance, in a pediatric healthcare application, the remote database would store all the essential personal information on the patients: height, the difficulty level of the games, year of birth and the dates when the users were created and modified. All the AR mobile devices within the same LAN connect with each other without requiring a connection to the Internet. Such a communication is handled by the framework, which is able to keep track of the status of each user and propagate the events that occur in each of the application instances.

3.2. Design and Technology Selection

During the design of the proposed collaborative framework, the following requirements were considered:

- The devised system has to support AR visualization and interaction.
- It has to allow two or more players to place 3D objects in a way that they are synchronized regarding the materials or animations applied to the objects. If new objects are incorporated into the scene, they have to appear on all of the connected mobile devices at the appropriate positions.
- The system has to provide scoreboards and a method to keep them synchronized at all times.
- The system should allow an easy method to create, host and join games, keeping in mind that the application that would use the proposed framework could be used by children.
- The system has to provide a way to store usage data from users and gather them in a centralized location, so they can be easily accessed and analyzed.

Considering the previous requirements, the proposed framework was designed to provide a set of tools to develop AR games for pediatric healthcare that is composed of the following components, which are described in the next subsections: the AR subsystem, the backend server, the communications subsystem and the visualization subsystem.

3.2.1. AR Subsystem

The AR subsystem is divided into the following sub-components:

Augmented Reality and game engine. It must first be noted that there are two main types of AR: marker-based and markerless AR. In addition, AR content can be displayed through different devices, the most common being mobile phones and tablets.
However, in recent years Head Mounted Display (HMD) devices have evolved significantly, providing new visualization systems for AR and adding the possibility of having stereoscopic vision. Considering the requirements of most pediatric healthcare scenarios (similar to the ones indicated later in this article), it was concluded that the best alternative for the proposed application consisted in making use of markerless AR, since the other types of AR require the use of external infrastructure (i.e., markers) or are not suitable for the target user of the application (i.e., smart glasses). Among the different available AR platforms, ARCore was selected for the development of the AR activities, since it specializes in surface detection, it has official extensions and packages for the most popular game engines and it offers an increasing number of compatible devices. Regarding the AR development tool, the game engine options are quite restricted, since they have to provide ARCore support. The options offered by the official ARCore website are Android, iOS, Unity and Unreal Engine. After considering the two available game engine alternatives, it was decided to use Unity due to its large community, which provides numerous examples, tutorials and forums. In addition, the learning curve of Unity is less steep than that of other tools like Unreal Engine. Specifically, Unity 2019.3 was selected, since it is currently the only version that provides a direct way to export a Unity project as an Android library. This functionality significantly reduces the complexity of the process of integrating the AR games with the rest of the components of the mobile application.

Mobile Platform. For the selection of the mobile platform, the considered alternatives were the two most used platforms at the moment (Android and iOS), as well as the use of a hybrid mobile development framework. After considering the compatibility with the libraries provided by ARCore and with the selected game engine, it was decided to develop the application for Android since, as previously mentioned, it is currently the most popular mobile operating system, it has wider support and it has a growing number of ARCore-compatible devices. However, other platforms might be taken into consideration in future work.

Local database. The local data storage system is located on each mobile device and stores all the data collected by the application. For each patient, information is collected about his/her daily mood survey results, as well as the playing time and his/her step count (to quantify the amount of physical exercise performed). In addition, the patient profile settings are stored on the local database. All the mentioned data and settings can also be visualized and modified from the web application. In the case of Android, local storage can be implemented through the shared preferences, the internal/external storage system or by using a local database. Due to the nature of the stored data, which are confidential, a data storage system with restricted access is needed. In addition, it was considered that the data to be stored are structured and that the storage system will be read and written with medium to high frequency. For such reasons, it was decided that the option that best suits the requirements of the framework is a local database like Room.

3.2.2. Backend Server

The backend server was designed as a web server that follows the Model View Controller (MVC) pattern.
Such a software architecture pattern separates the data presentation layer from the logic that manages the user interactions. The server is therefore in charge of managing the business logic and the data access layer, including all the infrastructure that supports the provision of the offered services. The backend server provides a REST API through which various endpoints can be accessed.

Server software. Different technologies (frameworks and programming languages) can be used for implementing the backend server, like Java (e.g., Spring Boot), C# (e.g., .NET), JavaScript (e.g., Node.js) or Python (e.g., Django). Among the available solutions, Django was selected due to its learning curve, available documentation, libraries and extensions (e.g., chart generators, REST APIs, serialization libraries or database access libraries through Object-Relational Mapping (ORM)), community support, security and scalability.

Remote database. It can be implemented by using cloud storage (e.g., Amazon Web Services (AWS), Azure or Google Cloud) or through a local database (e.g., MySQL, PostgreSQL or MongoDB). After considering the different alternatives, it was decided to make use of a local database, mainly to avoid storing data outside the hospital/healthcare network. There are different databases officially supported by Django (PostgreSQL, MariaDB, MySQL, Oracle and SQLite), among which SQLite was chosen due to its very simple configuration and its low computational resource consumption.

3.2.3. Visualization Subsystem

Different technologies can be used for developing the Visualization Subsystem, like Angular, React, Vue, Bootstrap or Materialize. A strict requirement is that the app should be able to make asynchronous requests to the REST API of the backend. In addition, other aspects were considered (e.g., learning curve, responsive design, implementation complexity, available documentation) and, eventually, Bootstrap was selected.

3.2.4. Communications Subsystem

The development of shared AR experiences requires the use of a system that facilitates the communications among AR devices. For the proposed framework, such a system has to be compatible with Unity and should provide an easy way to implement the requirements of the AR games: to spawn virtual objects at a specific spot, to detect which user has performed a certain action first and to keep the scoreboards synchronized. The first alternative to develop the framework would consist in using the tools provided by Unity. In this case, it would be necessary to use the UNet network library, which provides a wide range of functionalities that are needed to develop online games. The main drawback is that UNet has been deprecated since 2018 and will be removed from Unity, so its future software support will be a problem. Another alternative provided by Unity is Multiplay, a game server that hosts services based on a consumption model, which implies that users have to pay for it as much as they use it. In addition, Multiplay was still in alpha when the development of the work presented in this paper started, so it was decided to look for other alternatives. The other two best-known options are Mirror and Photon. Both are quite similar, as they provide basic networking capabilities, as well as client-to-client connections via a server and connections where one of the clients hosts the game by acting as the server.
One advantage that Photon has over Mirror is its matchmaking service, although this does not make a big difference with respect to Mirror when implementing two-player games, as this drawback can be easily solved through software. Moreover, it must be noted that Mirror and Photon differ in their cost, since Photon charges for its services and Mirror is completely free. Considering the previous analysis, it was decided to use Mirror as the baseline for the implementation of the collaborative framework. This is due to its similarity with the deprecated UNet and to the large user community and documentation it offers. In addition, the fact that Mirror is a free software project means that the code can be accessed at all times and can even be easily modified or extended if necessary.

 shows a more detailed view of the designed framework. Two players are illustrated on the diagram: the one on the left acts as a local server, providing connectivity for the rest of the users and working as a coordinator; the other player only needs to join the game and the application will be executed as a client. This approach makes the game portable, as it is a serverless application and hence does not depend on an external server.
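To illustrate this host/client approach, the following is a minimal sketch of how such a session could be started with Mirror from a Unity script. The class and method names (LanGameLauncher, CreateGame, JoinGame) are illustrative and not necessarily those used in the actual framework; Mirror's NetworkManager, StartHost() and StartClient() are the real API calls involved.

```csharp
using Mirror;
using UnityEngine;

// Illustrative sketch: one device hosts the game (local server + client) and
// the second device joins it over the LAN, matching the roles in the diagram.
public class LanGameLauncher : MonoBehaviour
{
    // Mirror's NetworkManager handles connections, spawning and state sync.
    [SerializeField] private NetworkManager networkManager;

    // Called by the "create game" button: this device becomes the coordinator,
    // so no external server is required.
    public void CreateGame()
    {
        networkManager.StartHost();
    }

    // Called by the "join game" button: the second device connects as a plain
    // client to the host address (e.g., resolved from the shared game code).
    public void JoinGame(string hostAddress)
    {
        networkManager.networkAddress = hostAddress;
        networkManager.StartClient();
    }
}
```

Once the connection is established, Mirror keeps the spawned networked objects and their marked state synchronized between host and client, which is what the games built on top of the framework rely on for shared scoreboards and object placement.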
4.1. Description of the Use Case

One of the priorities that is always present in a hospital environment is to improve the quality of life of patients. For instance, one of the consequences of hospitalization is an increase in patient stress levels; children in particular usually experience such increases, which can lead to aggressive behaviors, isolation and difficulties when recovering from medical procedures. Minimizing anxiety is important in order to help children to be more relaxed and comfortable during the interventions and medical tests that may be conducted on them while they are at the hospital. Children with less anxiety will cooperate more easily, will be less afraid of procedures, will need fewer sedatives and will have shorter recovery times. During periods of hospitalization, play activities are often used in the form of therapeutic play or play therapy, thus improving children's physical and emotional well-being while also reducing recovery time. Games can help to reduce the intensity of the negative feelings that occur during children’s hospitalizations, thus being appropriate, for example, in pre-operative preparations and invasive procedures. Therefore, the use of games inside a hospital can become an interesting tool for the assistance of hospitalized children. The use of AR also increases children's engagement and motivation, preventing them from getting bored and encouraging them to use the application for longer periods of time.

To show the technical capabilities and the performance of the proposed AR framework, a demonstrative experience was created. In such an application, part of the games were adapted so that they can be used both by children with complete and with restricted mobility. It is important to note that the application is intended to test the developed system from a technical point of view and to demonstrate its technical capabilities. No empirical scientific tests with children were made. Any applications made with this framework will have to be assessed taking into account medical considerations, and tests will have to be carried out. In our future work we will tackle these issues as soon as the COVID-19 pandemic allows for carrying out the experiments in safe scenarios. For example, one possibility would be to perform the evaluation as follows. First, the testing group would be split into two subgroups: one that would use the application daily and another that would not. The idea would be to conduct periodic surveys during their hospital stay asking about changes in, for instance, children’s behavior, pain level or mood, among others. The final stage of the experiment would consist in comparing the results of the two groups and verifying the effectiveness of the proposed application. Another useful experiment would require conducting surveys before and after children undergo medical tests, thus evaluating and comparing the results to verify whether the mobile application is also effective in such scenarios. At the same time, throughout this evaluation process, the usability of the application would also be examined, looking at how children use the application and including questions about this topic in the surveys.

4.2. Main Requirements of the System

Considering the previously mentioned system objectives, the following are its main design requirements:

AR should be included in the mobile application in order to encourage children to perform daily physical activity.
Since AR can integrate the virtual elements of the game into a real scenario, it is possible to provide new augmented experiences and to increase player engagement and motivation.

Children are the target of the AR application, so the User Interface (UI) needs to be as simple as possible and the mobile device should be comfortable and easy to use. For these reasons, the use of mobile devices such as mobile phones or tablets is a better option than smart glasses or HMD devices. In addition, HMD devices like Microsoft HoloLens smart glasses are much more expensive, are designed for adult-size heads, are quite heavy (579 g) and have a steep learning curve.

The mobile AR application should be designed to host games aimed at children between 6 and 14 years old. This decision implies that the UX design needs to be adapted to the mentioned age range. As a consequence, the AR application has to implement simple interfaces, large buttons and actions that require a minimum number of steps.

In order to be able to run the application on different mobile devices, its computational resource requirements need to be limited, thus emphasizing performance rather than visual detail. For such a reason, cartoon low-poly aesthetics are appropriate, since they help to reduce computational complexity.

The mobile AR application needs to collect the children’s usage data so that they can later be visualized and managed through a web platform as well as through the developed mobile application. Such data should be displayed by using simple charts, where playing time, steps and daily surveys can be shown in an attractive way.

It is important to consider that the games to be designed are intended to encourage patients to engage in moderate physical activity inside a hospital within a restricted area. For example, as described later in , in one of the games, which is inspired by the well-known “Marco Polo” game, a player is hidden and the others have to find the person using their voice as a guide. In this case, the hidden player would be replaced by the mobile device, which would emit different sounds to indicate its position. Another example to encourage patients to walk and move around a room is the game detailed in , which creates 3D animals as the user walks and interacts with them.

The proposed games should also be multi-player games and ideally foster the cooperation of the pediatric patient with other people (e.g., parents, medical staff or other patients) in order to solve a problem. For instance, the game described in involves two people, who play different roles: one player sees a 3D map, while the other one has the clues that are needed to solve a puzzle. Thus, the two players have to communicate and cooperate in order to accomplish the indicated goals. In addition, another game has also been developed, described in , in which players compete to see which of the two obtains more points by capturing virtual animals that are displayed on the physical scenario.

The requirements for playing the game should be minimized. Therefore, the use of external components (e.g., QR markers, sensors) or previous scenario-recognition phases should not be necessary, so as to avoid scanning the environment and thus remove the need for creating and personalizing the games for each hospital room.
As a consequence, the developed solution should be flexible enough to be used in the most common situations and should be adapted to the abilities and the most common limitations of a pediatric patient.

4.3. Mobile AR Application

Following the previously indicated requirements, the mobile AR application was designed having in mind that it is going to be used by children (pediatric patients aged between 6 and 14), so the app interface and the interactions with the users have been devised to be as simple as possible, with visual cues and a minimal amount of text to be read. shows a diagram that illustrates the relationships among the components of the application, as well as the interaction flow that the user follows depending on different parameters. As illustrated in , when the application is opened, the patient has the option to log in by scanning a barcode (one of the most common technologies used for identifying patients in a hospital) or he/she can skip this step if identification is not needed. If a barcode is not scanned, the user menu is displayed and he/she can choose from the different available AR games. In case the user logs in by scanning a barcode but does not grant the data storage permission, the application will not collect or store information about the patient. In contrast, when the user logs in by scanning a barcode and accepts the necessary storage permissions, it is checked whether the daily survey is required (if it is required, the survey will be displayed). When the user completes the survey, or in the case that it is not necessary to do so, the user menu is shown, which enables accessing the AR games and viewing the user statistics stored in the local database of the device.

The data collected by the mobile application are stored so that they can be viewed by the medical staff and the patient's family in a quick and efficient way. The data collection process consists of two steps: first, the mobile application collects and stores data in its local database; then, if an Internet connection is available, the data are sent to a server, where they are stored permanently.

The devised application design also considers other usability aspects in order to adapt the controls and interfaces to the different constraints of the mobile application users. For instance, the interface is as simple as possible, with a limited number of different screens, so users do not need to navigate through multiple screens to get to the different parts of the application. In addition, the amount of shown text has been minimized and icons have been incorporated, so that children can easily understand how to use the application. Regarding the four games described later in – , they were designed in different ways, having in mind the different situations that may arise in a hospital, where children suffer from different conditions that hinder their mobility. For instance, the mentioned mobility constraints led to allowing users to move the maps of the games wherever they want and to letting them rotate and scale such maps, thus facilitating access to all parts of the map for those patients who cannot move easily.

Finally, attention should be paid to the fact that environmental conditions affect ARCore performance when detecting and tracking surfaces. For this reason, the environmental factor has been considered while testing the games during their development.
For such a purpose, tests were carried out under different lighting conditions and on different surfaces: tiles, wood, monochromatic lacquered tables and concrete. In addition, the speed of movement and the rotations that the user can make while using the device were evaluated. The application was tested under conditions similar to those that might be encountered in hospitals (i.e., inside classrooms, hallways, living rooms and work environments) and it was concluded that the application works smoothly. Nonetheless, it was confirmed that the worse the light or the more homogeneous the surface, the longer ARCore takes to detect the surface and the worse the tracking continuity. These drawbacks are mitigated to some extent thanks to the ability of ARCore to recover from such failures, as it is able to restore previous virtual object positions once the environment is recognized again.

4.4. Backend

In the proposed use case, the backend is used to save the data collected from the mobile devices, providing an information access point for the healthcare personnel and the patient’s family. This means that healthcare professionals do not have to collect the local usage data manually from each device. In addition, having a central database allows the information to be synchronized among different devices. Furthermore, the backend database can be used to back up local data, allowing for installing and uninstalling the application without losing patient data. Three GET requests were designed: one for obtaining user profile information; another for determining whether the patient has already filled in his/her daily survey; and a last request, devised for the website, which provides the summary of all the patient data so that they can be easily displayed on charts. In addition, the REST API defines three POST requests that are used to send information to the server regarding new users, games and surveys.

4.5. Web Frontend

The web frontend is the interface through which remote users and the backend server communicate. Since the backend server was designed by using the MVC pattern, the frontend corresponds to the view part of the pattern. To implement the frontend, a website was devised to visualize and manage patient data remotely. Specifically, the website was designed to provide easy navigation and access to each patient’s data in a clear and simple way. To access each patient’s data, the remote viewer only has to enter the barcode number of the monitored patient. The website also provides a menu to modify the patient settings and the restrictions of the games. Specifically, the web application offers the following pages that users can navigate through:

Home page. It provides a welcome page with a video showing all the games and a summary of the features of the mobile and web applications. In addition, a button is provided through which the remote user can access the patient search page. A screenshot of the home page is shown in on the left.

Search page. It is used to look for a patient profile by inserting his/her barcode number. A screenshot of the search page is shown in the center image of .

User data edition page. In this page the patient’s personal information can be edited. A screenshot of the edition page can be seen in on the right.

Patient data page. It can be accessed from the search page after registering a patient in the database.
As can be observed in , the patient data page contains the patient's personal information together with charts that show his/her step count and the evolution of the answers to the patient's daily mood survey.

In order to validate the proposed mobile AR framework, the use cases described in the next subsections were tested, verifying the authentication and initialization of the system, each of the developed games and the web application.

4.6. Authentication and System Initialization

Within the mobile application, there are two available options: the user can access it with or without scanning a barcode. In order to use the application and store the patient’s usage data, the patient must first log in. The way to log into the application is by scanning the barcode of the hospital identification bracelet, as can be seen in . Once the barcode is scanned, it is checked whether the patient is registered in the system. If the patient is not registered, such a registration is then performed by asking for permission to collect and store his/her data. Then, the login process continues by checking whether the patient must take his/her daily survey and, after that, the main menu is displayed. During this process, if an Internet connection is available, requests will also be made to the server to check whether there are any data about the user on the backend server. The daily survey is shown during the login process once a day for each registered patient. As shown in , this survey collects data on a scale of 1 to 5 on mood, pain level and appetite. If a user does not want to register or send his/her usage data, he/she would choose the login-without-barcode option, with which it is only possible to access the application and start a gaming session.

4.7. Designed Games

Within the Unity project there is a main scene where the rest of the scenes are loaded. This scene has an object to which a parameter is passed with the number of the scene that has to be loaded next. ARCore is used to detect horizontal surfaces, which is necessary for the correct positioning of 3D objects on real scenarios. During this process, raycasting is performed from the camera in the direction the user is aiming to check for collisions with the surfaces detected by ARCore. The collision point is where the virtual objects will be placed. The user interface system, which includes the buttons and the user interactions handled through scripts that can be adapted to each specific scene, is the same for all scenes. Users can show/hide the main menu and restart the game. In addition, in the “Map Explorer” and “WakaMole” games, users are able to scale and rotate the map in order to see it from a more comfortable point of view without having to move excessively. As previously mentioned, such a functionality is aimed at easing the interactions of patients whose reduced mobility prevents them from solving a challenge because they are not able to visualize the map from all the required perspectives. Moreover, the developed mobile application enhances its UX through haptic feedback, which is used to reinforce the response of the system to user interactions. Thus, when the user makes a mistake, a short vibration is emitted, while when the game is completed a sequence of vibrations is emitted as a ‘celebration’. In addition, it was also considered appropriate to add a particle system that simulates confetti to indicate that an objective was achieved and to reinforce the positive feeling of the user.
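As a concrete illustration of the surface-detection and placement flow described above, the following is a minimal sketch written against Unity's AR Foundation wrapper for ARCore. The component and prefab names (SurfacePlacer, prefabToPlace) are hypothetical, and the framework's own scripts may use a different ARCore integration; the sketch only shows the general raycast-and-place idea.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Illustrative sketch: cast a ray from the screen centre towards the planes
// detected by ARCore and place a virtual object (e.g., a map or an animal)
// at the first hit point.
public class SurfacePlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager;
    [SerializeField] private GameObject prefabToPlace;

    private readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        // Ray through the centre of the screen, i.e., where the user is aiming.
        Vector2 screenCentre = new Vector2(Screen.width / 2f, Screen.height / 2f);

        if (raycastManager.Raycast(screenCentre, hits, TrackableType.PlaneWithinPolygon))
        {
            // The first hit is the closest detected surface; place the object there.
            Pose hitPose = hits[0].pose;
            Instantiate(prefabToPlace, hitPose.position, hitPose.rotation);
            enabled = false; // in this sketch the object is placed only once
        }
    }
}
```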
Another aspect that is considered important is to adapt the size and position of the map scaling and rotation sliders when the mobile device is rotated. In this way the shape of the sliders is not distorted, since Unity manages interfaces through ratios and distances to anchors. Screen rotation is an event that is captured by the Android application and then sent through a message system to a script in Unity.

4.7.1. Marco Polo

The first of the activities devised for pediatric patients was the game “Marco Polo”, which does not involve AR, but which enables testing the rest of the components of the developed system. Specifically, the game requires a minimum of two people: one hides the mobile device and clicks on a button, while the rest of the players try to find the device guided by the sound it plays. In , the main screen of the game can be observed, where the user can select one of the three available difficulty levels. Depending on the selected level, the sound, the melody and its frequency are adapted: the greater the difficulty, the lower the sound and its frequency. Once the level is selected, the game can begin. A timer has also been added to keep track of the time from when the device is hidden to when it is found.

4.7.2. Jungle Adventure

The main objective of this game is for the patient to walk and do a minimum of physical activity. The game consists of walking through a room or corridor, aiming the camera of the device towards the floor to detect the surfaces, where animals will appear. shows some examples of the animals being automatically spawned on the floor. When the animals are displayed, the patient has to "collect" them by clicking on the 3D objects. When an animal is registered, its silhouette lights up on a panel located on the lower left side of the screen, making the user aware of the animals still to be found and showing how many of them have already been collected. Once a surface is detected, an internal counter is triggered which, after a random time within a range, causes an animal to appear on the surface that the user is targeting. The instantiated animal (3D model) is randomly chosen from a list of animals that is passed to the scene controller for this purpose. After the user clicks on the animal, the counter is activated again, restarting this cycle, which stops when all the animals on the list have been displayed.

4.7.3. Map Explorer

This game challenges two patients to solve a map in a collaborative way. For this purpose, users can play two roles: the explorer, who has to guide a character through the map, and the assistant, who has the clues to discover the steps the explorer has to follow to solve the puzzle. For this game there are two different maps available: Savannah and North Pole, which are shown in . Each game consists in combining the clues given by the assistant player in order to solve the given challenge. These clues can be found on the mobile application, at the “MapAssistantActivity” screen, as can be seen in . This activity uses the ARCore surface detection tool in order to place the map on a surface at the beginning of the game. In each map, the position of the characters and obstacles is predefined, with small details changing in each execution that are used to indicate the order of the resolution of the challenge.
The correct order for the solution of each map can be found in the posts that can be accessed by the other player from their device. Thus, completing the game is a task that needs to be achieved with the collaboration between the two patients. In addition, a timer has been included to count the time it takes the players to solve the challenge. This is meant to encourage the patients to keep playing and moving on a daily basis. 4.7.4. Wakamole This game was designed thinking about children who cannot move or have very little mobility, as they can play without the need for walking around a map. The game is for two patients that play against each other, trying to ’capture’ as many animals as possible. The animals appear periodically at random points on the map, and have a time limit after which they disappear. The player who first clicks on an animal gets a point. The game ends when one of the players reaches a previously agreed score, since there is no time limitation. To perform the previously described process, the users need to be connected to the same game. For such a purpose, one of the players starts a new game, and the other one joins the game introducing a game code as it is illustrated in , where the image at the center shows the host screen while waiting for the second player to enter the game code (which is illustrated on the image on the right). After the matchmaking process is completed, the new scene containing the game is loaded. shows how ARCore is used for surface detection in order to place the map on a surface at the beginning of the game. When the game starts, the animals appear one by one for a fixed time. The first player that presses on the animal before it disappears gets a point. The spawning points of the animals are stored in a list, from which random points are taken each time a new animal is spawned. If only one of the players clicks on the animal, there will be no conflicts, since there is enough time for the message to be sent from the client to the server (then the server will communicate the clients that the animal needs to be hidden, as it has already been picked by a user). The problem arises when both players click on the same animal almost simultaneously, as both event messages would reach the server and a point would be added to each player. In order to solve this problem, the message flow has been implemented in such a way that the client sends its own identifier to the server, and when the server notifies the clients that the animal has been clicked on, it also sends such an identifier. This allows each client to check if the received identifier matches its own before giving the order to add a point to its scoreboard. When one of the scoreboards is incremented, it is automatically updated in all clients, as they remain synchronized at all times. An example of the such scoreboards can be observed at the bottom of the screenshot on the right of .
One of the priorities that is always present in a hospital environment is to improve the quality of life of patients. For instance, one of the consequences of hospitalization is an increase in patient stress levels; in children, this increased stress can lead to aggressive behavior, isolation and difficulties when recovering from medical procedures. Minimizing anxiety is important in order to help children to be more relaxed and comfortable during interventions and medical tests that may be conducted on them while they are at the hospital. Children with less anxiety will cooperate more easily, will be less afraid of procedures, will need fewer sedatives and will have shorter recovery times. During periods of hospitalization, play activities are often used in the form of therapeutic play or play therapy, thus improving children's physical and emotional well-being while also reducing recovery time. Games can help to reduce the intensity of the negative feelings that occur during children's hospitalizations, which makes them appropriate, for example, in pre-operative preparations and invasive procedures. Therefore, the use of games inside a hospital can become a valuable tool for assisting hospitalized children. The use of AR also increases children's engagement and motivation, preventing them from getting bored and encouraging them to use the application for longer periods of time. To show the technical capabilities and the performance of the proposed AR framework, a demonstrative experience was created. In such an application, some of the games were adapted so that they can be used by children with either full or restricted mobility. It is important to note that the application is intended to test the developed system from a technical point of view and demonstrate its technical capabilities. No empirical scientific tests with children were made. Any application built with this framework will have to be assessed taking medical considerations into account, and appropriate tests will have to be carried out. In our future work we will tackle these issues as soon as the COVID-19 pandemic allows for carrying out the experiments in safe scenarios. For example, one possibility would be to perform the evaluation as follows. First, the testing group would be split into two subgroups: one that would use the application daily and another that would not. The idea would be to conduct periodic surveys during their hospital stay asking about changes in, for instance, children's behavior, pain level or mood. The final stage of the experiment would consist of comparing the results of the two groups to verify the effectiveness of the proposed application. Another useful experiment would require conducting surveys before and after children undergo medical tests, thus evaluating and comparing the results to verify whether the mobile application is also effective in such scenarios. At the same time, throughout this evaluation process, the usability of the application would also be examined, looking at how children use the application and including questions about this topic in the surveys.
Considering the system objectives mentioned in , the following are its main design requirements:

AR should be included in the mobile application in order to encourage children to perform daily physical activity. Since AR can integrate the virtual elements of the game into a real scenario, it is possible to provide new augmented experiences and to increase player engagement and motivation.

Children are the target of the AR application, so the User Interface (UI) needs to be as simple as possible and the mobile device should be comfortable and easy to use. For these reasons, the use of mobile devices such as mobile phones or tablets is a better option than smart glasses or HMD devices. In addition, HMD devices like Microsoft HoloLens smart glasses are much more expensive, they are designed for adult-size heads, they are quite heavy (579 g) and they have a steep learning curve.

The mobile AR application should be designed to host games aimed at children between 6 and 14 years old. This decision implies that the UX design needs to be adapted to the mentioned age range. As a consequence, the AR application has to implement simple interfaces, large buttons and actions that require a minimum number of steps.

In order to be able to run the application on different mobile devices, its computational resource requirements need to be limited, thus emphasizing performance rather than visual detail. For such a reason, cartoon low-poly aesthetics are appropriate, since they help to reduce computational complexity.

The mobile AR application needs to collect the children's usage data so that they can be later visualized and managed through a web platform as well as through the developed mobile application. Such data should be displayed by using simple charts, where playing time, steps and daily surveys can be shown in an attractive way.

It is important to consider that the games to be designed are intended to encourage patients to engage in moderate physical activity inside a hospital within a restricted area. For example, as it will be described later in , in one of the games, which is inspired by the well-known "Marco Polo" game, a player is hidden and the others have to find the person using their voice as a guide. In this case, the hidden player would be replaced by the mobile device, which would emit different sounds to indicate its position. Another example to encourage patients to walk and move around a room is the game detailed in , which creates 3D animals as the user walks and interacts with them.

The proposed games should also be multi-player games and ideally foster the cooperation of the pediatric patient with other people (e.g., parents, medical staff or other patients) in order to solve a problem. For instance, the game described in involves two people, who play different roles: one player sees a 3D map, while the other one has the clues that are needed to solve a puzzle. Thus, the two players have to communicate and cooperate in order to accomplish the indicated goals. In addition, another game has also been developed, described in , in which players compete to see which of the two obtains more points by capturing virtual animals that are displayed on the physical scenario.

The requirements for playing the game should be minimized.
Therefore, the use of external components (e.g., QR markers, sensors) or previous scenario-recognition phases should not be necessary; this avoids having to scan the environment beforehand and removes the need to create and personalize the games for each hospital room. As a consequence, the developed solution should be flexible enough to be used in the most common situations and should be adapted to the abilities and the most common limitations of a pediatric patient.
Following the requirements previously indicated in , the mobile AR application was designed keeping in mind that it is going to be used by children (pediatric patients aged between 6 and 14), so the app interface and interactions with the users have been devised to be as simple as possible, with visual cues and minimizing the amount of text to be read. shows a diagram that illustrates the relationships among the components of the application, as well as the interaction flow that the user follows depending on different parameters. As it is illustrated in , when the application is opened, the patient has the option to log in by scanning a barcode (which is one of the most common identification technologies used for identifying patients in a hospital) or he/she can skip this step if identification is not needed. If a barcode is not scanned, the user menu is displayed and he/she can choose from the different available AR games. In case the user logs in by scanning a barcode but does not grant the data storage permission, the application will not collect or store information about the patient. In contrast, when the user logs in by scanning a barcode and accepts the necessary storage permissions, it is checked whether the daily survey is required (if it is required, the survey will be displayed). When the user completes the survey or in the case that it is not necessary to do so, the user menu is shown, which enables accessing the AR games and viewing the user statistics stored in the local database of the device. The data collected by the mobile application are stored so that they can be viewed by the medical staff and patient families in a quick and efficient way. The data collection process consists of two steps: first, the mobile application collects and stores data in its local database; then, if an Internet connection is available, the data are sent to a server, where they are stored permanently. The devised application design also considers other usability aspects in order to adapt the controls and interfaces to the different constraints of the mobile application users. For instance, the interface is as simple as possible, with a limited number of different screens, so users do not need to navigate through multiple screens to get to the different parts of the application. In addition, the amount of text shown has been minimized and icons have been incorporated, so that children can easily understand how to use the application. Regarding the four games described later in – , each was designed differently, keeping in mind the different situations that may arise in a hospital, where children suffer from different conditions that hinder their mobility. For instance, the mentioned mobility constraints led to allowing users to move the maps of the games wherever they want and to letting them rotate and scale such maps, thus facilitating accessibility to all parts of the map for those patients who cannot move easily. Finally, attention should be paid to the fact that environmental conditions affect ARCore performance when detecting and tracking surfaces . For this reason, during the development of the games, the environmental factor has been considered while testing them. For such a purpose, tests have been carried out under different lighting conditions and on different surfaces: tiles, wood, monochromatic lacquered tables and concrete. In addition, the speed of movement and the rotations that the user can make while using the device were also evaluated.
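To make the two-step data collection described above more concrete, a minimal sketch of a local-first storage and synchronization component is given below. It assumes the mobile application is implemented in Unity (C#), which is consistent with the Unity components described later; the class, field and endpoint names (PendingRecord, SyncManager, the /api/surveys URL) are hypothetical and do not come from the actual implementation.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical record for one daily survey answer set (names are illustrative).
[Serializable]
public class PendingRecord
{
    public string patientBarcode;
    public int mood;       // 1-5
    public int painLevel;  // 1-5
    public int appetite;   // 1-5
    public string date;
}

[Serializable]
class PendingRecordList { public List<PendingRecord> records = new List<PendingRecord>(); }

public class SyncManager : MonoBehaviour
{
    const string LocalKey = "pending_records";                            // local buffer key
    const string SurveyEndpoint = "https://example-backend/api/surveys";  // hypothetical URL

    // Step 1: always store locally first, so no data are lost while offline.
    public void StoreLocally(PendingRecord record)
    {
        var list = LoadLocal();
        list.records.Add(record);
        PlayerPrefs.SetString(LocalKey, JsonUtility.ToJson(list));
        PlayerPrefs.Save();
    }

    // Step 2: when a connection is available, flush the local buffer to the backend.
    public IEnumerator FlushToServer()
    {
        if (Application.internetReachability == NetworkReachability.NotReachable)
            yield break;                                  // stay local-only while offline

        var list = LoadLocal();
        foreach (var record in list.records)
        {
            byte[] body = Encoding.UTF8.GetBytes(JsonUtility.ToJson(record));
            using (var req = new UnityWebRequest(SurveyEndpoint, "POST"))
            {
                req.uploadHandler = new UploadHandlerRaw(body);
                req.downloadHandler = new DownloadHandlerBuffer();
                req.SetRequestHeader("Content-Type", "application/json");
                yield return req.SendWebRequest();
                if (req.result != UnityWebRequest.Result.Success)   // Unity 2020.2+ API
                    yield break;                          // keep the buffer and retry later
            }
        }
        // Simplification: a real implementation would drop records one by one as
        // they are acknowledged, instead of clearing the whole buffer at the end.
        PlayerPrefs.DeleteKey(LocalKey);
    }

    PendingRecordList LoadLocal()
    {
        string json = PlayerPrefs.GetString(LocalKey, "");
        return string.IsNullOrEmpty(json) ? new PendingRecordList()
                                          : JsonUtility.FromJson<PendingRecordList>(json);
    }
}
```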
The application was tested under similar conditions to those that might be encountered in hospitals (i.e., inside classrooms, hallways, living rooms and work environments) and it has been concluded that the application works smoothly. Nonetheless, it was confirmed that the worse the light or the more homogeneous the surface is, the longer ARCore takes to detect the surface and the worse the tracking continuity. These drawbacks are mitigated to some extent thanks to the ability of ARCore to recover from these failures, as it is able to restore previous virtual object positions once the environment is recognized again.
In the proposed use case, the backend is used to save the data collected from the mobile devices, providing an information access point for healthcare personnel and for the patients' families. This means that healthcare professionals do not have to manually collect the local usage data from each device. In addition, having a central database allows the information to be synchronized among different devices. Furthermore, the backend database can be used to back up local data, so the application can be installed and uninstalled without losing patient data. Three GET requests were designed: one for obtaining user profile information; another for determining whether the patient has already filled in his/her daily survey; and a last request, devised for the website, which provides the summary of all the patient data so that they can be easily displayed on charts. In addition, the REST API defines three POST requests that are used to send information to the server regarding new users, games and surveys.
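As an illustration of how the mobile application might consume this REST API, a minimal C# (Unity) sketch of the daily-survey GET request is shown below. The paper only states that three GET and three POST requests exist; the route, the JSON shape of the response and the BackendClient class name are assumptions made for this example.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class BackendClient : MonoBehaviour
{
    // Hypothetical routes; the described API exposes GETs for the profile, the
    // daily-survey status and the chart summary, plus POSTs for new users,
    // games and surveys, but their actual paths are not given in the text.
    const string BaseUrl = "https://example-backend/api";

    // GET: has this patient already answered today's survey?
    public IEnumerator CheckDailySurvey(string barcode, System.Action<bool> onResult)
    {
        string url = $"{BaseUrl}/patients/{barcode}/daily-survey";
        using (var req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
            {
                // Assumed response shape: {"pending": true/false}.
                var status = JsonUtility.FromJson<SurveyStatus>(req.downloadHandler.text);
                onResult(status.pending);
            }
            else
            {
                onResult(true); // offline or error: fall back to asking the survey locally
            }
        }
    }

    [System.Serializable]
    class SurveyStatus { public bool pending; }
}
```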
The web frontend is the interface through which remote users and the backend server communicate. Since the backend server was designed by using the MVC pattern, the frontend corresponds to the view part of the pattern. To implement the frontend, a website was devised to visualize and manage patient data remotely. Specifically, the website was designed to provide easy navigation and access to each patient’s data in a clear and simple way. To access each patient’s data, the remote viewer only has to enter the barcode number of the monitored patient. The website also provides a menu to modify the patient settings and restrictions of the games. Specifically, the web application offers the following pages that users can navigate through: Home page. It provides a welcome page with a video showing all the games and a summary on the features of the mobile and web applications. In addition, a button is provided through which the remote user can access the patient search page. A screenshot of the home page is shown in on the left. Search page. It is used to look for a patient profile by inserting his/her barcode number. , in the center image, shows a screenshot of the search page. User data edition page. In this page the patient’s personal information can be edited. A screenshot of the edition page can be seen on on the right. Patient data page. It can be accessed from the search page after registering a patient in the database. As it can be observed in , the patient data page contains the patient personal information together with charts that show his/her step count and the evolution of the answers of patient daily mood survey. In order to validate the proposed mobile AR framework, the use cases described in the next subsections were tested, verifying the authentication and initialization of the system, and each of the developed games, as well as the web application.
Within the mobile application, there are two available options: the user can access with or without scanning the barcode. In order to use the application and store the patient’s usage data, the patient must first log in. The way to log into the application is by scanning the barcode of the hospital identification bracelets, as it can be seen in . Once the barcode is scanned, it is checked whether the patient is registered in the system. If the patient is not registered, such a registration is then performed by asking for permission to collect and store his/her data. Then, the login process continues by checking whether the patient must take his/her daily survey and, then, the main menu is displayed. During this process, if an Internet connection is available, requests will also be made to the server to check if there are any data about the user on the backend server. The daily survey is shown during the login process once a day for each registered patient. As it is shown on , this survey collects data on a scale of 1 to 5 on mood, pain level and appetite. If a user does not want to register or send his/her usage data, he/she would choose the login without barcode option, in which it is only possible to access the application and start a gaming session.
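The start-up decision flow described above (barcode scan, registration and consent, daily survey, main menu) can be summarized with the following C# sketch. This is an illustration only: the barcode-scanning component, the UI dialogs and the PatientDatabase helper are placeholders and are not part of the documented implementation.

```csharp
using UnityEngine;

// Minimal sketch of the login/initialization flow. The helper methods
// (ShowConsentDialog, ShowDailySurvey, ShowMainMenu) stand in for the real
// UI screens and are hypothetical.
public class LoginFlow : MonoBehaviour
{
    // Called by the (unspecified) barcode-scanning component once a bracelet
    // barcode has been read; an empty string models the "login without barcode" option.
    public void OnBarcodeScanned(string barcode)
    {
        if (string.IsNullOrEmpty(barcode))
        {
            ShowMainMenu(anonymous: true);        // play without storing usage data
            return;
        }

        if (!PatientDatabase.IsRegistered(barcode))
        {
            bool consent = ShowConsentDialog();   // ask permission to collect and store data
            if (!consent) { ShowMainMenu(anonymous: true); return; }
            PatientDatabase.Register(barcode);
        }

        if (PatientDatabase.NeedsDailySurvey(barcode))
            ShowDailySurvey(barcode);             // 1-5 scales: mood, pain level, appetite

        // In the real application the menu is shown once the survey is completed.
        ShowMainMenu(anonymous: false);
    }

    // --- placeholders for the real UI / storage layers ---
    bool ShowConsentDialog() { return true; }
    void ShowDailySurvey(string barcode) { }
    void ShowMainMenu(bool anonymous) { }

    static class PatientDatabase
    {
        public static bool IsRegistered(string b) { return PlayerPrefs.HasKey("patient_" + b); }
        public static void Register(string b) { PlayerPrefs.SetString("patient_" + b, System.DateTime.UtcNow.ToString("o")); }
        public static bool NeedsDailySurvey(string b) { return true; } // simplified
    }
}
```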
Within the Unity project there is a main scene where the rest of the scenes are loaded. This scene has an object to which a parameter is passed with the scene number that has to be loaded next. ARCore is used to detect horizontal surfaces, which is necessary for the correct positioning of 3D objects in real scenarios. During this process, raycasting is performed from the camera in the direction the user is aiming to check for collisions with the surfaces detected by ARCore. The collision point is where the virtual objects will be placed. The user interface system, which includes the buttons and the user interactions handled through scripts that can be adapted to each specific scene, is the same for all scenes. Users can show/hide the main menu and restart the game. In addition, in the "Map explorer" and "WakaMole" games, users are able to scale and rotate the map in order to see it from a more comfortable point of view without having to move excessively. As it was previously mentioned, such a functionality is aimed at easing the interactions of patients whose reduced mobility prevents them from solving a challenge because they are not able to visualize the map from all the required perspectives. Moreover, the developed mobile application enhances its UX through haptic feedback, which is used to reinforce the response of the system to user interactions. Thus, when the user makes a mistake, a short vibration is emitted, while when the game is completed a sequence of vibrations is emitted as a 'celebration'. In addition, it is also considered appropriate to add a particle system that simulates confetti to indicate that an objective was achieved and to reinforce the positive feeling of the user. Another aspect that is considered important is to adapt the size and position of the map scaling and rotation sliders when rotating the mobile device. In this way the shape of the sliders is not distorted, since Unity's way of managing the interfaces is by ratios and distances to anchors. The screen rotation itself is an event that is captured by the Android application and then sent through a message system to a script that receives it in Unity.
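A rough sketch of the surface-based placement and haptic feedback described above is shown below. The paper mentions ARCore; this example assumes it is accessed through Unity's AR Foundation package (ARRaycastManager), which is an assumption on our part, and the prefab and field names are purely illustrative.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ArPlacementAndFeedback : MonoBehaviour
{
    public ARRaycastManager raycastManager;   // provided by the AR Foundation session
    public GameObject objectPrefab;           // e.g., the game map or an animal
    public ParticleSystem confetti;           // played when an objective is achieved

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        // Cast a ray from the screen centre towards the detected horizontal planes.
        Vector2 screenCentre = new Vector2(Screen.width / 2f, Screen.height / 2f);
        if (raycastManager.Raycast(screenCentre, hits, TrackableType.PlaneWithinPolygon))
        {
            // The first hit is the closest plane: place the virtual object there on tap.
            Pose pose = hits[0].pose;
            if (Input.GetMouseButtonDown(0))
                Instantiate(objectPrefab, pose.position, pose.rotation);
        }
    }

    // Short vibration when the player makes a mistake.
    public void SignalMistake() => Handheld.Vibrate();

    // Repeated vibrations plus confetti as a 'celebration' when the game is completed.
    public IEnumerator SignalVictory()
    {
        confetti.Play();
        for (int i = 0; i < 3; i++)
        {
            Handheld.Vibrate();
            yield return new WaitForSeconds(0.3f);
        }
    }
}
```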
4.7.1. Marco Polo
The first of the activities devised for pediatric patients was the game "Marco Polo", which does not involve AR, but which enables testing the rest of the components of the developed system. Specifically, the game requires a minimum of two people: one would hide the mobile device and would click on a button, while the rest of the players will try to find the mobile device by being guided by the sound it plays. In , the game's main screen can be observed, where the user can select one of the three available difficulty levels. Depending on the selected level, the sound, melody and its frequency are adapted: the greater the difficulty, the lower the sound and its frequency. Once the level is selected, the game can begin. A timer has also been added to keep track of the time from when the device is hidden to when it is found.
4.7.2. Jungle Adventure
The main objective of this game is for the patient to walk and do a minimum of physical activity. The game consists of walking through a room or corridor, aiming the camera of the device towards the floor to detect the surfaces, where animals will appear. shows some examples of the animals while being automatically spawned on the floor. When the animals are displayed, the patient will have to "collect" them by clicking on the 3D objects. When an animal is registered, its silhouette will light up on a panel located on the lower left side of the screen, making the user aware of the animals still to be found and showing how many of them have already been collected. Once a surface is detected, an internal counter is triggered which, after a random time within a range, will cause an animal to appear on the surface that the user is targeting. The instantiated animal (3D model) will be randomly chosen from a list of animals that is passed to the scene controller for this purpose. After the user clicks on the animal, the counter is activated again, restarting this cycle, which stops when all the animals on the list have been displayed.
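The spawning cycle of "Jungle Adventure" (random delay within a range, random animal from a list, stop when the list is exhausted) can be approximated by the following coroutine sketch; the surface-detection helpers are placeholders for the ARCore logic and all names are illustrative, not taken from the actual project.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the "Jungle Adventure" spawning cycle: wait a random time,
// wait for a detected surface, spawn a random animal from the remaining list,
// wait for it to be collected, and repeat until the list is empty.
public class AnimalSpawner : MonoBehaviour
{
    public List<GameObject> animalPrefabs;              // passed to the scene controller
    public Vector2 delayRange = new Vector2(2f, 6f);    // random wait, in seconds

    GameObject currentAnimal;

    public IEnumerator SpawnCycle()
    {
        var remaining = new List<GameObject>(animalPrefabs);
        while (remaining.Count > 0)
        {
            // Wait a random time within the configured range.
            yield return new WaitForSeconds(Random.Range(delayRange.x, delayRange.y));

            // Wait until the player is aiming at a detected surface.
            yield return new WaitUntil(SurfaceAvailable);

            // Pick a random animal that has not been shown yet and place it.
            int index = Random.Range(0, remaining.Count);
            Pose pose = CurrentSurfacePose();
            currentAnimal = Instantiate(remaining[index], pose.position, pose.rotation);
            remaining.RemoveAt(index);

            // Wait until the player taps ("collects") the animal, then loop again.
            yield return new WaitUntil(() => currentAnimal == null);
        }
    }

    // Called from the animal's own tap handler in the real game.
    public void Collect() { Destroy(currentAnimal); }

    // --- placeholders for the AR surface-detection logic ---
    bool SurfaceAvailable() { return true; }
    Pose CurrentSurfacePose() { return new Pose(Vector3.zero, Quaternion.identity); }
}
```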
4.7.3. Map Explorer
This game challenges two patients to solve a map in a collaborative way. For this purpose, users can play two roles: the explorer, who has to guide a character through the map, and the assistant, who has the clues to discover the steps the explorer has to follow to solve the puzzle. For this game there are two different maps available: Savannah and North Pole, which are shown on . Each game consists of combining the clues given by the assistant player in order to solve the given challenge. These clues can be found on the mobile application, at the "MapAssistantActivity" screen, as it can be seen on . This activity uses the ARCore surface detection tool in order to place the map on a surface at the beginning of the game. In each map, the position of the characters and obstacles is predefined, changing in each execution small details that are used to indicate the order of the resolution of the challenge. The correct order for the solution of each map can be found in the posts that can be accessed by the other player from their device. Thus, completing the game is a task that needs to be achieved through the collaboration between the two patients. In addition, a timer has been included to count the time it takes the players to solve the challenge. This is meant to encourage the patients to keep playing and moving on a daily basis.
4.7.4. WakaMole
This game was designed thinking about children who cannot move or have very little mobility, as they can play without the need for walking around a map. The game is for two patients that play against each other, trying to 'capture' as many animals as possible. The animals appear periodically at random points on the map, and have a time limit after which they disappear. The player who first clicks on an animal gets a point. The game ends when one of the players reaches a previously agreed score, since there is no time limitation. To perform the previously described process, the users need to be connected to the same game. For such a purpose, one of the players starts a new game, and the other one joins the game introducing a game code as it is illustrated in , where the image at the center shows the host screen while waiting for the second player to enter the game code (which is illustrated on the image on the right). After the matchmaking process is completed, the new scene containing the game is loaded. shows how ARCore is used for surface detection in order to place the map on a surface at the beginning of the game. When the game starts, the animals appear one by one for a fixed time. The first player that presses on the animal before it disappears gets a point. The spawning points of the animals are stored in a list, from which random points are taken each time a new animal is spawned. If only one of the players clicks on the animal, there will be no conflicts, since there is enough time for the message to be sent from the client to the server (the server then notifies the clients that the animal needs to be hidden, as it has already been picked by a user). The problem arises when both players click on the same animal almost simultaneously, as both event messages would reach the server and a point would be added to each player. In order to solve this problem, the message flow has been implemented in such a way that the client sends its own identifier to the server, and when the server notifies the clients that the animal has been clicked on, it also sends such an identifier. This allows each client to check whether the received identifier matches its own before giving the order to add a point to its scoreboard. When one of the scoreboards is incremented, it is automatically updated in all clients, as they remain synchronized at all times. An example of such scoreboards can be observed at the bottom of the screenshot on the right of .
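The identifier-based conflict resolution used in "WakaMole" can be sketched on the client side as follows. The networking layer of the real project is not detailed in the text, so the message struct and the NetworkBus helper below are hypothetical stand-ins for it.

```csharp
using System;
using UnityEngine;

// Sketch of the client-side logic that resolves near-simultaneous taps:
// the client reports its own identifier, the server broadcasts the identifier
// of the click that arrived first, and each client compares it with its own
// identifier before adding a point to its scoreboard.
[Serializable]
public struct AnimalClickedMessage
{
    public int animalId;
    public string clientId;   // identifier of the client that tapped the animal
}

public class WakaMoleClient : MonoBehaviour
{
    public string myClientId;   // assigned when joining the game
    public int myScore;

    // Local tap handler: report the click, but do NOT add the point yet.
    public void OnAnimalTapped(int animalId)
    {
        var msg = new AnimalClickedMessage { animalId = animalId, clientId = myClientId };
        NetworkBus.SendToServer(msg);
    }

    // The server broadcasts the identifier of the client whose click arrived first.
    public void OnServerBroadcast(AnimalClickedMessage confirmed)
    {
        HideAnimal(confirmed.animalId);      // the animal disappears on every client
        if (confirmed.clientId == myClientId)
        {
            myScore++;                       // only the confirmed winner scores;
            UpdateScoreboards();             // scoreboard values are then synchronized
        }                                    // across clients by the networking layer
    }

    // --- placeholders for scene/UI code and the networking layer ---
    void HideAnimal(int id) { }
    void UpdateScoreboards() { }
    static class NetworkBus { public static void SendToServer(AnimalClickedMessage m) { } }
}
```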
The developed collaborative AR game described in requires short response times to ensure a good user experience since, as it is a fast-reaction game, determining which user performs the actions first is critical to guarantee the fairness of the game and a smooth experience. To evaluate the performance of the developed system, four sets of tests were carried out by using two different mobile devices. Such devices acted first as clients and then as hosts of the game in two different scenarios: through a local network (i.e., both users made use of the same WiFi) and through the Internet, when the two users were in remote locations. The used devices have the following specifications: Device 1 (tablet): Samsung Galaxy Tab S4 with 4 GB of RAM, 64 GB of internal memory and a Qualcomm Snapdragon 835 processor (Samsung Electronics Co., Ltd., Seoul, Korea). Device 2 (smartphone): OnePlus 6T with 8 GB of RAM, 128 GB of internal memory and a Snapdragon 845 processor (Oneplus, Shenzhen, Guangdong, China). Each set of tests consisted in playing the game for 10 min. During such a time, the latency and the processing time data of every packet were collected and stored in a local file. It should be noted that the mentioned data were obtained in a way that they represent accurately the latency/time that the application will experience in a real environment. In addition, it is worth noting that, due to the single-thread nature of Unity, the measured times include the waiting times that the application requires during its execution in order to manage the processing slots of the used frames. Specifically, the following are the main steps included in the estimation of the latency: First, at one point during the game, a client sends a message to transmit a game event. Then, the server receives the message over the network. Next, the server waits until the next frame processing slot is ready. The received message is parsed and processed by the server, which sends a response to the client. The client receives the message from the server over the network. The client waits until the next frame processing slot is ready. The message is parsed and processed by the client. Although the previous steps estimate times that are slightly higher than the existing network latencies (which are traditionally used by game servers to determine their performance), they provide more realistic and accurate estimations on the times that a user of the game will experience in real life. shows a comparison of the obtained latencies for the two tested devices and for the previously indicated test scenarios. As it can be observed in the Figure, latency variability over time is not very high in the test networks. Such an observation is corroborated by the statistics shown in , which provides the mean, standard deviation and variance of the latencies plotted in . Both and show that, as it could be expected, latency was slightly higher for the remote tests than for the local network. Nonetheless, it must be noted that, in other networks, the obtained values will vary depending on the characteristics of used network and on the number of connected devices, so the provided results should be considered merely as illustrative of a real use case. and also allow for determining the differences in the latencies experienced by the two evaluated devices: as it is indicated in , on average, the selected tablet is between 32 and 41 ms slower than the smartphone, which is essentially due to their different hardware. 
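A minimal sketch of how such client-side latency samples can be collected is shown below: a stopwatch is started when a game event is sent and stopped only after the matching server response has been parsed and applied within a later frame, so that the frame-waiting times mentioned above are included in the measurement. The class and method names are illustrative and not taken from the actual test code.

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// Collects per-message latency samples (send -> response processed in a frame slot).
public class LatencyProbe
{
    readonly Dictionary<int, Stopwatch> inFlight = new Dictionary<int, Stopwatch>();
    public readonly List<double> samplesMs = new List<double>();

    // Call right before the client sends a game event to the server.
    public void MarkSent(int messageId)
    {
        inFlight[messageId] = Stopwatch.StartNew();
    }

    // Call at the end of the frame in which the matching server response has
    // been received, parsed and applied to the game state.
    public void MarkProcessed(int messageId)
    {
        if (inFlight.TryGetValue(messageId, out var sw))
        {
            sw.Stop();
            samplesMs.Add(sw.Elapsed.TotalMilliseconds);
            inFlight.Remove(messageId);
        }
    }

    // Simple aggregate of the collected samples.
    public double MeanMs()
    {
        double sum = 0;
        foreach (var s in samplesMs) sum += s;
        return samplesMs.Count > 0 ? sum / samplesMs.Count : 0;
    }
}
```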
Since, for each scenario, network latency is essentially the same (with slight oscillations due to traffic load), the observed latency differences are related to their processing time, which is the time required by the application to process a package and render the interface changes. shows the processing times for both devices, which reach, on average, around 8 ms for the tablet and only 3 ms for the phone, thus corroborating the observed performance difference. As a consequence, future developers should be careful when choosing their AR devices, since, currently, they can impact user experience. However, it is worth pointing out that processing time is lower than network latency, so the minimization of the latter should be considered (for instance, by using the latest wireless communication technologies, like 5G) in AR environments where latency is critical .
This paper presented the design, implementation and evaluation of an open-source collaborative framework to develop teaching, training, and monitoring pediatric healthcare applications. The framework provides functionality for connecting with other AR devices and for enabling real-time visualization and simultaneous interaction with virtual objects. In order to show and assess the technical capabilities and the performance of the proposed open-source collaborative framework, an AR application was developed to demonstrate its potential for future researchers and developers of pediatric healthcare applications. Such an AR application actually consists of two applications: a mobile gaming application and a web application aimed at monitoring the progress of pediatric patients in terms of mood and performed activities. The collected data are shown in a user-friendly way through charts, so their representation is intuitive and easy to understand. The developed AR system was evaluated by using two different mobile devices (a tablet and a smartphone) with different hardware capabilities. The conducted performance tests measured the latency and the processing time of every packet during real games. The obtained results show that average latency is always below 200 ms for every tested device, which results in a smooth gaming experience. However, it was also observed that the selected AR device impacts user experience substantially. In addition, it was concluded that wireless communications should be carefully examined in AR environments where network latency is critical. Nonetheless, considering the previous observations, it can be stated that the proposed open-source AR collaborative framework can help future researchers to develop the next generation of AR collaborative pediatric healthcare applications. As future work, the authors plan to conduct a thorough evaluation of the mobile app with pediatric patients as soon as the current COVID-19 pandemic situation allows it.
|
One Health communication channels: a qualitative case study of swine influenza in Canada in 2020 | 2c3b9cb9-ae42-44b0-a4d1-668980e424f0 | 10996129 | Health Communication[mh] | Influenza virus surveillance and response to spillover between species are situations that can benefit from a One Health (OH) approach, as they occur at the intersection of animal, human, and ecosystem health sectors . An improved understanding of intersectoral communication across OH domains is important because many emerging diseases are of animal origin , and many global forces (increased mobility of people, animals, animal products and goods, climate change, agribusiness expansion, deforestation, etc.) are increasingly altering environments to put animals and humans in close contact, facilitating disease spillover in both directions. Identifying formal and informal structures, processes or practices that support OH communication could improve the integration of the OH approach in different systems. Human infections with swine influenza virus subtypes have been reported in North America , and these are reportable under the International Health Regulations (IHR), although data suggest that transmission of influenza from humans to pigs is more frequent . Here, we report a human case of influenza A H1N2v occurring in Alberta in October 2020. It resulted in rapid collaboration and investigation by human and animal health sectors, but there is limited information about how and why effective communication and coordination occurred between and within these sectors during this event. Bridging this gap requires gathering information from multiple points of view, to which qualitative methods are well suited . Describing the context and narrative of a specific case study enables identification of patterns that can then be validated in other contexts. Our study objectives were to describe the OH communication channels and flow of information among stakeholders involved in human and swine influenza surveillance and response activities in Alberta (Canada) and to identify elements encouraging and inhibiting OH communication, specifically related to information sharing between livestock and public health professionals. Our research question was therefore to determine what factors impede or support information sharing between sectors during the occurrence of a human case of zoonotic influenza. To describe the mechanisms and performance of communication channels, we used interpretive process tracing . We started from the detection of the emergence of a human case of influenza A H1N2v in Alberta in October 2020. We then sought to understand the communication channels related to the surveillance of influenza in pigs and humans generally and how these and other channels operated in this specific case. Our research team included animal and public health researchers and government employees, but none were directly involved in case management or regional surveillance systems related to this event. We relied on the experience and knowledge from our collaborator from the Animal Health Science Directorate of the Canadian Food Inspection Agency (CFIA) to identify some key stakeholders. Documentary research and interviews with stakeholders influenced each other in an iterative process. The Canadian Animal Health Surveillance System (CAHSS) was a starting point for documentary research, as it already mapped the surveillance system within multiple animal production industries . 
Additionally, we used a report created after the 2009 H1N1 pandemic and a recent study about laboratory and syndromic surveillance in the swine sector to create a preliminary outline of Canadian influenza communication channels. We initially identified ten stakeholders occupying strategic positions in the case study communications channels, representing federal and provincial governments, animal and public health, and Canadian swine health surveillance systems. Additional stakeholders were identified through snowballing and findings from concurrent documentary research . Interviewees were invited to participate in a one-hour individual semistructured interview to identify and explore structural links and information channels. We developed semistructured interview questions during the initial phases of the documentary research and created a general interview guide to identify the case study communication channels and the barriers and facilitators for communication between animal and human health stakeholders (Table ). The guide was tested with a team member involved in animal health surveillance who did not participate in developing the questions. The data from this pilot were kept for the analyses. The research team met throughout the project to discuss and assess the guide and minimize biases . For example, interviewers adapted the guide to make it relevant to each interviewed stakeholder by choosing questions that aligned with their work and position. Some questions were also rephrased or complemented to fill gaps identified during previous interviews. Changes and additions were reviewed by members of the research team, all of whom assessed the questions from their own disciplinary vantage point to ensure these alterations were consistent with the global objectives of the study and did not reflect the implicit assumptions or biases of any one discipline or sector. While the main interviewer was an animal health specialist, she was joined by at least one other member of the team from another discipline during all interviews. Having one interviewer with deep subject matter expertise ensured continuity and rigour across interviews; having a second research team member from a different disciplinary vantage point served as a check to reduce confirmation bias. Interviews were conducted in English or French between September and December 2021, and (with permission) audio recorded on Zoom (Zoom Video Communications, Inc.) or Teams (Microsoft corp.). Interviews were transcribed and cleaned and then coded and analyzed in NVivo (Luminvero©). To protect anonymity, all interview quotes in this report are presented in English. Participants did not receive compensation. We contacted 23 stakeholders from the human ( n =13) and animal health ( n =10) sectors, of whom eight (human: n = 6, and animal: n = 2) declined or did not reply to our invitations (nonparticipation proportion = 35%). Fifteen participants from the human ( n = 7) and animal ( n = 8) health sectors were interviewed in November and December 2021. Employees of the federal (Public Health Agency of Canada and CFIA) and provincial (Alberta Health Services and Alberta Ministry of Agriculture) governments, stakeholders from the swine health surveillance system and from academia participated in the interviews (Table ). Analyses Through an interpretive process tracing approach, we explored how actors described their practices, how they perceived their actions, and how information flows . 
We used interview transcripts combined with documentary research to create a map of the communication channels among stakeholders involved in human and swine influenza surveillance in Alberta and in the specific H1N2v zoonotic human influenza case. We then synthesized this information graphically using an online collaborative platform (Miro; RealtimeBoard, Inc.). We identified two distinct categories of communication channels: formal and informal. Formal channels were those that entailed an institutionalized, official structure, often including established written protocols, guidance documents, or terms of reference specifying how actors holding specific positions of authority were to communicate with each other. Informal channels were ad hoc, created by the involved stakeholders to suit a particular situation, and were often dependent on personal relationships between individuals, rather than institutionalized relationships between offices or job functions. We used an iterative thematic analysis to identify barriers and facilitators to information sharing in this case. Themes and subthemes summarizing participants’ perspectives were discussed among members of our research team, and representative quotes were selected. We used this information about facilitators and barriers to identify elements that, more broadly, may support or impede information sharing between animal health and human health stakeholders. Through an interpretive process tracing approach, we explored how actors described their practices, how they perceived their actions, and how information flows . We used interview transcripts combined with documentary research to create a map of the communication channels among stakeholders involved in human and swine influenza surveillance in Alberta and in the specific H1N2v zoonotic human influenza case. We then synthesized this information graphically using an online collaborative platform (Miro; RealtimeBoard, Inc.). We identified two distinct categories of communication channels: formal and informal. Formal channels were those that entailed an institutionalized, official structure, often including established written protocols, guidance documents, or terms of reference specifying how actors holding specific positions of authority were to communicate with each other. Informal channels were ad hoc, created by the involved stakeholders to suit a particular situation, and were often dependent on personal relationships between individuals, rather than institutionalized relationships between offices or job functions. We used an iterative thematic analysis to identify barriers and facilitators to information sharing in this case. Themes and subthemes summarizing participants’ perspectives were discussed among members of our research team, and representative quotes were selected. We used this information about facilitators and barriers to identify elements that, more broadly, may support or impede information sharing between animal health and human health stakeholders. While our study focused on a human case of swine influenza, it quickly became clear that the surveillance systems in place prior to the event were important. Surveillance systems have multiple goals. For influenza in Canada, surveillance aims to detect and monitor the viruses, and to inform vaccines and policies . Routine communication structures We describe below the usual communication channels in the swine sector, the human health sector, and across these two sectors. 
Routine communication channels for the surveillance of influenza virus infection in the swine sector in Alberta Figure B shows communication channels as they flow (from left to right) in Alberta. Influenza virus in pigs is provincially notifiable in Alberta, British Columbia and Saskatchewan, but it is not federally notifiable. At the regional level, the Canada West Swine Health Intelligence Network (CWSHIN) combines and analyzes the data from British Columbia, Alberta, Saskatchewan, and Manitoba. It includes clinical impression surveys from swine veterinarians, laboratory diagnostic data from provincial and university laboratories (presence of pathogens, or serological or anatomical indicators), and condemnation rates from federally inspected slaughterhouses . Once analyzed, the information is shared quarterly with veterinarians (reports, as private communications) and producers (reports, as public communications) and, when requested to address animal, human, or ecosystem concerns, with provincial governments (Fig. B). While some analyzed data are publicly available via reports for producers, our participants stated that there are no other direct communication channels between the CWSHIN and public health stakeholders. However, the regional surveillance networks (CWSHIN, Ontario Animal Health Network, and Réseau d’alerte et d’information zoosanitaire) are part of the Canadian Swine Health Intelligence Network (CSHIN) and the CAHSS, which include members from the National and Provincial pork councils, veterinary colleges, diagnostic laboratories, provincial governments, CFIA, Agriculture and Agri-Food Canada (AAFC), Public Health Agency of Canada (PHAC), and national and regional veterinary organizations and networks. Routine communication channels for the surveillance of influenza virus infection in the human sector in Alberta In Alberta, laboratory data flow through a single laboratory information system (Provincial Surveillance Initiative; PSI), and information is automatically transmitted to stakeholders (e.g., physicians, patients, and surveillance units within the Ministry of Health; Fig. A) via an online platform. This system allows the linkage of clinical and epidemiological data with laboratory data at the provincial level. The data about influenza collected by the healthcare system are gathered provincially and then anonymized and shared with FluWatch, a national surveillance program for influenza and influenza-like illnesses (ILI) . The program monitors, inter alia, health care admission for influenza or ILI, laboratory-confirmed detection, syndromic surveillance, outbreak and severe outcome surveillance, and vaccine coverage; it shares weekly reports online . At the provincial and federal levels, we were unable to identify other communication channels for providing human influenza surveillance information to animal health stakeholders. Routine communication channels between the swine and human sectors for the surveillance of influenza in Alberta Many swine health surveillance stakeholders are members of the Community for Emerging and Zoonotic Diseases (CEZD). This multidisciplinary network of public and animal health experts from government, industry and academia was developed to support early warning, preparedness, and response for animal emerging and zoonotic diseases . Open source signals are extracted automatically via the Knowledge Integration using Web Based Intelligence (KIWI) and manually by the CEZD core team (CFIA employees). 
This team assesses signals daily, with rolling support from volunteer members and from expert partners from federal and provincial governments, academia, and industry when needed. Signals are then shared with the CEZD community, including through immediate notifications of important disease events, group notifications and pings, quarterly sector-specific intelligence reports and weekly intelligence reports. Although CEZD was growing during the 2020-2021 period , membership is voluntary, as it is the case for CAHSS. Moreover, both networks cover multiple species and diseases, which serves to maximize the reach of the communities but can result in an overwhelming amount of information for members whose main interest is in another sector, such as human or ecosystem health. This large amount of information primarily relevant to other sectors can lead members to leave or not join these two networks. Communication channels between sectors during a human case of swine influenza in Alberta In all cases where a new influenza subtype, including an animal influenza subtype, is identified from a human case, this must be reported to the World Health Organization (WHO) under the IHR . In Canada, PHAC is the body responsible for notifying the WHO of such cases. We examined the IHR-reportable case of a human infected with an animal influenza subtype identified in October 2020 in Alberta. The event we examined happened during an exceptional period for ILI as it was less than a year after the WHO declared the 2019 novel coronavirus disease (COVID-19) a global pandemic. At that time, influenza activity remained below average, most ILI symptoms were due to COVID-19 cases, and most public health and human health resources were dedicated to managing the pandemic . In the case we investigated, the influenza subtype identified through sequencing performed at a provincial laboratory on October 29, 2020 (Fig. ) in the human case was a variant similar to a swine influenza virus (A H1N2v). Samples from the human case were then sent to a reference laboratory, the National Laboratory of Microbiology (NLM) of PHAC, for confirmation. A provincial laboratory stakeholder also contacted a University Animal Health Laboratory colleague and sent the human sample in parallel for sequencing and confirmation that the variant was a swine virus. Because the human case was IHR reportable and had potential for high visibility, the provincial laboratory immediately contacted the Alberta Chief Medical Officer of Health (CMOH). Provincial and federal government stakeholders (Alberta Health, Alberta Health Services, Alberta Agriculture and Forestry, PHAC, CFIA) were called to an evening meeting to raise awareness and ensure that the situation was managed in a way that satisfied provincial, federal and international obligations. This “H1N2v working group” was put in place quickly, apparently following the initiative of the Alberta CMOH (not confirmed as no interview was conducted with the initiators of this working group). The information PHAC received through formal communication channels (e.g., from the NLM) took longer compared to the original call by the CMOH and the H1N2v working group. For this study, we did not have access to the guidelines in place for such an event, and it is unclear if the other stakeholders (provincial Ministry of Agriculture and CFIA) were officially needed to be involved. 
Because swine influenza is endemic in the porcine population and this case was of importance for human health, the provincial public health stakeholders led the initiative, with the support of other stakeholders. The H1N2v working group met at least twice following the initial meeting. Additionally, follow-up data was gathered at the provincial level via multiple channels (public health, animal health, epidemiological, and laboratory investigations), and findings from the various investigations were shared with PHAC daily for a week and then weekly for two additional weeks. Information sharing between provincial and federal public health entities seemed to follow a formal process, but while we had access to the communication template, none of the interviewed participants had information about the structure supporting this initiative. In the meantime, regional public health partners (within Alberta Health Services) were mandated to conduct the field investigation for the human case and its contacts with humans and pigs, supported by Alberta Agriculture and Forestry and stakeholders from the swine sector (e.g., Alberta Pork). The investigation’s goal was to clarify whether the infection was contracted from animal-to-person (directly or indirectly) or person-to-person. The public health investigation, the available information about swine influenza in the province (obtained from the CSHIN report), and the farm investigation performed in collaboration with an Animal Health Laboratory all provided supporting data. The human and animal investigation data were collected by multiple stakeholders. The communication of results followed formal structures through Alberta Health (case, laboratory and epidemiological investigation results) and Alberta Agriculture and Forestry (farm investigation results) and were ultimately shared with PHAC. Interviewees reported that coordination of the two provincial ministries in this case was facilitated by the public health veterinarian, whose position is shared between the two ministries. Interviewees also said that in the investigation’s early stages, the swine sector’s participation in the farm investigation (an informal channel, via Alberta Pork) facilitated communication between the government and the farm involved. This highlights the importance of strong formal and informal government-industry relationships, which ensured that farmers and stakeholders trusted the system enough to support the investigation. While the investigation was still ongoing and a clearer picture of the case and its transmission was emerging, a decision was made to make the information public. Our interviews did not identify the process leading to this decision, but six days after the initial notification to the government officials, an Alberta CMOH press release was distributed, with information stating there was limited risk for the general population. This now-public information was then identified by at least two Canadian event-based surveillance (EBS) systems that distributed the information to their communities. One of the EBS interviewees mentioned, however, that they received an email from the Alberta Agriculture and Forestry the night before the press release so they could prepare for it and have a notification ready to be shared. This informal communication channel seemed to arise from a preexisting relationship between stakeholders involved. 
Encouraging and inhibiting elements involved in OH communication

The communication channels evident in our case study allowed us to identify elements involved in the information flow between animal and human health stakeholders (Table ). Identifying what information needed to be shared between sectors was influenced by actors’ understanding of the evidence needed to trigger decisions and actions. During the surveillance phase, information was available online from the animal health (CWSHIN, CSHIN, CAHSS, CEZD) and human health (FluWatch, Global Public Health Intelligence Network) sectors. However, it was difficult to quantify how much these sources were used by different stakeholders. We identified little other communication between animal and human health stakeholders during this phase. Stakeholders reported having very limited time and resources to consult and use information from other sectors, suggesting a need for policies and structural integration of OH. For example, having a public health veterinarian appointed at both the provincial agriculture and health ministries was mentioned as a key element facilitating communication and coordination (Quote 1).

Quote 1. “When the pandemic started, we had our public health veterinarian position empty. […] That position is essentially fully dedicated to working between the two ministries [Agriculture and Health]. It [the impact of this vacancy] showed itself in terms of just some gaps for them working on things without consulting us, but then [when] that position was filled and the other relationships were in place, everything just went really smoothly. […] it demonstrated the importance of those relationships and… having a good liaison between the two departments.”

During the outbreak, surveillance, laboratory, and industry information on swine influenza was quickly available to human health stakeholders. Animal health stakeholders, however, noted that the communication was, unfortunately and as in many cases, only one way. Barriers to within- and cross-sector communication included complicated or lacking communication channels. In our case study, there was a formal channel between the provincial and federal government due to the IHR requirements, but this is not the case for non-IHR-reportable zoonotic diseases. Moreover, the CMOH’s phone call to other stakeholders to create the H1N2v working group occurred faster than the formal communication channels. Established professional connections facilitated information flow between stakeholders who understood each other’s needs and interests. While a lack of formal channels was identified as a pitfall due to potentially missed communication opportunities, many participants mentioned that established, informal relationships and networks facilitated information sharing – both the assessment of how much and what type of information to share and with whom it should be shared. Informal and formal communication channels were also affected by privacy and ethical concerns. Raw data, usually confidential, obtained from either the animal or human health sectors cannot easily be shared, adding to the complexity of formal communication channels. Analyzed or summarized data (i.e., information) were easier for both animal and human health sectors to share in reports or online platforms.
Trust, which can be defined as the perceived benevolence, integrity, competence and predictability of the other, was identified as the foundation for good communication among different stakeholders, whether via formal or informal channels. Here, previous interactions between stakeholders likely served as a basis for trusting that the person receiving the information would be kind, competent, honest, and predictable when using it. From the perspective of animal health stakeholders, however, trust was more difficult: the perceived anthropocentric perspective of health initiatives, including OH initiatives, created fear that shared information might not be reciprocated and would have negative repercussions on animals and producers (Quote 2).

Quote 2. “You need to build trust and it takes a long time […] you need to build that trust with individual livestock sectors, that human health is not going to destroy the sector a. The [animal health] sector is generally very cautious because their perspective is very rarely considered […] if you have a human pathogen […] in livestock and it can potentially transfer to people, all the burden is very often on the livestock. […] Human health has a lot of resources and animal health doesn't, but they get all [the burden]. It's a matter of who [has] the cost and who's benefiting.”

a While the stakeholder interviewed did not give additional details, they could have been referring to the case of a herd where an emerging influenza virus (H1N1v) was identified, which resulted in depopulation of the herd. This was a severe consequence for the farmer, while the source of the virus was determined to be an infected human. They could also have been referring to the possibility of zoonotic events decreasing the marketability of meat because of public perception or export restrictions. This was unfortunately not discussed further in the interview.

Interviewees suggested that information sharing requires two main steps: (1) identifying what information must be shared and (2) sharing that information with another sector (Fig. ). Once stakeholders within a sector had information, the first step was identifying what should and can be shared, with whom, and through what channels. This could be facilitated or impeded by actors’ perceptions of other sectors’ needs, the type of information that is available, and the resources available. For sharing information itself, both the presence and type of communication channels were critical for external information sharing with other sectors – but so were trust and the availability of resources. Preexisting relationships among stakeholders also shaped actors’ understanding of each other’s needs, the presence of informal channels, and trust.
This case study highlights the complex communication structures for influenza surveillance and response in both human and animal health sectors and the limited links between these sectors. It illustrates the importance of rapid and open communication channels between these sectors in both surveillance and response contexts. While day-to-day surveillance aims to detect and monitor influenza viruses, the detection of a human case harboring an animal subtype resulted in a specific response, which triggered different channels. While information flows through formal and informal channels, trust is a critical component in all types of communication: between animal and human health actors, between government and livestock sectors, and between international, federal, provincial and territorial, and regional jurisdictional levels. Developing and maintaining relationships among stakeholders requires time and resources but is essential for mutual understanding of information needs and rapid communication.
While previous studies found that communication is a key factor for OH initiatives, we were able to identify processes that were in place when good communication occurred. These findings offer a new perspective that could be useful to many surveillance and response programs. For example, networks and structures are often described for influenza programs, but the communication channels and information flow are not detailed. This is a gap that would be useful to address, especially as we found that while formal structures are necessary, informal structures allow for quicker and more efficient communication and coordination.

Limitations

While the findings from this study highlight key elements of good One Health communication, the retrospective interpretive process tracing of a case study has certain limitations.
First, our study was based on an influenza case, for which there are established surveillance systems and protocols . This likely contributed to the effective response but also influenced our findings. We think this could have hidden or minimized some of the challenges faced by stakeholders regarding OH communication. For example, in the case of a disease that has no formal surveillance system reporting guidelines, challenges might be different. Second, we purposively selected a “success story” to illustrate what happens when OH communication goes well. Due to this retrospective selection of our case study, we suspected that communication and coordination went well prior to starting the project. This could have influenced our findings, and it is possible that we would have had different conclusions if we used a case study for which communication and coordination were suboptimal. To mitigate this, we designed the study with an interpretive approach focusing on the interviewees’ own perspectives, with as little preconceived bias as possible . Third, the case we chose happened during the COVID-19 pandemic. The high focus on ILI during this period could have strengthened some communication channels. For example, many resources were deployed to manage the pandemic, which may have facilitated communication and integration among sectors. Fourth, this could have also affected the stakeholders who agreed to participate in the interviews, which were conducted at a later stage of the pandemic. Indeed, six human health stakeholders who had key positions in this case declined or did not reply to our invitation, and our findings lack their perspective. It is possible that more communication channels between human and animal health exist, but we were not able to identify them. The barriers and limitations we identified are possibly different for stakeholders in the human health sector; additional research related to the involvement of these actors in OH communication would be beneficial. Fifth, due to the limited resources available for this project, the focus of the case study (swine and public health), and the process we used to identify the stakeholders to interview, we did not identify stakeholders from the environment and wildlife health sector, or from other livestock health sectors (e.g., poultry). This is, in itself, a finding, highlighting the limited communication channels among these stakeholders. It is however unclear if our findings about facilitators and barriers are generalizable to all sectors. While additional research, including larger comparative studies, is needed, our findings highlight the importance of investing time and resources in supporting relationship building, as well as formal communication mechanisms, among stakeholders in the human, animal, and ecosystem health sectors. |
Disparities in Glaucoma Surgery: A Review of Current Evidence and Future Directions for Improvement

Glaucoma is a leading cause of irreversible vision loss in the United States and worldwide. Surgical management of glaucoma is used in several aspects of glaucoma care. Selective laser trabeculoplasty (SLT) can be used as an adjunct or alternative to medications for reduction of intraocular pressure (IOP) in individuals with glaucoma or ocular hypertension. Laser peripheral iridotomy (LPI) is used for the prevention or treatment of angle closure glaucoma. Minimally invasive glaucoma surgery (MIGS) is often used to lower IOP at the time of cataract surgery in individuals with glaucoma, and can also be used as a standalone procedure for IOP reduction in select clinical settings. Finally, incisional surgeries, which include trabeculectomy and tube shunt, are generally reserved for individuals with glaucomatous disease progression despite maximally tolerated medical therapy. Healthcare disparities refer to differences in healthcare that are linked to social or economic disadvantage, which can take place on the individual or structural levels. There are several aspects of glaucoma surgical management where disparities can exist, including patient selection and timing of surgery, type of surgery performed, intra-operative and postoperative surgical complications, follow-up surgical care, and long-term surgical outcomes. Within each of these realms, multiple types of disparities can exist, including disparities by race and ethnicity, age, gender, insurance type, geographic residence, people with disabilities, and other social, economic, and demographic factors. Given the multiple settings and types of glaucoma surgical disparities that can occur, the study of these disparities is complex and requires a multifaceted approach incorporating information from multiple data sources and using various study designs. To this end, the purpose of the present review is to synthesize existing literature in the United States on disparities in glaucoma surgery by study type, to analyze the advantages and limitations of each study design for the investigation of disparities in glaucoma surgery, and to identify future research directions for the identification and elimination of disparities in surgical glaucoma management. Several large datasets are available for examination of factors related to glaucoma surgery, including Medicare claims datasets, the American Academy of Ophthalmology (AAO) Intelligent Research in Sight (IRIS) Registry, the Veterans Health Administration (VHA) database, and other claims-based datasets. Studies of disparities in glaucoma surgery have been performed utilizing several of these datasets, with a large proportion of studies examining disparities in surgical incidence and treatment patterns. In the Medicare population, Javitt et al. compared observed versus expected rates of laser, incisional, and cyclodestructive glaucoma surgery in Black versus White beneficiaries and concluded that the observed rate of glaucoma surgery among Black beneficiaries was 45% lower than the expected rate of surgery in this population despite higher rates of surgery observed in Black beneficiaries. Similarly, Devgan et al.
found higher rates of argon laser trabeculoplasty and trabeculectomy in Black compared to White beneficiaries, but concluded that the observed rate of surgery in Black beneficiaries was nearly half the expected rate. In a dataset of Medicare beneficiaries linked to the National Long Term Care Survey, Ostermann et al. found similar rates of glaucoma diagnosis in Black and White beneficiaries but higher rates of surgery in Black beneficiaries, suggesting delayed onset of care or greater disease severity in Black beneficiaries with glaucoma. More recently, Halawa et al. examined rates of multiple types of glaucoma procedures by race and ethnicity after controlling for age, systemic comorbidities, and glaucoma severity, and found higher odds of glaucoma surgeries in Black versus White beneficiaries, and higher odds of SLT in Hispanic versus White beneficiaries. In addition to these possible racial and ethnic disparities, other types of disparities in patterns of glaucoma procedures have also been identified in the Medicare database. Schultz et al. examined multivariable predictors of laser trabeculoplasty in Medicare beneficiaries with glaucoma, and found that younger age and North Central United States region of residence were associated with increased odds of laser trabeculoplasty. In addition to the Medicare database, several studies of disparities in glaucoma surgical incidence and treatment patterns have been performed in other large datasets. Lee et al. examined tertiary glaucoma care patterns in the VHA by provider type, and found statistically significant differences in the rate of LPI, laser trabeculoplasty, and filtration surgery in veterans who received care in optometry-only clinics, ophthalmology-only clinics, integrated clinics, and separate clinics in the VHA. Olivier et al. examined multivariable predictors of receiving MIGS at the time of cataract surgery in individuals with glaucoma in the AAO IRIS Registry, and found greater odds of MIGS use among individuals who were older, identified as Black versus White race and ethnicity, had Medicare versus private insurance, and lived in the Northeast versus South United States. Finally, Usmani et al. examined ophthalmic procedures performed in the ambulatory setting from 2012 to 2014 through the State Ambulatory Surgery Database, and found that the largest proportion of individuals undergoing glaucoma surgery were elderly, women, identified as Black race and ethnicity, from large metropolitan areas, and insured by Medicare. Whereas most studies of disparities in glaucoma surgery in large databases have examined disparities in surgical incidence and treatment patterns, two studies have examined disparities in glaucoma surgical outcomes. In a study by Yang et al. that evaluated the effectiveness of MIGS with and without concurrent phacoemulsification in the AAO IRIS Registry, Black versus White race and ethnicity and older age were associated with increased odds of re-operation in individuals who underwent MIGS. Another study in the AAO IRIS Registry by Ciociola et al. compared the effectiveness of trabeculectomy and tube shunt with versus without phacoemulsification. 
In this study, older age was associated with lower odds of re-operation after trabeculectomy with phacoemulsification, tube shunt with phacoemulsification, and tube shunt alone; Black, Hispanic, and Asian race and ethnicity compared to White race and ethnicity were associated with higher odds of re-operation after trabeculectomy with phacoemulsification and trabeculectomy alone; and Black compared to White race and ethnicity was associated with higher odds of re-operation after tube shunt alone. Based on these existing studies of glaucoma surgical disparities in large datasets, we conclude that there are likely racial and ethnic disparities in surgical incidence and treatment patterns for individuals with glaucoma, with higher rates of glaucoma procedures in Black compared to White individuals in several studies yet lower than expected rates of surgery in Black individuals based on the prevalence of glaucoma in this population. Further studies are needed to identify whether these surgical disparities are due to delayed diagnosis and later disease stage at presentation in Black individuals with glaucoma, and the role of individual and structural level social determinants of health in contributing to these disparities. In addition to likely racial and ethnic disparities, there may also be disparities in glaucoma surgical incidence and treatment by age, provider type, and insurance type, which merit further focused investigation in additional studies. Although information on disparities in glaucoma surgical outcomes is limited, several demographic factors may be associated with increased risk of re-operation for glaucoma surgery, but further information is needed on disparities in glaucoma surgical complications and long-term surgical outcomes. At this time, the majority of studies of disparities in glaucoma surgery in large datasets are focused on disparities in surgical incidence and treatment patterns rather than disparities in surgical outcomes, which may be due to the fact that several existing datasets are heavily claims-based with limited clinical information to assess long-term surgical outcomes. Several retrospective clinical studies have been performed to examine potential disparities in glaucoma surgery, with a large proportion of studies focused on surgical outcomes. A review by Taubenslaug and Kammer included several retrospective clinical studies that compared surgical outcomes in Black versus White individuals with glaucoma who received trabeculectomy, Ex-PRESS shunt, tube shunt, and canaloplasty. The authors concluded that there may have been decreased success for Black individuals after several of these procedures, but that trabeculectomy remains the procedure of choice for primary surgical intervention to reduce IOP. Nguyen et al. compared the incidence of trabeculectomy failure and bleb leaks in 105 Black individuals and 117 White individuals with trabeculectomy from a single academic center, and found higher rates of trabeculectomy failure and bleb leaks in Black individuals, with surgical failure defined by a combination of IOP, percent IOP reduction, and use of glaucoma medication. Similarly, Shin et al. examined 174 individuals with combined phacoemulsification and trabeculectomy and found that Black compared to White race and ethnicity was associated with increased risk of surgical failure by two different criteria based on a combination of re-operation, bleb appearance, and glaucoma medication use. In a case-control study by Soltau et al.
of 55 eyes with bleb-related infection and 55 control eyes without infection, Black race and ethnicity and younger age were associated with increased risk of infection. Ishida and Netland compared rates of surgical failure after Ahmed valve implantation in 43 Black and 43 White individuals and found higher rates of failure in Black individuals with two different failure definitions based on a combination of IOP reduction, glaucoma medication use, re-operation, and vision loss. Edmiston et al. examined rates of postoperative anterior uveitis after combined phacoemulsification and endoscopic cyclophotocoagulation in 223 individuals, and found higher rates of postoperative uveitis in Black compared to White individuals. Finally, Laroche et al. performed three studies to examine short-term surgical outcomes in Black and Hispanic individuals with glaucoma who received XEN gel stent with phacoemulsification, Hydrus microstent with phacoemulsification, and Kahook dual blade goniotomy with or without phacoemulsification. Although they reported adequate IOP control for most individuals with Hydrus microstent and Kahook dual blade by 6 months postoperatively, a high proportion of individuals with the XEN gel stent required re-operation by 12 months postoperatively. In addition to the several studies of racial and ethnic disparities in glaucoma surgery outcomes, one study by Funk et al. examined the association between travel distance and postoperative outcomes for 199 individuals with trabeculectomy or tube shunt from a single academic center. This study reported that compared to individuals who lived <25 miles from clinic, those who lived >50 miles away had increased odds of loss to follow-up and missed appointments. Additionally, those who lived >20 miles from interstate access had more loss to follow-up than those who lived <10 miles from access, and those with Medicaid coverage had more missed appointments than those with Medicare coverage. In summary, several retrospective clinical studies have been performed to examine disparities in glaucoma surgery. Many of these studies compared glaucoma surgical outcomes in Black versus White individuals, with several reporting higher rates of surgical failure and complications in Black individuals, suggesting likely racial and ethnic disparities in glaucoma surgical outcomes. Although most of these studies were in single academic centers with a limited number of individuals available for the study, the availability of abundant clinical information, including visual acuity, IOP, glaucoma medication use, ocular examination data, and re-operation data, allowed for detailed assessment of surgical failure using multiple types of failure criteria. Whereas there was one study that reported associations between increased distance traveled and increased risk of loss to follow-up after glaucoma surgery, there is a shortage of clinical studies examining other types of social, economic, and demographic disparities in outcomes after glaucoma surgery with detailed incorporation of clinical information.
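The composite surgical failure definitions used in these retrospective studies combine IOP thresholds, percent IOP reduction, medication use, re-operation, and vision loss, with thresholds that differ from study to study. The following Python sketch shows one generic way such a criterion could be coded; the function, thresholds, and variable names are hypothetical illustrations and do not reproduce the definition used in any of the cited papers.

```python
def is_surgical_failure(iop_mmHg, baseline_iop_mmHg, on_glaucoma_meds,
                        reoperated, lost_light_perception,
                        upper_limit=21.0, min_pct_reduction=0.20):
    """Generic composite failure criterion (hypothetical thresholds).

    Failure if postoperative IOP exceeds the upper limit, IOP is not reduced
    by the minimum percentage from baseline, the eye required re-operation for
    glaucoma, or light perception was lost. Medication use is returned
    separately so that 'complete' versus 'qualified' success can be
    distinguished.
    """
    pct_reduction = (baseline_iop_mmHg - iop_mmHg) / baseline_iop_mmHg
    failed = (
        iop_mmHg > upper_limit
        or pct_reduction < min_pct_reduction
        or reoperated
        or lost_light_perception
    )
    complete_success = (not failed) and (not on_glaucoma_meds)
    return failed, complete_success

# Example: postoperative IOP of 24 mmHg from a baseline of 30 mmHg, on drops
print(is_surgical_failure(24, 30, on_glaucoma_meds=True,
                          reoperated=False, lost_light_perception=False))
```

Separating complete from qualified success in this way mirrors the common convention of distinguishing eyes that meet IOP criteria without medications from those that meet them only with medications.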
In this section, we will present existing pivotal trials in glaucoma surgery, characteristics of included participants in each trial, and the extent to which each trial presents outcomes for participants from minoritized populations. The Advanced Glaucoma Intervention Study (AGIS) included individuals 35 to 80 years old with primary open angle glaucoma (POAG) without previous glaucoma surgery or residual open angle glaucoma after laser iridotomy. Participants were randomized to argon laser trabeculoplasty (ALT)-trabeculectomy-trabeculectomy (ATT) or trabeculectomy-ALT-trabeculectomy (TAT) intervention sequences and followed for early and late treatment failure. Of 591 total participants in AGIS, there were 249 (42.1%) who were White, 332 (56.2%) who were Black, and 10 (1.7%) who were Other race and ethnicity. Of the 581 White and Black participants, there were 117 (20.1%) who were women, 123 (21.2%) who were age ≥65 years, 110 (18.9%) who were married, and 129 (22.2%) who completed high school. At 10-year follow-up, AGIS investigators reported that IOP was lower in both sequences in White and Black participants with medically uncontrolled glaucoma, but that the TAT sequence for White participants and the ATT sequence for Black participants was better for long-term visual function outcomes. The Collaborative Initial Glaucoma Treatment Study (CIGTS) included individuals 25 to 75 years old with open angle glaucoma without previous intraocular surgery. Participants were randomized to initial trabeculectomy followed by a combination of argon laser trabeculoplasty, medications, and trabeculectomy if further IOP reduction was needed, versus initial medical therapy followed by a combination of argon laser trabeculoplasty, medications, and repeat trabeculectomy if further IOP reduction was needed. There were 607 participants enrolled at baseline, of whom 337 (55.5%) were White, 231 (39.0%) were Black, 10 (1.6%) were Asian, and 29 (4.8%) were Other race and ethnicity. There were 273 (45.0%) participants who were women and 191 (31.5%) who were 65 to 75 years old. In long-term follow-up reports, no specific information on surgical failure or complications was reported in participants from minoritized populations. The Tube Versus Trabeculectomy (TVT) Study included individuals 18 to 85 years old with inadequately controlled glaucoma who had undergone previous cataract and/or glaucoma surgery. Participants were randomized to 350 mm 2 Baerveldt shunt implantation or trabeculectomy with mitomycin-C (MMC) 0.4 mg/mL for 4 minutes. Of 212 total participants at baseline, there were 95 (44.8%) who were White, 82 (38.7%) who were Black, 30 (14.2%) who were Hispanic, and 5 (2.5%) who were Other race and ethnicity. There were 112 (42.8%) participants who were women and the average participant age was 71.0 ± 10.4 years. At 5 years of follow-up, there were no statistically significant associations among age, gender, or race and ethnicity and risk of surgical failure in multivariable analyses. The Primary Tube Versus Trabeculectomy (PTVT) Study was a new trial after the TVT that included participants 18 to 85 years old with inadequately controlled glaucoma who had not undergone previous intraocular surgery, and participants were randomized to 350 mm 2 Baerveldt shunt implantation or trabeculectomy with MMC 0.4 mg/mL for 2 minutes. 
At baseline, the study included 242 participants, of whom 95 (39.3%) were White, 116 (47.9%) were Black, 15 (6.2%) were Hispanic, 13 (5.4%) were Asian, and 3 (1.2%) were Other race and ethnicity. There were 82 (33.9%) participants who were women, and mean age was 62.0 ± 11.4 years in the tube group and 60.8 ± 12.3 years in the trabeculectomy group. At 5 years of follow-up, there were no statistically significant associations among age, gender, or race and ethnicity and risk of surgical failure in multivariable analyses. The Ahmed Baerveldt Comparison (ABC) Study enrolled individuals 18 to 85 years old with inadequately controlled glaucoma with a planned aqueous shunt procedure. Participants were randomized to aqueous shunt implantation with the Ahmed valve model FP7 versus the 350 mm 2 Baerveldt shunt. At baseline, the study enrolled 276 participants, of whom 134 (48.6%) were White, 68 (24.6%) were Black, 33 (12.0%) were Hispanic, 33 (12.0%) were Asian, and 8 (2.9%) were Other race and ethnicity. There were 142 participants (51.4%) who were men (the number of women was not reported) and mean age was 63.8 ± 13.6 years. In 5-year follow-up reports, no specific information on surgical failure or complications was reported in participants from minoritized populations. The Ahmed Versus Baerveldt (AVB) Study enrolled individuals 18 years and older with inadequately controlled glaucoma with a planned aqueous shunt procedure. At baseline, the study enrolled 238 participants, of whom 170 (71.4%) were White, 28 (11.8%) were Black, 19 (8.0%) were Indian, 12 (5.0%) were Hispanic, and 9 (3.8%) were Asian race and ethnicity. There were 132 (55%) participants who were women and mean age was 66 ± 16 years. In 5-year follow-up reports, no specific information on surgical failure or complications was reported in participants from minoritized populations. The pivotal trial for the iStent inject enrolled individuals with mild to moderate POAG and visually significant cataract, and randomized participants to phacoemulsification with versus without iStent inject implantation. The study enrolled 505 participants, of whom 368 (72.9%) were White, 96 (19.0%) were Black, 34 (6.7%) were Hispanic, 4 (0.8%) were Asian, 1 (0.2%) was American Indian, 1 (0.2%) was East Indian, and 1 (0.2%) was Portuguese race and ethnicity. There were 289 (57.2%) participants who were women and mean age was 69.0 ± 8.2 years in the phacoemulsification with iStent inject group and 70.1 ± 7.7 years in the phacoemulsification alone group. At 2-year follow-up, no specific information on surgical failure or complications was reported in participants from minoritized populations. The HORIZON Study enrolled participants with mild to moderate POAG and visually significant cataract, and randomized participants to phacoemulsification with versus without Hydrus stent implantation. The study enrolled 556 participants, of whom 444 (79.9%) were White, 60 (10.8%) were Black, 32 (5.8%) were Asian, and 20 (3.6%) were Other race and ethnicity. There were 311 (55.9%) participants who were women and mean age was 71.1 ± 7.9 years in the phacoemulsification with Hydrus group and 71.2 ± 7.6 years in the phacoemulsification alone group. At 5-year follow-up, no specific information on surgical failure or complications was reported in minoritized populations. In summary, several pivotal clinical trials have been performed in glaucoma surgery and shape glaucoma surgical management in day-to-day practice. 
It appears that most trials have enrolled participants from diverse racial and ethnic backgrounds, included large proportions of female participants, and included participants of a wide range of ages. However, further investigation is needed on whether the distribution of these demographics is comparable to the demographic distributions of individuals receiving these glaucoma surgical interventions in the real-world clinical setting, and whether increased recruitment is needed of participants from minoritized populations. Additionally, to address other sources of health disparities, additional information is needed on socioeconomic status and other social determinants of health at the time of clinical trial recruitment to ensure representation of a wide range of social, economic, and demographic backgrounds in trial participants. Finally, whereas some existing trials presented surgical outcomes specific to racially and ethnically minoritized populations, focused efforts should be made in future glaucoma surgical trials to present expanded outcome information by race and ethnicity, gender, age, and other social, economic, and demographic factors in order to increase understanding of potential disparities in surgical outcomes by these factors. The identification and elimination of disparities in glaucoma surgical care is of utmost importance to prevent glaucomatous vision loss for the most at-risk populations. We have presented current literature on disparities in glaucoma surgery from retrospective database and clinical studies and in clinical trials. Currently, there are likely racial and ethnic disparities in glaucoma surgical incidence, treatment patterns, and outcomes, but there is a need to understand the role of individual and structural level social determinants of health in contributing to these disparities, as well as a need for further understanding of the presence of other social, economic, and demographic disparities. Additionally, whereas the large glaucoma clinical trials report the frequency of enrollment by race and ethnicity, the majority of these trials have not compiled or reported the role of racial and ethnic and other disparities in trial outcomes. To address these needs, we can apply several aspects of the framework to eliminate disparities in eye care outlined by the AAO Taskforce on Disparities in Eye Care. Specifically, in retrospective studies of disparities in glaucoma surgery, the harmonization of medical data from claims and clinical practice, along with additional data outside the medical setting including measures of socioeconomic factors, lifestyle habits, air pollution, and other factors will improve the ability to increase understanding of the role of social determinants of health in contributing to disparities in glaucoma surgery. Additionally, whereas current large scale studies of disparities in glaucoma surgery mainly focus on surgical incidence and treatment patterns as measured by claims, with increased incorporation of clinical data measures, such as visual acuity, IOP, and ancillary testing, into large scale datasets, such as the AAO IRIS Registry and the Sight Outcomes Research (SOURCE) Collaborative, there is increased ability to examine disparities in glaucoma surgical outcomes with detailed clinical definitions of surgical failure in large populations. 
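To make the kind of registry-based outcome analysis described above concrete, the sketch below fits a multivariable logistic regression for re-operation after glaucoma surgery, with race and ethnicity, age, baseline IOP, and combined phacoemulsification as covariates. It uses a synthetic dataset and the statsmodels formula interface; the variable names, coding, and covariate set are assumptions for illustration only and do not reproduce the models used in the cited IRIS Registry studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000

# Synthetic analytic dataset; a real analysis would draw these fields from
# registry or electronic health record data.
df = pd.DataFrame({
    "reoperation": rng.binomial(1, 0.15, n),                   # 1 = repeat glaucoma surgery
    "race_ethnicity": rng.choice(["White", "Black", "Hispanic", "Asian"], n),
    "age": rng.normal(70, 10, n),
    "baseline_iop": rng.normal(24, 5, n),
    "phaco_combined": rng.binomial(1, 0.5, n),                 # combined with cataract surgery
})

# Multivariable logistic regression: odds of re-operation by race and
# ethnicity, adjusted for age, baseline IOP, and combined phacoemulsification.
model = smf.logit(
    "reoperation ~ C(race_ethnicity, Treatment(reference='White'))"
    " + age + baseline_iop + phaco_combined",
    data=df,
).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```

Because the data here are random, the estimated odds ratios hover around 1; the point of the sketch is only the structure of an adjusted disparity analysis, in which demographic coefficients are interpreted after accounting for baseline clinical differences.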
In the clinical trial setting, outcomes specific to participants from minoritized populations could be retrospectively analyzed and presented for completed trials, and future trials should continue to recruit a diverse group of participants and create specific plans to examine interventions and outcomes for participants from minoritized populations. Additionally, future study protocols could increase collection of information related to social determinants of health at the time of participant enrollment in trials. Finally, in addition to improving the collection and analysis of information related to disparities in glaucoma surgery in various research settings, there is a need to engage patients and their communities to reduce the disparities that are identified. Community-partnered approaches involving community centers, faith-based organizations, and other key community stakeholders may decrease barriers and improve outcomes for individuals undergoing glaucoma surgery, and future research and initiatives involving these approaches should be implemented. , – In summary, several challenges exist in the identification and elimination of disparities associated with glaucoma surgery. A multifaceted approach involving the retrospective and prospective analysis of surgical incidence and outcomes with incorporation of social, economic, and demographic information, and contributions from the researcher, provider, patient, and community perspectives is needed if we are ever going to eliminate the healthcare disparities associated with glaucoma surgery occurring in minoritized populations in the United States. |
Randomized Trials Fit for the 21st Century. A Joint Opinion from the European Society of Cardiology, American Heart Association, American College of Cardiology, and the World Heart Federation | 49410bbf-7394-4872-bb0c-3405b5d44d0b | 9756910 | Internal Medicine[mh] | The views expressed in this article are those of the authors and therefore do not necessarily reflect the respective policies of the European Society of Cardiology, the American Heart Association, Inc., the American College of Cardiology, or the World Heart Federation.
Randomized controlled trials are the cornerstone for reliably evaluating therapeutic strategies. However, during the past 25 years, the rules and regulations governing randomized trials and their interpretation have become increasingly burdensome, and the cost and complexity of trials have become prohibitive. The present model is unsustainable, and the development of potentially effective treatments is often stopped prematurely on financial grounds, while existing drug treatments or non-drug interventions (such as screening strategies or management tools) may not be assessed reliably. The current ‘best regulatory practice’ environment, and a lack of consensus on what that requires, too often makes it unduly difficult to undertake efficient randomized trials able to provide reliable evidence about the safety and efficacy of potentially valuable interventions. Inclusion of underrepresented population groups and lack of diversity also remain among the challenges. The widespread availability of large-scale, population-wide, ‘real world data’ is increasingly being promoted as a way of bypassing the challenges of conducting randomized trials. Yet, despite the small random errors around the estimates of the effects of an intervention that can be yielded by analyses of such large datasets, non-randomized observational analyses of the effects of an intervention should not be relied on as a substitute, due to their potential for systematic error. That is, the estimated effects may be precise but inaccurate, due to design and statistical biases that cannot be reliably avoided irrespective of the sophistication of the analysis. With this joint opinion, the European Society of Cardiology (ESC), American Heart Association (AHA), World Heart Federation (WHF), and American College of Cardiology (ACC) call for action at a global scale to reinvent randomized clinical trials to be fit for purpose in the 21st century.
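The distinction between random and systematic error can be made concrete with a toy simulation (ours, not the authors'; all numbers are illustrative). When an unmeasured marker of baseline risk drives both treatment choice and outcome, the observational estimate of a truly null treatment effect remains biased no matter how large the dataset, whereas random assignment removes the bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_difference(n, randomized):
    """Naive treated-minus-untreated risk difference when the true effect is zero."""
    frailty = rng.normal(size=n)  # unmeasured baseline risk
    if randomized:
        treated = rng.integers(0, 2, size=n).astype(bool)
    else:
        # In routine care, sicker patients are more likely to receive the treatment.
        treated = rng.random(n) < 1 / (1 + np.exp(-frailty))
    # The outcome depends only on frailty, not on treatment.
    event = rng.random(n) < 1 / (1 + np.exp(-(frailty - 2)))
    return event[treated].mean() - event[~treated].mean()

for n in (10_000, 1_000_000):
    print(f"n={n:>9,}  observational={risk_difference(n, False):+.3f}  "
          f"randomized={risk_difference(n, True):+.3f}")
```

Increasing n narrows the random error around both estimates, but only the randomized comparison converges on the true (null) effect.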
Among all medical specialities, cardiology has historically led the way in evidence-based practice. With ground-breaking randomized trials in the 1980s, such as the International Study of Infarct Survival (ISIS) , Gruppo Italiano per lo Studio della Streptochinasi nell’Infarto (GISSI) and Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries (GUSTO) trials in acute myocardial infarction, cardiovascular ‘mega-trials’ were conceived and rapidly transformed clinical practice. High quality trials have also reliably demonstrated incremental clinical benefits with modification of major cardiovascular risk factors, such as hypertension and dyslipidaemia , saving millions of lives worldwide in recent decades. Despite these advances, cardiovascular disease remains the leading cause of death and disability globally , and there is a need to identify additional effective therapies, to increase upstream prevention and precision medicine efforts, and to determine how best to use the effective treatments that we already have (and, as a corollary, not use those that are not effective or safe). As age-specific rates of mortality and major morbidity decline due to better prevention and treatment, it becomes more difficult to conduct reliable assessments of new or existing interventions. Lower absolute risks of cardiovascular events mean that increasingly large samples are needed to generate the numbers of outcomes of interest, given the typically modest relative benefits of many interventions. Moreover, cardiovascular interventions often require sufficient time before the benefits emerge. As the size of trials increases, the cost rises, and there may be a temptation to limit the duration of follow-up, in order both to control costs and, from an industry perspective, to get new agents to market faster. The proprotein convertase subtilisin–kexin type 9 (PCSK9) inhibiting monoclonal antibodies (evolocumab and alirocumab) provide a recent example of such a strategy failing patients . These agents have an impressive LDL cholesterol-lowering effect and, in large phase 3 randomized trials, were clearly shown to safely reduce major cardiovascular events. However, with only around 2–3 years of follow-up, it is likely that those trials underestimated the full benefits of prolonged PCSK9 inhibition on cardiovascular mortality and morbidity. So, despite the conduct of large trials which cost billions of dollars, the uptake of these agents has been limited (exacerbated by their high cost), and they have not realized their full potential for population health benefit even in high income countries. During the past 25 years, there has been an enormous increase in the rules and related bureaucracy governing clinical trials. First issued in 1996, the International Council for Harmonization (ICH) Good Clinical Practice (GCP) Guidelines describe the responsibilities and expectations of all those involved in the conduct of clinical trials. The intention of the ICH-GCP guideline was to ensure the safety and rights of participants in trials and also to ensure the reliability of trial results so that the safety of future patients would be protected. However, despite these well-intended aims, the guideline is now often over-interpreted and implemented in ways that are unnecessarily obstructive , prohibiting good trials from being done affordably. 
These problems are exacerbated by the financial incentive for some parties (in particular contract research organizations) to over-interpret ICH-GCP and profit from additional, often unnecessary, clinical trial procedures (such as frequent on-site monitoring visits when less costly data-driven monitoring approaches can be more informative [ https://ctti-clinicaltrials.org/ourwork/quality/quality-by-design/ ]). While the increasing complexities have been obstacles to trials conducted by industry, the regulations have become much larger barriers for conducting trials of interventions that have little or no commercial support. Consequently, trials of important questions relevant to low-income populations (e.g. infections affecting the heart such as rheumatic heart disease, tuberculous pericarditis or Chagas disease) or those that may have the potential for large clinical and population benefits but involve generic drugs (e.g. a polypill) have been hard to conduct.
Streamline the trial processes: reinvent simple trials with global impact The COVID-19 pandemic has provided clinical trialists with an opportunity to rethink their trade and remember the landmark successes of the cardiovascular mega-trial concept established in the 1980s. Trials such as Randomised Evaluation of COVID-19 Therapy (RECOVERY) and World Health Organization Solidarity have been highly streamlined and designed to be easy to administer in the busy hospitals in which large numbers of COVID patients were being treated. Only essential data were to be collected and, wherever possible, much of the follow-up information was derived from national electronic health records (EHRs). Importantly, they showed that such trials can be conducted in accordance with the principles of GCP, but without over-interpretation or unnecessary complication. By contrast, many of the other COVID-19 trials had complex protocols (e.g. more restrictive eligibility criteria, significant additional data collection beyond that collected for routine care) with a focus on surrogate outcomes (e.g. time to clinical improvement, rather than mortality), such that their relatively small size did not allow them to yield clear evidence on the outcomes that matter most to patients . Indeed, putative benefits observed in many small trials have not translated into mortality benefits when assessed in the larger streamlined trials . Use routine data to our advantage in trials, not as an inappropriate replacement Considerable opportunities for streamlined trial conduct are provided by digital healthcare in the 2020s, with high quality EHRs available for both recruitment and follow-up of trial participants . Part of the success of the RECOVERY trial was the nationwide availability of routine health data for comprehensive and complete follow-up. For many years, cardiovascular trials have successfully exploited EHRs for both recruitment and follow-up [as for example, in the Swedish Web-system for Enhancement and Development of Evidence-based care in Heart disease Evaluated According to Recommended Therapies (SWEDEHEART) series of trials], with important clinical findings . Current initiatives are extending this approach through development and use of local and national registries that can facilitate low-cost, pragmatic ‘randomized registry trials’ . However, data access restrictions and regulatory authority reticence to accepting EHR-based outcome data in randomized trials (especially for drug registration) have led to an underuse of this approach to trial streamlining. Instead, inappropriate emphasis is being placed—including by regulators—on using so-called ‘real world’ observational studies, despite the potential biases inherent in such methods. Collaborative revision of ICH-GCP, making it Fit for purpose in the 21st century Recent experience has shown that important clinical questions can be addressed rapidly in streamlined trials while remaining compliant with existing guidelines. However, the approach taken to the implementation of the ICH-GCP guidelines is typically inflexible and frequently involves over-interpretation that stifles innovation in the clinical trials enterprise, driving up costs through waste, delay and failure. 
In consultation with a range of stakeholders—from patients and the public who volunteer for clinical trials, to organizations that provide the skills, funding and infrastructure to conduct research—the Good Clinical Trials Collaborative (GCTC https://www.goodtrials.org/ ) has been established by Wellcome, the Gates Foundation and the African Academy of Sciences to build on the work of the FDA-funded Clinical Trials Transformation Initiative (CTTI, https://ctti-clinicaltrials.org/ ) by producing comprehensive revised guidelines fit for the purposes of doing randomized trials in the 21st century. The GCTC is reviewing the principles for all types of healthcare interventions, in all settings, to produce guidelines that aim to foster and promote informative, ethical and efficient randomized controlled trials (see Graphical Abstract). Draft guidance was published for consultation and review in 2021, and it is anticipated that revised guidelines will be issued in 2022 ( https://www.goodtrials.org/guidance ). We strongly support the adoption of this guidance into regulation, guidance, and practice across the whole clinical trials ecosystem—including by regulators, sponsors, and healthcare and research organizations—to ensure that the principles are embedded across all aspects of clinical trial design, delivery, oversight, quality assurance, analysis, and interpretation. Professional societies and their members have a key role to play in providing training in the fundamental principles of clinical trials, recognizing contribution to clinical trials as a core clinical activity, ensuring diversity and representativeness of included participants, and building community trust in the research enterprise by considering the patient perspective throughout all stages of trial development. ( https://nap.nationalacademies.org/catalog/26349/envisioning-a-transformed-clinical-trials-enterprise-for-2030-proceedings-of ).
Cardiology provided the foundation for an era of highly successful clinical trials, and is well-placed to reinvent trials for the 21st century. The ESC, AHA, ACC, and WHF are committed to ensuring that high quality trials continue to provide randomized evidence that improves the clinical care of all patients across different race and gender identities, socioeconomic strata, and geographies. Technology has transformed medical practice in recent decades, and clinical trials need to keep pace if modern therapies and treatment strategies are to continue to be robustly evaluated. Digital advances provide streamlined solutions to trial conduct, such as app-based data collection, remote monitoring, and ‘virtual’ trial visits. The COVID-19 pandemic has forced us to think more critically about many elements of daily life with a rapid change in what is now considered ‘normal’. A timely opportunity exists to promote similarly radical changes into the conduct of trials, to enhance efficiencies while maintaining safety. The cardiovascular organizations, societies, and foundations provide a valuable forum to advocate for the appropriate use of routine EHRs (i.e. ‘real world’ data) within randomized trials, recognizing the huge potential of centrally or regionally-held electronic health data for trial recruitment and follow-up, as well as to highlight the severe limitations of using observational analyses when the purpose is to draw causal inference about the risks and benefits of an intervention. With this document, our societies wish to engage in the development and widespread adoption of consensus guidance for clinical trials, supporting a more effective regulatory environment and allowing researchers to conduct the trials that are needed to improve patient care much more efficiently. Finally, the COVID-19 pandemic has re-emphasized the importance of making it feasible for busy clinicians, and their patients, to participate in randomized trials. Without sustained efforts to increase the application of streamlined approaches, and a more supportive regulatory environment for those who do choose to generate randomized evidence (instead of the adversarial approach that is often taken in regulatory audits), patients will suffer from important clinical questions not being addressed reliably, either because trials are too small or, due to excessive financial or bureaucratic obstacles, are never done at all.
Stephan Achenbach, Department of Cardiology, Friedrich-Alexander, University Erlangen-Nürnberg, Erlangen, Germany; Louise Bowman, Nuffield Department of Population Health, University of Oxford, UK; Barbara Casadei, RDM, Division of Cardiovascular Medicine, NIHR Oxford Biomedical Research Centre, University of Oxford, UK; Rory Collins, Nuffield Department of Population Health, University of Oxford, UK; Philip J. Devereaux, Department of Medicine, McMaster University, Hamilton, Canada; Population Health Research Institute, Hamilton, Canada; Department of Health Research Methods, Evidence, and Impact, Canada; Pamela S. Douglas, Department of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; Ole Frobert, Örebro University, Faculty of Health, Department of Cardiology, Örebro, Sweden; Department of Clinical Medicine, Aarhus University Health, Aarhus, Denmark; Shinya Goto, Department of Medicine (Cardiology), Tokai University School of Medicine, Isehara, Japan; Cindy Grines, Northside Hospital Cardiovascular Institute, Atlanta, Georgia, USA; Robert A. Harrington, Department of Medicine, Division of Cardiovascular Medicine, Stanford University, CA, USA; Richard Haynes, MRC Population Health Research Unit, Nuffield Department of Population Health, University of Oxford, UK; Judith S. Hochman, Leon H. Charney Division of Cardiology, Department of Medicine, New York University Grossman School of Medicine, New York, USA; Stefan James, Uppsala Clinnical Research Center and Department of Medical Sciences, Uppsala University, Uppsala, Sweden; Paulus Kirchhof, Department of Cardiology, University Heart and Vascular Center Hamburg, University Medical Center Hamburg Eppendorf, Germany; Atrial Fibrillation Competence NETwork (AFNET), Münster, Germany; Institute of Cardiovascular Sciences, University of Birmingham, UK; Michel Komajda, Department of Cardiology, Groupe Hospitalier Paris Saint Joseph, Sorbonne University, Paris, France; Carolyn S.P. Lam, National Heart Centre Singapore & Duke-National University of Singapore, Singapore; Martin Landray, Nuffield Department of Population Health, University of Oxford, UK; Aldo Maggioni, ANMCO Research Centre, Florence, Italy; John McMurray, British Heart Foundation Cardiovascular Research Centre, Institute of Cardiovascular & Medical Sciences; University of Glasgow, UK; Nick Medhurst, Good Clinical Trials Collaborative https://www.goodtrials.org/ ; Roxana Mehran, Icahn School of Medicine at Mount Sinai, New York, USA; Bruce Neal, The George Institute for Global Health, University of New South Wales, Sydney, Australia; School of Public Health, Imperial College London, London, UK; Lars Rydén, Department Medicine K2, Karolinska Institutet, Stockholm, Sweden; Holger Thiele, Heart Center Leipzig at University of Leipzig and Leipzig Heart Institute, Department of Internal Medicine/Cardiology, Leipzig, Germany; Isabelle Van Gelder, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Lars Wallentin, Uppsala Clinnical Research Center and Department of Medical Sciences, Uppsala University, Uppsala, Sweden; Salim Yusuf, Population Health Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, ON, Canada; Faiez Zannad, Université de Lorraine, Inserm and CHRU, Nancy, France. The ESC Patient Forum https://www.escardio.org/The-ESC/What-we-do/esc-patient-engagement .
|
An Autopsy Case of Antineutrophil Cytoplasmic Antibody-associated Vasculitis Induced by Propylthiouracil | c006455b-edc2-447c-b5df-882ca198e3cd | 11671207 | Forensic Medicine[mh] | Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a group of disorders characterized by inflammation, the destruction of small blood vessels, and the presence of circulating ANCA . AAV is a necrotizing vasculitis that does not involve immune complex deposition. AAV predominantly affects the small vessels and it is typically associated with ANCA specific for myeloperoxidase (MPO-ANCA) or proteinase 3 (PR3-ANCA) . Certain medications can also induce ANCA-associated vasculitis. Most patients with drug-induced AAV exhibit high MPO-ANCA titers . The strongest association between medications and AAV is with antithyroid drugs [propylthiouracil (PTU) , methimazole (MMI) , and carbimazole] . AAV, induced by antithyroid drugs, can lead to organ failure. More than 90% of such patients experience ≤three organ failures . We herein report an autopsy case of severe PTU-induced AAV in a patient who developed AAV after a long disease progression. An autopsy revealed five types of pathologically confirmed acute organ failure. A 79-year-old Japanese woman presenting with weight loss and tachycardia was referred to our hospital for the first time. She had a history of chronic hepatitis C due to blood transfusion. The virus was identified as genotype 1B, and the RNA quantification test was negative. The hepatologist diagnosed mild fibrosis of the liver and prescribed 300 mg ursodeoxycholic acid. A blood examination at the initial visit revealed that her serum transaminase was within the normal range and her serum free thyroxine (T4), free triiodothyronine (T3), thyroid stimulating hormone (TSH), and TSH receptor antibody was >6.0 ng/dL, >30.0 pg/mL, <0.01 mIU/L, and 5.0 IU/L, respectively. Thyroid ultrasonography revealed a diffuse enlargement of the thyroid gland, with an estimated volume of 40 mL. Technetium scintigraphy revealed a diffuse, increased accumulation in the thyroid gland. Based on these results, the patient was diagnosed with Graves’ disease (GD). The patient had no evidence of ophthalmopathy or tibial myxoedema. Her serum creatinine level was 0.57 mg/dL with positivity for urine occult blood. She was started on 30 mg MMI, 50 mg potassium iodine (KI), and 30 mg propranolol. A month later, while her serum free T4 level improved to within the normal range, she developed arthralgia and her medication was switched to 150 mg PTU monotherapy. As a result, her arthralgia improved, and she continued to take 150 mg PTU. At 83 years of age, she complained of numbness in the fingertips of both hands. Her serum anticardiolipin IgG antibody test result was positive (17 U/mL). The dermatologist prescribed 100 mg aspirin, which reduced the numbness. At 84 years of age, her GD worsened, her oral PTU increased to 300 mg, and she continued to subsequently take 300 mg PTU. Her thyroid hormone level was within the normal range. At 85 years of age, chest radiography and computed tomography (CT) revealed an abnormal invasive shadow in the lower lobe of the right lung. This shadow has remained unchanged for the following two years. No cavitary lesions were observed. Two years before hospitalization, her serum creatinine level gradually increased . At 89 years of age, she was admitted to our hospital with fatigue and appetite loss. 
A physical examination on admission revealed a body temperature of 37.1°C, blood pressure of 168/90 mmHg, heart rate of 84 bpm, and percutaneous oxygen saturation of 99% in room air. Pallor of the eyelid conjunctiva and a dry mouth were noted, but no other abnormal findings were observed. A qualitative urine examination revealed protein positivity. Urine sedimentation tests revealed 30-49 white blood cells and 30-49 red blood cells per high-power field. The patient's laboratory data are presented in . Compared to the data from the previous 2 months, the serum white blood cell count and C-reactive protein levels did not increase, whereas the percentage of neutrophils was elevated. Serum hepatobiliary enzyme and electrolyte levels were within normal ranges. Tests for antinuclear, anti-Cryptococcus, and anti-Aspergillus antibodies yielded negative results. As shown in , CT revealed a cavitary lesion with fluid in the lower lobe of the right lung and many ground-glass opacities in the bilateral lung fields. There were no findings of embolism. MPO-ANCA was positive at a high titer on the 5th hospital day. Therefore, PTU was discontinued and replaced with KI because of the possibility of PTU-induced AAV (PTU-AAV). Immunosuppressive therapy with oral prednisolone (30 mg for 3 days followed by 40 mg) was initiated on the 10th day of illness because her condition did not improve. Despite these treatments, joint pain appeared on the 5th day of illness, microscopic erythema of the fingers on the 7th day, edema and ulcers of the intraoral cavity (especially the tongue) on the 11th day, black stools on the 17th day, intraoral bleeding on the 19th day, and skin ulcers on the 21st day. Ocular symptoms, such as edema, conjunctival erythema, and mucus, appeared on the 10th day of the illness, leading an ophthalmologist to diagnose lymphocyte proliferation in the ocular conjunctiva. Despite prednisolone treatment, the patient died on the 28th day.
Pathological Findings
An autopsy was performed with the consent of the patient's family. Pathological findings showed fibrinoid necrosis with neutrophils infiltrating small blood vessels in the stomach and ileum. Fibrinoid necrosis of small vessels was also observed in the skin. Granulomas were also observed in the lungs. Fibrinoid necrosis with neutrophil infiltration was observed in small blood vessels of the lungs. There was no fibrinoid necrosis of the tongue, but the epithelium was shed and neutrophilic infiltration was observed in the small vessels. Global glomerular sclerosis was observed in 30% of glomeruli, while fibrous crescents or segmental sclerosis were observed in approximately 30% of glomeruli. Almost all crescentic lesions were fibrous, but a few were fibrocellular. Membranous nephropathy was not observed.
In this report, we describe the case of a patient with severe PTU-AAV that occurred nine years after the initiation of PTU and resulted in five types of organ failure, followed by death. Slot et al. reported that ANCA and AAV were observed in 11% and 4% of antithyroid drug users, respectively. Among antithyroid drugs, PTU is considered to be more strongly associated with drug-induced AAV than MMI. PTU has been reported to cause AAV 39.2 times more often than MMI, and the average time to AAV development is 42 months (1-372 months). Our patient developed AAV 108 months after the initiation of PTU. Studies suggest that abnormal neutrophil extracellular traps (NETs) are associated with autoimmune diseases, such as AAV, rheumatoid arthritis, and systemic lupus erythematosus. Nakazawa et al. reported that rats immunized with abnormal NETs induced by PTU in rat neutrophils produced MPO-ANCA and developed pulmonary capillaritis and glomerulonephritis in vivo and in vitro. The clinical course of antithyroid drug (ATD)-induced AAV varies. One study reported that all 13 patients with PTU-AAV achieved remission with cyclophosphamide and immunosuppressive drugs. In contrast, another report showed that 2% of 92 patients with ATD-induced AAV died, 23% of all patients experienced sequelae, and the highest number of organ failures was four. The two patients who died received MMI and PTU, respectively, and both had kidney and respiratory organ failure. Regarding therapeutic intervention, MMI and PTU were discontinued in all 92 patients, and 76.1% and 18.5% of patients received steroids and steroids with immunosuppressants, respectively. Although our patient, who had both kidney and respiratory failure, discontinued PTU and received steroids without immunosuppressants, her condition worsened, eventually leading to death. One possible explanation for this outcome may be the advanced age of the patient. Another possibility is that she had taken a relatively high dose of PTU for a long period, although the relationship between AAV severity and the duration or dose of PTU is unclear. The organs most likely to be affected are the kidneys (38.2%), lungs (19.0%), skin (13.8%), joints (13.1%), eyes (5.9%), muscles (5.3%), gastrointestinal tract (2.0%), brain and nerves (2.0%), and ears (0.7%). Our patient had impairments in five organs, substantiated by pathological evidence, affecting the gastrointestinal tract, lungs, kidneys, skin, and tongue. Glomerular lesions in AAV commonly take the form of crescentic glomerulonephritis. Our patient exhibited a gradual deterioration of renal function with proteinuria and hematuria, which worsened rapidly four months prior to admission. This was consistent with the fibrous crescents, segmental sclerosis, and chronic crescentic glomerulonephritis confirmed at autopsy. We believe that this glomerular lesion was caused by the AAV. In addition, cavitary lesions were observed in the lungs of the patient. MPO-ANCA-positive AAV is reportedly prone to lung vasculitis, whereas cavitary lesions of the lungs are more likely to occur in PR3-ANCA-positive AAV. Our patient had cavitary lesions in the lungs that were not identified 2 years earlier, despite being MPO-ANCA-positive for AAV and presenting with a variety of symptoms similar to those of MPO-ANCA-positive AAV. An autopsy of the lungs revealed granulomas.
Active vasculitis-like lesions with neutrophil infiltration and necrosis were also present, along with cavitary lesions, suggesting that the lung lesions were caused by AAV. In conclusion, AAV may develop more than one year after starting PTU; our patient developed AAV nine years after PTU initiation. As AAV is a potentially fatal disease, the development of vasculitis symptoms during PTU administration should be addressed carefully. |
Metabolomic analysis of murine tissues infected with | 1e360531-fb3f-4ad6-8f0d-f1fc2010d0b6 | 11771894 | Biochemistry[mh] | Brucellosis is a worldwide bacterial zoonosis with a broad host range of animals, spanning from wildlife to agriculturally important livestock . The three main species recognized as substantial health threats and most common human pathogens are B . melitensis , B . abortus , and B . suis . Humans acquire brucellosis through inhalation of infectious aerosols and by consumption of contaminated animal products . Human brucellosis can be a severely debilitating disease that requires hospitalization . Human disease is characterized by persistent waves of fever with systemic symptoms that can vary among individuals, including chills, malaise, headaches, and hepato- or splenomegaly . Brucella replicates intracellularly and has a predilection for organs rich in reticuloendothelial cells such as the spleen and liver . Controlling Brucella infection is complicated due to the fact that humoral immunity and some antibiotics are unable to reach the bacteria while inside the cell . Thus, antibiotic therapy is lengthened, increasing the likelihood of treatment-related side effects and leading to non-compliance among those being treated . Aggravating the situation, no vaccine is licensed to prevent human brucellosis. Metabolic changes in immune cells have an influence on the effectiveness of the immune response and how intracellular bacteria experience stress . In the context of brucellosis, macrophages are important for controlling infection but can also be the major niche for Brucella survival and replication depending on their metabolic status . Metabolic reprogramming of host cells can be initiated by pathways such as glycolysis, TCA cycle, glutaminolysis, gamma-aminobutyric acid (GABA) shunt, and others, depending on the immune response . For example, M1 polarization of macrophages is marked by increased glycolysis and an impaired TCA cycle, driving the accumulation of several metabolites as well as promoting immune signaling and, in some cases, antimicrobial activities . In contrast, M2 polarization of macrophages is marked by mitochondrial oxidative metabolism of fatty acids, leading to an anti-inflammatory and pro-resolution profile which can favor Brucella replication . In human and animal tissues, Brucella encounter a variety of metabolic conditions in which they must be able to multiply in order to cause disease and spread to new hosts . However, the relative availability of metabolites that Brucella encounters across host tissues during infection is largely unknown. Within mice and natural hosts, Brucella can target the spleen, liver, and reproductive tracts for replication . Therefore, in this study, we performed global screening of metabolites in spleens, livers, and reproductive tracts at various timepoints after Brucella infection. In addition, we investigated the effects of these metabolic changes on inflammatory cytokine production and the requirements for Brucella virulence. Bacterial strains and growth conditions Experiments with live Brucella melitensis were performed in biosafety level 3 (BSL3) facilities. B . melitensis 16M was obtained from Montana State University (Bozeman, Montana) and grown on Brucella agar (Becton, Dickinson) for 3 days at 37°C/5% CO 2 . Colonies were then transferred to Brucella broth (Bb; Becton, Dickinson) and grown at 37°C with shaking overnight. 
The overnight Brucella concentration was estimated by measuring the optical density (OD) at 600 nm, and the inoculum was prepared and diluted to the appropriate concentration in sterile phosphate-buffered saline (sPBS). Titer was confirmed by serial dilution of the B . melitensis inoculum onto Brucella agar plates. Generation of B . melitensisΔbmeI0265 The BMEI0265 gene in B . melitensis 16M was replaced in frame with a chloramphenicol resistance gene ( catR ) from plasmid pKD3 using the suicide plasmid pNTPS139 . Approximately 1,000-bp fragments upstream and downstream of BMEI0265 were amplified by PCR using primers shown in . We also generated a 1,044-bp fragment containing the catR gene from pKD3 using primers listed in . The 5’ end of the forward primer used to amplify the upstream fragment of BMEI0265 contained homology to 30 bp upstream of the BamHI site in pNTPS139. The 5’ end of the forward primer used to amplify the catR gene from pKD3 contained 30 bp homologous to the 3’ end of the upstream BMEI0265 fragment. The downstream fragment of BMEI0265 was amplified using a forward primer whose 5’ end contained 30 bp homologous to the 3’ end of the catR fragment, while the 5’ end of the downstream BMEI0265 fragment reverse primer contained 30 bp homologous to the 30bp downstream of the SalI site of pNTPS139. pNTPS139 was digested with BamHI/SalI. The upstream and downstream BMEI0265 fragments, along with BamHI/SalI- digested pNPTS139, were all ligated together using the NEB Hi-Fi DNA assembly kit according to the manufacturer’s instructions (New England Biolabs, Ipswich, MA). These plasmids were introduced into B . melitensis 16M, and merodiploid transformants were obtained by selection on Brucella agar plus 25 μg/ml kanamycin. A single kanamycin-resistant clone was grown overnight in Brucella broth and then plated onto brucella agar supplemented with 10% sucrose. Genomic DNA from sucrose-resistant, kanamycin-sensitive colonies was isolated and screened by PCR for replacement of the gene of interest. Mice All animal procedures were reviewed and approved by the Animal Care and Use Committee (ACUC) of the University of Missouri and followed the U.S. Public Health Service Policy on the Humane Care and Use of Laboratory Animals under Office of Laboratory Animal Welfare assurance number is D16-00249. All animals were checked daily by trained personnel. No overt symptoms of disease were observed in these animals and all efforts were made to minimize suffering. Mice were euthanized via CO 2 inhalation according to American Veterinary Medical Association guidelines at pre-determined endpoints as described in the figure legends. We utilized 6- to 12-week-old C57BL/6J mice that were age and sex-matched for experiments. The number of mice used in each experiment is described in the figure legends. Mice were infected with 1x10 5 CFUs of B . melitensis in 200 μL sterile PBS (sPBS) intraperitoneally (i.p.). For coinfections, a 1:1 ratio of WT B . melitensis 16M and a chloramphenicol resistant strain ( B . melitensisΔbmeI0265 ) was prepared. Following infection, animals were maintained in individually ventilated caging under high-efficiency particulate air-filtered barrier conditions with 12-hour light and dark cycles within animal biosafety level 3 (ABSL-3) facilities at the University of Missouri. Mice were provided food and water ad libitum. 
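As a rough illustration of the dose arithmetic behind the inoculum preparation and titer confirmation described above, the sketch below converts an OD600 reading into a dilution scheme and back-calculates CFU/mL from plated serial dilutions. The OD-to-CFU conversion factor, colony counts, and plating volume are hypothetical placeholders rather than values reported here, and any real conversion factor would need to be calibrated for the strain and instrument used.

```python
# Hypothetical numbers for illustration only.
OD600 = 1.2                 # measured optical density of the overnight culture
CFU_PER_ML_PER_OD = 3e9     # assumed OD600-to-CFU conversion factor
TARGET_CFU = 1e5            # intended dose per mouse
DOSE_VOLUME_ML = 0.2        # 200 uL injected i.p.

estimated_cfu_per_ml = OD600 * CFU_PER_ML_PER_OD
required_cfu_per_ml = TARGET_CFU / DOSE_VOLUME_ML            # 5e5 CFU/mL needed in the inoculum
dilution_factor = estimated_cfu_per_ml / required_cfu_per_ml
print(f"Dilute the overnight culture ~1:{dilution_factor:,.0f} in sPBS")

# Titer confirmation: CFU/mL = mean colony count / (dilution plated x volume plated in mL).
colony_counts = [42, 38, 45]   # triplicate counts at the 10^-3 dilution (hypothetical)
dilution_plated = 1e-3
plated_volume_ml = 0.1
cfu_per_ml = (sum(colony_counts) / len(colony_counts)) / (dilution_plated * plated_volume_ml)
print(f"Confirmed titer: {cfu_per_ml:.2e} CFU/mL "
      f"(~{cfu_per_ml * DOSE_VOLUME_ML:.1e} CFU delivered per mouse)")
```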
Calculation of tissue bacterial burdens Animals were euthanized, and spleens, livers, and reproductive tracts (uterus, fallopian tubes, and ovaries) were harvested. Tissues were homogenized mechanically in sPBS . Serial dilutions were performed in triplicate in sPBS and plated onto brucella agar. Plates were incubated for three days at 37°C/5% CO 2 , colonies were counted, and the number of CFU/tissue or CFU/mL were calculated. For co-infection experiments, bacterial burdens were determined by plating homogenized spleens or cells on Brucella agar with or without 5 μg/mL chloramphenicol to select for B . melitensisΔbmeI0265 . Flow cytometry Spleens were homogenized and cell suspensions filtered through sterile 40 μm mesh following red blood cell lysis. Fc receptors were blocked in fluorescence-activated cell-sorting (FACS) buffer (2% heat inactivated fetal bovine serum in PBS) before extracellular staining with fluorochrome-conjugated mAbs from eBioscience or Biolegend (San Diego, CA) F4/80 (BM8), Ly-6G (1A8), CD11b (M1/70), and Ly-6C (HK1.4). Cells were then fixed in 4% paraformaldehyde at 4°C overnight before washing and resuspension in FACS buffer. Fluorescence was acquired on a CyAn ADP analyzer (Beckman Coulter, Brea, CA) and FlowJo (Tree Star, Ashland, OR) software was used for analysis. Cells were gated as Macrophages (F4/80 + ), monocytes (CD11b + Ly6C high ), and neutrophils (CD11b + Ly6C mid ). Metabolite quantification by gas chromatography-tandem mass spectrometry (GC-MS) Nontargeted metabolomics was performed at the University of Missouri Metabolomics Center . ~40 mg of spleen, liver, and reproductive tract (uterus, fallopian tubes, and ovaries) was collected in a final concentration of 80% methanol. Tissues were homogenized and transferred to glass vials and incubated at room temperature with shaking (~140 rpm) for 2 hours. Next, 1.5 mL of CHCl 3 containing 10 μg/mL of docosanol (nonpolar internal standard) was added. The material was then sonicated, vortexed, and incubated at 50°C for 1 h. Then, 1 mL of HPLC grade H 2 O containing 25 μg/mL of ribitol (polar phase internal standard) was added, vortexed, and samples were incubated for 1 h at 50°C. Samples were centrifuged at 10°C at 3000x g for 40 minutes to pellet cell debris and separate phases. Upper phase (polar) and lower phase (nonpolar) were individually transferred to new glass tubes and dried in a speed vacuum. Material was stored at -20°C until ready to derivatize. For polar derivatization, samples were resuspended in a solution containing 50 μl of pyridine containing fresh 15 mg/ml methoxyamine-HCl, sonicated, vortexed, and placed in a 50°C oven for 1 h. Samples were allowed to equilibrate to room temperature and then 50 μl of N-methyltrimethylsilyltrifluoroacetamide (MSTFA) + 1% trimethylchlorosilane (TMCS) (Fisher Scientific) was added. Samples were vortexed, incubated for 1 h at 50°C, centrifuged, and transferred to glass inserts for injection. For nonpolar derivatization, samples were resuspended in 0.8 mL of CHCl 3 , 0.5mL of 1.25 M HCl in MeOH, vortexed and incubated for 4 h at 50°C. Post incubation, 2 mL of hexane was added to the samples, which were vortexed, and the upper layer was transferred to a new autosampler vial to dry. Material was then resuspended by adding 70 μl of pyridine, vortexed, and then 30 μl of MSTFA + 1%TMCS was added prior to incubation for 1 h at 50°C, centrifugation, and transfer to glass inserts for injection. 
Post incubation, 2 mL of hexane was added to the samples, which were vortexed, and the upper layer was transferred to a new autosampler vial to dry. Material was then resuspended by adding 70 μl of pyridine, vortexed, and then 30 μl of MSTFA + 1% TMCS was added prior to incubation for 1 h at 50°C, centrifugation, and transfer to glass inserts for injection.
Samples were analyzed using GC-MS on an Agilent 6890 GC coupled to a 5973N MSD mass spectrometer with a scan range from m/z 50 to 650. Separations were performed using a 60 m DB-5MS column (0.25-mm inner diameter, 0.25-mm film thickness; J&W Scientific) and a constant flow of 1.0ml/min helium gas. Results were interpreted using MetaboAnalyst 5.0 software ( https://www.metaboanalyst.ca/ ) with a P value threshold of 0.05. Macrophage generation, treatments, and infections Cells were flushed from the femurs and tibias of C57BL/6J mice with sPBS supplemented with 5 μg/mL of gentamicin. Bone marrow-derived macrophages (BMDMs) were generated by cultured in complete medium with glutamine (CM; RPMI 1640, 10% fetal bovine serum [FBS], 10mM HEPES buffer, 10 mM nonessential amino acids, 10 mM sodium pyruvate) containing 30 ng/ml recombinant murine macrophage colony-stimulating factor (M-CSF; Shenandoah Biotechnology, Warwick, PA). After 3 days of culture, cells were washed with 30 mL of pre-warmed sPBS and fresh CM containing 30 ng/ml M-CSF was added to the culture flasks. After 3 days, adherent cells were collected by adding 0.05% trypsin (MilliporeSigma). Cells were plated at 1x10 6 cells/ml in fresh CM (with or without glutamine) and allowed to adhere. Cells were infected at a multiplicity of infection (MOI) of 100 B . melitensis 16M or coinfected with 1:1 ratio of B . melitensis 16M and B . melitensisΔbmeI0265 each at a MOI of 100. Cells were infected for 4 h, washed with sPBS, and then cultured in CM containing 50 μg/ml gentamicin for 30 minutes. Cells were then washed with sPBS and left to incubate in CM containing 2.5 μg/ml gentamicin for the remainder of the experiment. For GLS inhibition, 10 μm of Telaglenastat (MedChemExpress LLC, Monmouth Junction, NJ) was added to cells 12 h prior to infection . For GABAergic modulation, BMDMs were treated with GABA (100 μM) or bicuculline (BIC; 100 μM; GABA receptor antagonist) (Sigma-Aldrich, St. Louis, MO) at the same time that gentamicin containing media was added to the cells. At 24 h, 48 h, and 72 h post infection, supernatants were collected, and macrophages were washed and then lysed. These lysates were plated on brucella agar or brucella agar supplemented with chloramphenicol as explained above to determine the amount of intracellular Brucella . Supernatants were used for quantification of cytokines as described below. Cytokine quantification Cell culture supernatants were filtered prior to measurement of cytokines. IL-1β levels were measured with a mouse IL-1β ELISA Ready Set Go kit (Invitrogen, Carlsbad, CA) according to the manufacturers’ instructions. Statistical analysis For CFU and cytokines, data are expressed as mean +/- standard deviation (SD). Student’s unpaired T-test were used to assess differences in means between two groups with significance at p<0.05, while ANOVA followed by Tukey’s test with significance at p<0.05 was used for comparisons between ≥3 groups. N values and the number of experimental repeats are provided in the figure legends. All statistical analyses were performed with Prism software (version 10.1.1, GraphPad). Multivariant statistical analysis for metabolite data was performed with Metaboanalyst (v5.0) ( https://www.metaboanalyst.ca ). The normalized values were used for statistical analyses such as principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), heatmaps, and volcano plots after log transformation and autoscaling with Metaboanalyst software. 
Pathway analysis was also performed in Metaboanalyst (V5.0) using KEGG database to identify the biological significance of metabolic pathways associated with infection. Pathways with P<0.05 were plotted indicating potential biological relevance.
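For readers unfamiliar with these preprocessing and selection steps, the following sketch reproduces their logic on a toy metabolite-by-sample matrix with made-up values; it is not the authors' code, and MetaboAnalyst performs the equivalent operations internally. It applies log transformation and autoscaling, computes unsupervised PCA scores, and flags compounds passing a fold change of at least 1.5 with a T-test P<0.05, mirroring the volcano criterion used in the Results.

```python
import numpy as np
from scipy import stats

# Rows = metabolites, columns = samples (6 naive followed by 6 infected); values are
# random placeholders standing in for relative peak areas.
rng = np.random.default_rng(1)
naive = rng.lognormal(mean=1.0, sigma=0.3, size=(50, 6))
infected = rng.lognormal(mean=1.1, sigma=0.3, size=(50, 6))
data = np.hstack([naive, infected])

# Log transformation followed by autoscaling (mean-centre, unit variance per metabolite).
logged = np.log10(data)
scaled = (logged - logged.mean(axis=1, keepdims=True)) / logged.std(axis=1, ddof=1, keepdims=True)

# Unsupervised PCA scores for the 12 samples, obtained by SVD of the autoscaled matrix.
u, s, _ = np.linalg.svd(scaled.T, full_matrices=False)
pc_scores = u[:, :2] * s[:2]

# Volcano-style selection: |fold change| >= 1.5 and two-sample t-test P < 0.05.
fold_change = infected.mean(axis=1) / naive.mean(axis=1)
_, p_val = stats.ttest_ind(logged[:, 6:], logged[:, :6], axis=1)
hits = (np.abs(np.log2(fold_change)) >= np.log2(1.5)) & (p_val < 0.05)
print(f"{hits.sum()} of {data.shape[0]} metabolites pass the volcano criteria")
print("PC1/PC2 scores of the first sample:", np.round(pc_scores[0], 2))
```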
The normalized values were used for statistical analyses such as principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), heatmaps, and volcano plots after log transformation and autoscaling with Metaboanalyst software. Pathway analysis was also performed in Metaboanalyst (V5.0) using the KEGG database to identify the biological significance of metabolic pathways associated with infection. Pathways with P<0.05 were plotted indicating potential biological relevance.

Brucellosis progression in mice and tissue metabolism variations during different phases of infection

We investigated changes in tissue metabolite levels during progression of intraperitoneal Brucella infection in the spleens, livers, and female reproductive tracts of C57BL/6J mice. Bacterial loads peaked at day seven post-infection in all three tissues . At 14 and 28 days post-infection (dpi), CFU levels in spleens had a ~5-10-fold reduction compared to 7 dpi . Similarly, in livers, there was a ~100-fold reduction in CFUs at the 14 and 28 day timepoints relative to day 7 post-infection . In reproductive tracts, CFU counts were ~10-fold lower at 14 dpi and ~100-fold lower at 28 dpi compared to 7 dpi . Next, we analyzed the levels of innate immune cells in the spleen during infection. At 7 and 14 dpi the proportions of macrophages, monocytes, and neutrophils were significantly higher compared to naïve mice . At 14 dpi, neutrophil proportions were significantly increased compared to all other time points, indicating that two weeks post-infection may be the peak of splenic inflammation in this model. Based on these findings, we chose four time points to investigate metabolic changes during experimental brucellosis: pre-infection (naïve), 7 dpi representing the peak of bacterial loads, 14 dpi as the peak of inflammation, and 28 dpi as a starting point of chronic infection with some resolution of inflammation. To investigate metabolic diversity within spleens, livers, and female reproductive tracts we extracted polar metabolites, which were then derivatized and analyzed via GC-MS. The clusters were plotted by Partial Least-Squares Discriminant Analysis (PLS-DA) . PLS-DA is a supervised dimension-reduction method that incorporates group (timepoint) information into the projection . This approach was selected to investigate how infection alters metabolic shifts associated with specific time-related changes, emphasizing the variation between groups relative to intra-group differences. All three tissues demonstrated altered metabolic profiles when comparing infected animals with naïve mice. The total numbers of compounds showing significant changes in spleens, livers, and reproductive tracts were 117, 30, and 32, respectively ( – Tables). Heat map visualization showed a distinct splenic metabolite profile separating uninfected from infected animals , though one sample from the day 28 timepoint displayed variable metabolite levels relative to the other samples within its group. The most remarkable changes in relative levels of compounds appeared to be between naïve and 14 dpi mice. In particular, we found elevated levels of aspartate, succinate, and lactate at 14 dpi in spleens . These findings were of interest as lactate is the end product of glycolysis and therefore used as a marker for this pathway . Lactate can also serve as a carbon source for, and promote the growth of, B . abortus within macrophages .
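As a brief aside on the processing pipeline: the normalization and projection steps described at the start of this section (log transformation, autoscaling, then PCA or PLS-DA in MetaboAnalyst) can be sketched roughly as follows. This is a hedged illustration assuming a samples-by-metabolites intensity matrix; the variable names, example data, and use of scikit-learn are assumptions for illustration, not part of the study's actual analysis.

```python
# Rough equivalent of MetaboAnalyst's normalization + projection steps
# (log transform, autoscaling, PCA / PLS-DA). Illustrative only.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

def autoscale(df: pd.DataFrame) -> pd.DataFrame:
    """Mean-center each metabolite and scale to unit variance (autoscaling)."""
    return (df - df.mean()) / df.std(ddof=1)

# Hypothetical intensity matrix: rows = samples, columns = metabolites
rng = np.random.default_rng(0)
intensities = pd.DataFrame(rng.lognormal(mean=5, sigma=1, size=(20, 50)))
groups = pd.Series(["naive"] * 5 + ["7dpi"] * 5 + ["14dpi"] * 5 + ["28dpi"] * 5)

X = autoscale(np.log1p(intensities))           # log transform, then autoscale

# Unsupervised view: PCA scores
pca_scores = PCA(n_components=2).fit_transform(X)

# Supervised view: PLS-DA, i.e. PLS regression against one-hot group labels
Y = pd.get_dummies(groups).astype(float)
pls = PLSRegression(n_components=2).fit(X, Y)
plsda_scores = pls.x_scores_                   # sample coordinates on the two latent variables

print(pca_scores[:3])
print(plsda_scores[:3])
```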
Aspartate is associated with the urea cycle , and can contribute to the growth of some Brucella strains , while succinate was recently shown to be linked with HIF-1α stabilization and activation, which facilitates the metabolic shift from mitochondrial oxidative phosphorylation (OXPHOS) to glycolysis in several settings including Brucella infection . Collectively, these findings from metabolite screening suggest a Brucella- driven change in tissue metabolism.

Changes in tissue metabolism at the peak of Brucella- induced inflammation

To confirm that the metabolic profiles of spleens, livers, and reproductive tracts are altered by Brucella infection at the peak of inflammation, we performed another metabolomic experiment on tissues from additional naive and 14 dpi animals. In this experiment, we employed a Principal Component Analysis (PCA), which is an unsupervised technique that analyzes each sample independently, emphasizing the largest variation between individuals . This methodology was selected to detect general metabolic patterns and visualize the global variation in the dataset of each tissue. Via PCA, metabolite profiles in spleens and livers were again distinct in naïve and infected animals, clustering separately ( and and Tables). However, reproductive tracts did not show defined separation ( and ), which is in contrast to our findings in . This difference could be because we employed an unsupervised approach here (PCA) while in we utilized a PLS-DA, which incorporates group level information into the analysis. We performed volcano tests combining fold change analysis (1.5 threshold) and t-tests (P<0.05) to determine metabolites modified by infection . In spleens, 11 compounds were lower in infected compared to naïve tissues, including myo-inositol . Twenty-nine compounds were significantly higher in infected spleens, including lactate, aspartate, itaconate, malate, and glutamate, suggesting an increase in the level of TCA cycle intermediates in infected spleens . Livers from infected mice were found to have 34 compounds with lower relative levels and 67 compounds with significantly higher levels . Infected livers showed elevated pyroglutamate, aspartate, boric acid, and glutamate, among other metabolites . Five compounds were elevated, and eight compounds were reduced in reproductive tracts from infected mice. Of these compounds, only 1,5-anhydroglucitol was identifiable . In addition, we performed a metabolic pathway analysis in spleens and livers targeting potential cellular signaling and metabolic networks that could play a role in Brucella infection. Several identified pathways modified by infection were linked to the TCA cycle, including glutaminolysis, the argininosuccinate shunt, glycolysis, and the GABA shunt . These data demonstrate a change in host metabolism concurrent with the peak of inflammation during infection. Of the identified pathways, there were some that might be involved with the cellular immune response against Brucella , including glutaminolysis and glycolysis.

Inhibition of the glutaminolysis pathway dampens IL-1β production in response to Brucella

Our metabolomics data indicated alteration of the glutaminolysis pathway in response to Brucella infection. Glutamine is a vital compound in cellular metabolism and is involved in many functions, ranging from protein biosynthesis to mitochondrial respiration . Glutaminolysis is one of the means responsible for replenishing the TCA cycle via the breakdown of glutamine into glutamate.
The key enzyme in that process is glutaminase (GLS). These reactions replenish the TCA cycle leading to the production of itaconate and succinate, which subsequently promotes an effect similar to the Warburg effect (aerobic glycolysis) in cancer cells . Therefore, we investigated the role of glutamine catabolism in BMDMs infected with B . melitensis . To chemically inhibit the glutaminolysis pathway, we treated BMDMs with the GLS inhibitor CB-839, commercially known as Telaglenastat. GLS inhibition did not significantly affect the ability of macrophages to control intracellular Brucella infection at 24, 48, and 72 hours . However, Telaglenastat treatment dampened IL-1β secretion at 48 and 72 hours after infection . Glutamine-dependent anaplerosis is the largest source of succinate, a metabolite which in turn enhances IL-1β production . As glutamine is required for glutaminolysis and subsequent anaplerosis of the TCA cycle, we infected BMDMs with B . melitensis in complete media with and without glutamine. Despite bacterial clearance not being affected by glutamine availability , IL-1β secretion was dampened in the absence of glutamine 72 hours after infection . Collectively, these data demonstrate that glutaminolysis plays a role in IL-1β production but does not contribute to control of Brucella infection in BMDMs.

GABA supplementation does not alter control of Brucella by macrophages

Glutamate is also the precursor of GABA, an inhibitory neurotransmitter with the potential to enhance antimicrobial defenses against intracellular bacteria . To investigate if the host GABAergic system controls intracellular B . melitensis , we treated infected BMDMs with GABA (100 μM) or bicuculline (BIC; 100 μM; GABA receptor antagonist) . No difference was observed between treatment groups at 48 hours. Our in vivo metabolomics data indicated that the GABA shunt was altered in mouse spleens at two weeks post-infection . B . abortus has been previously shown to encode two GABA transporters, bab1_1794 and bab2_0879 , with moderate and high GABA import rates, respectively, but these GABA transporters were not required for virulence . There are other genes within the Brucella genome that are annotated to encode putative GABA transporters, including B . melitensis BMEI0265. We therefore generated an isogenic BMEI0265 mutant ( B . melitensisΔbmei0265 ). However, we did not find B . melitensisΔbmei0265 to be attenuated in mouse spleens two weeks post-infection . These findings suggest that bmei0265 is unlikely to play a role in B . melitensis virulence.
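The volcano-style screen used in the Results above (a fold change threshold of 1.5 combined with a t-test at P < 0.05) reduces to a simple per-metabolite calculation; a minimal sketch is given below. The data frame layout, column names, and use of SciPy are assumptions for illustration, not the analysis actually run in MetaboAnalyst.

```python
# Sketch of a volcano-style screen: flag metabolites with |fold change| >= 1.5
# and t-test P < 0.05 between infected and naive tissues. Illustrative only.
import numpy as np
import pandas as pd
from scipy import stats

def volcano_table(infected: pd.DataFrame, naive: pd.DataFrame,
                  fc_threshold: float = 1.5, alpha: float = 0.05) -> pd.DataFrame:
    """Both inputs are samples-by-metabolites intensity tables with matching columns."""
    fc = infected.mean() / naive.mean()                      # fold change, infected vs naive
    pvals = pd.Series(
        {m: stats.ttest_ind(infected[m], naive[m]).pvalue for m in infected.columns}
    )
    out = pd.DataFrame({"fold_change": fc, "p_value": pvals})
    out["hit"] = (pvals < alpha) & ((fc >= fc_threshold) | (fc <= 1 / fc_threshold))
    out["direction"] = np.where(fc >= 1, "higher in infected", "lower in infected")
    return out.sort_values("p_value")

# Hypothetical example with three metabolites
rng = np.random.default_rng(1)
naive = pd.DataFrame({"lactate": rng.normal(10, 1, 5),
                      "myo-inositol": rng.normal(20, 2, 5),
                      "alanine": rng.normal(15, 2, 5)})
infected = naive * [1.8, 0.6, 1.0] + rng.normal(0, 0.5, (5, 3))
print(volcano_table(infected, naive))
```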
Over the past decade, immunometabolism studies have become key to understanding how metabolism of host cells impacts the outcome of infection . In the context of Brucella infection, studies have indicated that modulation of glycolysis can affect intracellular replication and inflammation , while enhanced glucose availability within M2 macrophages can promote Brucella replication . Here, we investigated the interface of host- Brucella interactions by first screening tissue metabolic fluctuations during the course of infection. We found that infection caused more consistent separation of metabolite profiles from spleen and liver relative to reproductive tracts from non-pregnant mice (Figs and ). This could be due to the absence of estrus synchronization before the experiment. In addition, while Brucella can infect the reproductive tract of both pregnant and non-pregnant mice , pregnancy can alter the immune response, which could in turn affect metabolite availability. Therefore, the metabolite profiles of reproductive tracts from pregnant mice infected with Brucella should be investigated in the future. Untargeted GC-MS showed a Brucella- driven change in tissue metabolism at 14 dpi, particularly in metabolites correlated with the TCA cycle. Mitochondrial changes in metabolites related to the TCA cycle can regulate the function and activation of immune cells . We have previously demonstrated the impact of itaconate, an intermediate metabolite from the TCA cycle, on mouse susceptibility to Brucella . In the present study, we found itaconate levels to be higher in spleens from infected mice . In line with these results, we suggest that TCA cycle-correlated metabolites have an impact on efficient immune responses against B . melitensis . Metabolic pathway analysis displayed a link between infection and several pathways such as glutaminolysis, the argininosuccinate shunt, glycolysis, and the GABA shunt. Among the metabolites showing key changes, glutamate levels were significantly increased in infected livers compared to uninfected controls. Glutamate plays a role in macrophage polarization, replenishment of the TCA cycle via glutaminolysis, and in the GABA shunt at the mitochondrial level . Glutaminolysis has been found to be critical in the metabolic reprogramming of M1-like human macrophages infected with M .
tuberculosis , demonstrating its importance in the proinflammatory response . Similarly, we found that glutaminolysis inhibition with telaglenastat impairs secretion of the pro-inflammatory IL-1β cytokine . It is established that IL-1 is crucial to control Brucella infection in mice . However, while GLS inhibition dampened IL-1β production, it did not affect the ability of BMDMs to control intracellular B . melitensis . Conversely, a study using a telaglenastat analog (BPTES) reported increased CFUs in a macrophage cell line infected with B . abortus . The divergent results could be due to differences between the two inhibitors, the types of macrophages, and the strains of Brucella . Furthermore, glutamate is also involved in the GABA shunt, a process responsible for producing and conserving the supply of gamma-aminobutyric acid. GABA is an inhibitory neurotransmitter recently found to impact the host immune system . Studies performed with M . tuberculosis , Salmonella Typhimurium, and Listeria monocytogenes demonstrated that GABAergic system activation by GABA or its receptor agonist enhances macrophage antimicrobial defense against these intracellular bacteria . However, our results suggested that the GABAergic system does not play a role in controlling B . melitensis intracellularly. The conversion of glutamate into GABA is catalyzed by the enzyme glutamate decarboxylase (GAD), which provides a pH homeostasis mechanism in some pathogenic bacteria, including L . monocytogenes . Several Brucella species have a functional GAD system; however, the system is lost in host-adapted pathogens such as B . melitensis , B . abortus , and B . suis . Nonetheless, Brucella has a GABA transporter with an undefined role in the metabolic utilization of GABA, which may play a role in the pathogenesis of Brucella infection . According to our results, the potential transport of GABA by BMEI0265 does not play a role in B . melitensis virulence under the conditions tested here. However, because Brucella encodes multiple GABA transporters, the effect of deleting a single transporter could be masked by the presence of transporters with redundant function. Therefore, more studies are needed to understand the mechanisms of GABA signaling on host immune defense against Brucella .

In conclusion, we show here that metabolite screening of spleens, livers, and reproductive tracts suggested a Brucella -driven change in tissue metabolism, with the most remarkable changes in host metabolism occurring at the peak of inflammation around two weeks after B . melitensis infection. Additionally, metabolite changes were related to intracellular pathways linked to mitochondrial mechanisms. Among these candidate pathways, glutaminolysis was demonstrated to play a role in IL-1β production but did not contribute to macrophage control of Brucella infection in vitro .

S1 Table Primers used in this study. (DOCX)
S2 Table Metabolite levels in spleens from naïve mice, and from spleens at 7, 14, and 28 days post-infection with B . melitensis . (CSV)
S3 Table Metabolite levels in livers from naïve mice, and from livers at 7, 14, and 28 days post-infection with B . melitensis . (CSV)
S4 Table Metabolite levels in reproductive tracts from naïve female mice, and from reproductive tracts at 7, 14, and 28 days post-infection with B . melitensis . (CSV)
S5 Table Metabolite levels in spleens from naïve mice, and from spleens at 7 days post-infection with B . melitensis . (CSV)
S6 Table Metabolite levels in livers from naïve mice, and from livers at 7 28 days post-infection with B . melitensis . (CSV)
S7 Table Metabolite levels in reproductive tracts from naïve female mice, and from reproductive tracts at 7 days post-infection with B . melitensis . (CSV)
Efficiency of Polyethylene Terephthalate Glycol Thermoplastic Material to Functional and Expansion Forces in Orthodontic Applications: An Experimental Study

Orthodontic appliance design has evolved significantly to accommodate patient preferences for better aesthetics during treatment. This progression led to the development of invisible or esthetic orthodontic options, such as ceramic brackets, lingual appliances, and removable clear thermoplastic appliances. Clear appliances have gained popularity due to their invisibility, simplicity, hygiene, comfort, and reduced impact on mastication . Removable clear appliances are typically manufactured either through thermoforming (a process that shapes thermoplastic sheets into three-dimensional (3D) forms using heat, vacuum, and pressure) or through 3D printing, which uses photo-polymerization of clear liquid resin . The thermoforming process remains the most widely used for producing clear appliances, either from pure thermoplastic sheets (composed of a single material) or blended sheets (composed of two or more materials) . Thermoplastics, synthetic polymers that melt when heated and harden upon cooling, offer a range of mechanical properties that depend on their molecular design and environmental conditions, such as temperature and humidity. These materials can be classified as noncrystalline (amorphous) such as polycarbonate (PC), polyurethane (PUR), and polyethylene terephthalate glycol (PETG), or crystalline, such as polypropylene (PP) and polyethylene (PE) . In dental applications, thermoplastics undergo rigorous safety testing and are evaluated for compliance with international standards, including American Society for Testing and Materials (ASTM) or International Organization for Standardization (ISO) certifications. Their use as an alternative to acrylic in dental devices continues to expand .

PETG is an amorphous polymer produced by the polycondensation of ethylene glycol with terephthalic acid. The addition of glycol enhances the processability of PET, lowers its crystallization temperature, and improves the material's mechanical properties, resulting in a versatile, colorless, and transparent substance that is widely used in packaging, medical containers, and electronics . PETG's desirable characteristics, such as high mechanical strength, good formability, fatigue and abrasion resistance, and dimensional stability in moist environments, make it particularly suitable for orthodontic applications. It is already used in manufacturing retainers , splints , and tooth aligners, though it has not yet been explored for functional or orthopedic appliances .

Functional orthodontic therapy aims to correct skeletal malocclusion by encouraging or redirecting the growth of skeletal structures in growing individuals. The twin block, a commonly used removable functional appliance, is versatile and easily modified to suit various treatment needs . One such modification, the clear twin block, integrates self-cured acrylic ramps with clear thermoplastic appliances to enhance aesthetics and improve patient compliance. Earlier studies have used materials like biocryl (pure polymethyl methacrylate [PMMA]) or PC in thicknesses of 1–1.5 mm to fabricate these clear twin block appliances . However, no studies have yet investigated the use of PETG thermoplastic for manufacturing twin block appliances, nor has its use been explored for fabricating clear expanders.
Given the unique properties of PETG, particularly its 2 mm thickness, this study aims to evaluate the mechanical properties of a modified twin block (MTB) appliance made from PETG compared to a conventional twin block (CTB) made from acrylic. Specifically, the study will assess the appliances' ability to withstand deformation under applied forces and during expansion, providing new insights into the effectiveness of clear PETG appliances.
2.1. Study Design This is an experimental study that was designed as a part of a randomized clinical trial, using two mechanical tests to analyze the compression strain that developed in the modified twin block (MTB) appliance with expander, in comparison with the CTB appliance with expander. Ethical approval was obtained from the ethics committee at the College of Dentistry, University of Baghdad (Reference No. 664 in 13.9.2022).
The main materials used in the fabrication of the appliances and samples for the study are summarized in . Two mechanical tests were conducted: a mechanical loading test using the three-point bending method and a compression test. The MTB appliance with an expander was designed using 2 mm thick biocompatible PETG thermoplastic material. This material was adapted to maxillary and mandibular casts individually using a pressure molding vacuum machine. After molding, the two clear appliances were trimmed, finished, and transferred onto an articulator with the help of a pre-existing working wax bite. Cold-cure acrylic was then used to fabricate the ramps on the thermoplastic sheets. Finally, the expansion screw was positioned in the midline of the maxillary cast, and the maxillary appliance was split midpalatally, as shown in .

3.1. Mechanical Loading by Three-Point Bending Test

To determine the ability of the thermoplastic appliance to withstand deformation under applied forces, a three-point bending test was used to investigate the response of MTB appliances under static loading compared with the CTB appliances. The experiment evaluated 10 samples for each group, including the following steps: Preparation of samples : In 2017, the standard ASTM (D790-17) outlined the preferred dimensions for various types of plastic materials. For plastic specimens, the optimal measurements are 12.7 mm in width, 3.2 mm in thickness, and 127 mm in length. To meet these specifications, a stainless-steel mold was created by using waterjet cutting to carve a 100 × 160 mm stainless-steel block into four individual pieces, allowing for the simultaneous production of four specimens. Each piece was designed to match the precise dimensions of the specimens . Ten specimens from the MTB appliance group were fabricated through the following steps: A 2 mm thick PETG thermoplastic sheet was heated in a thermoforming machine to 220°C until it became soft and pliable. Once the sheet reached the required temperature, it was swiftly placed over a stainless-steel block, and pressure was applied to mold the thermoplastic tightly to the block shape. After cooling and hardening, the molded sheet was removed from the block. The standard dimensions of the specimen were marked on the thermoformed sheet using a marker and ruler (or by tracing one of the prepared acrylic specimens). A carbide fissure bur was used to cut the thermoformed sheet, and the trimmed material was adapted into a stainless-steel mold. Cold-cure acrylic was then applied over the thermoformed sheet using the sprinkle method. Once the material had set, the specimen was carefully removed from the mold . The other 10 specimens belonged to the CTB appliance group and were fabricated entirely from cold-cure acrylic using the sprinkle method. The process began by applying a separating medium over all parts of the stainless-steel mold to facilitate easy removal of the specimens. Cold-cure acrylic was then dispensed directly into the mold using the sprinkle method. Once the acrylic had set, the specimens were carefully removed from the mold. After being prepared, the samples were ready for testing . Testing of the specimens : The specimens were subjected to a three-point bending test using the universal Instron testing machine equipped with a 100 kN load cell according to the standard protocol of ASTM (D790-17). The specimen was extended at a rate of 5 mm/min, and data were collected at a frequency rate of 100 Hz.
The sample was positioned on a stainless-steel stand that had a rectangular base and two vertical supports with a 100 mm span (apart from each other) and 30 mm curvature radius . Each specimen was loaded through the load cell, and the load–deflection curve was recorded within the built-in software of the machine; the specimen was deflected at a speed of 100 mm/min.

3.2. Compression Test

To estimate the firmness of the screw within the appliance, a computer-controlled electronic universal Instron testing machine (LARYEE, UE343000, China) with a high-accuracy load cell (±0.5% of full scale) and a sampling rate of up to 100 kHz was used. The device was equipped with a 100 kN capacity at full resolution and used to record the forces released by the expander within the upper part of the MTB appliance and compare them with the forces released by that of the CTB appliance. Ten appliances were evaluated for each group, including the following steps: Preparation of DIA-stone models: Ideal DIA-stone models with the same intercanine and intermolar distance were prepared similarly to the preparation of ideal study models by mixing 20 mL of water with 100 g of HIRO diastone (following the manufacturer's directions) in a rubber bowl with a plaster spatula for 30 s with stropping motions. Then, the bowl was placed on the vibrator to allow all bubbles to rise to the surface and break. A small amount of mixture was poured into the distal of the last molar side of a silicon mold under the vibrator, allowing for a gradual flow of the mixture within the teeth spaces until the arch was filled with the mixture. Then, with a plaster spatula, all the mixture was added within the plastic mold to make the base of the models, waiting for 10–12 min (loss of gloss of the mixture) for the complete setting of the DIA-stone model, and the procedure was repeated 20 times. Fabrication of Appliances : Following the basic design of the standard twin block appliance , 10 upper parts of the twin block appliance were fabricated on the DIA-stone models of each group. For the MTB group appliances, the preparation process in the dental laboratory followed these steps: A 2 mm thick PETG thermoplastic sheet was heated in a thermoforming machine to 220°C until it became soft and pliable. Once heated, the sheet was swiftly placed over a prepared DIA-stone model, and pressure was applied to mold the thermoplastic tightly to the contours of the teeth. After allowing the material to cool and harden, the molded sheet was carefully removed from the model. Excess material was trimmed using scissors, and the edges were smoothed and polished with fine sandpaper and polishing wheels to eliminate sharp edges. A mid-palatal screw was then positioned in the center of the PETG appliance, and cold-cure acrylic was applied using the sprinkle method around the screw's edges to securely integrate it within the appliance. The appliance was split midpalatally and polished . For the CTB appliance group, the preparation process in the dental laboratory involved the following steps: Adams' clasps were bent on the DIA-stone model to fit the upper permanent first molars and first premolars, serving as retentive components. A mid-palatal screw was centrally positioned on the model at the level of the second premolars. Cold-cure acrylic was then applied using the sprinkle method around the edges of the screw to securely integrate it within the appliance. The acrylic was extended to cover the occlusal surfaces of the upper posterior teeth, ensuring proper fit and stability .
Preparation of split digital model : According to the guidelines by Chaconas and Caputo and Oshagh et al. , the appliance on a split model was held by the clamps (specimen grips) of the Instron machine during the testing procedure. A crosscut straight fissure bur was used to create a line in the middle of the DIA-stone models, and with a grinding disc, the models were split into two halves. These halves were scanned with a 3D scanner (Dentsply Sirona inEos X5 scanner, Germany) in order to convert them to a digital file format (STL); these files were sent to a 3D printer (Phrozen Sonic Mighty 4K MSLA 3D printer, Taiwan) and printed in the form of 3D split resin digital models (Figures and ). Testing of the appliances : The universal Instron testing machine securely held the appliance and the split digital model using upper and lower clamps. The midline screw of each appliance was activated by a key, with each turn producing 0.25 mm of expansion. This compression force was then transmitted hydraulically to the jaw of the Instron machine, and the resulting force was recorded in Newtons by the software of the machine for each turn . This process continued until the screw reached its maximum separation capacity, or there was a decrease in the recorded force. This testing procedure adhered to the standard protocol for estimating the rigidity of the screw in a new appliance design or material .

3.3. Statistical Analysis

Statistical Package for Social Science Version 25.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis with statistical significance set at p < 0.05. All responses (Test I and Test II) were collected and saved as an Excel spreadsheet (Excel, Microsoft Office Professional Plus 2019, Washington, USA). The Shapiro–Wilk test was used to assess the normality of data distribution, while the independent samples t -test was used to compare the difference between groups.
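A minimal sketch of the statistical comparison just described (a Shapiro–Wilk normality check followed by an independent samples t-test on the two groups) is shown below. SciPy is used here in place of SPSS, and the load values are placeholders rather than the study's raw data.

```python
# Sketch of the statistical workflow described above (SPSS replaced by SciPy).
# The maximum-load values below are placeholders, not the study's raw data.
from scipy import stats

ctb_max_load = [150.2, 158.7, 162.3, 155.0, 168.1, 159.9, 151.4, 165.0, 160.3, 164.1]  # N, hypothetical
mtb_max_load = [139.5, 146.2, 150.8, 141.0, 155.3, 144.7, 136.9, 149.2, 143.5, 142.9]  # N, hypothetical

# Shapiro-Wilk test for normality of each group (p > 0.05 -> no evidence against normality)
for name, data in [("CTB", ctb_max_load), ("MTB", mtb_max_load)]:
    w, p = stats.shapiro(data)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# Independent samples t-test, significance set at p < 0.05
t, p = stats.ttest_ind(ctb_max_load, mtb_max_load)
print(f"t={t:.2f}, p={p:.4f}, significant={p < 0.05}")
```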
4.1. Mechanical Loading by Three-Point Bending Test

Ten samples for each group were tested by a three-point bending test. The descriptive statistics are shown in . For the samples of the CTB group, the mean value of the maximum load reached 159.5 N (SD 7.62), which is higher than that of the MTB group (145 N, SD 8.5). also shows that the data for both groups were normally distributed (Shapiro–Wilk test). The independent samples t -test revealed a statistically significant difference between the CTB and MTB groups ( p =0.001) .

4.2. Compression Test

Ten appliances for each group were tested by measuring the amount of load (compression forces) produced with each turn of the mid-line screw (each activation of the screw resulted in 0.25 mm of expansion), and the results of the test were described, for 30 turns, as shown in . A line chart, as clarified in , illustrated the mean value of the applied load (Newton) after each turn of expansion. During the initial phase of loading, the two lines (of the CTB and MTB groups) maintained the same pattern, with almost the same gradual increase in the amount of load up to turn 17 of screw activation. Then, the CTB group line showed a considerable increase in the amount of mean load, reaching a peak of 334.5 N (SD = 43.43) at turn 25 of screw activation, while the peak mean load for the MTB group was 252.6 N (SD = 82) at turn 23. After the peak of the mean value of each group, both lines gradually declined until reaching 0 N loads at turn 30 of screw activation.
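For context, the maximum loads reported above can be converted into flexural strength using the standard three-point bending relation σ = 3FL/(2bd²), with the 100 mm span and the 12.7 × 3.2 mm specimen cross-section given in the Methods; the screw activations can likewise be converted into millimetres of expansion at 0.25 mm per turn. The study itself reports loads and turns only, so the sketch below is an illustrative post-hoc calculation, not part of the original analysis.

```python
# Illustrative conversions from the reported results (not part of the original analysis).
def flexural_strength_mpa(max_load_n: float, span_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Three-point bending flexural strength, sigma = 3FL / (2 b d^2), in MPa (N/mm^2)."""
    return 3 * max_load_n * span_mm / (2 * width_mm * thickness_mm ** 2)

def expansion_mm(turns: int, mm_per_turn: float = 0.25) -> float:
    """Cumulative midline-screw expansion after a given number of screw activations (turns)."""
    return turns * mm_per_turn

# Specimen geometry from the Methods (ASTM D790-17 specimen, 100 mm support span)
SPAN, WIDTH, THICKNESS = 100.0, 12.7, 3.2  # mm

for group, mean_max_load in [("CTB", 159.5), ("MTB", 145.0)]:   # mean maximum loads from the Results, in N
    print(group, round(flexural_strength_mpa(mean_max_load, SPAN, WIDTH, THICKNESS), 1), "MPa")

print("Expansion at peak load, CTB (turn 25):", expansion_mm(25), "mm")
print("Expansion at peak load, MTB (turn 23):", expansion_mm(23), "mm")
```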
Generally, when designing an orthodontic appliance using thermoplastics, it is crucial to select a material that has efficient mechanical properties that can match the appliance needs and treatment requirements within the simulated intraoral environment . Numerous studies have demonstrated that PETG possesses excellent mechanical properties and has been approved for safe use in the dental field. It is commonly used for fabricating temporary bridges in areas requiring esthetic attention and for creating teeth bleaching trays . PETG thermoplastic foil splints with bonded wire cleats have also been applied in occlusal management. The cleats enable intermaxillary fixation with orthodontic elastics, guiding and maintaining the occlusion in centric relation . Additionally, within simulated oral environments, PETG provides stable forces to target teeth, making it suitable for use in tooth aligners . PETG thermoplastics have not yet been used in the fabrication of functional appliances, so they were chosen for the construction of the twin block appliance; for that reason, the current experimental study was conducted to investigate the ability of this thermoplastic material using two mechanical tests.

5.1. Mechanical Loading by Three-point Bending Test

The mechanical loading behavior of certain materials (specifically thermoplastics) is simply and easily tested experimentally by a three-point bending test using a flat material sample of defined size according to certain standards . Hence, a sample with specific dimensions according to the ASTM (D790-17) standard for thermoplastic (molding) materials was used in this study. For myofunctional appliances, the force magnitude is lower than for orthopedic appliances, to enhance patient compliance in wearing the appliance, and higher than that for removable orthodontic appliances, to produce enough force in their action. It has been found that each 1 mm of displacement of the dental base by a myofunctional appliance could produce 100 g (1 N) of force that stretches the muscles and may reach 500 g (5 N) depending on the severity of the malocclusion . The CTB group had a mean maximum load value of 159.5 N (SD 7.62), which was higher than the MTB group value of 145 N (SD of 8.5). The CTB appliances were constructed from PMMA, a material characterized by its high modulus of elasticity and rigidity, which contributes to its ability to resist higher loads before deformation. The notable stiffness and strength of acrylic are well documented in the orthodontic literature , making it a suitable material for appliances that require significant load-bearing capacity. In contrast, MTB appliances are constructed from PETG thermoplastics, a material known for its flexibility and lower modulus of elasticity compared to acrylic. This characteristic means that PETG deforms more readily under load, allowing it to absorb and dissipate energy rather than resist it rigidly. Accordingly, the MTB samples exhibited a lower mean maximum load in the bending test. These results are consistent with findings by Albertini et al. , which demonstrated lower force values for PETG thermoplastics compared to other thermoplastic materials. On the other hand, the PETG ability to evenly distribute stress across the surface of the appliance could likely enhance its long-term durability. As highlighted by Lombardo et al. and Elkholy et al.
, PETG has a certain degree of stiffness and excellent stress relaxation properties allowing it to gradually return to its original shape after deformation, which reduces the likelihood of permanent damage and increases the appliance lifespan in clinical settings. The ability of PETG to undergo more significant deformation before failing makes it less prone to breaking under sudden or high loads, which can be beneficial in clinical scenarios where the appliance may be exposed to variable forces. Although there were statistically significant differences between the two groups, the load sustained by the MTB samples was still much greater than the required range of functional force magnitudes for myofunctional appliances, allowing the MTB to withstand these forces without deflection. In the current test, the balance between strength and flexibility of PETG was in line with other three-point bending studies . This gives PETG the advantage, when fabricated and designed appropriately, of a lower risk of appliance breakage under masticatory forces by reducing localized pressure points.

5.2. Compression Test

The design of the twin block appliance in this study included a screw (orthodontic expander) to provide functional dentoalveolar expansion to correct the positional crossbite which could occur after mandibular protrusion . Based on the compression test results and the line chart presented in , the initial phase of increasing load reveals that the distribution of expansion force was even across both appliance structures. The difference in peak values reflects that the CTB appliance might provide greater resistance against the applied forces, with higher stress maintenance within the acrylic. This in turn may lead to an increased force requirement for expansion and expose the appliance to the risk of fracture. The lower peak value of the MTB may reveal a better force distribution and lower resistance to expansion and consequently reduce the risk of appliance fracture. The gradual decline in the applied force for both appliances indicates that as the expansion progresses, the resistance is reduced. This indicates that, in spite of the resiliency of cold-cure acrylic (for the CTB group) being higher than that of PETG thermoplastics (for the MTB group), both groups have the ability to withstand the optimal delivered dentoalveolar expansion force (8.9–17.8 N) multiple times, with an expansion rate of 0.5–1 mm per week. The amount of force delivered per expansion was reported by several authors . It was evident from this test that PETG has high mechanical strength, which allows it to withstand applied loads for a significant period of time before undergoing plastic deformation. This could be attributed to the superior stress relaxation property of PETG in comparison with that of the acrylic, which is essential for ensuring uniform dentoalveolar movement and minimizing stress concentration . In both tests, the PETG thermoplastics demonstrated promising thermal bonding with acrylic, as no separation occurred between the two materials under the applied load. Moreover, the PETG material exhibited high flexibility with minimal deflection. The peak mean load values for both tests were 252.6 N and 145 N, respectively, which exceeded the required level of expansion and functional load during treatment. These findings suggest that glycol, which is present in the chemical composition of PETG thermoplastic material, plays an important role in enhancing the material robustness and mechanical properties .
Additionally, PETG thermoplastics have a significant stress relaxation characteristic, meaning that the material slowly converts elastic strain into plastic strain even below the yield strength level. This behavior is closely linked to the viscoelastic nature of PETG . During the compression test, the MTB group showed a gradual reduction in the applied mean load similar to that of the CTB group, indicating this property. However, it was not as apparent in the three-point bending test due to the insufficient time for plastic deformation of the specimens during measurement. For optimal stress distribution and to maintain the viscoelastic properties of a thermoplastic material, it is recommended to increase its thickness . A thinner material leads to lower yield and tensile strength, making it more prone to deformation and fractures . To ensure that the PETG material can withstand the functional treatment forces without breaking, maintain its elastic and viscous behavior during plastic deformation, and allow stress distribution over a larger area, a thickness of 2 mm was chosen, as compared with previous studies that used other types of thermoplastics with a 1–1.5 mm thickness . The findings of this study are concordant with studies on the mechanical properties of PETG in orthodontic appliances, supporting that the balance of strength and flexibility of PETG reinforces its suitability for clinical use in clear aligners and other orthodontic appliances. This study provides insight into a novel appliance that could function comparably to the conventional twin block, with greater potential for patient acceptance due to its invisibility, ease of insertion/removal, and flexibility. However, two mechanical tests alone may not be sufficient to understand its intraoral behavior as a functional appliance. Therefore, a randomized clinical trial is required to test its effectiveness.
Both the CTB and MTB groups display similar patterns of resistance when subjected to the expansion load delivered by the orthodontic expander (screw). This load is several times greater than the ideal dentoalveolar expansion force. The MTB can withstand the required functional load without deformation. 6.1. Limitations of the Study A larger sample size would provide more robust statistical analysis and enhance the reliability and generalizability of the results. Variations in the environmental conditions of the oral cavity, such as temperature and humidity, could influence the mechanical properties of the thermoplastic material. Conducting tests under standardized conditions may mitigate this limitation. Moreover, since the study was performed in an experimental setting that provides valuable insights into only some of the mechanical properties of PETG, the findings may differ in the oral cavity, where the material is exposed to masticatory forces and various physical and chemical factors. The short evaluation period of the study might not capture changes or degradation that could occur over extended periods of clinical use. The direct clinical relevance and performance of this material in orthodontic treatment therefore require further investigation through in vivo studies or clinical trials. 6.2. Clinical Implications The study found that PETG thermoplastic material displayed strong thermal bonding with acrylic, indicating its potential for clinical use. It also showed adequate stress relaxation and a balance between flexibility and rigidity. The peak mean load values exceeded the levels required for expansion and functional treatment, indicating its suitability for this orthodontic application. The robustness, flexibility, and marked stress relaxation of PETG are properties essential for orthodontic appliances. Increasing material thickness is recommended for optimal stress distribution and for maintaining viscoelastic properties. These findings suggest that PETG thermoplastic material could be a valuable choice for orthodontic treatment, offering durability and reliability.
|
Does NIH funding differ between medical specialties? A longitudinal analysis of NIH grant data by specialty and type of grant, 2011–2020 | f7a743dd-b17f-467c-858f-4cca455cfa27 | 9809243 | Internal Medicine[mh] | The US National Institutes of Health (NIH) is part of the US Department of Health and Human Services and is the primary agency responsible for public health and biomedical research. The NIH comprises 27 separate institutes and centres covering several biomedical disciplines and specialties in medicine. Approximately 80% of NIH funding goes towards funding extramural research in the form of research grants. (Supplementary data: 10.1136/bmjopen-2021-058191.supp1.) NIH funding is used for the advancement of research across many fields of basic science and clinical medicine. In fiscal year 2020, the total NIH funding was $40.3 billion. NIH funding is spread across many medical specialties, and the number of physicians in these medical specialties varies. Assuming that the number of active physicians can be viewed as a rough approximation of the demand for a specialty and a proxy for the disease burden (diseases treated by that specialty) in society, we hypothesised that the amount of funding for each specialty would be proportional to the number of active physicians in the specialty. We investigated whether NIH funding metrics (number of grants, number of active physicians per grant in that specialty, total dollar amount of grants, total dollar amount of grants per active physician in that specialty and mean dollar amount of each grant type) vary between specialties and evaluated the trends in these NIH funding metrics from 2011 to 2020.
Patient and public involvement statement Neither patients nor members of the public were involved in any way in this research. It was not appropriate or possible to involve patients or the public in the design, conduct, reporting or dissemination plans of our research. Study design and data source We carried out a retrospective analysis of the NIH’s RePORTER (Research Portfolio Online Reporting Tools Expenditures and Results) database ( https://reporter.nih.gov/ ), which has data on grants that were awarded by the NIH. Unlike the federal Query/View/Report database, the RePORTER database shows data only for grants that were awarded and therefore does not allow for analysis of data involving grant applications that did not result in the awarding of a grant. As a result, we could not evaluate the success rates of grant applications by specialty. The RePORTER database was searched for grants awarded between 2011 and 2020, which were classified, based on the department name on the grant application, into one of the following 19 specialties: anaesthesiology, dermatology, emergency medicine, family medicine, internal medicine, neurology, neurosurgery, obstetrics and gynaecology, ophthalmology, orthopaedic surgery, otolaryngology, pathology, paediatrics, physical medicine and rehabilitation, plastic surgery, psychiatry, radiation-diagnostic/oncology, surgery, and urology. These specialties were chosen because they were the medical specialties in clinical medicine available in the RePORTER database. Grants that were not classified as one of these specialties were excluded. We included all grants for each specialty from the RePORTER database. The grant types appearing in the data set are listed in . Number of active physicians The number of active physicians was obtained from the Physician Specialty Data Report by the Association of American Medical Colleges. The number of active physicians included those from all training pathways, including doctor of (allopathic) medicine (MD), doctor of osteopathic medicine (DO) and international medical graduates. Because the number of active physicians per specialty was available only for certain years during the period 2010–2020, linear interpolation was used to estimate the number of active physicians in the unlisted years (2011, 2012, 2014, 2016, 2018 and 2020). These data are shown in . Number of NIH grants by specialty We evaluated all grants awarded over the 10-year period to identify the most frequently awarded grant types. The 10 most frequently awarded grant types were R01, R03, R21, F32, T32, K01, K08, K23, U01 and P30. The titles and descriptions of these grant types appear in . Plots of the number of grants by specialty and the per cent change in the number of grants from 2011 by specialty were created. We systematically evaluated NIH grants awarded at critical periods in the academic career pipeline, including training grants (predoctoral T32 and postdoctoral F32), career development grants (K01, K08 and K23), and grants typically awarded in the later/advanced career stages, including the R01, R03, R21, P30 and U01 grants. Number of active physicians in each specialty per grant To evaluate how many active physicians existed per grant type, the total number of active physicians was divided by the total number of each grant type for each year for each specialty. This metric gauges how rare it is for a physician in each specialty to hold a grant of a given type.
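A minimal sketch of the interpolation step described above is given below; the specialty counts and report years are made-up placeholder values, not figures from the AAMC report.

```python
import numpy as np

# Years for which the Physician Specialty Data Report provides counts
# (placeholder values for a single hypothetical specialty).
report_years = np.array([2010, 2013, 2015, 2017, 2019])
report_counts = np.array([9500, 9900, 10200, 10500, 10800])

# Years missing from the reports, estimated by linear interpolation.
# Note: np.interp holds values constant outside the report range rather
# than extrapolating, so years beyond the last report keep its count.
missing_years = np.array([2011, 2012, 2014, 2016, 2018, 2020])
estimated = np.interp(missing_years, report_years, report_counts)

for year, n in zip(missing_years, estimated):
    print(f"{year}: ~{n:.0f} active physicians")
```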
Total dollar amount of NIH grants awarded in each specialty To evaluate whether differences in the number of grants resulted in differences in the dollar amount of funding for each specialty, we calculated the total dollar amount of funding by specialty from 2011 to 2020 after adjusting for inflation. The total dollar amount of funding was calculated for each specialty for each grant type during the study period. The annual funding each year from 2012 through 2020 was converted to year 2011 dollars, using the gross domestic product price index for the relevant years. To evaluate changes in funding over the time period studied, for each year after 2011, we calculated the per cent change in the dollar amount of funding after adjusting for inflation (compared with 2011) by specialty. Dollar amount of grants per active physician Because the total dollar amount of grants may be affected by the number of researchers in that specialty and therefore the number of active physicians, we calculated the number of dollars of funding per active physician to adjust for the differing sizes of the medical specialties. We divided the dollar amount of funding for each specialty by the number of active physicians in that specialty to calculate the dollar amount of grants per active physician. Mean dollar amount per grant for each specialty by grant type The dollar amounts vary by grant type, with smaller grants typically awarded to early-stage investigators and larger grants awarded to more seasoned investigators. We hypothesised that there should be no differences between specialties when the mean dollar amount per grant for a given grant type was evaluated. To test this hypothesis, we calculated the mean and SD of the inflation-adjusted dollar amount per grant by specialty for each grant type for 2011–2020. All analyses were performed in Excel V.2107 (Microsoft, Redmond, Washington, USA) and R V.4.1.2 (R Foundation for Statistical Computing, Vienna, Austria). Qualitative variables were compared between specialties using χ² tests, while quantitative variables were compared between specialties using t-tests with unequal variances. All test statistics were two-sided. To control for false-positive findings due to multiple comparisons between specialties, we used the Bonferroni-adjusted type I error rate of $0.05/\binom{19}{2} = 0.0002924$, so that p values less than this Bonferroni-adjusted type I error rate were considered statistically significant.
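The two adjustments described above can be sketched as follows; the GDP price index values and funding amounts are placeholder numbers rather than the published figures, whereas the Bonferroni threshold follows directly from the 19 specialties compared pairwise.

```python
from math import comb

# Convert nominal funding to year-2011 dollars with a GDP price index
# (index values here are placeholders, not the published deflators).
gdp_price_index = {2011: 100.0, 2012: 101.8, 2013: 103.5}  # 2011 = 100
nominal_funding = {2011: 1_000_000, 2012: 1_050_000, 2013: 1_100_000}

real_funding_2011_dollars = {
    year: amount * gdp_price_index[2011] / gdp_price_index[year]
    for year, amount in nominal_funding.items()
}

# Bonferroni-adjusted type I error rate for all pairwise specialty comparisons.
n_specialties = 19
n_pairs = comb(n_specialties, 2)      # 171 pairwise comparisons
alpha_adjusted = 0.05 / n_pairs       # ~0.0002924

print(real_funding_2011_dollars)
print(n_pairs, round(alpha_adjusted, 7))
```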
Number of grants From 2011 through 2020, there were 184 382 grants awarded by the NIH in the specialties considered. Internal medicine/medicine (72 205, 37.2%), psychiatry (19 029, 10.3%), paediatrics (17 422, 9.4%) and pathology (14 946, 8.1%) were the specialties that received the most NIH grants. In comparison, plastic surgery (16, 0.009%), physical medicine and rehabilitation (1124, 0.6%), urology (1474, 0.8%), and emergency medicine (1258, 0.7%) were the specialties that received the fewest NIH grants. Internal medicine/medicine received the greatest number of grants, in aggregate over the period from 2011 through 2020, followed by psychiatry, paediatrics, pathology and neurology. Internal medicine/medicine consistently received the greatest number of grants during the study. (Supplementary data: 10.1136/bmjopen-2021-058191.supp2.) The average percentage change in the number of grants by specialty from 2012 to 2020 compared with the initial year (2011) was highest for emergency medicine (40.8%), neurosurgery (34.2%) and orthopaedics (32.2%). The average percentage change was lowest for plastic surgery (−51.9%) and otolaryngology (−36.8%). Supplementary data (10.1136/bmjopen-2021-058191.supp3) shows the number of the most commonly awarded grant types for each specialty and shows that R01 and R21 were the most awarded grant types. The number of the most awarded NIH training grants (F32 and T32), NIH career development grants (K01, K08 and K23) and NIH advanced career grants (R01, R03, R21, P30 and U01) is shown for each year (Supplementary data: 10.1136/bmjopen-2021-058191.supp14).
A U01 grant was awarded for every 204–295 internal medicine/medicine physicians, for every 174–391 neurologists, for every 9629–19 374 orthopaedists, and for every 7420–11 820 family medicine physicians during the study period 2011 to 2020. Dollar amount of grants The total dollar amount of funding awarded from 2011 to 2020 was US$83 342 MM (US$1 MM is US$1 million). Of this total, the specialties receiving the largest total amount of funding were internal medicine/medicine (US$36 023 MM, 43.2%), psychiatry (US$8268 MM, 9.9%) and paediatrics (US$7897 MM, 9.5%). The specialties receiving the least funding were plastic surgery (US$4.6 MM, <0.1%), physical medicine and rehabilitation (US$358 MM, 0.4%), and emergency medicine (US$550 MM, 0.7%) . Internal medicine/medicine was the most funded specialty after adjusting for inflation ; however, pathology and neurology were better funded after adjusting for number of active physicians . Of the specialties considered, emergency medicine had the largest average per cent increase in the amount of funding compared with the baseline year of 2011 . The dollar amount of funding for each specialty for the NIH training grants (F32 and T32) from 2011 to 2020 is shown in . Emergency medicine, family medicine and plastic surgery had the least amount of funding for these NIH training grants. For the NIH career development grants, we found that emergency medicine and family medicine were also among the least funded . Finally, we found that most of the more advanced career NIH grants were in internal medicine/medicine . Dollar amount of grants per active physician Pathology ($47.8 K/year) and neurology ($44.3 K/year) had the highest amounts of funding per physician over 2011–2020 and in each of the 10 years studied . Both internal medicine/medicine and neurology were among the highest funded per active physician for many of the grant types studied. For instance, for NIH T32 training grants, internal medicine/medicine, neurology, pathology and psychiatry had the highest amounts of funding per physician . Likewise, internal medicine/medicine, neurology, neurosurgery and pathology had the highest amounts of F32 funding per physician . Internal medicine/medicine, neurology, pathology and psychiatry had the highest amounts of K01 and K08 funding per physician . Internal medicine/medicine, neurology, paediatrics and psychiatry had the highest amounts of K23 funding per active physician . Internal medicine/medicine, neurology, neurosurgery and psychiatry had the highest amounts of R01 and U01 funding per physician . Mean dollar amount per grant for each specialty by grant type We compared the mean dollar amount of each grant for each grant type between specialties to assess whether the amount of funding varies between specialties. show the p values from these comparisons. We found that the mean dollar amount of the NIH training grants varied between specialties . Pathology, psychiatry and neurology had significantly different F32 funding from physical medicine and rehabilitation (p<0.0002924), and urology, ophthalmology, dermatology, surgery, orthopaedics and otolaryngology had significantly different T32 funding from pathology (p<0.0002924). The mean dollar amount of the NIH career development grants varied between specialties. A significant variation in mean K01 funding , mean K08 funding and K23 funding was noted. 
For example, the mean K01 funding for dermatology was statistically significantly different from that for physical medicine and rehabilitation, psychiatry, neurology, urology, anaesthesiology, obstetrics and gynaecology, radiology, family medicine, pathology, paediatrics, otolaryngology, and internal medicine/medicine. The mean dollar amount of the NIH advanced career grants varied between specialties. Psychiatry had significantly different R01 funding from all other specialties, except plastic surgery, family medicine and emergency medicine (p<0.0002924). (Supplementary data: 10.1136/bmjopen-2021-058191.supp4–.supp13.)
The data show that the number of NIH grants awarded over the investigated period varied substantially across specialties. Internal medicine/medicine consistently received the greatest number of grants, followed by psychiatry, paediatrics, pathology and neurology. After adjusting for the number of active physicians in each specialty, we found that neurology, internal medicine/medicine and pathology were the specialties with the greatest number of grants per active physician, while emergency medicine, family medicine and plastic surgery were the fields with the fewest grants per active physician, and this pattern was consistent across training grants, career development grants and advanced career grants. We found that internal medicine/medicine had the greatest dollar amount of funding of all medical specialties. After adjusting the dollar amount of funding by the number of active physicians in each specialty, we found that pathology, neurology, internal medicine/medicine and psychiatry were the specialties with the highest levels of funding per active physician, while plastic surgery, family medicine, emergency medicine and anaesthesiology were the specialties with the lowest levels of funding per active physician. When we analysed the mean amount of funding for each grant type, we found significant differences in funding amount between specialties. Our results have tremendous clinical and biomedical ramifications. We show that the number of grants, the total dollar amount of funding, the funding adjusted for the number of physicians, and the mean funding amount for each grant type all differ between specialties. It is unclear why these differences exist. One speculation is that some of the better funded specialties (internal medicine/medicine, neurology, pathology) are more closely aligned with the NIH mission than other specialties; however, the NIH’s mission is ‘To seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability’ ( https://www.nih.gov/about-nih/what-we-do/nih-almanac/about-nih ), which is largely applicable to all medical specialties. Another consideration is that the number of physicians in each specialty, or alternatively the number or dollar amount of grants awarded to each specialty, may not reflect patients’ need for that specialty in society. Another speculation is that the better funded specialties are intrinsically better than the less funded specialties at conducting research and at maintaining a pipeline of training grants and career development grants, and that this has resulted in the discrepancies that we have noted. One may speculate that a specialty that is better funded at the training and career development levels may, as a result, attain better funded status at the more advanced research career levels. The differences in research funding between specialties have likely already had significant consequences for future progress in research. It has been shown that the proportion of individuals with MD/PhD degrees who spend at least 50% of their time on research differs markedly by specialty.
The three specialties that were most highly funded on a per-physician basis in our study (pathology, neurology and internal medicine/medicine) have been reported to have higher proportions of individuals spending at least 50% of their time on research, while family medicine and emergency medicine (which were among the lowest funded in our study) had much lower proportions of faculty with more than 50% time dedicated to research. Whether specialty choice is a driver of subsequent research focus in physicians’ careers is unknown; however, it is easy to speculate that greater availability of NIH funding in certain specialties may facilitate research-focused careers in those specialties. Internal medicine, pathology, paediatrics and neurology together accounted for approximately 60% of the residency specialties pursued by Medical Scientist Training Program (MSTP) MD/PhD graduates. Three of these were the three specialties with the highest amount of NIH funding per active physician. The temporal trend is difficult to assess, and it is unclear which is the cause and which is the effect. MD/PhD programme graduates were more likely to choose specialties like pathology and less likely to choose emergency medicine or family medicine. MSTP graduates may be more attracted to these better funded specialties because of the greater NIH funding per active physician, or alternatively MSTP graduates in these better funded specialties were able to acquire more NIH funding per active physician. However, the distribution of specialties pursued by MSTP graduates has changed over time and many specialties now have substantial representation, which means a larger proportion of these MSTP graduates are in specialties that are less well funded. The presence of substantial numbers of MDs/PhDs in specialties with lower levels of funding (eg, 12.9% of MDs/PhDs in paediatrics, 7.0% in surgical specialties and 4.2% in radiation-diagnostic/oncology) leaves the possibility of a mismatch between the supply of such well-trained physician scientists and the amount of funding available to them, which could lead to increased competition for research grants in these specialties and increased attrition, ultimately further exacerbating the relatively lower levels of research in these specialties. This problem is not unique to MDs/PhDs and physicians, and may also occur for nurses, nurse practitioners, public health scientists, social workers, engineers, pharmacists, PhDs and other biomedical research professionals who do research in one of the less funded specialties. The importance of having a diverse biomedical workforce has been stressed. Diversity in race/ethnicity and gender has been at the forefront of these discussions because of the tremendous health disparities that exist across races/ethnicities and genders. However, this research shows that there are also disparities in funding based on physician specialty. The current status quo makes some specialties (the better NIH-funded specialties) more attractive to physician researchers. However, the inequity in NIH funding may create significant societal problems. For example, one of the starkest healthcare inequalities in the USA is the difference between black and white maternal mortality: the mortality rate for non-Hispanic black women was 3.55 times that of non-Hispanic white women. Here, our data show that obstetrics and gynaecology was one of the least well funded specialties by the NIH.
Further research is required to understand the aetiology of this disparity so that NIH research dollars are appropriately distributed among all specialties. This study has a few limitations. We assumed that the size of the active physician workforce was a proxy for the relative amount of clinical/public health need for a given medical specialty. However, deviations from proportionality may arise if the diseases treated by certain specialties result in greater morbidity/mortality or cost to society, and if NIH funding depends on morbidity/mortality or cost to society. Many researchers funded by the NIH are not physicians, including PhDs and individuals with degrees in public health, nursing, social work, pharmacy or engineering. These non-MD researchers may be more likely to be hired in some specialties, and as a result change the funding landscape of those specialties. The categorisation of grants into specialties was done by the NIH. The NIH did not provide the training backgrounds of the principal investigators (PIs) and also did not provide information on whether the PIs were clinicians, and therefore we could not determine whether the categorisation reflected the specialty in which these PIs were clinically practising or the clinical department with which the PI was affiliated. The analyses in this study were based only on grants that were awarded and did not include any data on grant applications that were rejected, so differences in application success rates could not be assessed. Grant application data are not included in the RePORTER database, can be accessed only through an internal database at the NIH, and were unavailable for this study. Finally, the study was limited to the PI only and did not investigate the specialties of the coinvestigators, since these data were not available in the RePORTER database. In conclusion, the number of NIH grants, total dollar amount of funding, dollar amount of funding per active physician and mean funding amount per grant (by grant type) vary by specialty. This may affect research progress and the careers of scientists, and may affect patient outcomes in specialties that are less well funded. Further research is required to understand why this discrepancy exists.
|
Multimodal Nature of the Single-cell Primate Brain Atlas: Morphology, Transcriptome, Electrophysiology, and Connectivity | 23632a68-f205-4d3d-931e-de6909cd6af8 | 11003949 | Physiology[mh] | The mammalian neocortex is responsible for higher cognitive function and fine motor skills. The neocortex fulfills these functions through complicated networks of diverse neurons. Studies using rodents have established a basic framework of the neocortex, censused the transcriptomes of all the major cell types, and linked them to their physiological properties and principles of connection. Despite its augmented cognitive function, the primate neocortex shares the basic neuronal program with rodents. Accumulating evidence reveals divergence between rodents and primates. However, the distinctions are subtle. A multimodal census of the neurons of the primate neocortex is therefore necessary to pinpoint the evolutionary changes that underlie the augmented capacities of the primate neocortex. Primates exhibit complex brain structures that augment cognitive function. However, the volume and number of neocortical neurons increased rapidly compared with subcortical structures during the evolutionary expansion of the neocortex. While the general principles of cortical development and basic architecture are conserved, studies have shown differences in the cellular composition of the human cortex. These differences include the expansion of superficial cortical layers during mammalian evolution, which may involve rare cell types and novel cellular interactions contributing to the complexity of primate brain function. Notably, neurons such as von Economo and rosehip neurons, which have unique morphological features, are primate-specific and do not exist in mice. These neurons may be involved in various cognitive processes, including facilitating rapid information transmission across different brain regions and promoting the integration of sensory, emotional, and mental information. The development of novel research technologies and massive high-throughput studies provides valuable resources for understanding the foundation of the augmented cognitive capacities of the primate brain. First, it is crucial to systematically investigate the cell type composition within primate cortical areas. Since Frederick Sanger invented Sanger sequencing in 1977, allowing the genetic code to be read for the first time, it has taken several decades of significant effort to advance sequencing technology. Despite its cost, Sanger sequencing remained the conventional technology used in routine research until next-generation sequencing methods were announced in 2005, followed by the invention of the single-cell RNA-seq (scRNA-seq) method. However, because scRNA-seq is of limited use for frozen tissue, in which cell membranes are ruptured during freezing, an essential complementary technology, single-nucleus RNA sequencing (snRNA-seq), was developed, in which single nuclei are isolated and their RNA is sequenced. This approach initiated a new chapter for investigating the cell type composition of primate cortical areas.
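As a rough sketch of how such sc/snRNA-seq data are typically turned into a cell-type taxonomy, a standard Scanpy workflow is shown below; the input file name and parameter values are illustrative placeholders, not the pipeline used in any particular study cited here.

```python
import scanpy as sc

# Hypothetical snRNA-seq count matrix (cells x genes) from a cortical sample.
adata = sc.read_h5ad("primate_cortex_counts.h5ad")

# Basic quality filtering and normalization.
sc.pp.filter_cells(adata, min_genes=500)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction and graph-based clustering.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0, key_added="cluster")

# Marker genes per cluster, used to name subclasses (e.g., SST, PVALB, RORB).
sc.tl.rank_genes_groups(adata, groupby="cluster", method="wilcoxon")
print(adata.obs["cluster"].value_counts())
```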
Previous neuronal classification was based on morphology, electrophysiology, or specific molecular markers, whereas sc/snRNA-seq provides high-throughput analysis and single-cell resolution for the robust classification of cell types and the generation of a transcriptomic cellular atlas. By comparing cell type composition across species, primate-specific cell types and proportions can be evaluated to help explain the more complex brain functions of primates. After generating a transcriptomic reference cellular atlas, thorough gene expression analysis might improve understanding of cell differentiation trajectories by identifying enriched pathways and genes, which could serve as candidate targets for interventions for neurodevelopmental disorders. When comparing physiological and pathological states, sc/snRNA-seq helps identify changes in the composition and proportions of cell types together with related pathogenic signaling pathways or genes, which can not only reveal the molecular mechanisms of pathological processes but also provide promising targets for therapeutic protection and intervention. However, because of the multiple dimensions of the neuron, which is the basic unit of the nervous system, integrating the wealth of transcriptomic data with well-established morphological and electrophysiological data is still required. Since Neher and Sakmann invented patch-clamp technology to study ionic currents at the single-cell level, this method has become the standard for investigating electrophysiology and morphology in single cells, especially neurons. Genetically-labeled cells facilitate the study of the morphology and electrophysiology of neurons. However, the application of these techniques is limited by the availability of known cell type-specific markers and the operational feasibility of primate experiments. Recently, several groups have developed and optimized Patch-seq, a multimodal method that describes electrophysiological, transcriptomic, and morphological profiles in single neurons of rodent and adult human and non-human primate brain slices. By combining this technology with innovative data analytical tools, neuroscientists can map Patch-seq data to the transcriptomic reference atlas to assign morphological and electrophysiological annotations, thereby enriching the transcriptomic cellular atlas and improving the comprehensive understanding of primate cortical areas during physiological or pathological states. After generating a transcriptomic cellular atlas with electrophysiological and morphological annotations, the next step is to analyze the connectivity principles within cortical areas. However, the currently popular technologies for identifying neuronal types typically provide limited morphological data and rely heavily on genetic manipulation, which is challenging in primate experiments. To decipher the principles of local connectivity, high-throughput technology and robust cell classification standards are critical. Simultaneous multiple whole-cell patch-clamp recordings, such as dual, triple, and quadruple recordings, have proven invaluable in facilitating the study of connectivity between neurons. The number of testable potential connections increases markedly with the number of simultaneously patch-clamped neurons. Multicell patch-clamp setups with up to 12 simultaneously recorded neurons were achieved before 2011.
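As a simple illustration of why recording more neurons simultaneously pays off so steeply for connectivity mapping, the number of directed connections that can be probed grows roughly quadratically with the number of patched cells; the counts below are elementary combinatorics, not figures reported in the cited studies.

```latex
% Ordered pairs (putative presynaptic -> postsynaptic neuron) testable
% among n simultaneously recorded neurons:
\[
N_{\text{connections}}(n) = n(n-1), \qquad
N(2) = 2,\quad N(4) = 12,\quad N(8) = 56,\quad N(12) = 132 .
\]
```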
The stable simultaneous octuple patch-clamp recording technology achieved superior multicell patch-clamp recording results . Because it provided highly detailed morphological data from neurons for neuronal type identification and offered high-throughput evaluation of potential connections, it is optimally suited for primate research on connectivity and addresses concerns regarding the considerable workload in primate studies and the scarcity of primate tissues. Here, we review the advances in primate neuroscience from the applications of the above three advanced technologies and discuss the potential of integrating the wealth of datasets obtained using those technologies to generate a primate brain atlas with multiple dimensions (describing transcriptomic, electrophysiological, and morphological profiles, as well as the principles of local connectivity) for comprehensively understanding the functional mechanisms of primate cortical areas. While sc/snRNA-seq can be used to generate a transcriptomic reference atlas of primate cortical areas with transcriptome-based classification, Patch-seq can provide morphological and electrophysiological annotations for the above transcriptomic reference atlas. In addition, the multicell patch-clamp measures the strength of monosynaptic connections between cells. These physiological properties can be mapped to the transcriptomic reference atlas, depicting a multimodal atlas of the primate brain, and facilitating the advancement of our knowledge in neuroscience. Single-cell or single-nucleus RNA-seq technologies, which have high throughput and robust resolution for cell type classification with gene expression analysis during both physiological and pathological states, have provided an excellent opportunity for the in-depth exploration of the primate brain (Fig. B−E). The Taxonomy of Transcriptomic Cell Types Accumulating research on the rodent and primate cortex at single-cell resolution suggested that the cell types are largely conserved across species. The neocortical brain cells of primates are commonly analyzed to generate a taxonomy of cell types according to transcriptomic similarity. Brain cells are grouped into neuronal cells and non-neuronal cells. Neuronal cells are usually divided into glutamatergic excitatory neurons and GABAergic inhibitory neurons . Each class can be further divided into multiple subclasses (Table ). In the primate neocortex, such as the human middle temporal gyrus and macaque primary visual cortex, the excitatory neurons have been categorized into different subclasses based on the laminar distribution, transcriptome type analysis, and the transcriptomic homology to well-established datasets. These subclasses include intratelencephalic cell subclasses (L2/3 IT, L4 IT/IT, L5 IT, L6 IT, and L6 IT Car3), the L5 extratelencephalic cell subclass (L5 ET), the L6 corticothalamic cell subclass (L6 CT), the near-projecting cell subclass (L5/6 NP), and the L6b cell subclass . Alternatively, in a different system, based on the expression of marker genes, excitatory neurons are also divided into four subclasses: CUX2 -expressing cells (or LINC00507 , or HPCAL , combined with the application of the marker NXPH4 to distinguish upper neurons from L6b neurons) that are mainly located in the upper layers, RORB -expressing cells (enriched in layer 4 but can be found across all layers), FEZF2 -expressing cells (located in deep layers), and THEMIS -expressing cells (located in deep layers). 
The GABAergic inhibitory class contains four main subclasses ( LAMP5 , VIP , SST , and PVALB -expressing), which essentially correspond to their developmental origin in the medial ganglionic eminence (MGE: PVALB and SST subclasses) or caudal ganglionic eminence (CGE: LAMP5 and VIP subclasses) . There are additional subclasses such as the PAX6 , LAMP5 LHX6 , and PAX6 ADARB2 (SNCG) subclasses originating from the CGE, and the PVALB UNC5B (Chandelier) and SST CHODL subclasses originating from the MGE. Non-neuronal brain cells are grouped into six subclasses: astrocytes, oligodendrocyte precursor cells, oligodendrocytes, microglia/perivascular macrophages, endothelial cells, and vascular leptomeningeal cells . These neuronal and non-neuronal subclasses are subdivided into cell types based on additional marker genes. Cross-Species Transcriptomic Conservation and Divergence Recent comprehensive cross-species transcriptomic studies that have generated transcriptomic cellular atlases with abundant molecular signatures have revealed surprisingly well-conserved neuronal and non-neuronal types across different cortical areas among primates and rodents . Humans share with rodents an evolutionarily-conserved regulatory program involved in the process of neuronal development, which controls the specification, migration, and differentiation of GABAergic interneurons . However, primate-specific cell types exist, and the differences in homologous cell types, including proportions, laminar distributions, gene expression, and morphological features, should not be underestimated . For example, homologous thalamocortical neurons in the primate dorsal lateral geniculate nucleus, which convey visual information from the retina to the primary visual cortex (V1), are distinct from those in rodents . Detailed single-cell transcriptome analysis of samples from non-human primate V1 has revealed novel cell types (the NPY -expressing excitatory neuron type and the primate-specific activity-dependent OSTN+ neuron type) and distinct gene expression patterns in the primate primary visual cortex . These findings may account for the high visual acuity and more complex color vision of primates. Moreover, comparative RNA sequencing has revealed divergent expression patterns regulating cell morphogenesis, such as ZEB2 (zinc finger E-box binding homeobox 2) and human-specific NOTCH2NL (paralogs of the NOTCH2 receptor). The expression of ZEB2 promotes the neuroepithelial transition, and manipulations of the related downstream signaling lead to the acquisition of non-human ape architecture in the human context and vice versa . On the other hand, NOTCH2NL expands cortical progenitors and enhances neuronal output, emphasizing the important role of neuroepithelial cell shape in human brain expansion . These findings suggest that cell type taxonomy is largely conserved from rodents to primates, yet differences exist. The Transcriptomes During Brain Development and Neurogenesis The primate brain, which has the largest volume relative to body size and ~1000 times more neurons than the rodent brain , is much more complex. Single-cell and single-nucleus technologies have established cellular taxonomies of multiple cortical areas from developing primates using gene expression patterns. This considerably expands our knowledge of early neurogenesis, neuroplasticity, and cellular differentiation during the early developmental stage .
Transcriptomic data, which provides cell lineages, molecular signatures, and transcriptional regulatory networks that underlie the basis of physiological activities , can be used to assess human and non-human primate brain development and early neurogenesis . Moreover, synaptic gene expression patterns show considerable differences in human cortical areas during aging, accounting for the reduced functions of the aging brain . Neurogenesis in adult primates, a recurring and crucial topic of primate neuroscience, has been comprehensively investigated through sc/snRNA-seq transcriptomic data accompanied by sufficient immunostaining evidence . Larger-scale transcriptomic studies have also focused on the diversity of glial cells, including oligodendrocytes and astrocytes , which exhibit developmental and metabolic regulation by neuronal activity in the developing human cerebral cortex . This result indicates that the balance of the interaction between glial cells and neurons is important for the normal development of the primate brain. Transcriptome Changes of Neuropathological State Single-cell and single-nucleus transcriptomic analyses enable the exploration of cell type composition, transcriptomic modifications, and their role in neurological diseases. These modifications, particularly in critical genes and pathways, regulate disease progression and offer therapeutic opportunities . Such analyses have been extensively used to study various primate neurological disorders, establishing cellular taxonomies, identifying vulnerable cell subpopulations with risk genes, and shedding light on pathogenesis mechanisms and potential therapeutics . In this review, we delve into detailed research on Alzheimer's disease (AD), autism spectrum disorder (ASD), and multiple sclerosis (MS). AD is a progressive neurodegenerative disorder characterized by memory loss, cognitive decline, and executive dysfunction . Analyses of single nuclei from the prefrontal cortex of individuals with AD have identified distinct neuronal and non-neuronal types with pathological gene expression associated with myelination, inflammation, and neuron survival. Disease-associated changes are highly cell-type specific, with some genes ( HSP90AA1 and HSPA1A , involved in protein folding) universally upregulated in late stages . Transcriptional and pathological differences between sexes have also been reported . Understanding cell-type-specific gene networks and transitions is crucial for unraveling AD pathogenesis. Integrated analysis of transcription factors and AD risk loci has revealed drivers of cell-type-specific transitions. It highlights the repression of AD risk genes in oligodendrocyte progenitor cells and astrocytes, and their upregulation in microglia . Cell-type-specific vulnerability is a fundamental feature of neurodegenerative diseases in which different cellular populations show a gradient of susceptibility to degeneration. Abundant molecular signatures identified by snRNA-seq provide an unprecedented chance to characterize the specifically vulnerable neuronal subpopulations at the molecular level. In transcriptomic analysis of AD, RORB (RAR Related Orphan Receptor B) has been identified as a marker of selectively vulnerable excitatory neurons. At the same time, the downregulation of genes involved in homeostatic functions is used to characterize vulnerable astrocyte subpopulations . Recent multimodal methodology has identified potential AD treatments .
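To illustrate how such cell-type-resolved disease signatures are typically derived, the following sketch runs a differential expression test between AD and control nuclei separately within each annotated cell type using scanpy. The input file, the obs columns ("cell_type", "diagnosis"), and the assumption that the data are already normalized and log-transformed are illustrative and are not taken from the cited studies:

```python
import scanpy as sc

# Hypothetical annotated snRNA-seq object; assumed to be normalized and log1p-transformed,
# with adata.obs["cell_type"] and adata.obs["diagnosis"] ("AD" or "control") available.
adata = sc.read_h5ad("prefrontal_cortex_annotated.h5ad")

top_disease_genes = {}
for cell_type in adata.obs["cell_type"].unique():
    subset = adata[adata.obs["cell_type"] == cell_type].copy()
    if subset.obs["diagnosis"].nunique() < 2:
        continue  # skip cell types observed in only one condition
    sc.tl.rank_genes_groups(subset, groupby="diagnosis", groups=["AD"],
                            reference="control", method="wilcoxon")
    top_disease_genes[cell_type] = sc.get.rank_genes_groups_df(subset, group="AD").head(10)

for cell_type, table in top_disease_genes.items():
    print(cell_type, table["names"].tolist())
```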
ASD is a neurodevelopmental condition impacting interaction and communication . Recent research investigating 11 cortical areas in individuals with ASD and neurotypical controls has revealed widespread transcriptomic changes across the cortex in individuals with ASD. The findings exhibit an anterior-to-posterior gradient, with the most significant differences in the primary visual cortex. These differences coincide with reduced typical transcriptomic differences between cortical areas in neurotypical individuals . In relation to the cell-type-specific molecular changes associated with ASD, Velmeshev et al. found that synaptic signaling in upper-layer excitatory neurons and the molecular state of microglia are preferentially affected in ASD. Moreover, the dysregulation of specific groups of genes in cortico-cortical projection neurons has been found to correlate with the clinical severity of ASD, as expected . Oligodendrocytes are implicated in the pathogenesis of MS, a multifocal inflammatory disease affecting cortical areas . SnRNA-seq has demonstrated different functional states of oligodendrocyte subpopulations in MS tissue and has identified selectively vulnerable neuronal subpopulations, stressed oligodendrocytes, reactive astrocytes, and activated microglia associated with the progression of MS lesions . Overlapping transcriptional profiles between MS and other neurodegenerative diseases suggest shared mechanisms and potential therapeutic approaches .
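Composition changes are a complementary readout in these disease studies: shifts in the relative abundance of vulnerable cell types can be estimated directly from the per-nucleus annotations. A minimal pandas sketch, assuming a hypothetical metadata table with donor, diagnosis, and cell_type columns and exactly two diagnosis groups, could look like this:

```python
import pandas as pd

# Hypothetical per-nucleus metadata exported from an annotated atlas.
meta = pd.read_csv("nucleus_metadata.csv")   # columns: donor, diagnosis, cell_type

# Cell-type proportions per donor, then averaged within each diagnosis group.
per_donor = (meta.groupby(["donor", "diagnosis"])["cell_type"]
                 .value_counts(normalize=True)
                 .rename("fraction")
                 .reset_index())
group_means = (per_donor.groupby(["diagnosis", "cell_type"])["fraction"]
                        .mean()
                        .unstack(level="diagnosis"))

# Assumes exactly two diagnosis groups (e.g., disease vs control);
# cell types at the extremes are candidates for disease-associated composition shifts.
group_means["difference"] = group_means.iloc[:, 0] - group_means.iloc[:, 1]
print(group_means.sort_values("difference"))
```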
Single-cell or single-nucleus sequencing data provide valuable insights into the transcriptomic types (t-types) of homology across species . However, it cannot obtain the morphological and electrophysiological properties of the morpho-electrical-transcriptomic types . Patch-seq is a revolutionary technology that can simultaneously acquire the morphology, electrophysiology, and transcriptome of single cells, which are valuable resources to establish a multimodal atlas. Patch-seq is a modification of regular patch-clamp recording. It has been applied to record from cultured cells and acute brain sections in vitro , and the recorded cells are labeled with dye for subsequent morphological reconstruction. After electrophysiological recording, most of the cytoplasmic contents are aspirated (generally including the nucleus) and transferred to an individual tube containing a lysis buffer followed by a standard single-cell or single-nucleus RNA-seq protocol (Fig. D−G). This powerful multimodal approach provides valuable resources for advancing our understanding of primate neuroscience. Patch-seq techniques have been successfully implemented in the primate cortex . Patch-seq-sampled data from neurons across human cortical layers 1, 2, 3, and 5 have been mapped to a human transcriptomic cellular reference atlas and assigned electrophysiological and morphological features to the mapped t-types . This multimodal analysis has identified human-specific double bouquet cells, mapping to two cortical GABAergic somatostatin (SST) t-types ( SST CALB1 and SST ADGRG6 ) . Studies on the human cortex have revealed higher divergence in the upper-layer neocortex compared to mice , whereas the cell density in layers 2/3 is lower in humans than in mice . Among human L1 interneurons, subclasses defined by their transcriptomes exhibit similarly distinct morpho-electrical phenotypes. Two human cell types with specialized phenotypes have been identified ( MC4R rosehip cells and the bursting PAX6 TNFAIP8L3 t-type) . Indeed, observing a 'rosehip' cell type in human and not mouse neocortex emphasizes the importance of studying human L1 to uncover potential species-specific specializations . In addition, the supragranular layer of the human neocortex exhibits increased diversity in glutamatergic neuron types, predominantly found in layers 2 and 3. Five human supragranular neuron t-types ( LTK , GLP2R , FREM3 , CARM1P1 , and COL22A1 ) have corresponding morphology, physiology, and transcriptome phenotypes.
The more superficially located LTK , GLP2R , and FREM3 types are homologous to the mouse supragranular IT types. The more deeply located CARM1P1 and COL22A1 types do not have direct counterparts in the mouse supragranular neocortex. Instead, they exhibit the closest transcriptomic similarity to infragranular mouse IT types . These results suggest an increased diversity of deep L3 neurons in humans. The deep portion of layer 3 contains highly distinctive cell types, two pyramidal cell types ( FREM3+ and CARM1P1+ transcriptomic cell types ) expressing neurofilament protein SMI-32 (encoded by the NEFH gene), which labels long-range projection neurons in primates that are selectively depleted in AD , providing a promising entry to study the pathological mechanism and explore potential therapeutic options. In contrast to the well-documented diversity of supragranular-layer excitatory neurons across regions and species, studies addressing the variability of deep-layer excitatory neurons, such as ET, IT, and CT neurons, are limited . Axonal projections to lower brain regions predominantly originate from layer 5 (L5) ET neurons. L5 ET neurons exhibit unique morpho-electric properties, gene expression patterns, local synaptic connections, long-range afferents, and neuromodulatory responses, as primarily described in rodents. L5 ET neurons are traditionally characterized by thick apical dendritic tufts in layer 1, while L5 IT neurons have thinner or tuftless dendrites. In addition, L5 ET neurons strongly express hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, likely contributing to their strong dendritic electrogenesis . Still, in rodents, the HCN conductance tends to dampen dendritic electrogenesis. The differences in HCN expression between L5 ET and IT neurons may not be the primary reason for the differences in electrogenesis . Further interrogations are necessary. These different properties of L5 ET and IT neurons contribute to distinct aspects of perception and behavior. Patch-seq analysis of L5 neurons in the primary motor cortex of both mice and macaques has revealed that macaque and human Betz cells are homologous to the thick-tufted L5 ET neurons in mice but exhibit species-specific differences in morphology, physiology, and gene expression. Macaque and human Betz ET neurons have specialized suprathreshold properties, such as biphasic firing patterns evoked by prolonged suprathreshold current injection . Macaque and human L5 ET neurons are notably larger and possess long "taproot" basal dendrites, characteristic of the iconic Betz cells . Gene expression patterns shape the electrophysiological and morphological phenotypes. Patch-seq provides a multimodal analysis strategy to identify potential molecular markers or pathways that predict the phenotype of single neurons . Using Patch-seq combined with Weighted Gene Co-expression Network Analyses of single human neurons in culture, certain gene clusters are correlated with neuronal maturation as determined by electrophysiological characteristics, and a list of candidate genes has been identified that have the potential to serve as biomarkers of neuronal maturation . For example, Patch-seq recording from human induced pluripotent stem cell (iPSC)-derived astrocytes and neurons has revealed a continuum of low- to high-function electrophysiological states. Furthermore, a novel biomarker, GDAP1L1, effectively identifies the high-functioning neurons. 
These biomarkers facilitate the classification of neurons based on their functionality and enable the stratification of functional heterogeneity . Subtle differences in the transcriptome may profoundly affect neuronal morphology and function . Patch-seq links transcriptomes with phenotypes such as morphology and electrophysiology. This allows for targeted studies on specific neuronal populations based on factors like anatomical location, functional properties, and lineages. Patch-seq also facilitates studies on the molecular basis of morphological and functional diversity . We can gain new insights into neurons by using Patch-seq to correlate transcriptomes with morphological characteristics, neuronal locations, and projection patterns. The acquisition and integration of transcriptome information is essential to neuronal classification. Understanding the functional mechanisms of primate cortical areas requires a comprehensive investigation of their cellular and synaptic organization. To enable a detailed understanding of microcircuits, multicell whole-cell patch-clamp recordings still represent the gold standard method . This method reliably detects unitary excitatory and inhibitory synaptic connectivity with its submillisecond and subthreshold resolution . Increasing the number of simultaneously recorded neurons can considerably increase the number of probed synaptic connections and generate larger sample sizes from fewer experiments. For example, if the number of simultaneously patched neurons reaches eight in single slices, 56 potential connections will be tested. It is important to achieve relatively high throughput due to the limited availability of primate samples. By using multicell whole-cell patch-clamp recordings, simultaneous recording of multiple neurons becomes possible. This approach significantly enhances the number of monosynaptic connections tested per experiment, enabling us to explore the principles of connection between different cells (Fig. H, ) . The recent application of multicell patch-clamp technology to the study of the primate brain has considerably expanded our knowledge of its microcircuits. Recurrent excitatory connectivity is thought to be important in behavior and disease , and has also been identified as a common feature in computational models of cortical working memory, receptive field shaping, attractor dynamics, and sequence storage . Although there is a wide range of reported rates of recurrent connectivity among excitatory neurons in rodents , evidence has shown that the human cortex possesses a higher recurrent excitatory connectivity rate and mean amplitude, which might contribute to the related functional difference across species . The higher mean amplitude and other excitatory postsynaptic differences indicate stronger synaptic connectivity within the human cortex, which is explained by the larger presynaptic active zones and postsynaptic densities that may allow a higher release probability as well as more neurotransmitter release and binding . A more comprehensive survey of intralaminar connectivity has been applied to investigate all cortical layers detailing the connectivity atlas, and the analysis includes synaptic dynamics between layer-defined pyramidal neurons and inhibitory neurons, greatly increasing our understanding of primate neural microcircuits . In this study of the connectivity between layer-defined neuronal types, the connectivity probability among layer 4 was nearly absent in the human cortex. 
At the same time, it was high in the mouse cortex. Moreover, disynaptic inhibition in the human cortex, which was not detected in the mouse cortex, was found between confirmed spiny pyramidal cells that were unidirectional, originating in layer 2 and targeting other layer 2 or layer 3 pyramidal cells. Consistent with previous results, the connectivity rates were estimated to decline with increasing distance but at a slower speed than in rodents. The unique characteristics of the human circuit findings regarding alterations in synaptic dynamics may help explain the complexity of information in the human cortex. The morphologies of neurons are diverse. Their dendritic and axonal projections provide additional information for investigating the local connectivity. Recent advances in patch-clamp-based techniques and solution formulations have significantly improved the quality of neuron reconstruction , establishing a framework for the morphological classification of rodent neurons . In addition, high-throughput information can be obtained through the advances in imaging and cell labeling techniques, such as superresolution hopping probe ion conductance microscopy, a variant of scanning ion conductance microscopy and viral tracers or transgenic animals (e.g., Cre-driven lines or tetracycline-controlled transcription factors for labeling specific neurons) . The morphological classification of inhibitory and excitatory neurons in rodents can serve as a preliminary reference for establishing the morphological classification of brain cells in primates. This reference can expedite the generation of a comprehensive connectivity atlas. By leveraging the knowledge and techniques developed in rodent studies, researchers can accelerate the mapping of neural circuits in primates, providing valuable insights into brain connectivity and function. The Advantages of these Three Technologies The primate brain is organized into cortical areas responsible for different functions. Different types of cells within cortical areas with specific transcriptomic, morphologic, and electrophysiologic profiles establish synaptic connectivity following principles. During development, aging, and disease, the electrophysiology, morphology, and connectivity phenotypes of cell types are simultaneously affected and undergo different degrees of adaptation under the control of transcriptomic modifications. As in the mouse, the single-cell data on the morphology, electrophysiological properties, and gene expression patterns in the primate brain need to be integrated for comprehensive knowledge of functional mechanisms in primate cortical areas. Single-cell and single-nucleus RNA-seq provide robust cell-type classification data to generate a transcriptomic reference atlas that can be used to illustrate the cellular composition of cortical areas and to perform bioinformatics analysis to explain the modifications that occur during physiological and pathological states (Fig. A). Patch-seq results, which can be mapped to the transcriptomes provided in the above transcriptomic reference atlas, establish morphological and electrophysiological annotations for each cell type. Based on the advances in multicell patch-clamp recordings with detailed morphological reconstruction, especially the morphology of inhibitory neurons , local connectivity principles among morphology-based neuronal types will be delineated in primates as in rodents (Fig. B, ). 
The strong correspondence between morphological and electrophysiological phenotypes of cells might be a theoretical basis for mapping the local connectivity principles to the above transcriptomic atlas with morphological and electrophysiological annotations. To this end, a primate brain atlas that integrates the datasets from the above three technologies can describe the transcriptomic, morphological, and electrophysiological profiles of each type and the local connectivity principles. This atlas will play a crucial role in investigating the cortical regions in primates and significantly contribute to research on degenerative diseases in primates. Atlases with Multiple Dimensions Resolve Novel Neuroscience Questions Extensive data on the applications of the three technologies are now being obtained from primate brains to generate a primate brain atlas with multiple dimensions within cortical areas. The atlas will describe the multiple profiles, including transcriptomes, electrophysiology, and morphology, as well as the local connectivity principles, in each cell type within primate cortical areas (Fig. ). The organizational differences within homologous cortical areas could explain the higher complexity of functions in primates than in rodents. When establishing the above atlas, larger-scale transcriptome analysis might be used to identify species-specific cell types with distinct gene expression patterns and neuronal biology , and the local connectivity patterns of a selected cell type could then be comprehensively studied by the applications of Patch-seq and multicell patch-clamp (Fig. D). Although studies show a surprising conservation of basic transcriptomic cell types across cortical areas between primates and rodents , modifications of the primate neuronal profiles of transcriptomes, electrophysiology, morphology, and connectivity patterns might contribute to the more complex brain functions. In rodent studies, the integration of data from Patch-seq and sn/scRNA-seq has allowed for a comprehensive multimodal analysis. This approach successfully identified distinct morphology-electrophysiology-transcriptome types, showcasing unique neuronal properties. Moreover, these types can form continuous and correlated transcriptomic and morphological electrical landscapes within their respective families . This mutual predictability not only helps to predict the functional, morphological, or transcriptomic state based on one or two distinct neuronal properties but also provides promising potential for integration with experimental data from other technologies. In pathological studies, since large-scale transcriptome analysis has identified vulnerable cell types , Patch-seq can be used to test the potential loss or gain changes in the transcriptomic, electrophysiological, and morphological aspects of these vulnerable cell types, and multicell patch-clamp can be used to detect the loss or gain changes of local connectivity involving the selected cell types. Gene expression pattern analysis not only explains the molecular mechanisms of functional changes but also supplies lists of marker genes of cell types to develop genetic manipulation tools, which might serve in the applications of Patch-seq and multicell patch-clamp in primates. Moreover, Patch-seq and multicell patch-clamp can be used to verify the therapeutic effectiveness of targeting drug candidates provided by the transcriptome analysis .
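As a concrete example of the mapping step described above, Patch-seq transcriptomes can be projected onto a dissociation-based reference atlas so that subclass labels, and with them the associated morpho-electrical annotations, are transferred to the recorded cells. The scanpy-based sketch below is illustrative only; the file names, the "subclass" field, and the assumption that both datasets were normalized in the same way are hypothetical:

```python
import scanpy as sc

# Hypothetical inputs: a labeled snRNA-seq reference atlas and Patch-seq transcriptomes
# from the same cortical area; both are assumed to be normalized and log-transformed alike.
reference = sc.read_h5ad("reference_atlas.h5ad")     # has reference.obs["subclass"]
patchseq = sc.read_h5ad("patchseq_cells.h5ad")

# Restrict both objects to the shared gene space required for projection.
shared_genes = reference.var_names.intersection(patchseq.var_names)
reference = reference[:, shared_genes].copy()
patchseq = patchseq[:, shared_genes].copy()

# Build the reference embedding, then project the Patch-seq cells and transfer labels.
sc.pp.pca(reference)
sc.pp.neighbors(reference)
sc.tl.umap(reference)
sc.tl.ingest(patchseq, reference, obs="subclass")

print(patchseq.obs["subclass"].value_counts())
```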
These technologies have considerable potential to address crucial questions in primate neuroscience. Each of the above three technologies has already made important contributions to advances in primate neuroscience. Establishing a primate brain atlas with multiple dimensions by combining the advantages of the three technologies is extraordinarily promising. Still, several important challenges remain. Due to the scarcity of primate tissues, especially those from primate models of neurological diseases, establishing a detailed primate brain atlas is challenging. In the future, integrating Patch-seq and multicell patch-clamp techniques with non-invasive imaging methods, such as magnetic resonance imaging and diffusion tensor imaging, offers a promising avenue for obtaining detailed structural and connectivity information about the primate brain in vivo . Secondly, implementing Patch-seq and multicell patch-clamp techniques faces challenges due to their relatively low throughput. Future efforts should focus on developing automated and high-throughput methods for Patch-seq and multicell patch-clamp techniques. By streamlining the experimental workflow and optimizing protocols, more cells can be processed within a shorter time frame, allowing for a more comprehensive analysis of the diverse cell types in the primate brain. Moreover, commonly used genetic manipulation tools are mainly used in rodents and organotypic section cultures in humans and non-human primates . Future development of genetic manipulation tools, such as expressing fluorescent proteins to label specific neurons or using optogenetic approaches in vivo to dissect circuit connections, could reveal patterns of synaptic connectivity in different species, including non-human primates . Finally, to obtain transcriptome information from multicell patch experiments, future research should prioritize the development of techniques that enable the simultaneous patching of multiple cells, extraction of nuclei for RNA retrieval, and concurrent acquisition of morphological and electrophysiological information.
For example, spatial single-cell analysis of multiple genes using multiplexed error-robust fluorescence in situ hybridization (MERFISH) has generated a molecularly-defined and spatially-resolved cell atlas . Integrating Patch-seq with multicell patch-clamp and single-cell sequencing technologies is immensely important for establishing comparative connectivity maps across different states or cortical regions, enabling the study of neural circuitry across different conditions or regions of the cortex. This integration paves the way for a deeper understanding of brain function and connectivity. In conclusion, a rodent transcriptomic cell atlas has become available with morphological and electrophysiological annotations and a delineation of the local connectivity principles . A primate neocortex transcriptomic atlas has been established, whereas the corresponding morphological, electrophysiological, and connectivity properties remain largely uncharacterized. Detailed interrogation of these neurons indicates a higher diversification of excitatory neurons in the supragranular layer. Further studies on a broader range of brain areas and cell types, together with a cellular atlas describing developmental, aging, and disease states, are necessary to fully understand the molecular and neuronal basis of the augmented cognitive and behavioral capabilities of higher primates. |
Assessing the readability and quality of online patient information for laser tattoo removal | 303b4d85-e588-4414-b9ee-e7a0e9e75205 | 11252190 | Patient Education as Topic[mh] | Recently, tattoos have increased in popularity, with an estimated 32% of American adults having one . Many regret their tattoos and seek removal, the most common reason being career concerns . Laser removal is the most effective method for tattoo removal with the least side effects . Similar to other dermatologic concerns, many prospective laser tattoo removal clients may seek preliminary information online . The American Academy of Dermatology (AAD) recommends that all prospective laser tattoo removal clients consult a board-certified physician before the procedure for safety and best results . Additionally, it is essential that online health information on the procedure mentions possible adverse events while maintaining a level of readability accessible to all patients . Thus, our study aimed to assess the quality and comprehensiveness of online patient information on laser tattoo removal along the aforementioned lines.
We performed a Google search in Google Chrome’s incognito mode using the search terms “Laser Tattoo Removal Patient Information” and “Laser Tattoo Removal Patient Instructions”. This mode hides previous browser data and user location, and we cleared cookies to decrease the potential for search biases. We screened the first 60 resulting links in each query for inclusion. Exclusion criteria were websites that did not contain information on laser tattoo removal, were not in English, were sponsored links, or were duplicates of already included pages. We then assessed the readability of the included articles using validated criteria. We also assessed whether the articles mentioned the necessity of multiple sessions, potential complications, special considerations for Skin of Color clients, and the importance of a consultation with a dermatologist or plastic surgeon. A flowchart of our methodology is shown in Fig. .
The 77 webpages were assessed for readability in terms of approximate grade level using the Automated Readability Index, Gunning Fog Readability, Flesch-Kincaid Grade Level, Coleman-Liau Readability Index, Smog Index Readability Score, Linsear Write Readability Formula, and Forcast Readability Formula. These formulas were combined into the average reading level consensus score, which averages the seven previously mentioned scores and reports them as an integer grade level value. The webpages were also assessed using the Flesch Reading Ease scale, which ranges from 0 to 100, with higher scores indicating easier-to-read text. The readability results for the 77 laser tattoo removal webpages included in this study are reported in Fig. (whiskers indicating minimums and maximums) for the seven readability scales and the consensus scale corresponding to approximate grade level, and in Fig. for the Flesch Reading Ease scale (which is scored out of 100). The results demonstrated that the patient information and instructions varied from just above elementary level (e.g., minimum of 4.23 for the Linsear Write Readability Formula) or middle school to college level (e.g., maximum of 16.2 for the Gunning Fog Readability scale), regardless of which readability scale was used. However, the mean readability scores were all within the high school grade range for the seven scales used, which can be seen with standard deviations and the mean for the Flesch Reading Ease Scale in Table . The averages of all seven scales were above the 8th-grade reading level, which, combined with the general variability of the data, demonstrates that even though this material is intended for the general patient population, its reading level is more advanced than what is recommended for similar types of patient information (e.g., consent forms) by academic, professional, and governmental agencies . Additionally, less than half of the websites (43%) directly addressed complications associated with the procedure specifically for Skin of Color patients. Less than half of the sites (45%) recommended a consultation with a dermatologist or plastic surgeon before undergoing laser tattoo removal treatments. Only 53% of the websites had physicians with MD degrees as part of the care team for patients receiving laser tattoo removal. However, 90% of the websites did mention that multiple tattoo removal procedures would most likely be needed for optimal results (Table ).
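For orientation, two of the indices used above can be reproduced from simple word, sentence, and syllable counts: the Flesch Reading Ease score is 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), and the Flesch-Kincaid Grade Level is 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The Python sketch below uses a crude syllable heuristic and an invented example sentence, so its output only approximates what dedicated readability software would report:

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic; dedicated readability tools use pronunciation dictionaries."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability_scores(text: str) -> dict:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / sentences
    syllables_per_word = syllables / len(words)
    return {
        "flesch_reading_ease": 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word,
        "flesch_kincaid_grade": 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59,
    }

# Invented example sentence, not taken from any of the analysed websites.
sample = ("Laser tattoo removal usually requires several treatment sessions, and the final "
          "result depends on ink colour, ink depth, and the patient's skin type.")
print(readability_scores(sample))
```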
The results of this study demonstrate the challenging readability of patient information and instructions for laser tattoo removal, reflecting the results of other studies analyzing dermatological diseases and procedures (e.g., Ezemma et al.’s analysis of patient materials for central centrifugal cicatricial alopecia) . This points to a general trend of dermatological patient information exceeding the reading levels recommended for patients by professional and governmental bodies . Additionally, many websites failed to adequately inform Skin of Color patients about adverse effects of laser tattoo removal, such as hypopigmentation, hyperpigmentation, and scarring, that disproportionately affect patients with darker skin tones . Guidelines from the Centers for Disease Control and Prevention and the American Academy of Dermatology recommend that patients consult a dermatologist before pursuing laser tattoo removal to minimize potential complication risk . However, of the 77 websites analyzed, fewer than half recommended consultation with a dermatologist or plastic surgeon, and only 53% of the webpages either were written by a physician with a Doctor of Medicine degree or had one as part of the patient care for the laser tattoo removal. Guidelines also recommend multiple treatments that are sufficiently temporally spaced to improve cosmesis and minimize complications, with the majority (90%) of the web pages analyzed recommending multiple spaced treatments . Many of the webpages were for medical practices in which the most highly qualified provider on the patient care team was a laser technician, nurse, or physician assistant, which poses a challenge to the potential validity of the information provided to patients in addition to issues of readability. One of the limitations of this study is the focus on English-language websites. Studies indicate that a combination of language and social barriers contributes to lower health literacy among Hispanic populations compared with other ethnic groups, with poorer health literacy associated with worse health outcomes (e.g., more hospitalizations) . Therefore, websites in Spanish that have appropriate readability are important for potentially improving health literacy and outcomes for Hispanic and Spanish-speaking patients. However, the authors of this study were not sufficiently fluent in Spanish to properly assess these characteristics, and future studies should focus on Spanish- and other-language websites.
|
Molecular karyotyping and gene expression analysis in childhood cancer patients | 7df7a079-9dfd-4f9f-8084-84ac6061bd27 | 7769790 | Pediatrics[mh] | Determining genetic risk factors for cancer is a major goal of medical research. Increasing knowledge about genetic risk factors aims to improve cancer diagnostics and, together with therapeutic advances, to contribute to increased overall survival in pediatric cancer. As pediatric cancer survivors reach adulthood, the development of secondary malignancies becomes a significant issue for these patients. Treatment of the primary neoplasm with chemotherapy (systemic therapy) and/or radiotherapy has been described as a risk factor for second neoplasms after childhood cancer . As only a small percentage of the treated children suffer from a second neoplasm, other factors are likely to be involved . A predisposition for the occurrence of a second neoplasm in childhood might be a pre-existing somatic genetic variation responsible for, or associated with, DNA-repair, cell cycle control, and other genes crucial for tumor development . Genetic variation, among other modifications, may manifest as single nucleotide polymorphisms (SNPs, mutations) and/or chromosomal copy number variations (CNVs). In addition, epigenetic modifications, such as aberrant methylation, may also lead to tumor development. CNVs may harbor genes and/or regulatory regions that could contribute to complex diseases such as cancer, whose development is triggered and orchestrated by the interaction of many genes. Two consecutive classes of chromosomal aberrations can be described in neoplasias: primary somatic variations as initiating events and secondary aberrations that are acquired during transformation toward cancer . Typically, chromosomal abnormalities that accumulate during tumor evolution lead to an unbalanced genome. On the other hand, balanced chromosomal alterations are often associated with cytogenetically cryptic deletions or duplications in the breakpoint regions . These alterations can have a direct effect on transcript levels and thus gene expression . So far, there have been few studies on primary fibroblasts of cancer patients to study underlying predisposing genomic variations and associated gene expression changes. Fibroblasts of breast and thyroid cancer patients were almost always found to have defective DNA repair and/or cell cycle regulation . Abnormal gene expression in the somatic cells of unaffected parents of retinoblastoma patients is also consistent with an inherited predisposition to cancer development . Radiation and chemotherapeutic agents do not mechanistically distinguish a tumor cell from healthy tissue, and the application of these genotoxic agents may be another source of acquired CNVs. Frequent induction of chromosomal aberrations after irradiation has been reported by Massenkeil et al. in skin fibroblasts in vivo . To protect the healthy tissue from damage, it is especially important to understand the molecular mechanisms involved in the cellular response to radiation. Several attempts have been undertaken to analyze the transcriptional effects after irradiation of different tissues or cells, with different cell culture conditions, doses, and time points. Furthermore, it remains unclear whether DNA duplications or deletions are associated with the formation of epigenetic alterations, such as DNA methylation, which could play a role in cancer development .
To identify genomic susceptibility factors for primary and secondary cancer formation in childhood, we compared molecular cytogenetic profiles obtained by SNP array analysis of primary fibroblasts from childhood cancer survivors with a single malignancy (1N) and carefully matched patients who developed a second cancer (2N), alongside cancer-free controls (0N). We determined the gene expression profile of primary fibroblasts in vitro after X-ray treatment and correlated it with the genes located within the deletions and duplications detected by the SNP array analyses. Specifically, we tested the hypothesis that the occurrence of secondary cancer is associated with altered expression of cell cycle control and DNA repair pathways. Finally, we analyzed the methylation patterns in the putative promoter regions of two candidate cancer-relevant genes that resided within CNV regions and displayed differential expression after irradiation.
Patient collective

This study was approved by the Ethics Committee of the Medical Association of Rhineland-Palatinate (no. 837.440.03 (4102) and no. 837.262.12 (8363-F)). With the help of the German Childhood Cancer Registry, 20 individuals who survived a childhood malignancy and then developed a second primary cancer (2N) and 20 carefully matched (first tumor, manifestation age, sex) individuals who survived a childhood cancer without developing a second malignancy (1N) were recruited for the KiKme study (Cancer in Childhood and Molecular Epidemiology). Twenty matched (sex and age) patients without cancer from the Department of Accident Surgery and Orthopedics in Mainz, Germany, served as controls (0N). Written informed consent to use fibroblasts for research purposes was obtained after genetic counseling for all participating patients. Based on the clinical impression and personal history, no patient had an intellectual disability or any other severe mental disease. The numbering of the patients does not represent the recruiting order and was chosen randomly. Skin biopsies were taken at the earliest 2 years after the last cancer therapy. Eleven patients had suffered from acute lymphatic or myeloid leukemia, five from Hodgkin or Burkitt lymphoma, and four from solid tumors as primary malignancy. The second cancers in the 2N group included myelodysplastic syndrome, lymphoma, thyroid cancer, and other solid tumors. All patients were followed up from the primary cancer diagnosis to the time of recruitment. With the exception of one patient (1N), all patients had received chemotherapy, radiotherapy, or combination therapy. Six patients had received allogeneic bone marrow transplantation. Clinical data of the participating patients are shown in Table . Although it has been reported that 7–8% of children affected by cancer carry an unambiguous predisposing germline variant, predominantly within TP53 and BRCA2 , no proven pathogenic germline variants in TP53 , BRCA1 , or BRCA2 were identified in our cohort using Sanger sequencing and applying the ACMG criteria (mutation databases: http://p53.iarc.fr/ ; https://www.ncbi.nlm.nih.gov/clinvar/ ; https://www.lovd.nl/ ). In one case, an oncogenic RB1 splice mutation was detected; this patient was excluded from further analysis. The remaining patients did not fit the criteria of an inherited childhood cancer syndrome . Currently, there are several efforts to characterize the common CNVs that have no impact on disease. The 1000 Genomes Project ( http://www.1000genomes.org ), the Genome of the Netherlands project ( http://www.nlgenome.nl ), and the Toronto Database of Genomic Variants ( http://dgv.tcag.ca/ ) are examples of these efforts. Since the boundaries of the variants are not well defined, the actual size of the variants may be over-estimated. In addition, ethnic factors may contribute to the prevalence of a specific CNV. Therefore, we compared the cancer patients' CNVs with the data of the matched 0N controls and of 1000 0N cases without cancer, diabetes, obesity, dyslipidemia, or stroke from the Gutenberg Heart Study (GHS), which had the advantage that the samples were analyzed in the same laboratory with the same technique and the participants came from similar ethnic backgrounds. The aim was to detect genes that were affected in the cancer patients but not in the controls and may therefore be considered rare, putatively predisposing variants.
The GHS is a community-based, prospective, observational, single-center cohort study in the Rhein-Main region of western mid-Germany. The GHS has been approved by the local ethics committee and by the local and federal data safety commissioners. The primary aim of the GHS is to evaluate and improve cardiovascular risk stratification.

Cell culture and experimental procedure

Primary fibroblasts from skin biopsies were cultured at 37 °C and 5% CO2 in DMEM (Invitrogen, Karlsruhe, Germany) supplemented with 15% fetal bovine serum (FBS) (Biochrom, Berlin, Germany), 1% vitamins, and 1% antibiotics (Pen/Strep) (Biochrom, Berlin, Germany). All experiments using the primary fibroblasts were performed with growth-arrested cells in the G0/G1 stage in 10-cm cell culture dishes. Confluency of the cells was achieved by contact inhibition and subsequent cultivation for 2 weeks. Flow cytometric cell cycle analysis (FACS) confirmed that over 90% of the cells were in the G0/G1 stage of the cell cycle. For comparisons of 0N, 1N, and 2N patients, fibroblasts of similar passage number (9 ± 2) were used. Cells were exposed to X-rays with a D3150 X-Ray Therapy System (Gulmay Ltd., Surrey, UK) at 140 kV and a dose rate of 3.62 Gray (Gy)/min at room temperature. Sham-irradiated cells were kept under the same conditions in the control room of the radiation device. Cells were exposed to single doses ranging from 2 to 8 Gy and were returned to the incubator. Cells were harvested by brief treatment with trypsin/EDTA (Biochrom, Berlin, Germany) and washed with PBS (–Mg/–Cl) at 15 min, 2 h, and 24 h after irradiation. The resulting pellets were stored at − 80 °C until DNA or RNA preparation. The cell lines MCF7 (ATCC, Manassas, VA, USA), ZR-75-1, EFO27, and T47D (ATCC, Manassas, VA, USA) were cultivated in RPMI1640 (Gibco) supplemented with 10% FBS, 2.5% HEPES buffer (Sigma), and 1% antibiotics (Pen/Strep) (Life Technologies). The A549 cell line (ATCC, Manassas, VA, USA) was cultured in DMEM supplemented with 10% FBS.

SNP array (molecular karyotype analysis)

Molecular karyotyping (SNP array) was performed on DNA (isolated with the NucleoSpin Tissue Kit, Macherey-Nagel, Germany) from untreated primary fibroblasts in passage 5 (2N, 1N, 0N). High-resolution screening for microdeletions and duplications was performed with the Affymetrix GeneChip Genome-Wide Human SNP Array 6.0 and the GeneChip Genome-Wide SNP Sty Assay Kit 5.0/6.0, following the manufacturer's protocol (Affymetrix, Santa Clara, CA, USA). Data calculation was performed with Affymetrix Genotyping Console 4.2.0.26 and Chromosome Analysis Suite 3.1.0.15. The segment filters for gains and losses were set at a minimum of 5 markers and 20 kb. All samples passed the QC filters (MAPD < 0.25, SNPQC > 15.00, Waviness SD < 0.12).
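To make the segment filter concrete, the following Python sketch shows how exported gain/loss segments could be screened against the thresholds stated above (at least 5 markers and 20 kb); the record fields and example segments are hypothetical illustrations, not actual output of the Affymetrix software.

```python
# Minimal sketch of the segment filter described above (>= 5 markers, >= 20 kb).
# The segment records below are hypothetical examples, not real array output.

MIN_MARKERS = 5
MIN_SIZE_BP = 20_000

def keep_segment(segment: dict) -> bool:
    """Return True if a gain/loss segment passes the marker and size filters."""
    size_bp = segment["end"] - segment["start"]
    return segment["marker_count"] >= MIN_MARKERS and size_bp >= MIN_SIZE_BP

segments = [
    {"chrom": "16", "start": 15_048_755, "end": 16_295_900, "marker_count": 812, "state": "gain"},
    {"chrom": "2",  "start": 100_000,    "end": 110_000,    "marker_count": 3,   "state": "loss"},
]

filtered = [s for s in segments if keep_segment(s)]
print(f"{len(filtered)} of {len(segments)} segments pass the filter")
```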
RNA sequencing, data analysis, and statistics

Total RNA was prepared from treated and untreated fibroblast cultures using the NucleoSpin RNA Plus Kit (Macherey-Nagel). RNA integrity was assessed with a Bioanalyzer 2100 (Agilent RNA 6000 Nano Kit, Agilent Technologies, Santa Clara, USA). One microgram of total RNA (quantified with Qubit, Thermo Fisher Scientific; RIN ≥ 8) was used for library construction with the TruSeq RNA Sample Prep Kit v2 (Set A and B, Illumina) following the manufacturer's instructions. RNA-seq libraries were pooled, clustered on a cBot, and sequenced on a HiSeq2500 instrument (Illumina) in high-output mode. Reads with a length of 50 nucleotides were generated using the TruSeq SR (single read) Cluster Kit v3 (Illumina) and the TruSeq SBS Kit v3 (Illumina). Data were generated by RTA version 1.8.4 (real-time analysis) and converted to FASTQ format using bcl2fastq version 1.8.4 (Illumina). Raw reads were cleaned from adapter sequences using Trimmomatic. Cleaned reads were aligned to the human reference genome (GRCh38) using STAR. Expression per gene, expressed as the number of aligned reads per gene, was quantified using featureCounts. Data analysis was performed in R with 51 samples varying in the applied radiation dose (0 Gy, 2 Gy, 5 Gy, 8 Gy) and the time post-irradiation (15 min, 2 h, 24 h). Genes with fewer than 10 counts in 4 samples were discarded. Data were normalized for sequencing depth using the edgeR package. Transformation to log2 counts per million was performed with the voom method implemented in the limma package. Differential gene expression dependent on dose and time point was detected using linear models implemented in the limma package. Genes with an adjusted p value smaller than 0.05 were flagged as significant for further analyses. p values were adjusted for the false discovery rate (FDR) (Benjamini-Hochberg procedure).

Quantitative real-time PCR for gene expression and copy number variation

Total RNA was prepared from treated and untreated fibroblast cultures using the NucleoSpin RNA Plus Kit (Macherey-Nagel). Two micrograms of RNA per sample were reverse transcribed into cDNA using the SuperScript IV First-Strand Synthesis System with random hexamers (Invitrogen). Genomic DNA was isolated with the NucleoSpin Tissue Kit (Macherey-Nagel, Germany). Forward and reverse primers (exon-spanning for gene expression) were designed with the Primer-BLAST program ( https://www.ncbi.nlm.nih.gov/tools/primer-blast/ ). RRN18S and TBP served as endogenous control genes for gene expression, and HEM3 and RFC3 for copy number calculations (Online Resource Table: Primer sequences). Each 10-μl reaction contained 25 ng cDNA or DNA template in 5 μl SYBR Green Master Mix (Roche), 2 μl RNase-free PCR-grade water (Roche), and 1 μl each of forward and reverse primer (10 μM). All reactions were performed in triplicate and in two stages, with one cycle of 95 °C for 10 min (first stage) and 45 cycles of 94 °C for 10 s, primer-specific annealing temperature for 10 s, and 72 °C for 10 s (second stage) on a LightCycler 480 II (Roche). Amplification quality was assessed using melting curves and agarose gel analysis. The qPCR amplification efficiency was calculated using the LinRegPCR program, and the CT values were corrected using the mean amplification efficiency. Relative quantification was carried out with the ΔΔCT method using the two endogenous control genes and the 0 Gy controls or 0N probands for calibration. Statistical analyses were conducted using the unpaired t test. Expression changes with a p value < 0.05 were considered significant.
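As a minimal sketch of the ΔΔCT relative quantification outlined above, the Python snippet below averages the two reference genes and applies an efficiency-corrected fold-change calculation. The Ct values and the simple averaging step are illustrative assumptions and do not reproduce the exact LightCycler/LinRegPCR workflow.

```python
# Hedged sketch of ddCt relative quantification with two reference genes.
# Ct values below are invented for illustration; they are not study data.
from statistics import mean

def relative_expression(ct_target_sample, ct_refs_sample,
                        ct_target_calibrator, ct_refs_calibrator,
                        efficiency=2.0):
    """Efficiency-corrected 2^-ddCt-style quantification.

    ct_refs_* are lists of Ct values for the endogenous control genes;
    `efficiency` is the mean amplification efficiency (2.0 corresponds to
    perfect doubling per cycle).
    """
    d_ct_sample = ct_target_sample - mean(ct_refs_sample)
    d_ct_calibrator = ct_target_calibrator - mean(ct_refs_calibrator)
    dd_ct = d_ct_sample - d_ct_calibrator
    return efficiency ** (-dd_ct)

# Example: irradiated sample vs. 0 Gy calibrator (hypothetical Ct values)
fold_change = relative_expression(
    ct_target_sample=24.1, ct_refs_sample=[15.0, 18.2],
    ct_target_calibrator=26.0, ct_refs_calibrator=[15.1, 18.0],
)
print(f"relative expression: {fold_change:.2f}")
```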
Bisulfite pyrosequencing

Genomic DNA was isolated with the NucleoSpin Tissue Kit (Macherey-Nagel, Germany). Bisulfite conversion of 0.2 μg DNA was performed with the EpiTect Bisulfite Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. PCR and sequencing primers for the analyzed genes were designed with the PyroMark Assay Design 2.0 software (Qiagen) (Online Resource Table: Primer sequences). The 25-μl PCR reactions consisted of 2.5 μl 10× PCR buffer, 20 mM MgCl2, 0.5 μl dNTP mix (10 mM), 1 μl each of forward and reverse primer (10 μM), 0.2 μl FastStart Taq DNA Polymerase (5 U/μl) (Roche Diagnostics, Mannheim, Germany), 18.8 μl PCR-grade water, and 1 μl (~ 100 ng) bisulfite-converted template DNA. PCR amplifications were performed with an initial denaturation step at 95 °C for 5 min, 35 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s, and a final extension step at 72 °C for 5 min. Bisulfite pyrosequencing was performed on a PyroMark Q96 MD pyrosequencing system using the PyroMark Gold Q96 CDT Reagent Kit (Qiagen) and 0.5 μl of sequencing primer (10 mM). Data analysis was performed with the Pyro Q-CpG software (Qiagen).

FISH analysis

Metaphase chromosome spreads of the patients were prepared from primary mitotic fibroblasts. BAC clones (RP11-139D07 for patient 2N4 and RP11-327M19 for patient 2N7) were selected from the Wellcome Trust Sanger Institute Ensembl contigs and obtained from the Resource Center Primary Database of the German Human Genome Project and from ResGen (Invitrogen). Genomic BAC DNAs were labeled with tetramethyl-rhodamine-5-dUTP (Roche) or 25 nmol fluorescein-12-dUTP (Roche) by standard nick translation and FISH-mapped on metaphase chromosomes. Control BAC clones were chosen for the 16q terminal or 2q terminal chromosome regions. Images were generated using a Leica CTR MIC microscope and CW4000 software.
Molecular karyotype analysis (SNP array) of 2N and 1N patients and 0N controls

The concept of the study was to detect genes that were affected in the cancer patients (2N, 1N) but not in the controls (0N and 1000 GHS) and may therefore be considered rare, putatively predisposing variants. We detected rare germline CNVs in eighteen 2N and sixteen 1N patients. In some cases, the aberrations detected by SNP array analysis overlapped between patients and controls. For the final compendium of putatively pathogenic aberrations, we selected only genes that were not affected in the controls, but the annotation of each aberration reflects the complete duplicated or deleted CNV region. Altogether, we detected 142 affected genes in 2N patients, of which 53 were not altered in controls (matched 0N and 1000 GHS). For the 1N collective, 185 genes were affected by CNVs, of which 38 were uniquely altered in the 1N cancer patients. Interestingly, 22 genes within CNVs in 2N patients and 18 genes in 1N patients have previously been described as associated with tumor development, growth, apoptosis, and chromosomal stability, or as differentially expressed in cancer (TCGA database: https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga ). Only one gene ( ABCC6 ) was partially duplicated in two matched patients (1N5 and 2N4) and duplicated in five out of 1000 controls (2N: arr[hg19] 16p13.11(15,048,755-16,295,900)x3 and 1N: arr[hg19] 16p13.11(16,294,705-16,798,651)x3). Both patients suffered from leukemia, and patient 2N4 later developed a slow-growing brain tumor. Each 2N patient displayed a unique CNV pattern that was not seen in other patients of the 2N group. Altogether, we detected sixteen heterozygous and one homozygous duplication, as well as eleven heterozygous and one homozygous deletion, in the 2N group. The homozygous deletion affected the TPTE2P3 gene, which is classified as a pseudogene with expression restricted to the testis . The findings for the 1N patient group were similar to those observed in the 2N patients. Here we also detected unique CNV patterns, with the exception of three regions, 19q13.42(54,716,827-54,741,307)x3, 22q11.21(21,567,218-21,845,282)x3, and 14q11.2(24,431,136-24,499,742)x1, which were altered in more than one 1N case. The duplication in chromosome 19q13.42 contains the LILRB3 gene and occurred in two leukemia patients, whereas the duplication in 22q11.21 encompasses five genes ( HIC2 , PI4KAP2 (pseudogene), POM121L8P (pseudogene), RIMBP3B , and RIMBP3C ); the patients carrying this aberration suffered from leukemia and solid tumors. The aberration 14q11.2(24,431,136-24,499,742)x1 contains the DHRS4L2 gene, which is downregulated after radiation, and occurred in our patients with leukemia and solid tumors. In total, we detected thirteen heterozygous and one homozygous duplication, as well as twelve heterozygous deletions, in the 1N patient cohort. The CNVs did not always affect a whole gene. We detected intronic deletions in IGSF21 , NCK1 , and MCU , and intronic duplications in the RBFOX3 , COL11A , SORCS1 , FMNL2 , and NLGN1 genes; these sites harbor transcription factor binding regions. In total, seven pseudogenes and eight microRNAs were affected. Six long intergenic non-coding RNAs (LINC) and two antisense RNAs (AS) were affected by CNVs in cancer patients but not in controls (see Tables and , and for more detailed information Online Resource 1, Tables: 1N CNV and 2N CNV).
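The compendium described above amounts to a set difference between genes hit by CNVs in patients and genes hit by CNVs in any control. A minimal Python sketch of this step is shown below; the gene sets are invented placeholders, not the actual study data.

```python
# Sketch of the compendium construction: keep only genes affected by CNVs in
# cancer patients (1N/2N) that are never affected in controls (0N + GHS).
# The gene sets below are placeholders, not the actual study data.

genes_in_patient_cnvs = {"ABCC6", "LILRB3", "HIC2", "DHRS4L2", "TPTE2P3"}
genes_in_control_cnvs = {"ABCC6", "DUSP22", "GSTT2"}

unique_patient_genes = genes_in_patient_cnvs - genes_in_control_cnvs
print(sorted(unique_patient_genes))  # rare, putatively predisposing candidates
```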
qPCR and FISH analysis to confirm CNVs

To confirm the results of the SNP array analyses, we chose exemplary regions for verification by qPCR and FISH analysis. In all explored cases, the qPCR results confirmed the duplications detected by the SNP array, and hybridization with specific probes (FISH) indicated tandem duplications in both analyzed cases (2N4 and 2N7). The duplication in 2N4 is shown in Fig. . Since the qPCR technique is suitable for screening for aberrations even in a mosaic state, further verification was conducted using qPCR. Case 2N12 displayed a deletion in chromosome 2q32.1; qPCR analysis with specific primers for this region suggests a heterozygous deletion, whereas the duplication in 19q13.41 in case 2N9 may be present in a mosaic state (Fig. ).

Analysis of common CNVs and transcription factor binding sites in controls (0N and 1000 GHS) and in gene-free regions of patients

Our analysis of frequently altered regions in the human genome (Online Resource 1, Table: Suppl. common CNV) revealed the presence of transcription factor binding sites either within gene loci (e.g., the DUSP22 gene) or within previously described immune response-regulating regions (IGK, immunoglobulin kappa locus; IGH, immunoglobulin heavy locus; etc.). Our hypothesis is that several regions altered in the patients may contain no genes but harbor enhancer/transcription factor binding sites or CpG sites that might be important for the regulation of genes outside the CNVs. On these grounds, we analyzed parts of the genome that are classified to date as gene free (UCSC database, https://genome.ucsc.edu/cgi-bin/hgGateway ) and were found to be homozygously or heterozygously deleted mainly in the tumor patients (Online Resource 1, Table: Gene free CNV 2N and 1N). We did not detect any homozygously deleted sites, whereas twelve regions were heterozygously deleted in 2N cases and six in 1N tumor cases. Eight regions were duplicated (six in 2N and two in 1N). Only four of these regions did not contain any transcription factor binding site. Other regions, like 5q21.2(103,509,767-103,534,114)x1, harboring MYC , RAD21 , or SMC3 binding sites, suggest some involvement in the regulation of DNA repair or growth control. There were no significant differences or overlaps between 1N and 2N cases. None of the detected regions contained CpG islands ( https://genome.ucsc.edu/ ).

Gene expression in cells of 0N patients after irradiation

As stated by other researchers, low-dose background radiation and therapeutic radiation treatments are important inducers of cancer and of secondary independent cancers. To estimate the influence of CNVs on gene regulation after irradiation, we designed a study of collateral radiation damage, aiming to detect genes that are transcriptionally altered after irradiation and affected by CNVs in patients. To compare the gene expression with the genomic gains and losses of former tumor patients, it was necessary to generate a comparable gene expression data set. Several studies on gene expression after gamma radiation in human primary fibroblasts have been published, mostly performed with chip technology and examining expression in 2D cultures, generally at 80% confluency. As G0/G1 is probably the predominant cell cycle stage in collaterally irradiated tissues, we designed our experiments in cell cycle-arrested cells to exclude mitotic gene expression and DNA repair in dividing cells, as previously published for skin fibroblasts and neonatal foreskin cell lines .
To ensure a wide spectrum of gene induction after radiation, we used three independent 0N fibroblast cell lines and extracted the RNA after 15 min (early response), 2 h (mid response), and 24 h (late response). The radiation doses were chosen to be either therapeutically relevant (2 Gy) or experimental (5 Gy and 8 Gy). Using the entire data set, without regard for differences in radiation dose and time, we detected 21,459 dysregulated genes ( p value < 0.05) post-radiation. After stratifying the results for a false discovery rate (FDR) < 0.05, we found 2619 genes to be altered in their transcription rate (Online Resource 2). Considering the post-radiation time, gene expression varied strongly between 15 min, 2 h, and 24 h. After 15 min, we detected only one regulated gene ( ANAPC4 ) (FDR < 0.05), compared with 1472 regulated genes after 2 h (FDR < 0.05) and 1567 regulated genes after 24 h (FDR < 0.05). For verification of the RNA-seq experiments, we chose ten representative genes ( APG1 , CDKN1A , CSRNP1 , FAM111B , FBXO22 , KRT17 , MDM2 , MYBL2 , RAD54L , and THSD1 ) for further characterization using qPCR. Among them are known marker genes that have already been described as regulated upon radiation, such as CDKN1A , RAD54L , and KRT17 (Fig. ). As stated by Christmann and Kaina , mammalian cells express DNA repair genes at a detectable basal level, and even a slight upregulation or downregulation may significantly alter the repair capacity of the cell. By convention, an expression change of ± 1.5–2-fold is considered biologically relevant. In our experience, the calculation of fold changes depends on the platform used to generate the data and on the bioinformatic normalization approach. Therefore, for comparison with the molecular karyotypes of the patients, we used a data set based on the FDR value < 0.05 comprising 2619 genes (Online Resource 2), with no regard to time and dose (duplicate genes were removed).

Irradiation-sensitive genes affected by CNVs in cancer patients

To analyze the impact of radiation-induced genes within unique patient-related CNVs, we compared the SNP array data with the gene expression signatures obtained after irradiation. Among the 2N patients, we detected six genes ( POLR3F , SEC23B , ZNF133 , C16orf45 , RRN3 , and NTAN1 ) that were overexpressed after irradiation and duplicated in the genomes of ALL patients whose second independent cancer was either meningioma or thyroid carcinoma. None of these genes has been described as promoting cancer, but ZNF133 has been identified as overexpressed in osteosarcoma . Among the 1N patients, we detected five genes ( ZCWPW2 , SYNCRIP , DHX30 , DHRS4L2 , and THSD1 ) that were differentially regulated after irradiation and located in duplicated regions. We analyzed the expression profile of THSD1 in three independent control (0N) and six patient-derived fibroblast cell lines (three 1N and three 2N) using qPCR and detected highly variable expression changes after radiation among controls as well as cancer patients (1N, 2N). We could not establish a clear connection between the duplication of THSD1 and increased expression before or after irradiation (Fig. ). Beyond the genes mentioned above, we also detected radiation-sensitive genes within common CNVs. The copy number of the DUSP22 gene is highly variable among individuals and, surprisingly, this gene changes its expression upon radiation treatment, probably contributing to the individual response to therapy.
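The comparison just described reduces to intersecting the radiation-responsive gene set (FDR < 0.05) with the genes located inside each patient's rare CNVs. The following Python sketch illustrates the idea with invented patient labels and placeholder gene sets; it is not the actual analysis script.

```python
# Sketch of matching radiation-responsive genes (FDR < 0.05) with genes inside
# each patient's rare CNVs. Patient labels and gene sets are placeholders.

radiation_responsive = {"POLR3F", "SEC23B", "ZNF133", "C16orf45", "RRN3", "NTAN1", "THSD1"}

patient_cnv_genes = {
    "patient_A": {"ABCC6", "POLR3F", "SEC23B"},
    "patient_B": {"CKAP2", "THSD1", "VPS36"},
}

for patient, genes in patient_cnv_genes.items():
    hits = sorted(genes & radiation_responsive)
    print(patient, hits)
```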
To rank the findings, we defined an aberration frequency, which estimates the incidence of a given aberration in a given cohort. Some alterations in the genome, such as those affecting FAM86B1 or GOLGA8A and a duplication in an intron of PTPRN2 , were up to three times more frequent in our cancer cases than in controls. A compilation of the results is given in Online Resource 1, Table: X-ray response. The highest response upon radiation (fold change) was calculated for GOLGA8A . The deletion occurs at the 5′ end of the retained-intron transcript variant 2 non-coding RNA. qPCR examination of the copy number status in the matched 1N8/0N11 and 2N8/0N11 samples revealed a loss in the 1N8 case, whereas in 2N8 the loss was heterozygous. We analyzed the expression of GOLGA8A before and after radiation in the corresponding matched samples. The 0N11 control showed an increase in expression after 2 h proportional to the radiation dose, whereas the cancer patient samples 1N8 and 2N8 showed a diminished response after irradiation (Fig. ).

Methylation analysis of duplicated genes

Gene expression is also modulated by methylation. In a previous study, we did not find global methylation changes in normal fetal fibroblasts 1–72 h after irradiation, neither in genic regions (promoters, 5′ UTRs, first exons, gene bodies, and 3′ UTRs) nor in intergenic regions . To analyze the possibility of methylation changes upon altered DNA content in the fibroblasts of our cancer patients, we chose two genes that presented with a CNV and were also differentially methylated in several cancer cell lines. GSTT2 is deleted in patient 2N7 (arr[hg19] 22q11.23(24,283,003-24,330,206)x1) and in 58 participants of the GHS control collective. In contrast to the hypermethylation of the GSTT2 CpG island, consisting of six CpGs, in the A549, MCF7, and EFO27 cancer cell lines, the patient's sample was hypomethylated, similar to the matched control sample 0N7 and the FancD1 fibroblast line. The analysis of the duplicated THSD1 gene promoter, with ten CpGs, showed results similar to those for the GSTT2 gene: in case 1N13 and the matched samples 2N19 and 0N20, the values corresponded to those of normal samples, in contrast to the hypermethylation in the two cancer cell lines ZR-75-1 and T47D (Fig. ).
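As a schematic for the promoter methylation comparison above, the mean methylation across the assayed CpGs can be compared between samples; the per-CpG percentages in the sketch below are invented for illustration and are not the measured study values.

```python
# Sketch: average per-CpG methylation (from bisulfite pyrosequencing) across a
# promoter region and compare samples. Percentages are invented examples.
from statistics import mean

gstt2_cpg_methylation = {           # six CpGs in the GSTT2 CpG island
    "patient_2N7": [4, 6, 5, 7, 3, 5],
    "control_0N7": [5, 4, 6, 5, 4, 6],
    "cancer_cell_line": [82, 88, 79, 91, 85, 87],
}

for sample, cpgs in gstt2_cpg_methylation.items():
    print(f"{sample}: mean methylation {mean(cpgs):.1f}%")
```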
Copy number variation in cancer patients

In this study, we focused on rare CNVs, which may have an impact on cancer predisposition and the occurrence of second cancers in sporadic, non-familial, non-syndromic childhood cancer cases. There have been some studies on rare copy number aberrations (CNAs) in hereditary cancer predisposition syndromes , whereas studies in sporadic cancer cases are sparse or do not state the cancer family history . The possibility that CNVs arose as a result of radiotherapy cannot be excluded but is unlikely, because the fibroblasts in our study were usually obtained several years after the last treatment, the number of aberrant cells decreases over time, and normal karyotypes are prevalent after 13 months post-therapy . The acquired CNVs reported in leukemia patients in other studies did not match the CNVs detected in our study, making it unlikely that they represent secondary events. In addition, none of the CNVs in our cohort matched the de novo CNVs reported in clonal descendants of irradiated human fibroblasts . Nevertheless, pre-existing somatic mutations acquired prior to treatment may be selected during chemotherapy/radiation and may lead to therapy-related secondary cancer . The aim, therefore, was to detect CNVs that could harbor genes that function as modifiers of cancer risk rather than being innocuous . Since six of our patients had received allogeneic bone marrow transplants, their blood DNA would represent the donor's profile. We therefore did not use EBV-transformed lymphoblasts but primary fibroblasts, which constitute a homogeneous cell population with intact cell cycle and DNA repair checkpoints. To date, few childhood cancer-predisposing mutations are known . The patients of our collective showed no germline mutations in high-penetrance cancer predisposition genes such as TP53 or BRCA1/BRCA2 , and none of the patients fulfilled the criteria for a genetic cancer (predisposition) syndrome. In our study, 75% of the former cancer patients displayed unique CNVs, and very few were shared between the 1N and 2N groups. Only one gene ( ABCC6 ) was partially duplicated in two patients, and a similar duplication was reported in an adult cancer patient by Villacis et al. . A deletion of the LINC01473 gene present in a 2N patient was also reported in a childhood cancer patient by Krepischi . In 2N patients, we saw aberrations, e.g., in chromosome 16p13.11, that harbor at least four genes ( ABCC1 , FOPNL , MYH11 , KIAA0430 ) involved in cancer development or present in rare CNVs in cancer patients. Another duplicated region, 9p13.3p13.2, includes the genes MELK , RNF38 , and GNE . The MELK gene is involved in cell proliferation and apoptosis, whereas multiple losses of RNF38 have been detected in CML samples. Another gene duplicated in this region is GNE , which has been reported to be overexpressed in cancer and whose loss is important for the induction of apoptosis (see Online Resource 1 for detailed information). At this point, we cannot completely exclude the possibility that the detected aberrations have some impact on primary and secondary tumor development in 2N patients. Similar findings were made for 1N patients. We detected a heterozygous loss of the ST18 gene in a 1N patient who suffered from leukemia, which has been described previously in two papers .
The finding of duplicated CKAP2 , THSD1 , and VPS36 genes in 13q14.3 in a 1N patient is very interesting, because CKAP2 is responsible for spindle bipolarity and chromosome stability and may represent a new factor contributing to eye tissue cancer development. As this patient was cured by surgical treatment only, therapy-related changes can be excluded in this case. Altogether, we would consider aberrations in SNX14 , SYNCRIP , CBR3-AS1 , and CKAP2 as putatively responsible for tumor development, or at least as important passenger aberrations, in the respective 1N patients. We verified a subset of the CNV results by qPCR and FISH analysis. Some of the CNV regions are duplicated in tandem, and some cases may present as mosaic duplications, which conforms with the theory stated by Hu et al. . As we showed in previous work, mosaic CNVs also occur in other cancer patients . The absence of selective pressure might preclude the phenotypic manifestation of a minor mosaic population in a phenotypically normal individual , but this does not exclude the possibility of progression or a change in the cell microenvironment toward cancer . Some conditions might not be associated with a specific gene dosage but rather with the simple presence of a structural change at a given position in the human genome. It has been stated before that intergenic regions may have an impact on cancer risk: a deleted intergenic locus may contain an enhancer that modulates breast cancer risk , or intergenic regions may harbor novel transcripts . We therefore conducted a survey to find altered regions with annotated transcription factor binding sites in our patient collective. We included in our study the analysis of regions classified to date as "gene free" and, for all conspicuous regions, the presence of transcription factor binding sites, which, to our knowledge, has not been done previously. These structural changes detected in cancer patients' genomes may perturb particular pathways regardless of gene dosage. The importance of the detected "gene free" loci will be the subject of further surveys. There are certain limitations to our study. Firstly, we were unable to collect samples from relatives, which would have allowed us to evaluate the familial status of an aberration, or the corresponding tumor material. Secondly, the involvement of mutations in genes unknown to date cannot be excluded. Thirdly, there is evidence that distinct tumor classes exist, one driven by mutations and the other driven by CNVs; the question of whether the detected CNVs represent driver or passenger mutations cannot be answered at this stage. Due to the chosen technique, only duplications and deletions are described in this study, while balanced structural changes remain undetected. Each patient displayed an almost unique pattern of aberrations, which is consistent with the idea of the multi-causation of cancer and with findings of spontaneous abnormalities in normal human fibroblasts from patients with Li-Fraumeni cancer syndrome and of chromosomal changes in non-cancerous breast tissue of breast carcinoma patients . Interestingly, abnormal gene expression in fibroblasts was detected in patients with Gorlin syndrome (GS), a hereditary disorder with tumorigenicity caused by constitutive hyperactivity of hedgehog signaling. The hyperactivated hedgehog signaling contributes to low miR-196a-5p expression and high MAP3K1 expression in human fibroblasts and mouse cells . The CNVs described in this study may consequently deregulate gene expression and important pathways.
Although it was not possible to link the occurrence of particular CNVs to 2N or 1N cancer incidence, our findings may inspire new insights into the regulation of cancer pathways. To our knowledge, this is the first such investigation of childhood cancer survivors, predominantly with ALL, using a matched triplet design and an additional 1000 well-chosen controls. Further studies with increased patient numbers are underway to corroborate our findings.

Radiation-sensitive genes inside copy number variations

We generated an extensive gene list representing the dose- and time-dependent response to irradiation damage in order to define sensitive genes within CNVs. qPCR analysis and former studies of radiation-induced transcriptional responses in quiescent fibroblasts support our results. We detected altered regions harboring genes that respond to irradiation and have not been described to date as radiation-sensitive, although some of them have been described in cancer. One gene ( ZNF133 ) that responded to irradiation was detected in a patient of the 2N collective and has been identified as overexpressed in osteosarcoma . More genes described in cancer were detected in 1N patients. The ZCWPW2 gene, which encodes a histone modification-related protein, was downregulated after radiation in controls; its locus was found to be duplicated in a patient who suffered from Hodgkin's lymphoma. SYNCRIP , an RNA-binding protein that controls the myeloid leukemia stem cell program , was found to be overexpressed after radiation and was duplicated in a patient with sarcoma. DHX30 , which was found to be frequently mutated in childhood AML , was duplicated in a patient with ALL. The THSD1 gene is often mutated in cancer and was found to be duplicated in a patient with unilateral retinoblastoma (1N13) lacking an RB1 mutation. None of these findings can explain the proneness to secondary cancers of the 2N participants of this study; one of the most likely reasons is the limited number of patients, and additional studies analyzing larger cohorts may uncover more responsible gene sites. We show exemplarily that gene expression may depend on copy number alterations, but there are also exceptions. THSD1 proved to be expressed in some individuals after irradiation in a copy number-independent manner. This is not unusual, because individual gene expression responses to genotoxic agents in fibroblasts have already been described. Such findings complicate future studies and make intensive studies at the RNA and protein levels necessary. Genomic alterations in gene copy number were also seen at some frequency in controls, which may indicate a certain individual plasticity upon damage. Genes like DUSP22 , which is deleted in some cases of cutaneous anaplastic large T cell lymphoma, may have more than one physiological substrate, and the regulation of specific signaling cascades by this enzyme may be cell type- and context-specific. Recently, DUSP22 was described to contribute to inflammatory bowel disease . Surprisingly, its copy number varies in fibroblasts of 0N cases as well as of our patients. Another gene that was affected in controls, and more frequently in our patients, may play a role in tumor tissue: significant biallelic deletions of GOLGA8A have been described in gastrointestinal tumors and pancreatic ductal adenocarcinomas. The downregulation detected in our patients correlates with the deletion status of the region, as also shown by Wang et al. .
It was shown that CNVs may be associated with aberrant methylation and have an impact on tumor prognosis . Thus, to compensate for the addition or loss of genetic material, affected genes may be fine-tuned by methylation . In our two analyzed gene loci ( THSD1 and GSTT2 ), no aberrant methylation in patients was detected in contrast to hypermethylation in the analyzed tumor cell lines. Nevertheless, additional methylation surveys should be conducted in further studies. To our knowledge, this is the first study that uses primary fibroblasts of childhood sporadic cancer cases. In conclusion, although we did not detect a consistent overall candidate gene, we describe potential vulnerable sites or rare CNVs in our collective, which may contribute to tumor development. Furthermore, we detected genes sensitive to radiation treatment that are transcriptionally altered by CNVs. As we detected aberrations seen also by other researchers, it is worthwhile to conduct further investigations in a larger collective and extensive study to address expression, cellular localization, putative deletion, and overexpression of genes to determine the impact of a given aberration on maintaining genome stability.
In this study, we focused on rare CNVs, which may have an impact on cancer predisposition and recurrent cancer incidence in sporadic, non-familial, non-syndromic childhood cancer cases. There have been some studies on rare copy number aberrations (CNA) in hereditary cancer predisposition syndromes , whereas studies in sporadic cancer cases are sparse, or the cancer family history is not stated . The possibility of CNVs occurring due to radiotherapy cannot be excluded but is unlikely, because the fibroblasts in our study were usually obtained several years after the last treatment, the number of aberrant cells decreases over time, and after 13 months post-therapy, normal karyotypes are prevalent . The findings of acquired CNVs in leukemia patients from other studies did not match the CNVs detected in our study, making them unlikely to be a secondary event. In addition, none of the CNVs in our cohort matched the reported de novo induced CNVs in clonal descendants of irradiated human fibroblasts . Nevertheless, pre-existing somatic mutations acquired prior to treatment may be selected during chemotherapy/radiation and may lead to therapy-related secondary cancer . The aim, therefore, was to detect CNVs that could harbor genes that function as modifiers of cancer risk rather than being innocuous . Since six of our patients received allogeneic bone marrow transplants, their blood DNA would represent the donor's profile. We, therefore, did not use EBV-transformed lymphoblasts, but primary fibroblasts, which constitute a homogeneous cell population with intact cell cycle and DNA repair checkpoints. To date, few childhood cancer-predisposing mutations are known . The patients of our collective showed no germline mutations in high-penetrance cancer predisposition genes like TP53 or BRCA1/BRCA2. None of the patients met the criteria for a genetic cancer (predisposition) syndrome. In our study, 75% of the former cancer patients displayed unique CNVs, and very few were shared between the 1N and 2N groups. Only one gene ( ABCC6 ) was partially duplicated in two patients, and a similar duplication was reported in an adult cancer patient by Villacis et al. . A deletion of the LINC01473 gene present in a 2N patient was reported to be deleted in a childhood cancer patient by Krepischi . In 2N patients, we saw aberrations, e.g., in Chr.16p13.11, that harbor at least four genes ( ABCC1 , FOPNL , MYH11 , KIAA0430 ) involved in cancer development or present in rare CNVs in cancer patients. Another duplicated region, 9p13.3p13.2, includes the genes MELK , RNF38 , and GNE . The MELK gene is involved in cell proliferation and apoptosis, whereas multiple losses of RNF38 were detected in CML samples. Another gene that is duplicated in this region is GNE , which has been reported to be overexpressed in cancer (see Online Resource 1 for detailed information); its loss is important for the induction of apoptosis. At this point, we cannot completely exclude the possibility that the detected aberrations may have some impact on primary and secondary tumor development in 2N patients. Similar findings were made for 1N patients. We detected a heterozygous loss of the ST18 gene in a 1N patient who suffered from leukemia, which has been previously described in two papers .
The finding in a 1N patient of duplicated CKAP2 , THSD1 , and VPS36 genes in 13q14.3 is very interesting because CKAP2 is responsible for spindle bipolarity and chromosome stability and may represent a new factor contributing to eye tissue cancer development. As the patient was cured by surgical treatment only, possible therapy-related changes can be excluded in this case. Altogether, we would consider aberrations in SNX14 , SYNCRIP , CBR3-AS1 , and CKAP2 as putatively responsible for tumor development, or at least an important passenger aberration, in the respective 1N patients. We verified a subset of the CNV results by qPCR and FISH analysis. Some of the CNV regions are duplicated in tandem mode and some cases may present as mosaic duplications, which conforms with the theory stated by Hu et al. . As we showed in previous work, mosaic CNVs also occur in other cancer patients . The absence of selective pressure might preclude the phenotypic manifestations of the minor mosaic population in a phenotypically normal individual , but this does not exclude the possibility of progression or change in the cell microenvironment toward cancer . Some conditions might not be associated with a specific gene dosage, but rather the simple presence of a structural change at a given position in the human genome. It was stated before that intergenic regions may have an impact on cancer risk. A deleted intergenic locus may contain an enhancer which modulates breast cancer risk or intergenic regions may harbor novel transcripts . We, therefore, conducted a survey to find altered areas with annotated transcription factor binding sites in our patient collective. We included in our study the analysis of regions considered "gene free" to date and, for all conspicuous areas, the presence of transcription factor binding sites, which, to our knowledge, has not been done previously. These detected structural changes in cancer patients' genomes may cause perturbation in particular pathways regardless of gene dosage. The importance of the detected "gene free" loci will be the subject of further surveys. There are certain limitations of our study. Firstly, we were unable to collect samples from relatives to evaluate the familial status of the aberrations, as well as the corresponding tumor material. Secondly, the involvement of mutations in genes unknown to date cannot be excluded. Thirdly, there is evidence that distinct tumor classes exist, one driven by mutations and the other driven by CNVs. The question of whether the detected CNVs represent driver or passenger mutations cannot be answered at this stage. Due to the chosen technique, only duplications and deletions are described in this study, while balanced structural changes remain undetected. Each patient displayed an almost unique pattern of aberrations, which is consistent with the idea of the multi-causation of cancer and with findings of spontaneous abnormalities in normal human fibroblasts from patients with Li-Fraumeni cancer syndrome and chromosomal changes in non-cancerous breast tissue of breast carcinoma patients . Interestingly, abnormal gene expression in fibroblasts was detected in patients with Gorlin syndrome (GS), a hereditary disorder with tumorigenicity caused by constitutive hyperactivity of hedgehog signaling. The hyper-activated hedgehog signaling contributes to low miR-196a-5p expression and high MAP3K1 expression in human fibroblasts and mouse cells . The described CNVs in this study may consequently deregulate gene expression and important pathways.
Although it was not possible to connect the occurrence of particular CNVs with 2N or 1N cancer incidence, our findings may inspire new insights into the regulation of cancer pathways. To our knowledge, this is the first time that such an investigation has been performed in childhood cancer survivors (predominantly ALL) using a matched triplet design and an additional 1000 well-chosen controls. Further studies are on the way with increased patient numbers to corroborate our findings.
Radiation sensitive genes inside copy number variations
We generated an extensive gene list, which represents the dose- and time-dependent response upon irradiation damage to define sensitive genes within CNVs. qPCR analysis and former studies with radiation-induced transcriptional responses performed on quiescent fibroblasts support our results. We detected altered regions harboring genes, which respond upon irradiation and were not described to date as radiation-sensitive, but some of the detected genes were described in cancer. One gene ( ZNF133 ), which responded upon irradiation, was detected in a patient of the 2N collective and has been identified as being overexpressed in osteosarcoma . More genes described in cancer were detected in 1N patients. The gene ZCWPW2 is a histone-modifying enzyme and was downregulated after radiation in controls. The gene locus was found to be duplicated in a patient who suffered from Hodgkin’s lymphoma. SYNCRIP , an RNA-binding protein that controls the myeloid leukemia stem cell program , was found to be overexpressed after radiation and was duplicated in a patient with sarcoma. DHX30 , which was found to be frequently mutated in childhood AML , was duplicated in a patient with ALL. The THSD1 gene is often mutated in cancer and was found to be duplicated in a patient with unilateral retinoblastoma (1N13), lacking an RB1 mutation. None of the findings may explain the proneness to secondary cancers in 2N participants of this study. One of the most likely reasons for this result is the limited number of patients. Additional studies by analyzing larger cohorts may thus uncover more responsible gene sites. We show exemplarily that gene expression may depend upon copy number alterations but there are also exceptions. THSD1 proved to be expressed in some individuals after irradiation on a copy number independent status. This is not unusual, because individual gene expression response after genotoxic agents in fibroblasts has already been described. Such findings complicate future studies and make intensive studies on RNA and protein level necessary. In some frequency, genomic alterations in gene copy numbers were seen also in controls, which may indicate a certain individual plasticity upon damage. Genes like DUSP22 , which is deleted in some cases of cutaneous anaplastic large T cell lymphomas, may have more than one physiological substrate and the regulation of specific signaling cascades by this enzyme may be cell-type and context-specific. Recently, DUSP22 was described to contribute to inflammatory bowel disease . Surprisingly, its copy number varies in fibroblasts in 0N cases as well as in our patients. Another gene that was affected in controls and more frequently in our patients may play a role in tumor tissue. Significant biallelic deletions of GOLGA8A have been described in gastrointestinal tumors and pancreatic ductal adenocarcinomas. The detected downregulation in our patients correlates with the deletion status of the region, as shown also by Wang et al., . It was shown that CNVs may be associated with aberrant methylation and have an impact on tumor prognosis . Thus, to compensate for the addition or loss of genetic material, affected genes may be fine-tuned by methylation . In our two analyzed gene loci ( THSD1 and GSTT2 ), no aberrant methylation in patients was detected in contrast to hypermethylation in the analyzed tumor cell lines. Nevertheless, additional methylation surveys should be conducted in further studies. 
To our knowledge, this is the first study that uses primary fibroblasts of childhood sporadic cancer cases. In conclusion, although we did not detect a consistent overall candidate gene, we describe potential vulnerable sites or rare CNVs in our collective, which may contribute to tumor development. Furthermore, we detected genes sensitive to radiation treatment that are transcriptionally altered by CNVs. As we detected aberrations seen also by other researchers, it is worthwhile to conduct further investigations in a larger collective and extensive study to address expression, cellular localization, putative deletion, and overexpression of genes to determine the impact of a given aberration on maintaining genome stability.
Precision Oncology: 2023 in Review
The scope of precision oncology continues to expand as drugs with new mechanisms of action enable therapeutic intervention on a wider array of targets in broader, biomarker-selected patient populations. By virtue of the advances in our understanding of specific mutation-based clinical implications and the epistatic relationship between co-occurring mutations, as well as the role that the immune environment plays in therapy selection, the long-standing paradigm of matching a single gene to a single treatment is rapidly evolving. This review, as the second installment in the Precision Oncology Year in Review series , uses OncoKB to offer a lens into the advances in precision oncology in 2023. On the basis of OncoKB, as of November 2023, twelve treatments were approved by the FDA for unique biomarker-selected indications, and six biomarker- and indication-specific treatments were listed in the National Comprehensive Cancer Network (NCCN) guidelines in the past year. In addition, compelling clinical evidence for two precision oncology therapies led to their inclusion as level 3 investigational agents in OncoKB . Here we discuss the growing array of targetable molecular alterations as well as the proteomic and immunologic biomarkers that are increasingly guiding patient matching to novel classes of medications, including antibody–drug conjugates (ADC) and proteolysis-targeting chimeras (PROTAC)/protein degraders, and how the distinct biology of individual mutant alleles has contributed to drug development efforts.
Over the past couple of years, novel approaches to drug design have resulted in new precision oncology therapies that are proving to be successful in addressing an increasing number of previously undruggable targets in the clinic. Epitomizing the cumulative results of these developments is our current emerging ability to target KRAS -mutant cancer, initiated with the success of selective KRAS G12C inhibitors. The KRAS G12C inhibitors sotorasib and adagrasib, both of which trap KRAS G12C in its inactive GDP-bound state, previously received accelerated approval for KRAS G12C -mutant non–small cell lung cancer (NSCLC). These inhibitors are now listed in the NCCN guidelines for additional KRAS G12C -mutant histologies, including pancreatic and colorectal cancers (the latter indication in combination with either of the anti-EGFR monoclonal antibodies cetuximab or panitumumab). Another, more potent inhibitor of GDP-bound KRAS G12C , divarasib, was shown to achieve an initial overall response rate (ORR) of 54% and progression-free survival (PFS) of 13.1 months in patients with NSCLC treated on a phase I trial . KRAS G12C has a slightly increased affinity for GTP versus GDP, and this past year, the field pivoted to develop KRAS G12C inhibitors that trap the oncoprotein in its activated or so-called "on" form. For example, FMC-376 is a covalent inhibitor of both the activated and inactivated forms of KRAS G12C , and RMC-6291 employs the formation of a so-called "tricomplex" between KRAS, cyclophilin A, and the drug to inhibit KRAS G12C in its activated state. There has also been a pronounced emphasis on combining KRAS G12C inhibitors with other agents this year. These combination strategies include supplementing KRAS G12C inhibitor treatment with drugs that target emerging biomarkers such as integrin beta 4, as well as with immunotherapy, chemotherapy or other precision oncology drugs, including those that target known resistance alterations arising in the receptor tyrosine kinase (RTK) or mitogen-activated protein kinase (MAPK) pathways. Preliminary data on the combination of the KRAS G12C "off" inhibitor LY3537982 with pembrolizumab showed an ORR of 78% in NSCLC with no prior G12C inhibitor exposure and 25% after prior G12C inhibitor exposure . Non-G12C KRAS alleles are also being targeted, with both mutant-selective and pan-KRAS inhibitors being explored. For example, KRAS G12D , the most common KRAS allele pan-cancer, is now potentially targetable by agents including RMC-9805, a tricomplex inhibitor; MRTX1133, a noncovalent inhibitor; and ASP3082, a protein degrader. Multiallele KRAS inhibitors such as RMC-6236 achieved clinical responses in G12D- and G12V-mutant cancers in a phase I trial . Lastly, pan-KRAS inhibitors that avoid inadvertent HRAS and NRAS activation in KRAS wild-type cells are in preclinical development . Other targets previously considered undruggable include the YAP transcription coactivator, the phosphorylation and subsequent degradation target of the Hippo kinase cascade pathway. Mutations of Hippo pathway components, such as the tumor suppressor NF2 , have been observed to arise in IDH -mutant low-grade gliomas, mesotheliomas, and HPV-negative head and neck squamous cancers. Clinical responses were observed in an ongoing trial testing the YAP/TEAD inhibitor VT3989 (7), where the TEAD family of transcription factors is known to bind to and potentiate YAP oncogenic activity.
Additionally, new targets such as cyclin E1, part of the cell-cycle pathway and often deregulated in cancer by amplification, are also being explored therapeutically, with amplification of cyclin E1 being used to guide patient selection for treatment with the newly developed CDK2 inhibitor BLU-222.
Kinases continue to represent the stronghold of the precision oncology armamentarium, with each year yielding novel kinase inhibitors that are characterized by improved potency, increased selectivity against specific kinase isoforms, or optimized mutant selectivity. This year, the FDA approved quizartinib, a type II FLT3 inhibitor that is more potent against FLT3-ITD mutations than earlier-generation FLT3 inhibitors. Patients with acute myeloid leukemia (AML) who received quizartinib in addition to first-line chemotherapy achieved a median overall survival of 31.7 months compared with 15.1 months in patients who received first-line chemotherapy alone . In solid tumors, RLY-4008, an isoform-selective, covalent FGFR2 inhibitor designed to avoid off-target side effects associated with FGFR1 and FGFR4 inhibition, showed promising results in patients with FGFR2-positive cholangiocarcinoma . Trials testing FGFR3-selective inhibitors (TYRA-300 and LOXO-435) were also launched this year . In addition to selectivity for individual mutations, inhibitors designed to selectively target single and compound acquired resistance mutations arising from treatment with earlier-generation ALK/ROS1 inhibitors continue to be developed and approved. In November 2023, repotrectinib, a next-generation ROS1 and TRK inhibitor, was FDA-approved for ROS1 fusion-positive NSCLCs. Importantly, this drug demonstrated potent activity against ROS1 and NTRK TKI resistance mutations, including solvent front mutations. Relevant to the ROS1 approval, repotrectinib inhibited ROS1 fusion-positive NSCLCs bearing ROS1 G2032R that arises after progression on crizotinib or entrectinib. Currently in clinical trials are the TKIs NVL-520, a ROS1-selective agent that also targets ROS1 G2032R , as well as NVL-655, with activity against ALK fusion-positive NSCLCs harboring ALK G1202R/L1196M and ALK G1202R/T1151M compound resistance mutations. BLU-945, a reversible, wild-type-sparing inhibitor of EGFR+/T790M and EGFR+/T790M/C797S resistance mutants that maintains activity against the sensitizing mutations, especially L858R, similarly achieved responses in compound EGFR mutants . Building on the cumulative observations that pan-PI3K inhibition leads to hyperglycemia in treated PIK3CA-mutant patients with breast cancer, observations that led to the 2019 approval of the PI3K-alpha-selective inhibitor alpelisib in combination with fulvestrant, the industry continues to pivot to wild-type-sparing drug design. This year, the mutant-selective inhibitor RLY-2608 was tested as monotherapy and in combination with fulvestrant in a phase I trial, and neither severe nor dose-limiting hyperglycemia secondary to wild-type PI3Kα inhibition has been observed thus far . Relatedly, 2023 saw the approval of the pan-AKT inhibitor capivasertib for patients with PIK3CA/AKT1/PTEN-altered, hormone receptor-positive, HER2-negative advanced breast cancer. The biomarker-based approval of capivasertib was surprising considering that the published median progression-free survival was 7.2 months with capivasertib/fulvestrant versus 3.6 months with placebo/fulvestrant in the overall cohort, and 7.3 months versus 3.1 months in the AKT pathway-altered cohort . In parallel with the development of newer kinase inhibitors, the indications in which established kinase inhibitors provide meaningful clinical benefit have also continued to expand.
For example, alectinib, initially developed for ALK fusion–positive NSCLC, has now been added to the NCCN guidelines for ALK -positive inflammatory myofibroblastic tumors. Combination dabrafenib plus trametinib, previously approved for BRAF V600E -mutant solid tumors, received additional approval in the first-line setting for low-grade pediatric gliomas, where treated patients achieved a median PFS of 20.1 months, compared with 7.4 months with chemotherapy, as well as a decreased rate of high-grade adverse events . Other kinase inhibitors have been retired. Examples include Debio1347 for FGFR1 amplifications and mobocertinib for EGFR exon 20 insertions. It is notable that sponsors decided not to pursue confirmatory studies mandated for regulatory approval for several drug indications despite their documented activity. Infigratinib previously received accelerated approval for FGFR2-positive cholangiocarcinoma based on a response rate of 23.1% , and the RET inhibitor pralsetinib previously showed an ORR of 71% among medullary thyroid cancers that had not been previously treated with cabozantinib and/or vandetanib . Beyond kinase inhibitors, triple combinations of PARP inhibitors with the hormone therapy abiraterone and the steroid prednisone were approved in 2023 for select patients with prostate cancer carrying mutations in genes involved in homologous recombination repair (HRR). The PARP inhibitors olaparib and niraparib, each in combination with abiraterone and prednisone, received regulatory approval for metastatic, castration-resistant prostate cancer with BRCA1/2 alterations. Additionally, the TALAPRO-2 study, which tested the efficacy of combination treatment with the PARP inhibitor talazoparib and the androgen receptor inhibitor enzalutamide, resulted in the approval of this regimen for patients with HRR mutations, including in BRCA1/2, ATR, FANCA, MLH1, MRE11, NBN, ATM, PALB2, CDK12, CHEK2, and RAD51C . Notably, while the subgroup of all HRR-deficient patients showed a statistically significant benefit for talazoparib plus enzalutamide versus placebo plus enzalutamide, subgroup analysis indicates that this signal is likely primarily driven by the presence of BRCA2 mutations . Indeed, analysis of treatment benefit in ATM- or CHEK2-mutant subgroups showed no significant PFS differences . Lastly, ivosidenib, a precision oncology drug targeting the mutant metabolic enzyme IDH1 that was previously developed for AML and cholangiocarcinoma, was added to the NCCN guidelines for IDH1 -mutant oligodendrogliomas. The IDH1 inhibitor olutasidenib received approval for AML following a study of IDH1 inhibitor–naïve relapsed/refractory AML, with an ORR of 48% and a median survival of 11.6 months . Beyond extending the range of histologies for which IDH1 inhibitors have been studied, IDH1/2-targeted drugs such as vorasidenib, which achieved improved PFS versus placebo (27.7 vs. 11.1 months) in IDH1/2-mutant low-grade gliomas, are also being used for lower-grade tumors .
Beyond DNA- and RNA-based targets, there has been significant expansion in drug development for protein targets. Among the most robust examples of protein targeting has been the evolution of therapies targeting HER2. Indications for the so-called naked or nonconjugated monoclonal anti-HER2 antibody trastuzumab, when used in combination with the small-molecule HER2 kinase inhibitor tucatinib, have expanded. Combination tucatinib plus trastuzumab had previously been used with chemotherapy for breast cancer and gained regulatory approval for HER2-positive/RAS wild-type colorectal cancers this year. Similarly, the combination of trastuzumab and pertuzumab was newly included in the NCCN guidelines for biliary tract cancers. For HER2 ADC therapy, a basket trial of trastuzumab deruxtecan (T-DXd) demonstrated the potential tumor-agnostic utility of protein expression using HER2 IHC for biomarker selection. The drug was previously approved for HER2-positive gastric and breast cancers, and preliminary data showed an ORR of 37% in patients with any tumor staining IHC 2+ or 3+ and 61% for the 3+ cohort . Moving beyond HER2, a new wave of monospecific and bispecific ADCs has emerged, including those targeting receptor tyrosine kinases. For example, a number of ADCs and bispecifics are currently in trial for MET overexpression, including REGN5093-M114 and ABBV-400. A dual EGFRxHER3-directed ADC, BL-B01D1, showed preliminary efficacy across a variety of tumor types, with an ORR of 62% in patients with EGFR-mutant NSCLC, 46% in nasopharyngeal carcinoma, and additional activity in small-cell lung cancer (SCLC) and head and neck squamous cell carcinoma . ABBV-011 is an ADC designed to target SEZ6, a tumor-specific cell-surface protein that has been found to be overexpressed in neuroendocrine tumors such as SCLC. Data from a phase I trial testing ABBV-011 in patients with relapsed or refractory SCLC, a patient population with limited molecularly targeted therapeutic options, showed an ORR of 25% . Perhaps the most successful example among ADCs this year was the FDA approval of mirvetuximab soravtansine-gynx for patients with platinum-resistant ovarian, peritoneal, or fallopian tube cancer. Mirvetuximab soravtansine-gynx targets the folate receptor, FOLR1, which is overexpressed on the cell surface of many epithelial-derived cancers. Together with the drug, the VENTANA FOLR1 RxDx Assay was approved as a companion diagnostic for FOLR1 biomarker testing. Folate receptor alpha staining of 2+ by IHC in at least 75% of viable cells is required for a patient to be eligible for treatment . Efforts to target folate receptor alpha have evolved from ADCs to the smaller-sized nanoparticle–drug conjugate ELU001, the latter designed to facilitate better tumor penetration. Beyond ADCs, protein degradation therapy is making a comeback. This year saw the FDA approval of a next-generation selective estrogen receptor degrader, elacestrant, and the introduction of novel KRAS and BRAF V600E degraders. Elacestrant received approval for patients with estrogen receptor (ER)–positive, endocrine therapy–refractory, HER2-negative breast cancer with an ESR1 mutation. ESR1 mutations have been associated with resistance to hormone therapy, due in part to estrogen-independent signaling. Treatment with elacestrant notably improved PFS in ESR1 -mutant and wild-type patient cohorts, with an HR of 0.55 and 0.70, respectively, in a phase III trial . Combination strategies, such as with PI3K inhibition, are currently being explored.
PROTACs/protein degraders designed to address molecular alterations in known oncogenes, such as KRAS and EGFR , were also tested in 2023. A bifunctional degradation activating compound of BRAF V600E , CFT1946, which inhibits both the kinase activity of the oncoprotein and paradoxical MAPK activation by preventing oncoprotein dimerization, entered clinical trials for patients with BRAF V600 -mutant disease.
Composite biomarkers, such as tumor mutational burden (TMB) and microsatellite instability (MSI) status, have continued to be used to tailor immunotherapeutic approaches to individual cancers, leading to further drug approvals. For example, the checkpoint inhibitor dostarlimab received regulatory approval in combination with chemotherapy for patients with MSI-high advanced or recurrent endometrial cancers. Recognition of the role of genomic instability events, such as chromothripsis, in oncogenesis is likely to propel further multi-omic biomarker development in the future. Taking notes from the ADC playbook, immune-stimulating antibody conjugates (ISAC) have shown preliminary evidence of drug activity. BDC-1001 is an example of an ISAC consisting of a mAb conjugated to a Toll-like receptor (TLR) 7/8 agonist that primes the microenvironment for immune rejection of the tumor through secretion of cytokines. By binding to a cell-surface protein such as HER2, ISACs are designed to elicit a phagocytic response and T cell–mediated antitumor immunity. Preliminary activity from BDC-1001 monotherapy and nivolumab combination cohorts across HER2-amplified or HER2 protein–expressing tumors was reported . In addition, T-cell receptor (TCR) and neoantigen-based therapies, including those with HLA restriction, continue to be explored, such as TCR therapies designed to address peptide neoantigens produced by mutant PIK3CA , KRAS G12D , and FLT3 , among other alterations.
As novel classes of drugs have emerged as viable options for patient treatment, the complexity of biomarkers has increased, and clinical trial design has similarly evolved and adapted to accommodate this increased complexity. In the age of precision oncology many molecularly selected therapies now achieve target inhibition at doses below the maximum tolerated dose (MTD). Moreover, as clinical responses become more durable, long-term tolerability beyond the dose-limiting toxicity period has come under scrutiny. This year, the FDA announced the launch of Project Optimus to facilitate improved dose-optimization strategies through multiple mechanisms, including by randomization of patients to different dose levels . In parallel with the FDA guidance on optimizing dose have been efforts to expedite trials to enable early access to effective therapies. For combinations of established precision oncology drugs and novel therapies, the FDA allowed the early introduction of combination therapy following a lead-in period of the new treatment to allow for characterization of toxicity, safety, pharmacodynamics, and pharmacokinetics. For example, preclinical data demonstrated that select tumors that have progressed on FDA-approved targeted oncogene inhibitors may be resensitized by adding PF-07284892, an inhibitor of the SHP2 tyrosine phosphatase. While most phase I trials require extensive testing of novel drugs as monotherapy prior to allowing combinations, recognition of the need to expedite combination therapy enabled patients to undergo a lead-in period on the SHP2 monotherapy, followed by the addition of the approved inhibitor at progression . Trial designs also must account for changes in clinical characteristics of the patients being treated, including the inevitable shift of interventions to earlier-stage disease in place of focusing solely on late-stage disease or progression. Mirroring the use of blood-based testing for minimal residual disease in hematologic malignancies, circulating tumor DNA is increasingly being studied to define which patients with solid tumors will gain the greatest benefit from such earlier-stage interventions. For example, therapies are now being tested for patients who have no radiologic signs of disease, but who nonetheless have persistent molecular traces of cancer and who may benefit from therapeutic intervention. Improving Access to Precision Oncology for All Patients An additional trial design challenge derives from the need to ensure that all patients benefit from precision oncology's advances. Rates of genetic counseling tend to be lower in non-white patients, even when financial barriers are removed . Therapeutic approaches that rely on germline genetic differences may further accentuate disparities. In the immunotherapy space, modeling of antigen presentation contributing to HLA allele selection-based drug design has relied heavily on predominantly white patient populations. HLA alleles are known to differ between patients of different ancestries, and several immunologic therapies have been tailored for the HLA-A*02:01 allele, which is most common in white populations. Thus, for example, recent promising data generated by clinical testing of TAEST16001, a NY-ESO-1–directed TCR therapy given with IL2, showed a 41.7% response rate among 12 patients with soft-tissue sarcomas. Since only patients with the HLA-A*02:01 allele are eligible for treatment with TAEST16001, the promise of this drug may only be realized in a subset of patients . 
Indeed, in a pan-cancer study of more than 45,000 patients, those with African ancestry also had a lower rate of somatic actionable alterations . Extending the benefits of precision oncology to all patients requires identifying targetable germline and somatic variants across diverse populations, as well as ensuring that trial eligibility criteria and designs facilitate broad access.
Advances in precision oncology have enabled the development of multiple novel therapies and combinations this year. These have included treatments that more selectively inhibit their target of interest, including allele and isoform-specific inhibitors, as well as drugs like elacestrant for ESR1-mutant hormone receptor–positive breast cancer that are designed to address known mechanisms of resistance to previously approved therapies. The armamentarium of drugs for molecular alterations is also expanding, with many novel classes of therapies including ADCs, PROTACS/protein degraders, and TCR therapies designed to address protein and peptide targets. The design of clinical trials that support these developments has evolved to enable optimized dosing for tolerability and to facilitate accrual of patients with earlier-stage disease. As both novel drug mechanisms emerge and the biomarkers used to match patients to therapies evolve, cross-talk between precision oncology, molecular oncology, immuno-oncology, and proteomics is yielding therapeutic options for an expanded population of patients.
Application of CUSUM analysis in assessing learning curves in robot-assisted sacrocolpopexy performed by experienced gynecologist
Pelvic organ prolapse (POP) affects approximately 40% of women worldwide, and the prevalence is expected to increase with an aging population . Countries in Asia, including but not limited to South Korea, Japan and China, are aging rapidly. This calls for necessary attention to the treatment of POP to improve the health quality of elderly women. Abdominal sacrocolpopexy (ASCP) with mesh interposition has been associated with the highest durability and lowest recurrence of level 1 apical prolapse . However, it is also associated with increased pain, postoperative comorbidities and longer hospitalization. The adoption of laparoscopic sacrocolpopexy (LSC) to alleviate such complications accompanying the laparotomy approach has been limited by a steep learning curve . The superior depth perception and greater dexterity of robotic surgery offer a promising alternative to overcome the obstacles faced during open and laparoscopic approaches to surgically treating POP. Despite these advantages provided by robot-assisted surgery, the benefits are uncertain in terms of higher costs and longer operative time . The operative time can vary depending on the surgeon's competency, the patient's characteristics and coordination within the surgical team. Therefore, studies investigating learning curves based on operative time are important not only for optimizing patient outcomes and deciding cost-effectiveness, but also for evaluating feasibility for future inexperienced surgeons. Previous studies of surgical learning curves for robot-assisted sacrocolpopexy (RSCP) have measured operative variables, operative outcomes and complications with various statistical methods such as graphical inspection, logistic regression or cumulative sum (CUSUM) analysis . Cumulative sum (CUSUM) analysis, a statistical method initially developed for quality control in the manufacturing industry, is able to detect even subtle shifts in the parameters of any given procedure and present a visual representation of the trend as the procedure is repeated . Applying CUSUM analysis to surgical learning curves can allow real-time monitoring of surgeon proficiency and competency by detecting fine patterns after controlling for random variations . The aim of this study, therefore, was to assess the learning curve of RSCP by applying CUSUM analysis based on operation time, complication rate and conversion rate to laparotomy. This retrospective study included 50 consecutive RSCP surgeries from June 2018 to June 2023 by a single experienced gynecologist. The Institutional Review Board of the Kyung Hee University Hospital at Gangdong approved the protocol for this study (IRB no: 2024-01-025). Data were collected by review of electronic medical records including patient demographics, intraoperative parameters and postoperative outcomes. Basic patient information, such as age, body mass index (BMI), parity, menopausal status, American Society of Anesthesiologists (ASA) score, and past medical and surgical history, was retrieved retrospectively from medical archives.
Intraoperative parameters such as concomitant procedures, total operative time (op time), change in hematocrit and any intra-operative complications were also collected, along with length of hospitalization and short-term postoperative complications. Total operation time was defined as the time from first incision to that of the final closure. RSCP was carried out with the aid of the da Vinci Xi system (Intuitive Surgical, Inc, Sunnyvale, CA). Three 8-mm robotic trocars and a 12-mm trocar were placed. Depending on supply circumstances, one of two types of mesh was used: (1) a non-absorbable polypropylene mesh (Prolene ® , Ethicon, Johnson & Johnson, USA) or (2) a partially-absorbable (glycolide–co-caprolactone) (75/25) polypropylene-composite mesh (Seratex ® , Serag-Wiessner GmbH & Co. KG, Germany), inserted to bridge and fix the anterior and posterior vagina to the sacral promontory. 2-0 polydioxanone (PDS II, Ethicon, Somerville, NJ) suture was used to secure the mesh to the vagina. After incising the peritoneum along the right pelvic sidewall from the sacrum to the cul-de-sac, 1-0 Prolene™ polypropylene suture was used to secure the tail of the mesh to the sacral promontory. The peritoneum was then closed with 1-0 coated VICRYL (polyglactin 910) suture to completely cover the mesh. Assessment of the surgical learning curve was performed using risk-adjusted cumulative summation (CUSUM) methodology in terms of op time and the presence of any intra- and post-operative complication. The cumulative sum of the operation times (CUSUM OT ) was computed for each RSCP surgery in chronological order by summing the differences between the individual op time (x i ) and the mean op time (µ) of all cases. The CUSUM at op time n (CUSUM OTn ) is calculated as follows: $$\mathrm{CUSUM}_{OT_{n}} = \sum_{i=1}^{n}\left(x_{i} - \mu\right)$$ . The CUSUM value for case one represents the difference between its op time and the mean operative time of all cases. Subsequently, the CUSUM for case two is the sum of the difference in op time for case two and the CUSUM of case one. This process is repeated until CUSUM values for all cases are obtained. Breakpoints in the learning curves were determined retrospectively using piecewise linear regression. A broken-line model was employed to identify case numbers marking transitions between phases of the learning curve, including Learning (Phase 1), Proficiency (Phase 2), and Competency (Phase 3), based on op time. The breakpoints were rounded to the next whole number. Continuous variables, such as age, length of hospitalization and op time, were reported as mean (standard deviation). One-way analysis of variance (ANOVA) was employed to compare continuous variables. Categorical variables were expressed as percentages and analyzed using the chi-square test. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS), version 28.0 for Windows (IBM Corp, Armonk, NY, USA). A p-value of < 0.05 was considered statistically significant. The construction of CUSUM learning curves and piecewise linear regression analysis was conducted using RStudio, version 4.2.2.
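The CUSUM and breakpoint calculations described above can be reproduced outside of SPSS and RStudio. The sketch below is a minimal Python illustration rather than the authors' code: op_times is a placeholder for the 50 consecutive operative times (the per-case values are not reported), and the exhaustive two-breakpoint search is a simple stand-in for the broken-line (piecewise linear) regression model used in the study.

```python
import numpy as np

def cusum_ot(op_times):
    """CUSUM of operative times: running sum of (x_i - mean), matching the
    formula above; by construction the curve returns to ~0 at the last case."""
    x = np.asarray(op_times, dtype=float)
    return np.cumsum(x - x.mean())

def two_breakpoint_fit(cusum):
    """Crude breakpoint search: try every pair of cut points and keep the pair
    that minimises the residual error of three separately fitted line segments."""
    n = len(cusum)
    cases = np.arange(1, n + 1)
    best_sse, best_cuts = np.inf, None
    for b1 in range(3, n - 6):
        for b2 in range(b1 + 3, n - 3):
            sse = 0.0
            for lo, hi in ((0, b1), (b1, b2), (b2, n)):
                coef = np.polyfit(cases[lo:hi], cusum[lo:hi], 1)
                sse += np.sum((np.polyval(coef, cases[lo:hi]) - cusum[lo:hi]) ** 2)
            if sse < best_sse:
                best_sse, best_cuts = sse, (b1, b2)
    return best_cuts  # cases in Phase 1, and in Phases 1 + 2

# op_times = [...]  # 50 consecutive operative times in minutes (placeholder)
# curve = cusum_ot(op_times); phase1_end, phase2_end = two_breakpoint_fit(curve)
```

A dedicated segmented-regression routine (for example, the R segmented package; the specific package used by the authors is not stated) would additionally provide confidence intervals for the breakpoints, which this simple grid search does not.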
A total of 50 consecutive RSCP surgeries were performed from June 2018 to June 2023 by an experienced surgeon. The baseline characteristics and surgical details of the study population are summarized in Table . The study population consisted of 50 females with a mean age of 58 years. Mean BMI was 24.2 kg/m 2 . The number and percentage of participants presenting with pelvic organ prolapse quantification (POP-Q) stages 2, 3 and 4 were 28 (56%), 20 (40%) and 2 (4%), respectively. The mean op time was 222.4 ± 64.3 min, with 45 cases (90%) of concomitant robotic-assisted laparoscopic hysterectomy. The median decrease in hematocrit was 6.5%. The learning curve of RSCP, represented by a second-order polynomial curve of best fit, is shown in Fig. . The breakpoints at which the learning phase changes in RSCP op time were determined using piecewise linear regression (Fig. ). The regression identified breakpoints at case 8.47 (95% CI 8.0, 9.0) and case 34.41 (95% CI 32.7, 36.1), with an R 2 value of 0.87, which agrees with that of the second-order polynomial equation. The breakpoints were rounded up to the next whole numbers, cases 9 and 35. The initial learning curve phase (Phase 1) shows that the surgeon was able to complete the learning phase in every parameter of surgical performance after 9 cases. The subsequent 26 cases led to the achievement of expert competence (Phase 2). The Learning, Proficiency, and Competency phases consisted of 9, 26, and 15 cases, respectively, in this consecutive series. This suggests that the surgeon achieved proficiency after the first 9 cases and competency after 35 cases. The comparison of patient characteristics and perioperative parameters among the three phases is summarized in Table . The mean op times in the Learning, Proficiency, and Competency phases were 338.1 ± 57.4 min, 213.0 ± 34.3 min and 179.9 ± 28.0 min, respectively. A significant decrease in op time was observed across the three phases ( p = 0.000), with a larger difference between the Learning and Proficiency phases. There were no significant differences in baseline patient characteristics among the three phases except for the Ba point of the POP-Q examination ( p = 0.005), indicating that op time was not affected by the degree of prolapse in the Competency phase. There were no intraoperative or short-term post-operative complications during the span of this study. Furthermore, there were no conversions because of robotic surgery failure. CUSUM analysis based on complication and conversion rates, therefore, was not available. For the interpretation of our CUSUM results, it is important to realize that a downward slope indicates a shorter op time than expected, while an upward slope indicates a longer op time than expected. A prolonged upward trend may call for further investigation and identification of possible reasons behind the sudden increase in op time. Our data show a plateau in op time after the first 8 cases, followed by a steep reduction in the CUSUM of RSCP after 35 cases. Proficiency was thus achieved by case 9, and competency was achieved by case 35. In the competency phase, op time was not affected by the degree of prolapse. Aside from the decrease in op time, several other variables can be used for monitoring and auditing surgical performance, such as estimated blood loss, pain medication and hospital stay; however, no statistically significant differences were noted among the three phases (data not shown). Moreover, no peri-operative or short-term post-operative complications were reported during this study.
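As a rough illustration of the phase comparison reported above, the sketch below assigns cases to the three phases using the rounded breakpoints (cases 9 and 35) and compares mean op times with a one-way ANOVA. The per-case operative times are simulated placeholders chosen only to resemble the reported phase means; the study's raw per-case data are not shown here.

```python
import numpy as np
from scipy.stats import f_oneway

# Placeholder operative times (minutes) for 50 consecutive cases, centred on the
# reported phase means (338, 213 and 180 min); not the actual study data.
rng = np.random.default_rng(0)
op_times = rng.normal(
    loc=np.r_[np.full(9, 338), np.full(26, 213), np.full(15, 180)], scale=30)

# Phase assignment using the rounded breakpoints: cases 1-9, 10-35 and 36-50
learning, proficiency, competency = op_times[:9], op_times[9:35], op_times[35:]
f_stat, p_value = f_oneway(learning, proficiency, competency)  # one-way ANOVA
print(round(learning.mean()), round(proficiency.mean()),
      round(competency.mean()), p_value)
```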
There are very few published papers about the learning curve of RSCP, and the analysis methods differ from one another. By graphical inspection of op time, Akl et al. reported a 25.4% decrease in op time after the first 10 cases, with the last 30 cases having a mean op time of 167.3 min . Geller et al. reported a decline in op time of > 1 h after the first 10 cases, with a median op time of 254 min for the remaining 137 cases. A significant decline after 20 cases was observed for critical steps of the procedure, with an inflection point marking a considerable reduction in performance time at 60 cases according to the split group method (dividing the data into consecutive groups and comparing group means) and cubic smoothing-curve analysis . Myers et al. applied CUSUM not to a learning curve but to monitor maintenance of proficiency against a target complication rate of 10% . Linder et al. reported that median op time plateaued after the first 60 cases, falling from 5.3 to 3.6 h. Proficiency, as determined by a risk-adjusted CUSUM analysis of complication rate, was achieved after approximately 84 cases. The rate of intraoperative or grade 2 + postoperative complications was reported to be 26.8% . Sharma et al., using B-spline regression and a sequential grouping model for op time, reported that proficiency was noted at 25 cases and efficiency at 36 cases, with no significant improvement in op time after 60 cases and a mean op time of 247 min after 36 cases . Van Zanten et al. reported that op time dropped after 20–24 cases and stabilized between 24 and 29 cases, with a mean op time of 173 min. Proficiency based on CUSUM analysis of the rate of intraoperative complications was obtained after 78 cases, and the rate of intraoperative complications was reported to be 1.9%. The existing literature investigating the learning curve of RSCP is limited, outcomes were analyzed with different definitions, and confounding factors such as differences in prior surgeon experience were not eliminated. This makes it challenging to draw concrete findings and conclusions from the literature so far. The importance of understanding the learning curve and establishing a surgical training program to enable safe and effective surgery cannot be overemphasized, and the first step in doing so is reaching a consensus on a standardized system for reporting outcomes. CUSUM analysis has the potential to be adopted as a standardized self-monitoring tool for assessing learning curves of surgical procedures, owing to its ability to efficiently detect and visualize subtle trends in parameters. However, it is also important to be aware of its limitations. The most blatant limitation lies within its strength: CUSUM is primarily effective at detecting trends and shifts in performance, without the ability to provide insights into the underlying reasons for changes, such as patient selection, trainee involvement or concomitant procedures . This leaves room for misinterpretation, leading to incorrect conclusions about a surgeon's proficiency. Risk-adjusted CUSUM analysis may serve to compensate for CUSUM's inability to account for variability in case complexity. Previously reported case numbers for proficiency are considerably higher than our results. The discrepancy in surgeon experience and individual skill set is a possible explanation: this study monitored 50 RSCP procedures performed by a single gynecologist with extensive surgical experience. This surgical experience may also explain the absence of intraoperative complications. It was also possible to obtain a triphasic CUSUM curve, with inflections at cases 9 and 35, based on overall operative time. The completion of all procedures by a single surgeon confers homogeneity and is a strength of our study.
However, a single surgeon may also be viewed as a limitation due to the lack of generalizability. Additional limitations include the retrospective study design and the resulting lack of specific segmentation of each step, such as docking, console, concomitant operative and suture times. Concomitant procedures such as total and/or subtotal hysterectomy, salpingo-oophorectomy, anti-incontinence surgery and rectocele repair were included in the total operative time. Concomitant operative times, compared with RSCP operative times, occupy a minor portion of the total operation time; however, this challenges the homogeneity of the data and must be taken into consideration. Operative time also reflects the coordination of several factors, including the operative platform and the surgical team, comprising assistants and anesthesiology. As such, overall op time might be a better indicator of surgical proficiency. In conclusion, according to CUSUM analysis, surgical proficiency in RSCP was attained after the first 9 cases, and stabilization of operation time was achieved after 35 cases. This statistical tool has proven to be useful in objectively assessing learning curves for new surgical techniques, and the transition from laparoscopic sacrocolpopexy to RSCP seems achievable. This, however, may vary with each surgeon's manual dexterity and experience level. Further investigation involving several surgeons and institutions is needed to define a more accurate and generalized learning curve of RSCP.
Periodontal disease and serum uric acid levels in the absence of metabolic syndrome: is there a link? A study on a sample of Cameroonian adults
Scientific rationale for study The relationship between periodontal disease and hyperuricemia remains poorly studied in Sub-Saharan Africa and Cameroon, especially among individuals without metabolic syndrome. A better understanding of this relationship will improve knowledge and strategies for the prevention and management of these conditions. Principal findings We found that periodontal disease affects three out of four adults not suffering from metabolic syndrome, and hyperuricemia affects one-fifth. There appears to be no link between serum uric acid levels and periodontal disease in this group. Practical implications The relationship between serum uric acid levels and periodontal disease may be dependent on elements of the metabolic syndrome. In the absence of these elements, it may not be necessary to assess for hyperuricemia. Further studies are needed to better understand the salivary interaction between uric acid and periodontium in our population.
Periodontal diseases (PD) are disorders affecting the supporting tissues of the teeth. They are caused by excessive plaque formation, mainly due to infection, leading to inflammation and progressive destruction of the periodontium . Gingivitis and periodontitis are the main types of PD . PD is a worldwide oral health problem, affecting over 1.5 billion people . In Africa, with limited access to dental care, these conditions pose an additional public health challenge . In Cameroon, around 62.2% of the population suffers from gingivitis and 15% from periodontitis . They represent the main cause of tooth loss, which can compromise mastication, aesthetics, self-confidence and quality of life . Looking beyond the mouth, PD has been associated with a number of risk factors and conditions, of which cardiovascular risk factors such as hypertension, diabetes and dyslipidemia are particularly prominent, making it a significant and often overlooked contributor to morbidity . Some of the biomarkers involved in the spectrum of cardiovascular disease, such as uric acid, have been proposed as factors associated with PD . Uric acid is derived from the catabolism of endogenous but mainly exogenous purines from the diet . Hyperuricemia, which refers to elevated serum uric acid levels (SUA), is the main metabolic abnormality associated with uric acid, and is a risk factor for gout and cardiovascular disease . The relationship between SUA and PD is still a matter of controversy. Evidence from fundamental studies suggests that uric acid, and particularly hyperuricemia, plays a role in the pathogenesis of PD. The imbalance of the oral microbiome during periodontal disease is thought to be responsible for chronic low-grade systemic inflammation, which has been associated with the development of metabolic syndrome and hyperuricemia . On the other hand, hyperuricemia may disrupt salivary and oral balance, leading to the onset and progression of PD . However, in the light of epidemiological data, there is a real contradiction, with some studies suggesting that hypouricemia may have a harmful effect on the periodontal tissue. Thus, Tsai et al. showed that higher serum uric acid levels were associated with a greater risk of periodontitis ; Sato et al. corroborated this by showing that hyperuricemia could be a cause of alveolar bone destruction in obesity-related periodontitis . Nevertheless, some authors, such as Sreeram et al., Brotto et al. and Narenda et al., have found no relationship between uricemia and PD . Moreover, PD and hyperuricemia share a number of common risk factors, particularly the elements of the metabolic syndrome . It would therefore be crucial to evaluate the relationship between PD and SUA taking into consideration the role of metabolic syndrome. To the best of our knowledge, few studies have been carried out in the adult population with no element of metabolic syndrome. In Sub-Saharan Africa in general, and in Cameroon more specifically, the epidemiological importance of PD is certain, but little work is available on its relationship with uricemia. The aim of the present study was to assess the relationship between PD and SUA in Cameroonian adults with no evidence of metabolic syndrome, in order to enhance the state of knowledge on this subject.
Study design and setting This was a cross-sectional study conducted from December 2023 to May 2024 at the Implantology and Periodontology Laboratory of the Faculty of Medicine and Biomedical Sciences, University of Yaoundé I (Cameroon). Biological assays were performed at the Biochemistry Laboratory of the University Hospital Centre of Yaoundé (Cameroon). Participants We included Cameroonians aged 18 and above residing in the city of Yaoundé, Cameroon. They were invited to participate by announcements in the general population in public places. We excluded any participant with at least one element of the metabolic syndrome, namely: abdominal obesity, overweight or general obesity, hypertension, diabetes, HDL or LDL dyslipidemia. People with gout, chronic kidney disease (glomerular filtration rate below 60 ml/min/1.73 m 2 ) or HIV infection, pregnant women, and participants receiving hypo- or hyperuricemic medication were also excluded. Sample size estimation The minimum sample size was estimated at 167, using the sample size formula appropriate to our study type from the manual by Whitley and Ball . We considered the prevalence of hyperuricemia in people with periodontal disease in the study by Joo et al. (30.6%), with a power of 95% and an error rate of 7% . Clinical data collection After obtaining administrative authorizations from the various study sites, and ethical clearance, we invited each potential study participant, who was informed through an information notice available in the official languages (English and French). All eligible participants completed an informed consent form prior to inclusion. Data were collected using a data collection sheet. These included sociodemographic data: age, sex; oral hygiene habits: daily frequency of tooth brushing, brushing period, type of toothbrush, brushing technique, type of toothpaste and frequency of oral hygiene visits; and lifestyle information: alcohol consumption, tobacco consumption, and frequency of consumption of purine-rich foods. To assess the consumption of purine-rich foods, in the absence of validated tools in the Cameroonian population, we carried out a semi-quantitative assessment based on frequency of consumption, using the recommendations of Cade et al. . Purine-rich foods were selected on the basis of data from Central and West African populations with the highest purine content . A nutritionist was consulted at this stage. Consumption frequencies were assessed on a daily, weekly and monthly basis, and then weighted from 0 to 8 for a total score of 64. This made it possible to compare participants' consumption frequencies. The questionnaire is presented in Supplementary Table 1. The frequency of purine-rich food consumption was stratified into low (score below the 25th percentile), moderate (score between the 25th and 75th percentiles), and high (score above the 75th percentile). Periodontal examination: the methodology was described in one of our previous publications .
The oral examination was complete and performed on each sextant (17–14, 13–23, 24–27, 37–34, 33–43, 44–47) using mirrors, tweezers and a Williams periodontal probe graduated from 1 to 15 mm. The periodontal indices estimated were: the Silness and Loe plaque index, the Green and Vermillon calculus index (direct assessment of oral hygiene), and the Loe and Silness gingival index . Each index was stratified (scores 0, 1, 2, and 3) as presented in Supplementary Table 2. For each participant, we considered the highest score for each index. Periodontal pocket depth was assessed from the distance between the bottom of the periodontal pocket and the gingival margin. This was measured at six sites on each tooth: mesiobuccal, distobuccal, mesio-lingual or mesio-palatal, disto-lingual or disto-palatal and lingual or palatal. Pocket depth and clinical attachment loss were stratified from 0 to 3, as shown in Supplementary Table 3. Based on the criteria of the American Academy of Periodontology 1999, revised in 2015, we retained the diagnosis of gingivitis for a gingival index score of at least 1 present on at least two non-adjacent teeth . The diagnosis of periodontitis was based on evidence of loss of attachment and a pocket depth of at least 3 mm on two or more non-contiguous teeth. A patient with both gingivitis and periodontitis was classified with the most severe condition, periodontitis. Biological data Uricemia was determined using the uricase method with a reagent supplied by Biolabo®. It was performed on a venous blood sample taken after an 8-h fast. SUA was expressed in mg/L, and hyperuricemia was defined as a value ≥ 70 mg/L in men and ≥ 60 mg/L in women . Statistical analysis Data were analyzed using SPSS software version 23.0, which was also used to design the graphs. Normally distributed continuous variables were presented as the mean and standard deviation, while those not following the normal distribution were presented as the median and interquartile range [quartile 25; quartile 75]. Categorical variables were presented with their counts and percentages. Means were compared using one-way ANOVA. The association between periodontal disease and hyperuricemia was investigated by comparing frequencies using Fisher's exact test, and by measuring the odds ratio along with its 95% confidence interval (OR [95%CI]). For all tests used, the significance threshold was 0.05.
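As a worked illustration of the sample size calculation described above, the reported minimum of 167 can be reproduced with the standard single-proportion formula n = Z²p(1 − p)/d², reading the reported 95% as the confidence level and 7% as the margin of error; this is a minimal sketch under those assumptions, not the authors' original worksheet.

```python
import math

# Minimal sketch of the single-proportion sample size formula,
# n = Z^2 * p * (1 - p) / d^2, assuming Z = 1.96 (95% confidence)
# and d = 0.07 (7% margin of error); p is the expected prevalence.
def single_proportion_sample_size(p, z=1.96, d=0.07):
    return math.ceil((z ** 2) * p * (1.0 - p) / (d ** 2))

# Expected prevalence of hyperuricemia in periodontal disease (Joo et al.): 30.6%
print(single_proportion_sample_size(0.306))  # -> 167, matching the reported minimum
```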
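The quartile-based stratification of the purine-rich food score can be sketched as follows; the scores shown are hypothetical values out of 64, used only to illustrate the low/moderate/high cut-offs at the 25th and 75th percentiles.

```python
import numpy as np

# Hypothetical purine-rich food frequency scores (0-64); not study data.
example_scores = np.array([5, 12, 18, 22, 25, 30, 34, 41, 47, 55])

def stratify_purine_scores(scores):
    """Class each score as low/moderate/high relative to the sample quartiles."""
    q25, q75 = np.percentile(scores, [25, 75])
    return ["low" if s < q25 else "high" if s > q75 else "moderate" for s in scores]

print(list(zip(example_scores.tolist(), stratify_purine_scores(example_scores))))
```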
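A simplified sketch of the case definitions used above (gingivitis, periodontitis, and the sex-specific hyperuricemia thresholds) is given below; the per-tooth encoding and the adjacency check are illustrative assumptions, not the authors' software.

```python
# Illustrative encoding: tooth positions are consecutive integers, so a gap
# greater than 1 between affected teeth means "non-adjacent"; real FDI
# numbering would need a small mapping step.
def is_gingivitis(gingival_index_by_tooth):
    affected = sorted(pos for pos, gi in gingival_index_by_tooth.items() if gi >= 1)
    return len(affected) >= 2 and affected[-1] - affected[0] > 1

def is_periodontitis(pocket_depth_mm, attachment_loss_mm):
    affected = sorted(pos for pos, depth in pocket_depth_mm.items()
                      if depth >= 3 and attachment_loss_mm.get(pos, 0) > 0)
    return len(affected) >= 2 and affected[-1] - affected[0] > 1

def has_hyperuricemia(sua_mg_per_l, sex):
    # >= 70 mg/L in men, >= 60 mg/L in women
    return sua_mg_per_l >= (70 if sex == "male" else 60)

def periodontal_diagnosis(gingival_index, pocket_depth, attachment_loss):
    if is_periodontitis(pocket_depth, attachment_loss):
        return "periodontitis"        # the more severe condition takes precedence
    if is_gingivitis(gingival_index):
        return "gingivitis"
    return "no periodontal disease"

print(periodontal_diagnosis({3: 1, 7: 2}, {3: 4, 7: 3}, {3: 1, 7: 2}))  # -> periodontitis
```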
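The association analysis described above can be sketched with SciPy: Fisher's exact test on a 2 × 2 table plus an odds ratio with a Woolf-type (log-OR) 95% confidence interval. The counts below are approximated from the reported frequencies, for illustration only.

```python
import math
from scipy.stats import fisher_exact

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-OR) confidence interval for a 2x2 table."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se))

# Rows: hyperuricemia yes/no; columns: periodontal disease yes/no.
# Counts approximated from the reported frequencies, not the raw study data.
table = [[27, 9],
         [105, 33]]
_, p_value = fisher_exact(table)
or_, (lo, hi) = odds_ratio_ci(table[0][0], table[0][1], table[1][0], table[1][1])
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}; {hi:.2f}], Fisher p = {p_value:.3f}")
```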
Characteristics of the sample We received 217 participants during the study period, of whom 174 were eligible for the study and were finally included. The 43 participants who were not included had at least one component of metabolic syndrome. The average age of the participants was 29 (10.39) years, ranging from 18 to 65 years. The sample included 100 (57.5%) women. Regarding their oral hygiene habits, the majority used medium-bristle toothbrushes (52.9%), brushed once a day (42.5%) or twice a day (53.4%), mostly used fluoride toothpaste (83.9%), and went more than a year without visiting the dentist (53.5%). Alcohol consumption was reported by 81 (46.1%) participants. The frequency of consumption of purine-rich foods was moderate and high in 92 (52.9%) and 47 (27%) participants respectively. These data are presented in Table . Prevalence of periodontal diseases Table shows the periodontal index scores. Most participants had a plaque and gingival index score of 1 or 2, while most participants had a loss of attachment and pocket depth index score of 1. As for the calculus index, most participants had a score of 0 or 1. Periodontal disease was found in 132 (75.9%) participants, of whom 78 (59.1%) had gingivitis, 54 (40.1%) had periodontitis. Prevalence of hyperuricemia Participants' mean SUA was 54.98 (16.86) mg/L. Values ranged from 24 to 98 mg/L. The frequency of hyperuricemia was 20.7% in the total sample. It was significantly higher in men compared with women (62.89 (14.7) vs. 46.63 (13.2) mg/L, p < 0.001), and in individuals with a purine-rich food frequency score ≥ 18 compared with the < 18 score group (56.9 (15.9) vs. 51.73 (15.86) mg/L, p = 0.043). The frequencies of hyperuricemia in participants with periodontal disease, gingivitis and periodontitis were respectively 20.45%, 20.51% and 20.37%. The frequency of hyperuricemia was 21.42% in individuals without periodontal disease (Fig. ). Association between serum uric acid levels and periodontal diseases We first compared the mean serum uric acid levels in the different periodontal index score groups (Fig. A-E), and found only that the uricemia of participants with calculus index score 3 was significantly higher compared to those with score 0 (Fig. -C; p = 0.026). Subsequently, we compared the mean serum uric acid levels between participants with and without periodontal disease, as well as those of the different periodontal disease groups, and found no significant difference. We also compared the mean scores for the frequency of purine-rich food consumption in these different groups, taking the group with no periodontal disease as a reference, without finding any significant difference. These results are presented in Table . We also assessed the association between hyperuricemia and periodontal disease, taken together and separately, without finding any significant association (Table ).
The aim of the present study was to investigate the relationship between periodontal disease and serum uric acid levels in a group of Cameroonian adults with no evidence of metabolic syndrome. We found that, in this group, the frequency of hyperuricemia did not appear to differ between people with periodontal disease and those without. To the best of our knowledge, this is the first study of its kind in sub-Saharan Africa. Like PD, serum uric acid levels are also linked to metabolic syndrome. Several lines of evidence in the literature point to shared risk factors that largely explain the association between these two entities. Hypertension, diabetes, dyslipidemia and obesity are all important metabolic factors that share with hyperuricemia and PD a contingent of genetic, epigenetic and environmental risk factors, notably the oral and intestinal microbiome, and lifestyle factors such as alcohol consumption, smoking and a sedentary lifestyle . These pathways converge towards dysbiosis and chronic low-grade inflammation at the tissue and vascular levels . In order to better characterize the link between SUA and PD, it is important to assess this relationship while taking cardiometabolic risk factors into account. However, there are currently few data on this subject in the literature, particularly in populations free of metabolic syndrome. We therefore considered it appropriate to carry out this study in a population with no evidence of metabolic syndrome, residing in sub-Saharan Africa, also given the scarcity of data regarding this association in this region. We found that hyperuricemia affected 20.7% of the overall sample, with no difference between types of PD, and no difference from individuals without PD. Moreover, no association was found between hyperuricemia and PD in general, or with the PD types separately. These results corroborate those of Brotto et al ., who found no association between uricemia and periodontal disease in the general population . However, these findings differ from those of Tsai et al ., who found that higher serum uric acid levels were associated with a greater risk of periodontitis . The association between SUA and PD remains controversial. In a 2023 systematic review of 6 studies, Uppin et al . found that SUA levels were significantly altered in individuals with PD compared with those without. However, they reported that, with the current evidence, it remains difficult to conclude whether they are significantly higher or lower in PD . In contrast, Byun et al ., in a Korean study conducted between 2004 and 2016, found that hyperuricemia may be a protective factor against periodontal disease . These results were also corroborated by Xu et al . using National Health and Nutrition Examination Survey (NHANES) data collected between 2011 and 2014 . Joo et al . also found, in a Korean study between 2016 and 2018, that hypouricemia increased the risk of periodontal disease (OR = 1.62; 95% CI [1.13; 2.23]), while hyperuricemia did not . However, given our sample size, we were unable to perform the analysis for hypouricemia. We found that participants with the highest calculus index score had significantly higher serum uric acid levels (Fig. -C), but no differences were observed for the other periodontal indices. The literature reports more severe periodontal damage in individuals with hyperuricemia . In the absence of the metabolic syndrome, the hypothesis of gingival damage in relation to elevated salivary uric acid levels seems more plausible, as demonstrated in studies in rats . In humans, a meta-analysis of 14 studies by Uppin et al .
in 2022 found that individuals with PD had lower crevicular and salivary uric acid levels, in contrast to blood levels, a situation that remains unexplained to this day . These findings concur with those of Ye et al . in a more recent meta-analysis in 2023 . In addition, elevated salivary uric acid levels would presumably be an anti-inflammatory marker after treatment of PD . It would therefore be important to jointly assess serum and salivary levels in a larger sample within our population in order to improve the state of knowledge on this subject. Certain limitations must be borne in mind when interpreting the data from our study. One is the lack of radiographic evaluation of alveolar bone loss, which is an important element in defining the stage of periodontitis. Another is the small and highly selective sample, chosen in order to limit metabolic bias. A longitudinal study would certainly provide a better answer to the question, with clustering according to the elements of the metabolic syndrome, in order to assess their individual influences and the effect of treatments. Finally, it would be useful to consider the degree of inflammation, which can be assessed using indices such as the "Periodontal inflamed surface area (PISA)", in order to obtain a more precise picture .
In this sample of adults without metabolic syndrome components, the prevalence of periodontal disease was 75%, and the prevalence of hyperuricemia among patients with periodontal disease was 20%. No association was found between serum uric acid levels and periodontal disease. The relationship described between uricemia and periodontal disease therefore seems to depend largely on the metabolic syndrome. However, further studies with a longitudinal design and measurement of salivary uric acid levels are needed in this population.
Additional file 1. Supplementary Table 1: food frequency questionnaire for purine rich diet. Supplementary Table 2: Stratification of plaque index (Silness and Loe), calculus index (Green and Vermilion), and gingival index (Loe and Silness). Supplementary Table 3: Stratification of pocket depth and clinical attachment loss.
|
Divergence of nutrients, salt accumulation, bacterial community structure and diversity in soil after 8 years of flood irrigation with surface water and groundwater | a69cd378-e17e-4a01-a9aa-2cd18dbd8e7c | 11566440 | Microbiology[mh] | Freshwater scarcity has become a serious issue in arid regions. Despite 70.8% of the earth's surface being covered by water, freshwater resources are extremely limited, making up only 2.5% of the total water volume (the remaining 97.5% being saltwater) . Furthermore, the vast majority (87%) of freshwater is stored in ice caps, glaciers, and permanent frost, leaving only 0.25% of the Earth's water accessible to humans, and it is unevenly distributed . This situation is particularly severe in arid areas. One of the most serious problems restricting agricultural and economic crop irrigation is the scarcity of freshwater resources . Groundwater irrigation plays a crucial role in agricultural development in arid regions. Owing to water scarcity in these areas, groundwater has become a key source for irrigation. Although groundwater salinity has increased gradually annually with excessive human exploitation of local water resources, the proper utilization of groundwater resources is an important way to ameliorate the requirements for freshwater, and saltwater can be used instead of freshwater in arid ecosystems . Studies have shown that optimizing irrigation and drainage strategies can improve water use efficiency and reduce soil salinization, thus increasing crop yields . Currently, brackish water (including groundwater brackish water) is considered an alternative water resource for countries and regions facing freshwater shortages . As a result, in many water-scarce areas and countries, such as Israel , Afghanistan , Italy , and China , brackish water has become the most important water source for agricultural irrigation. In China, the amount of available saline water stored in underground resources is approximately 20 billion m 3 , of which the exploitable amount is approximately 65% . Irrigation with groundwater instead of freshwater has been carried out in some areas of China, such as Hebei, Xinjiang, and Qinghai . However, saline groundwater contains chemical components of various concentrations, and their utilization might cause soil salinization, which can further influence the soil fertility status, soil salt ion content, and bacterial diversity . Numerous researchers have demonstrated the application of saline groundwater in the agricultural industry . In areas suffering from severe freshwater shortages, the use of saline groundwater for agricultural irrigation can alleviate farmland drought and provide the necessary water for crop growth. However, salts from brackish water are also introduced into the soil, and long-term irrigation may lead to secondary soil salinization . Saline groundwater irrigation has been reported to be the main reason for crop yield reduction and soil salinization, and has detrimentally affected the agricultural economy and sustainable development of the soil environment . In fact, increased salt accumulation in the soil due to saline groundwater has been reported to be a critical issue . Soil salinity is worse in arid and semiarid areas, where most of the land has been affected by salinity . The degradation of land is worse in arid and semiarid areas, where more than 50% of land has already been converted into unfertile land because of the use of more saline water in these degraded lands to irrigate crops . 
Farming soils are frequently exposed to saline groundwater irrigation, and soil physicochemical characteristics (soil water, pH, organic carbon, and other characteristics) can be influenced by potential increases in soil salinity . Saline groundwater irrigation can not only change the soil environment but also influence soil microbial processes . A previous study indicated that the application of saline groundwater likely changes the soil aggregate structure, reduces permeability and inhibits nutrient availability by increasing salinity . Soil microorganisms have a direct influence on soil ecological processes (such as the formation of soil aggregates, the decomposition of organic matter, and nutrient cycling), and salinity, a major stressor of soil microorganisms, can have a profound influence on the soil environment . It has been reported that increased soil salinization has a strong negative impact on microbial activity and thereby negatively influences bacterial diversity . These negative influences are caused mainly by water availability or cellular physiology and metabolic processes restricted by soil salinization . Under the negative influence of salinity, soil bacterial diversity significantly differed among species under different irrigation water salinities, which can be explained by adaptations to salinity stress caused by changes in species composition . As the largest closed basin of the Tibetan Plateau, the Qaidam Basin (located in Qinghai Province in northwestern China) has the most abundant salt lakes and almost all varieties of salt deposits . In the Qaidam Basin, groundwater, precipitation and meltwater in mountainous areas have systematically evolved from freshwater to saltwater due to complex tectonic activities, arid climate characteristics, paleoclimate variations, and sedimentary lithology . The nonmountainous area of the Qaidam Basin experiences little precipitation, which leads to water shortages (especially fresh water), but groundwater resources (i.e., saline groundwater) are abundant in the region and play a significant role in the irrigation water supply and industrial production . Therefore, the use of groundwater resources should be rationalized in ways that solve water shortage crises and maintain the agricultural industry in the Qaidam Basin. As a potential strategy to save freshwater resources, the effect of saline groundwater irrigation on soil salinity, including the negative influences of salinity on soil physicochemical properties, has received increasing attention. The influence of saline groundwater irrigation on bacterial diversity and activity has been studied . However, the differences in soil fertility, salt ion content, and bacterial diversity in response to long-term irrigation with saline groundwater and surface water are still unclear. In particular, these responses and the relationships among soil fertility, salt ion content, and bacterial diversity under long-term flooding irrigation with saline groundwater need to be validated through field tests. These tests will help ensure the sustainability of saline groundwater irrigation . Therefore, we hypothesize that compared with long-term surface water irrigation, saline groundwater irrigation will reduce the soil nutrient content and increase soil salinization (increase the salt ion content) at different depths. Additionally, the microbial community structure and diversity will differ between the two, leading to divergence. 
To test this hypothesis, a Lycium ruthenicum (a perennial xerohalophyte shrub used as a medicinal plant that has salinity tolerance and drought resistance) field was divided into two areas: one was subjected to flooding irrigation with surface water (from the Nomuhong River, pH 7.76, total salinity 0.36 g L −1 ) for 8 years (May 2013 to September 2020), and the other was irrigated with underground water (pH 7.81, total salinity 0.95 g L −1 ). Changes in the salinity, physicochemical properties, and microbial communities of the soil were investigated. The aims of this study were to 1) evaluate differences in the soil nutrient contents, soil salt ion contents and soil bacterial diversity between long-term irrigation with underground water and surface water (which are the most important irrigation water resources in the experimental area) in the L. ruthenicum field; and 2) explore the relationships among the soil fertility status, soil salt ion contents and bacterial diversity under long-term flooding irrigation with the two types of water in the L. ruthenicum field. The results of this research could provide a scientific basis for saline groundwater management and soil quality improvement. Experimental area Field experiments were conducted from May 2013 to September 2020 at Nomuhong Farm (36°20′-36°30′N, 96°15′-96°35′E, elevation 2790 m), Qaidam area, Qinghai Province, China. The area is characterized by a plateau continental climate, which involves intense evaporation and little precipitation and is in a typical desert arid region with a mean annual temperature of 4.9 °C; the maximum and minimum temperatures are 35.8 °C and –31 °C, respectively . The mean annual precipitation is 43.5 mm, and the mean annual evaporation is 2849.7 mm . Experimental design and field management To compare the differences in soil fertility, salt ion content, and bacterial diversity after long-term surface water and saline groundwater irrigation, an experimental field (approximately 0.66 hectares of cultivated land) was selected in the study area in 2013. Mechanical tillage, leveling, and land preparation were carried out to ensure flat and uniform soil conditions. The field was divided into two parts, from west to east: one for surface water irrigation and the other for saline groundwater irrigation. Each part was further divided into three smaller plots (30 m × 30 m), and L. ruthenicum seedlings were planted with a row spacing of 1.5 m and a plant spacing of 1.0 m. A drainage ditch (2 m wide and 1 m deep) and two buffer zones (one on each side of the ditch, each with two rows of L. ruthenicum ) were set up between the two plots to prevent water and salt exchange. From May 2013 to September 2020, irrigation was conducted using surface water (from the Nuomuhong River, pH 7.76, total salinity 0.36 g L −1 ) and groundwater (pH 7.81, total salinity 0.95 g L −1 ) (for more details on surface water and groundwater, see Appendix 1). The specific irrigation and field management measures were as follows: during the growing season of L. ruthenicum (concentrated from June to August each year), irrigation was carried out twice a month, with each irrigation providing approximately 460 m 3 ha −1 . Before the first irrigation each year, a base fertilizer was applied (20 cm depth, 200 kg ha −1 nitrogen and 100 kg ha −1 P 2 O 5 ), followed by a top dressing three months later (100 kg ha −1 nitrogen and 50 kg ha −1 P 2 O 5 ). Weeds were mainly removed manually and with a rotary cultivator, with weeding performed five times per year.
In September 2020, we analyzed the soil fertility, salt distribution, bacterial structure and diversity of the surface water and saline groundwater irrigation plots. Sampling and analysis Soil samples were collected in September 2020 from each study plot using a soil auger at three depths, i.e., 0–5, 5–10 and 10–20 cm. In each study plot, 5 locations (at 1/4, 1/2 and 3/4 of the two diagonal lines of the plot) were selected, and then soils from the same depth were collected and mixed into one composite sample. There were 3 samples from different depths at each site; thus, there were 18 samples in total. All the soil samples were sieved through a 2 mm mesh to remove roots and other debris and divided into three parts. One part was used to measure the soil water content; the second part was air-dried and analyzed for total salinity, nutrient content (soil organic carbon, SOC; total nitrogen, TN; total phosphorus, TP; total potassium, TK; available nitrogen, AN; available phosphorus, AP; available potassium, AK), and major salt ions (Na + , K + , Ca 2+ , Mg 2+ , Cl − , HCO 3 − , CO 3 2− , and SO 4 2− ); and the third soil sample was stored at approximately -80 °C until DNA extraction to determine the soil bacterial diversity and abundance through diversity analysis. The soil water content was determined gravimetrically by heating at 105 °C until a constant weight was reached . The soil total salinity concentration was measured in soil–water extracts with a 1:5 soil:water ratio (w:v) with a portable salt meter (CT-3086). The soil nutrient content was analyzed using conventional methods . Specifically, the SOC content was measured using the K 2 Cr 2 O 7 –H 2 SO 4 oxidation method; the TN content was determined by the semimicro Kjeldahl method; the TP content was evaluated by using spectrophotometry after H 2 SO 4 –HClO 4 digestion; the TK content was analyzed by flame photometry; the AN content was assessed by the alkaline KMnO 4 diffusion method; the AP content was determined by Mo–Sb colorimetry after NaHCO 3 extraction; and the AK content was estimated by flame photometry after NH 4 OAc extraction. The major salt ions in the soil suspension (soil:water ratio of 1:5) were analyzed. The Na + , K + , Ca 2+ , and Mg 2+ concentrations were determined by inductively coupled plasma spectrometry , the Cl − and SO 4 2− concentrations were measured using ionic chromatography, and the HCO 3 − and CO 3 2− concentrations were analyzed in a titration experiment (pH = 4.8) . For bacterial diversity analysis, genomic DNA was extracted from 0.25 g of soil using the E.Z.N.A.® Soil DNA Kit (Omega Bio-Tek, Norcross, GA, USA) according to the manufacturer's instructions. The DNA concentration and purity were measured using a NanoDrop 2000 UV–vis spectrophotometer (Thermo Scientific, DE, USA). Bacterial 16S rRNA genes in the V3–V4 region were amplified using specific barcoded primers, 343F (5'-TACGGRAGGCAGCAG-3') and 798R (5'-AGGGTATCTAATCCT-3'), through two rounds of PCR. After amplification, electrophoresis was performed to detect the PCR products, which were then purified. The purified PCR products were quantified using Qubit, and samples were mixed in equal amounts based on the PCR product concentration. The mixed samples were then subjected to high-throughput sequencing on the Illumina HiSeq 2500 platform (OE Biotech, China). The raw sequence data were first processed using Trimmomatic (version 0.35), with a sliding window applied to scan the sequences and trim them when the quality score dropped below 20.
Sequences shorter than 50 bp were removed. The qualified paired-end raw data were merged using Flash (version 1.2.11) with a maximum overlap of 200 bp, generating complete paired-end sequences. Sequences containing N bases, homopolymers longer than 8 bases, or shorter than 200 bp were removed using the split_libraries function in QIIME (version 1.8.0), resulting in clean tags. Finally, UCHIME (version 2.4.2) was used to remove chimeras from the clean tags, yielding valid tags for OTU clustering and further data analysis. Statistical analysis The soil pH, soil water content, soil total salinity, soil nutrient content and soil salt ion content were analyzed by standard statistical analysis, and the differences were considered significant at the p < 0.05 level. The error bars indicate the standard deviations of the differences between the means. Analysis of variance (ANOVA) was performed using SPSS 20.0 statistical software (SPSS Inc., IL, USA). All the data were tested for normality and homoscedasticity prior to further analysis by ANOVA. One-way ANOVA was conducted with Duncan's test to determine the effects of flood irrigation with surface water and underground water on the fertility status, salt ion content and bacterial diversity at different soil depths. Two-way ANOVA was conducted to analyze the effects of water type, soil depth, and their interaction on the soil properties, soil salt ion contents and bacterial diversity. Figures and were created with Origin 9.0 software (OriginLab, Los Angeles, USA). The bacterial community structure and α-diversity were analyzed using R software, and operational taxonomic units (OTUs) flower plots, relative abundance charts of the microbial community composition, and heatmaps were generated. Diversity index plots were created using Origin 9.0 software (OriginLab, Los Angeles, USA). Moreover, multiple factor analysis (MFA) was applied to the soil physicochemical properties, soil salt ion content and bacterial diversity data sets to assess the general structure of the data and to determine the relationships among the data sets . These data sets were Hellinger-transformed to alleviate the double-zero problem in the principal components analysis that forms part of the MFA . Analyses of MFA were performed using the "FactoMineR" package in R statistical software (version 3.5.1). The MFA graphs (Fig. ) were created using the "Factoextra" package for R statistical software .
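As a sketch of the two-way ANOVA described above (water type × soil depth and their interaction), an equivalent model can be fitted outside SPSS, for example with statsmodels; the data frame below holds made-up salinity values for two water types, two depths and three replicates, not the study's measurements.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Made-up soil salinity values: 2 water types x 2 depths x 3 replicates
# (the study itself used three depths and SPSS 20.0).
df = pd.DataFrame({
    "salinity": [0.9, 1.1, 1.0, 2.4, 2.6, 2.5, 0.7, 0.8, 0.75, 1.9, 2.0, 1.8],
    "water":    ["surface"] * 3 + ["ground"] * 3 + ["surface"] * 3 + ["ground"] * 3,
    "depth":    ["0-5 cm"] * 6 + ["5-10 cm"] * 6,
})

# Two-way ANOVA with interaction: water type, soil depth, water x depth.
model = smf.ols("salinity ~ C(water) * C(depth)", data=df).fit()
print(anova_lm(model, typ=2))
```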
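The α-diversity indices reported in the Results (Observed species, Chao1, Shannon and Simpson) can be computed from a vector of OTU counts as sketched below; a bias-corrected Chao1 variant and the Gini-Simpson form of the Simpson index are assumed, and the counts are toy numbers rather than study data.

```python
import numpy as np

def alpha_diversity(otu_counts):
    """Observed richness, bias-corrected Chao1, Shannon (ln) and Gini-Simpson."""
    x = np.asarray(otu_counts, dtype=float)
    x = x[x > 0]
    p = x / x.sum()
    observed = x.size
    shannon = float(-np.sum(p * np.log(p)))
    simpson = float(1.0 - np.sum(p ** 2))
    f1, f2 = int(np.sum(x == 1)), int(np.sum(x == 2))   # singletons, doubletons
    chao1 = observed + f1 * (f1 - 1) / (2.0 * (f2 + 1))
    return observed, chao1, shannon, simpson

# Toy OTU count vector for a single sample (not study data).
print(alpha_diversity([120, 43, 7, 1, 1, 2, 15, 1, 9, 2]))
```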
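The Hellinger transformation applied before the MFA divides each abundance by its sample (row) total and takes the square root, which alleviates the double-zero problem; a minimal sketch on a toy count matrix is shown below.

```python
import numpy as np

def hellinger(counts):
    """Row-wise relative abundances followed by a square root."""
    counts = np.asarray(counts, dtype=float)
    row_sums = counts.sum(axis=1, keepdims=True)
    rel = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return np.sqrt(rel)

# Toy sample x species count matrix (not study data).
print(hellinger([[10, 0, 5], [0, 3, 7], [2, 2, 2]]))
```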
Soil pH, soil water content and soil total salinity The soil pH, soil water content and soil total salinity content were significantly affected by the type of irrigation water, soil depth and their interaction (Table , p < 0.01), except for the effect of the water type × soil depth interaction on the soil water content (Table , p > 0.05). The pH was greater in the 0–5 cm topsoil layer than in the other layers under surface water and underground water irrigation, and the soil pH under surface water irrigation was lower than that under underground water irrigation at the same soil depth (Fig. a), which means that soil irrigated with low-salinity water (surface water) had a lower pH than soil irrigated with high-salinity water (underground water). The soil total salinity content was significantly greater in the 0–5 cm topsoil layer than in the other layers under surface water and underground water irrigation, and the soil total salinity under surface water irrigation was lower than that under underground water irrigation (Fig. b, p < 0.05). In addition, the soil water content significantly increased with increasing soil depth under surface water and underground water irrigation (Fig. b, p < 0.05), and the soil water content under surface water irrigation was greater than that under underground water irrigation at the same soil depth (Fig. c). Soil nutrient contents Soil TK, AP and AK were significantly affected by the type of irrigation water, soil depth and their interaction (Table , p < 0.05). The contents of soil TN, TP, SOC, and AN were significantly affected by soil depth (Table , p < 0.01) but were not influenced by water type or the interaction with soil depth (Table , p > 0.05). There were differences in most soil nutrient contents in different soil layers under irrigation with surface water and underground water, and higher nutrient contents were measured in the soil irrigated with surface water than in the soil irrigated with groundwater at the same depth (Fig. ). However, the TK and AK contents increased sharply under irrigation with underground water compared with those under surface water irrigation (Table , p < 0.001); the respective TK and AK levels were 3.75 g kg −1 and 95.66 mg kg −1 (0–5 cm), 3.49 g kg −1 and 88.98 mg kg −1 (5–10 cm), and 3.37 g kg −1 and 74.71 mg kg −1 (10–20 cm) under irrigation with surface water, whereas they were notably greater, i.e., 5.53 g kg −1 and 317.43 mg kg −1 (0–5 cm), 5.21 g kg −1 and 272.05 mg kg −1 (5–10 cm), and 4.13 g kg −1 and 267.97 mg kg −1 (10–20 cm), under irrigation with underground water (Fig. f and j). The TN, TP, TK, SOC, AN, AP and AK contents decreased with increasing soil depth, and the contents in the 0–5 cm layer were markedly greater than those in the other soil layers (Fig. d, e, f, g, h, i, and j, p < 0.05). These results indicated that the salinity level of the irrigation water influenced the soil fertility status after 8 years. Soil salt ion content After 8 years of management, the soil Na + , Mg 2+ , K + , Ca 2+ , Cl − and CO 3 2− contents were significantly affected by the type of irrigation water ( p < 0.001), soil depth ( p < 0.05) and their interaction (Table , p < 0.05).
SO 4 2− was significantly affected by the type of irrigation water and soil depth (Table , p < 0.01), but HCO 3 − was not significantly affected by the type of irrigation water, soil depth or their interaction (Table , p > 0.05). There were dramatic differences in the soil salt ion contents (Na + , Mg 2+ , K + , Ca 2+ , Cl − , SO 4 2− and CO 3 2− ) due to irrigation with surface water and underground water at all soil depths, with dramatically lower values in the soil irrigated with surface water than in the soil irrigated with groundwater at the same depth (Fig. a-g). For example, the contents of Na + , Mg 2+ , K + , Ca 2+ , Cl − , SO 4 2− and CO 3 2− within the 0–5 cm soil layer were 0.147, 0.095, 0.085, 0.175, 0.661, 1.527 and 0.122 g kg −1 in soil irrigated with surface water but increased notably to 5.666, 0.761, 0.692, 3.204, 13.810, 2.817 and 0.227 g kg −1 in soil irrigated with underground water. The soil salt ion content tended to decrease as the soil depth increased, and the levels of Na + , Mg 2+ , K + , Ca 2+ , Cl − , SO 4 2− , and HCO 3 − within the 0–5 cm soil layer reached the highest values under surface water and underground water irrigation (Fig. ), except for CO 3 2− within the 5–10 cm layer, which peaked under underground water irrigation (Fig. g). Bacterial community structure After 8 years of irrigation with surface water or underground water, there were significant differences in the soil bacterial composition. Overall, the surface water irrigation group had a greater number of unique OTUs (5,812) than the underground water irrigation group (3,725) (Fig. a). The two groups shared 4,788 OTUs (Fig. a). The core OTUs shared by all the samples from both irrigation groups numbered 132, whereas the non-shared OTUs ranged from 3,159 to 4,350 in the surface water group and from 2,143 to 3,148 in the underground water group (Fig. b). At a broader level (Fig. c-e), the composition and abundance of the top 15 bacterial taxa at the phylum, family, and species levels differed between the two irrigation groups. At the phylum level (Fig. c), the surface water group presented relatively high abundances of Proteobacteria , Firmicutes , and Cyanobacteria , whereas the underground water group presented relatively high abundances of Bacteroidetes , Actinobacteria , and Gemmatimonadetes . At the family level (Fig. d), Sphingomonadaceae , Muribaculaceae , and Prevotellaceae were more abundant in the surface water group, whereas Balneolaceae , Halomonadaceae , and Flavobacteriaceae were more abundant in the underground water group. At the species level (Fig. e), the bacterium YC-LK-LKJ35 , Rhodovulum sp ., the marine bacterium JK1007 , and Methylohalomonas lacus were more abundant in the underground water group. Notably, the relative abundances of the bacterium YC-LK-LKJ35 , marine bacterium JK1007 , and Methylohalomonas lacus in the underground water group were 0.0147%, 0.0021%, and 0.0016%, respectively, whereas they were 0.0042%, 0%, and 0% in the surface water group (see Supplementary Table 2). In contrast, Lactobacillus gasseri , Pseudomonas brassicacearum subsp . brassicacearum , Solanum torvum , and Lactobacillus murinus were more abundant in the surface water group. Additionally, the top 15 bacterial taxa at the phylum, family, and species levels showed similar trends between the groups (Fig. f-h). After 8 years of irrigation, salt-tolerant bacteria were significantly more abundant in the underground water group than in the surface water group.
Bacterial diversity After 8 years of irrigation, the α-diversity (Chao1, Shannon, Observed species, and Simpson) of the surface water irrigation group was significantly greater than that of the underground water irrigation group (Fig. a-d). For example, the surface water group had values of 4030.00, 9.25, 3406.10, and 0.99 for the Chao1, Shannon, Observed species, and Simpson indices, respectively, while the underground water group had values of 2976.97, 7.97, 2423.83, and 0.98. The surface water group exceeded the underground water group by 35.37%, 16.03%, 40.53%, and 1.02%, respectively. After 8 years of management, all the soil bacterial diversity indices (Chao1, Shannon, Observed species, and Simpson) were significantly affected by the type of irrigation water (Table , p < 0.01). However, none of the soil bacterial diversity indices were significantly affected by soil depth or its interaction with the type of irrigation water (Table , p > 0.05). There were dramatic differences in soil bacterial diversity (Chao1, Shannon, Observed_species and Simpson) due to irrigation with surface water and underground water at all soil depths, with markedly higher values in the soil irrigated with surface water than in the soil irrigated with groundwater at the same depth (Fig. e-h). Correlations among soil physicochemical properties, soil salt ion contents and bacterial diversity The soil physicochemical properties and the soil salt ion data sets in the MFA suggest that irrigation water can be divided into two groups corresponding to surface water (1–9) and underground water (10–18) (Fig. a). As shown in Fig. a, dimension 1 of the MFA explained 44.96% of the variance and mainly indicated a difference between irrigation with surface water and irrigation with underground water. Dimension 2 of the MFA explained 22.64% of the variance and was the main difference between the soil depths. The relationships of the patterns of the soil bacterial species with the soil physicochemical properties and the soil salt ion contents are further illustrated by the RV coefficients (Table ). These results indicate that the bacterial communities were mainly linked to the soil salt ions (RV = 0.66, p < 0.001), which themselves were partly linked to the soil physicochemical properties (RV = 0.45, p = 0.001). Figure a and b together indicate the main salinity gradient of irrigation water (between surface water and underground water) along the first axis and the soil depth along the second axis (from the upper to lower quadrants). For example, the scores of sites 10–18 (Fig. a, right-hand part of the graph) corresponded (Fig. b) to high salt ion contents (Na + , Mg 2+ , K + , Ca 2+ , Cl − , SO 4 2− ) and a relatively high soil pH, as well as high soil total salinity. Here, close to the source, the soil conditions were dominated by soil salt ions (soil salinity). The relatively poor bacterial community was characterized by B8 (marine bacterium JK1007), B1 (bacterium YC-LK-LKJ35), B6 ( Rhodovulum sp.), B26 ( Candidatus Wildermuthbacteria bacterium RIFOXYD1 FULL 50–12), B13 ( Methylohalomonas lacus ) and B21 ( Ectothiorhodospiraceae bacterium WFHF3C12). In contrast, sites 1–9 presented relatively high concentrations of soil water, TP and AN and relatively low soil salinity. These sites were irrigated with surface water, and their communities were characterized by another set of species (for example, B2, B9, B11, B12, B23, B29, and B30).
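The RV coefficients linking the data sets (for example, RV = 0.66 between the bacterial community and the salt ions) are matrix correlations between two column-centred tables measured on the same samples; the sketch below illustrates the calculation, with random matrices standing in for the real salt ion and bacterial tables.

```python
import numpy as np

def rv_coefficient(x, y):
    """RV coefficient between two column-centred data tables on the same samples."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    sx, sy = xc @ xc.T, yc @ yc.T
    return np.trace(sx @ sy) / np.sqrt(np.trace(sx @ sx) * np.trace(sy @ sy))

# Random stand-ins for, e.g., the salt ion table and the bacterial table (18 samples).
rng = np.random.default_rng(0)
salt_ions = rng.normal(size=(18, 8))
bacteria = rng.normal(size=(18, 30))
print(round(rv_coefficient(salt_ions, bacteria), 3))
```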
Differences in soil physicochemical properties

This study revealed that soil nutrient contents were significantly greater in the surface soil (0–5 cm) after 8 years of irrigation with both surface water and underground water.
Soil nutrients accumulated in the surface soil possibly because solutes (including most soil nutrients) in the soil water move only with the liquid water and precipitate out where the phase changes from liquid to gas. The solutes are not washed out of the surface soil until flooding irrigation occurs, because of the climatic characteristics (high evaporation but low precipitation) of the experimental area. Thus, soil nutrients may be stored in the surface soil and have the potential to persist there for long periods. The surface accumulation of soil nutrients may also be related to vegetation litter decomposition and biogeochemical cycles, where litter input and biogeochemical cycling occur first in the surface soil. Notably, the soil TK and AK contents were apparently greater under underground water irrigation than under surface water irrigation after 8 years (Fig. f and j). This result contradicts those of many previous studies, which indicated that soil nutrients (including TK and AK) significantly decreased with increasing salinization and reflected the negative effect of salinization on soil nutrient availability. This difference may be attributed to the higher K+ content of the underground water and its potassium-associated mineralization compared with the surface water. Therefore, significant quantities of K+ and mineralized potassium accumulated in the soil after groundwater irrigation. The major salt ion results from this study also support this conclusion: the K+ content at the different soil depths reached 0.692, 0.378, and 0.352 g kg−1 under groundwater irrigation but only 0.085, 0.064, and 0.061 g kg−1 under surface water irrigation. Moreover, there was no significant difference in the contents of nutrients such as TN, TP, SOC, and AN between underground water and surface water (Table ), likely because both treatments received the same fertilization: before the first irrigation each year, a base fertilizer was applied (20 cm depth, 200 kg ha−1 nitrogen and 100 kg ha−1 P2O5), followed by a top dressing three months later (100 kg ha−1 nitrogen and 50 kg ha−1 P2O5).

Differences in soil salt ions

After 8 years of continuous irrigation with surface water and underground water, the soil salinity and the soil salt ion contents (Na+, Mg2+, K+, Ca2+, Cl− and CO32−) differed greatly in the topsoil, especially at 0–5 cm. In the underground water treatment, the topsoil experienced detrimental salinization. This difference is likely related to the different total salinities of the surface water (0.36 g L−1) and the underground water (0.95 g L−1). Several studies have demonstrated that saline groundwater irrigation can lead to a significant increase in soil salinity, which is one of the most important causes of soil salinization and depends on the water salinity level and environmental factors. At the same irrigation amount, an increase in irrigation water salinity means that more salt ions are input into the irrigated soil, which increases soil salt accumulation. Our results also revealed that the salt ion contents were relatively high in the top 5 cm layer and gradually decreased at depths of 5–10 and 10–20 cm. The accumulation of irrigation water-derived salt in the surface soil layer (0–5 cm) may be related to the combination of evaporation and salt-washing effects caused by flood irrigation.
In the surface soil, the salt content within the 0–5 cm layer in the saline groundwater irrigation plots increased due to high evaporation rates during the growing season (June to August each year) of L. ruthenicum. In fact, the mean annual evaporation can reach 2849.7 mm at the study site, but the mean annual precipitation is only 43.5 mm. High evaporation and low precipitation first affect the topsoil by increasing soil water evaporation and decreasing salt leaching by rain, both of which cause the accumulation of salt ions in the topsoil layer (0–5 cm). The transfer of soluble salt from deep soil to topsoil increases with increasing soil moisture evaporation, which causes salts to gradually accumulate on the soil surface. Moreover, the leaching of soluble salt to deep soil or groundwater is restricted by the lack of precipitation in this region. Soil located at greater depths may be less impacted by high evaporation, which is supported by our result that the soil water content was significantly lower in the topsoil layer (0–5 cm) than at other soil depths under both types of irrigation (Fig. b). Moreover, marked evaporation occurs only after water is supplied from the deep soil (or even groundwater) to the surface layer of the soil column, and salt accumulation at the surface of the soil column is rapid because salt migrates in the same direction as the water moves and accumulates after the water evaporates. The greater the evaporation, the greater the salt accumulation. The relatively lower accumulation of salt ions in deeper soil may also be attributed to the salt-washing effect of flood irrigation, whereby salt ions may be leached to deeper soil layers.

Differences in soil bacterial community structure and diversity

The results revealed significant differences in the soil bacterial community composition at various taxonomic levels after 8 years of surface water and groundwater irrigation. Notably, at the species level (Fig. e), the groundwater irrigation group presented significantly more salt-tolerant bacterial species, and at greater relative abundances, than did the surface water irrigation group. This is likely due to a salinity filtering process, as studies have indicated that the selection of bacterial assemblages depends on their salinity tolerance. The MFA results support this explanation, showing that the bacterial community was mainly associated with the soil salt ion conditions (RV = 0.66, p < 0.001). The amount of soil salt ions changed depending on the type of irrigation water used after 8 years of management.
For example, the abundance of Bacteroidetes increased significantly under salt stress, while that of other bacteria, such as Proteobacteria, decreased. The present study likewise revealed that the abundance of salt-tolerant species increased after 8 years of continuous irrigation with saline groundwater (based on the analysis of the top 30 species), and these species can be regarded as characteristic of groundwater-irrigated soil. For example, the representative species under groundwater irrigation, namely marine bacterium JK1007 (B8), bacterium YC-LK-LKJ35 (B1), Rhodovulum sp. (B6), Candidatus Wildermuthbacteria bacterium RIFOXYD1 FULL 50–12 (B26), Methylohalomonas lacus (B13) and Ectothiorhodospiraceae bacterium WFHF3C12 (B21), were originally identified in saline environments and occurred at relatively high abundances. Our results also indicated that the bacterial community was partly linked to the soil physicochemical properties (most of these indicators being soil nutrients) after 8 years of continuous groundwater and surface water irrigation. Three indicators (TK, AK, and pH) contributed to this correlation, and their levels were determined by soil salinity. This result may reflect a situation in which salinity filtering was the major determinant of bacterial community assembly after 8 years of irrigation with the two types of water. The predominant contributions of TK and AK may arise because the underground water contains more K+ and more potassium-associated mineralization than the surface water does, which ultimately results in marked changes in TK and AK. The lower contribution of soil nutrients to the soil bacterial community may also be related to the relative balance of soil nutrients (TN, AN, TP, AP, and SOC) during the experiment, as the same fertilization practices and weed control measures resulted in little difference in the contents of most soil nutrients. In the present study, groundwater irrigation caused a reduction in soil bacterial richness and diversity (Chao1, Shannon, Observed species and Simpson) in comparison with surface water irrigation after 8 years of management. This reduction could be due to excessive salt accumulation in the soil after long-term groundwater irrigation, given that the salinity of the underground water was 2.64 times the total salinity of the surface water in the experimental area of this study. According to previous studies, salt broadly suppresses soil microbial communities and is negatively correlated with soil microbial diversity and activity in arid ecosystems. The reduction in soil bacterial richness and diversity associated with a sharp increase in soil salinity is mainly attributed to the increased dehydration and lysis of bacterial cells under osmotic stress, and this negative influence is aggravated by increases in soluble salt concentrations. Our results revealed that although the abundances of dominant phyla such as Proteobacteria, Bacteroidetes, and Actinobacteria did not change significantly, the abundances of certain salt-tolerant taxa such as Geminicoccaceae increased significantly, while those of salt-sensitive groups (e.g., Cyanobacteria) decreased. This aligns with the findings of Xia et al. This effect likely indicates that the increase in soil salinity due to long-term groundwater irrigation exerts selective pressure on different microbial groups, allowing salt-tolerant species to survive while salt-sensitive ones are eliminated. This in turn leads to a decrease in bacterial diversity.
Unlike the significant differences between groundwater and surface water irrigation, the soil bacterial community structure, richness, and diversity were not significantly affected by soil depth under either irrigation method after 8 years. In fact, the soil salinity and salt ion contents were significantly greater at 0–5 cm than at the other depths. However, bacterial communities and diversity were not noticeably affected by changes in salt content with depth. This may be due to the ability of most soil microorganisms to coexist within a certain salinity range. Hu et al. reported that a less diverse and more stressed environment for microbial coexistence is more likely to form in brackish fields than in less salty, freshwater fields. Similarly, Shamim et al. reported that bacterial richness, Shannon diversity, and evenness did not significantly differ between nonsaline freshwater and saline groundwater environments.
After 8 years of cultivation, saline groundwater irrigation, compared with surface water irrigation, increased soil salinization (higher salt ion content). A comparison of surface water and groundwater irrigation revealed that under groundwater irrigation, the accumulation of soil salt ions (Na+, Mg2+, K+, Ca2+, Cl−, SO42−, and HCO3−) and the contents of soil total potassium (TK) and available potassium (AK) at all soil depths were significantly greater than those under surface water irrigation, with a trend toward increasing accumulation in the surface soil (0–5 cm). In contrast to the soil salt ions, long-term groundwater and surface water irrigation resulted in divergent changes in the soil bacterial community structure and diversity, without significant differences being observed among the soil depths. Although the relative abundances of common bacterial groups and species (e.g., Bacteroidetes, Actinobacteria, and Gemmatimonadetes) did not vary significantly under the different irrigation conditions, salt-tolerant bacterial groups (e.g., Balneolaceae and Halomonadaceae) and species (e.g., marine bacterium JK1007, bacterium YC-LK-LKJ35, and Rhodovulum sp.) dominated in the groundwater-irrigated environment. Moreover, the species diversity of the soil bacterial community under saline groundwater irrigation was significantly lower than that under surface water irrigation. Further analysis of the soil physicochemical properties, soil salt ion contents, and bacterial communities indicated that the differences in the bacterial communities were mainly related to the soil salt ion concentrations.
Unlike the bacterial communities in soils irrigated with surface water, the characteristic species in soils under long-term saline groundwater irrigation are salt-tolerant species (e.g., marine bacterium JK1007, bacterium YC-LK-LKJ35, Rhodovulum sp., Candidatus Wildermuthbacteria bacterium RIFOXYD1 FULL 50–12, Methylohalomonas lacus and Ectothiorhodospiraceae bacterium WFHF3C12). These findings suggest that salinity selection is the determining factor in the structural differences of bacterial communities between long-term groundwater and surface water irrigation. Supplementary Material 1. Supplementary Material 2. |
Uncommon ophthalmology - Care for the rare | a46ee7fc-3934-4aa4-85ec-36d08930a4ac | 9426150 | Ophthalmology[mh] | |
Construction and evaluation of a cloud follow-up platform for gynecological patients receiving chemotherapy | 49129e04-1312-49cd-b699-c87a68d731cd | 10802037 | Gynaecology[mh] | Gynecological malignant tumors occur in the female reproductive organs. The most prevalent gynecological cancers are cervical, endometrial, ovarian, fallopian tube, and vulvar cancers, accounting for 19% of new female cancers and seriously threatening women’s health . Chemotherapy is one of the primary means of cancer therapy . It improves the cure rate of cancer and prolongs the long-term survival of cancer patients; however, chemotherapy is associated with several adverse effects . Up to 75% of patients receiving chemotherapy experience chemotherapy-induced nausea and vomiting (CINV) . Even after prophylactic use of antiemetics, such as 5-HT 3 receptor antagonists, more than 50% of patients present with acute (within 24 h of receiving chemotherapy) or delayed (between 2 and 5 days of treatment) symptoms of CINV . In addition, the incidence of other adverse effects of chemotherapy, such as chemotherapy-induced constipation (CIC), sleep disturbance, chemotherapy-induced peripheral neuropathy (CIPN), and cancer-related fatigue (CRF) is 16–48% , 65% , 30–40% , and 78%, respectively . Typically, these adverse effects do not occur simultaneously, and some of them occur or worsen after discharge . Therefore, it is particularly important to provide regular follow-up and professional health guidance to patients receiving chemotherapy. Cloud follow-up is a new follow-up mode that uses mobile information technology for continuous nursing. It integrates information technology and medical services. Medical staff sends illustrated medical information to patients through the Internet platform and mutually interact with patients. This mode provides convenience, intelligence, and personalization . Traditional follow-up methods, such as telephone, email, outpatient follow-up, family visits, and community follow-up, require considerable human resources and time. Cloud follow-up can address these deficiencies and help digital management of patients’ information, data processing, and data sharing, thereby saving medical resources and improving the working efficiency of medical staff . Follow-up is an essential and routine aspect of treatment among gynecological patients receiving chemotherapy. In this study, a cloud follow-up platform was constructed for these patients, and cost-effectiveness and patients’ feedback were compared between this follow-up method and the traditional method. Setting and participants This study was conducted in a leading maternity and children’s hospital in China. The cloud follow-up platform was introduced into the gynecological tumor chemotherapy ward in 2019 using the hospital information system. In total, 2,538 patients who had undergone chemotherapy for gynecological cancer were enrolled between January and October 2021. This group of patients was defined as the cloud follow-up group since all the follow-up of patients was completed using the cloud follow-up system. In addition, between April and September 2020, 690 patients receiving chemotherapy for gynecological tumors were included in the manual follow-up group. Specifically, patients in this group were followed via telephone calls by nurses. 
Cloud follow-up group Establishment of a multidisciplinary treatment (MDT) team for cloud follow-up A multidisciplinary treatment (MDT) team with eight members was organized, including one department director, one head nurse, three tumor nurses, one oncologist, and two cloud follow-up information technicians. The department director was primarily responsible for constructing and coordinating the cloud follow-up platform. The head nurse was responsible for formulating a cloud follow-up-related management system and implementing project training. Physicians and nurses answered the medical questions of patients or their family members online from 14:00 to 16:00 every day. Tumor nurses were responsible for establishing disease publicity and education-based knowledge, follow-up from baseline, follow-up rules, etc. The cloud follow-up information technicians were responsible for providing technical support for the information needs of the follow-up of patients with gynecological cancer receiving chemotherapy. Construction of the cloud follow-up platform The cloud follow-up platform of our hospital was constructed and implemented by a third-party company, jointly managed by the technicians of the third-party company and the information technicians of the hospital. The cloud follow-up platform was mainly developed in Java language and adopted the Apsara technology platform, integrating elastic computing, data storage, CDN storage, and large-scale computing technology. This platform provided storage resources and computing resources to users on the Internet in the form of public services. The cloud follow-up platform included PC-based physician-patient collaboration, a medical App, a patient App, and a WeChat official account. Since July 2020, the cloud follow-up platform has expanded the modules and functions related to the follow-up of gynecological patients receiving chemotherapy (Table ). Implementation of the cloud follow-up platform Establishing patient-specific files The files included personal basic information (name, gender, age, ID card number, telephone number, etc.) and medical information (current medical problem, past medical history, allergy history, family history, marriage and childbirth history, history of surgery, etc.). It also included outpatient records (medical records, outpatient diagnosis, inspection, examination reports, etc.), inpatient records (admission registration, medical orders, discharge summary, surgical records, hospitalization expenses, examination reports, inspection reports, etc.), and medical examination (medical examination registration and medical examination records). Specialized follow-up and health education After establishing a specialized file for screening suitable patients, a specialized follow-up pathway was developed. After issuing the discharge order, the cloud follow-up system automatically added the patient to the follow-up list and collected the patient’s basic information. Since most of the patients receiving chemotherapy in the intervention department were discharged the day after the infusion of chemotherapeutic agents, a specific follow-up timeline (2 days, 1 week, and 2 weeks after discharge) was set by the MDT team to investigate the occurrence of acute CINV, delayed CINV, and other chemotherapy complications through the WeChat official account. The follow-up contents were consulted by experts. The items of the follow-up form included adverse reactions after chemotherapy, such as CINV, constipation, diarrhea, fatigue, and sleep disorders. 
Each symptom contained hidden subquestions, which popped up automatically only when the patient selected that symptom. In addition, the system automatically pushed the corresponding health education materials according to the answers provided by the patient. The health education materials took the form of videos, PowerPoint presentations, and health education texts. Furthermore, the system set a most severe level for each adverse reaction after chemotherapy; if the patient selected this option, the system assumed that the patient was in a life-threatening state and automatically reminded them to seek medical attention as soon as possible. Subsequently, a report was generated and automatically uploaded to the cloud (Table ). Medical staff could view the answers filled in by the patients through the medical App and provide the necessary feedback. The following Textbox is a simple follow-up dialogue conducted through the WeChat official account.

Household graded management

Patients could record the adverse effects of chemotherapy at home through the patient App, and the medical staff assessed the contents filled in by patients in real time through the medical App and executed household graded management of patients with abnormal records. In the case of CINV, the first level of management was patient self-management. According to the Common Terminology Criteria for Adverse Events v4.0, when nausea and vomiting were rated as grade 0–2, the medical staff conducted one-to-one online guidance through the cloud follow-up system. The second level of management was medical specialty outpatient management: when nausea and vomiting were rated as grade 3–4, the medical staff promptly referred the patients through the hospital's online system and confirmed by telephone if necessary. The study flowchart of the cloud follow-up management platform is shown in Fig. .

Health monitoring

Through the Internet of Things, patients could directly collect health monitoring data from devices such as blood pressure monitors, blood glucose meters, and electrocardiographs. The data were automatically uploaded to the medical App, and physicians could assess them at any time, achieving continuity between external health data and internal medical data. In particular, in case of abnormal readings, the system could remind patients according to pre-set reminder rules and push the abnormal data to physicians to ensure patient safety.

Manual follow-up group

The patients in the control group were investigated by manual follow-up. Specifically, nurses contacted patients one by one according to the discharge list. The items of the follow-up form were consistent with those of the cloud follow-up system; however, the uploaded report was filled in manually.

Data collection

Seven adverse reactions related to chemotherapy, including nausea and vomiting, constipation, diarrhea, sleep disorders, fatigue, and CIPN, as well as thrombosis prevention knowledge (finger exercises and ankle pump exercises), were assessed in the two groups of patients on the 2nd day, 1st week, and 2nd week after discharge. The severity of each adverse reaction included 2–5 options. Regarding the design of the options, nausea and vomiting were defined based on the Common Terminology Criteria for Adverse Events v4.0. Constipation and diarrhea were defined based on the Bristol Stool Form Scale (BSFS) and disease diagnostic criteria.
Fatigue was defined based on the Brief Fatigue Inventory (BFI), and sleep quality and CIPN were defined based on the severity of clinical manifestations and their impact on daily life. Mastery of thrombosis prevention knowledge was recorded with the options "yes" or "no". At the last follow-up, follow-up satisfaction was added as an additional item to the questionnaire and divided into five levels: very satisfied, satisfied, average, dissatisfied, and very dissatisfied.

Assessment indicators

The assessment indicators were follow-up rate, follow-up satisfaction, session duration, and read rate. The following formulae were used: follow-up rate = number of effective follow-ups / (number of effective follow-ups + number of invalid follow-ups) × 100%; follow-up satisfaction = (number of very satisfied + number of satisfied) / number of total actual follow-up cases × 100%; and read rate = number of read times / number of send times × 100%. The number of effective follow-ups was defined as complete data collection for the items in Table (excluding the number of health education materials pushed). The number of invalid follow-ups was defined as missing or incomplete data for the items in Table . Follow-up satisfaction was defined as patients' satisfaction with the follow-up service. The number of read times was defined as the total number of times patients actively read the health education materials, and the number of send times was the number of times the cloud follow-up platform automatically sent health education materials to patients. Session duration was defined as the time medical staff needed to communicate with patients via telephone calls.

Statistical analysis

SPSS statistical software (version 22.0, IBM Inc) was used for data analysis. Age, education attainment, cancer stage, disease category, and the read rate of health education materials were analyzed descriptively. Patients' characteristics, follow-up rate, and follow-up satisfaction were compared between the two groups using Pearson's chi-square test. Multiple regression analysis was performed to explore the details of follow-up satisfaction and follow-up duration. All tests were two-sided. P < 0.05 indicated a statistically significant difference.
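For illustration, the indicator definitions above can be expressed directly in code. The following is a minimal sketch rather than the platform's actual implementation; the record structure, field names, and the assumption that every exported record is either an effective or an invalid follow-up are hypothetical.

```python
# Minimal sketch (not the platform's actual code) of the assessment indicators
# defined above, computed from exported follow-up records.
from dataclasses import dataclass

@dataclass
class FollowUpRecord:
    complete: bool        # all follow-up items answered (effective follow-up)
    satisfaction: str     # "very satisfied", "satisfied", "average", ...
    materials_sent: int   # health education materials pushed to the patient
    materials_read: int   # materials the patient actually opened

def follow_up_rate(records):
    # effective / (effective + invalid), assuming every record is one or the other
    effective = sum(r.complete for r in records)
    return 100.0 * effective / len(records) if records else 0.0

def satisfaction_rate(records):
    satisfied = sum(r.satisfaction in ("very satisfied", "satisfied") for r in records)
    return 100.0 * satisfied / len(records) if records else 0.0

def read_rate(records):
    sent = sum(r.materials_sent for r in records)
    read = sum(r.materials_read for r in records)
    return 100.0 * read / sent if sent else 0.0

# Hypothetical usage with two records:
records = [
    FollowUpRecord(True, "very satisfied", 3, 2),
    FollowUpRecord(False, "average", 2, 0),
]
print(follow_up_rate(records), satisfaction_rate(records), read_rate(records))
```

In practice these counts would be drawn from the platform's database rather than from in-memory records, but the arithmetic is the same as in the formulae above.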
Physicians and nurses answered the medical questions of patients or their family members online from 14:00 to 16:00 every day. Tumor nurses were responsible for establishing disease publicity and education-based knowledge, follow-up from baseline, follow-up rules, etc. The cloud follow-up information technicians were responsible for providing technical support for the information needs of the follow-up of patients with gynecological cancer receiving chemotherapy. Construction of the cloud follow-up platform The cloud follow-up platform of our hospital was constructed and implemented by a third-party company, jointly managed by the technicians of the third-party company and the information technicians of the hospital. The cloud follow-up platform was mainly developed in Java language and adopted the Apsara technology platform, integrating elastic computing, data storage, CDN storage, and large-scale computing technology. This platform provided storage resources and computing resources to users on the Internet in the form of public services. The cloud follow-up platform included PC-based physician-patient collaboration, a medical App, a patient App, and a WeChat official account. Since July 2020, the cloud follow-up platform has expanded the modules and functions related to the follow-up of gynecological patients receiving chemotherapy (Table ). A multidisciplinary treatment (MDT) team with eight members was organized, including one department director, one head nurse, three tumor nurses, one oncologist, and two cloud follow-up information technicians. The department director was primarily responsible for constructing and coordinating the cloud follow-up platform. The head nurse was responsible for formulating a cloud follow-up-related management system and implementing project training. Physicians and nurses answered the medical questions of patients or their family members online from 14:00 to 16:00 every day. Tumor nurses were responsible for establishing disease publicity and education-based knowledge, follow-up from baseline, follow-up rules, etc. The cloud follow-up information technicians were responsible for providing technical support for the information needs of the follow-up of patients with gynecological cancer receiving chemotherapy. The cloud follow-up platform of our hospital was constructed and implemented by a third-party company, jointly managed by the technicians of the third-party company and the information technicians of the hospital. The cloud follow-up platform was mainly developed in Java language and adopted the Apsara technology platform, integrating elastic computing, data storage, CDN storage, and large-scale computing technology. This platform provided storage resources and computing resources to users on the Internet in the form of public services. The cloud follow-up platform included PC-based physician-patient collaboration, a medical App, a patient App, and a WeChat official account. Since July 2020, the cloud follow-up platform has expanded the modules and functions related to the follow-up of gynecological patients receiving chemotherapy (Table ). Establishing patient-specific files The files included personal basic information (name, gender, age, ID card number, telephone number, etc.) and medical information (current medical problem, past medical history, allergy history, family history, marriage and childbirth history, history of surgery, etc.). 
It also included outpatient records (medical records, outpatient diagnosis, inspection, examination reports, etc.), inpatient records (admission registration, medical orders, discharge summary, surgical records, hospitalization expenses, examination reports, inspection reports, etc.), and medical examination (medical examination registration and medical examination records). Specialized follow-up and health education After establishing a specialized file for screening suitable patients, a specialized follow-up pathway was developed. After issuing the discharge order, the cloud follow-up system automatically added the patient to the follow-up list and collected the patient’s basic information. Since most of the patients receiving chemotherapy in the intervention department were discharged the day after the infusion of chemotherapeutic agents, a specific follow-up timeline (2 days, 1 week, and 2 weeks after discharge) was set by the MDT team to investigate the occurrence of acute CINV, delayed CINV, and other chemotherapy complications through the WeChat official account. The follow-up contents were consulted by experts. The items of the follow-up form included adverse reactions after chemotherapy, such as CINV, constipation, diarrhea, fatigue, and sleep disorders. Each symptom contained hidden subquestions, which popped up automatically only when the patient chose to select the symptom. In addition, the system automatically pushed the corresponding health education materials according to the answers provided by the patient. The forms of health education materials included video, PowerPoint, and health education text. Furthermore, the system has set the most severe level for each adverse reaction after chemotherapy. If the patient selected this option, the system assumed that the patient was in a life-threatening state and automatically reminded them to seek medical attention as soon as possible. Subsequently, a report was generated and automatically uploaded to the cloud (Table ). Medical staff could view the answers filled in by the patients through the medical App and provide necessary feedback. The following Textbox is a simple follow-up dialogue conducted through the WeChat official account. Household graded management Patients could record the adverse effects of chemotherapy at home through the patient App, and the medical staff assessed the contents filled in by patients in a real-time mode through the medical App and executed household graded management of patients with abnormal records. In the case of CINV, first-level management was patient self-management. According to the Common Terminology Criteria for Adverse Events v4.0 , when nausea and vomiting were rated as grade 0–2, the medical staff conducted one-to-one online guidance through the cloud follow-up system. The second line of management was medical specialty outpatient management. When nausea and vomiting were rated as grade 3–4, the medical staff promptly referred the patients to the hospital’s online system and made confirmation through telephone if necessary. The study flowchart of the cloud follow-up management platform is shown in Fig. . Health monitoring Through the Internet of Things, patients could directly collect health monitoring data from devices such as blood pressure monitors, blood glucose meters, and electrocardiograms. Data were automatically uploaded to the Medical App, and physicians could assess them at any time, achieving continuity between external health data and internal medical data. 
Particularly, in case of abnormal situations, the system could remind patients according to pre-set reminder rules, and push it to physicians to ensure the safety of patients. Manual follow-up group The patients in the control group were investigated by manual follow-up. Specifically, nurses contacted patients one by one according to the discharge list of patients. The items of the follow-up form were consistent with those of the cloud follow-up system. However, the uploaded report was filled in manually. Data collection Seven adverse reactions related to chemotherapy, including nausea and vomiting, constipation, diarrhea, sleep disorders, fatigue, and CIPN, and thrombotic prevention knowledge (finger exercises and ankle pump exercises) were assessed in the two groups of patients on the 2nd day, 1st week, and 2nd week after discharge. The severity of each adverse reaction included 2–5 options. Regarding the design of the options, nausea and vomiting were defined based on the Common Terminology Criteria for Adverse Events v4.0 . Constipation and diarrhea were defined based on the Bristol Stool Form Scale (BSFS) and disease diagnostic criteria . Fatigue was defined based on the Brief Fatigue Inventory (BFI) , and sleep quality and CIPN were defined based on the severity of clinical manifestations and their impact on daily life. The mastery of thrombosis prevention knowledge was set to the options of “yes” or “no”. At the last follow-up, follow-up satisfaction was added as an additional item to the questionnaire and divided into five levels: very satisfied, satisfied, average, dissatisfied, and very dissatisfied. Assessment indicators The assessment indicators were follow-up rate, follow-up satisfaction, session duration, and read rate. The following formulae were used: follow-up rate = number of effective follow-ups / (number of effective follow-ups + number of invalid follow-ups) × 100%; follow-up satisfaction = (number of very satisfied + number of satisfied) / number of total actual follow-up cases × 100%; and read rate = number of read times/number of send times × 100%. The number of effective follow-ups was defined as the complete data collection in Table (excluding the number of health education materials pushed parameter). The number of invalid follow-ups was defined as missing or incomplete data in Table . Follow-up satisfaction was defined as patients’ satisfaction with the follow-up service. The number of read times was defined as the total number of patients who actively read health education materials. For the number of send times, we measured how many times the cloud follow-up platform automatically sent health education materials to patients. Session duration was defined as the time medical staff needed to communicate with patients via telephone calls. Statistical analysis SPSS statistical software (version 22.0, IBM Inc) was used for data analysis. Age, education attainment, cancer stage, disease category, and read rate of health education materials were analyzed descriptively. Patients’ characteristics, follow-up rate, and follow-up satisfaction in both groups were measured using Pearson’s chi-square test. Multiple regression analysis was performed to explore the details of follow-up satisfaction degree and follow-up duration. All tests were two-sided. P <0.05 indicated a statistically significant difference. The files included personal basic information (name, gender, age, ID card number, telephone number, etc.) 
Figure depicts patient recruitment and follow-up processes.
In total, 3,706 patients were willing to participate in this study. Among these patients, 239 (6.4%) were excluded due to critical illness, inability to use a smartphone, illiteracy, or poor mental condition. Finally, 3,467 patients were included in this study and were allocated based on follow-up time: 2,735 patients were assigned to the intervention group for cloud follow-up, whereas 732 patients were assigned to the control group for manual (telephone) follow-up. In total, 197 (7.2%) patients in the cloud follow-up group were excluded because they filled out the follow-up form fewer than three times, while 42 (5.7%) patients in the manual follow-up group were excluded due to inability to contact or refusal to participate. Eventually, 2,538 (92.8%) patients in the cloud follow-up group and 690 (94.3%) patients in the manual follow-up group successfully completed the survey.
Patients' characteristics
The characteristics of patients in the two groups are shown in Table . No significant differences were found between the two groups in age, educational attainment, cancer stage, or disease category.
Cost-effectiveness
Tables and summarize the major outcomes for all participants. When the patients in the two groups had completed three follow-ups, the follow-up rate was not significantly different between the two groups (cloud: 6,957/7,614, 91.4%; manual: 1,869/2,070, 90.3%; P = 0.13). Follow-up satisfaction was significantly higher among cloud follow-up patients than among manual follow-up patients (cloud: 7,192/7,614, 94.5%; manual: 1,532/2,070, 74.0%; P < 0.001). Multivariate logistic regression analysis revealed that cloud follow-up improved patient satisfaction (odds ratio: 2.239, 95% CI: 1.237–5.219). Moreover, 100 patients were randomly selected from the cloud follow-up group to calculate session duration: the total time needed to complete one follow-up was 1.2 h, compared with 10.5 h in the manual follow-up group. Multiple linear regression models were applied to assess the adjusted impact of cloud follow-up. The results showed that cloud follow-up significantly reduced follow-up duration, by an average of 9.287 h. In addition, higher educational attainment was associated with a shorter follow-up duration. Time spent in the manual follow-up group included the time nurses spent on the telephone collecting the follow-up form information and then organizing and uploading Table ; in the cloud follow-up group, Table was generated automatically and instantaneously by the system. Time spent by medical staff in the cloud follow-up group mainly consisted of the time needed to give telephone feedback on any abnormal form submitted by a patient.
Pushing and reading thematic health education materials on the cloud follow-up platform
Between January and October 2021, the cloud follow-up platform pushed 170,374 thematic health education materials. Of these, 124,189 were read, giving a read rate of 72.9%. The read rate of "diet, nutrition, and patients receiving chemotherapy" was the highest, followed by "management of nausea and vomiting in patients receiving chemotherapy" and "management of constipation among patients receiving chemotherapy." "Guidelines for adolescent gynecological patients receiving chemotherapy" had the lowest read rate (Table ).
Usage of other functions of the cloud follow-up platform
Of 2,538 patients, 2,212 downloaded the patient App and registered on it.
Patients actively recorded 6,235 diet and nutrition events, 4,256 CINV events, 3,218 constipation events, 823 diarrhea events, 3,012 sleep disturbance events, 1,987 peripherally inserted central catheter (PICC) home care events, 924 fatigue events, 328 events of hands and feet numbness, and 1,439 pain events. The automatic data statistics function of the cloud follow-up platform showed that physicians handled abnormal health monitoring data 1,766 times. A medical staff member checked patients' records through the medical App and provided timely online feedback for abnormal records.
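For transparency, the group comparisons reported above can be recomputed directly from the published counts; the short sketch below (not the authors' code) reproduces the follow-up rates, the satisfaction comparison, and the read rate.

```python
from scipy.stats import chi2_contingency

# Effective vs. invalid follow-ups over the three follow-up rounds, as reported above.
follow_up = [[6957, 7614 - 6957],   # cloud: 91.4%
             [1869, 2070 - 1869]]   # manual: 90.3%
chi2, p, _, _ = chi2_contingency(follow_up)
print(f"follow-up rate: chi2 = {chi2:.2f}, P = {p:.2f}")   # not significant, P ≈ 0.13

# (Very satisfied + satisfied) vs. the remaining responses.
satisfaction = [[7192, 7614 - 7192],   # cloud: 94.5%
                [1532, 2070 - 1532]]   # manual: 74.0%
chi2_s, p_s, _, _ = chi2_contingency(satisfaction)
print(f"satisfaction: chi2 = {chi2_s:.1f}, P = {p_s:.1e}")  # P < 0.001

# Read rate of pushed health education materials.
print(f"read rate = {124_189 / 170_374:.1%}")               # 72.9%
```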
Cloud follow-up provides a new model for continuous nursing
Continuous nursing aims to extend in-hospital nursing services to communities and families and to ensure the continuity of nursing services . Due to the limited length of stay, gynecological patients receiving chemotherapy have varying degrees of demand for out-of-hospital nursing services after discharge. Thus, the implementation of continuous nursing for gynecological patients receiving chemotherapy can improve their quality of life (QoL) . In the information era, "Internet +" integrates the Internet and patient follow-up via information and communication technology, forming a modern form of follow-up, of which cloud follow-up is one product. Cloud follow-up strengthens communication and exchange with patients, promotes the hospital brand, provides more health education resources for patients, and improves treatment compliance. Taking personalized and continuous medical services as the core, we designed efficient information flow and business collaboration channels and built a post-hospital cloud follow-up service system based on the existing service process and functional system of the hospital. This cloud follow-up service system could organically integrate follow-up and clinical practice. Multi-disciplinary cooperation was achieved, helping to monitor and manage every patient handled through the platform. The platform also contributed to establishing a comprehensive and accurate medical and health database. With the advancement of science and technology, especially the development of the Internet of Things, cloud follow-up can be combined with other monitoring devices, making it more convenient and efficient to connect with hospital systems and respond to clinical requirements.
Cloud follow-up improved the cost-effectiveness of follow-up
The cost-effectiveness analysis showed no significant difference in the follow-up rate between the cloud follow-up group and the manual follow-up group, suggesting that the follow-up effect of cloud follow-up is not inferior to that of manual follow-up and that it can replace the manual follow-up scheme to a certain extent. The multiple linear regression model showed that cloud follow-up saved 9.287 h compared with manual follow-up, possibly because cloud follow-up allowed simultaneous follow-up of 7–9 patients and automatically generated the follow-up results. In contrast, in manual follow-up, nurses had to call patients, fill in the follow-up form, and manually generate the follow-up results. Therefore, cloud follow-up saves time and human resources. In addition, during the follow-up of 100 patients, patients with high school education or above took 0.876 h less time compared with patients with middle school education or below. The majority of patients in this study were middle-aged and elderly, and cloud follow-up, as a new form of follow-up, may be challenging for patients with lower levels of education to understand and use. Patients with higher levels of education are more likely to accept and adapt to new things . Even with manual follow-up, patients with lower levels of education may have lower communication and information recognition abilities, resulting in longer follow-up times. Patient satisfaction can objectively reflect the quality of medical services and help measure the quality of hospital management .
The multivariable logistic regression model showed that the cloud follow-up group had significantly greater follow-up satisfaction (odds ratio: 2.239, 95% CI: 1.237–5.219). Cloud follow-up treats medical institutions as the main provider of follow-up services, integrating hospital management systems, physicians, nurses, and researchers to provide patients with more effective post-hospital services. It combines internet technology with medical and health management for patients, achieving functional requirements such as general follow-up, specialized follow-up, scientific research, questionnaire customization, App interaction, discharge education, and intelligent anomaly analysis. By utilizing the "Internet + Medical" model, medical services can be extended to post-hospital and home settings, enabling patients to receive scientific, professional, and convenient technical services and guidance for rehabilitation and treatment outside the hospital. Moreover, patients in the cloud follow-up group could fill in the form or read the health education materials in their spare time, so the follow-up time of the cloud follow-up group was more flexible than the time needed to cooperate with nurses for manual follow-up. In addition, cloud follow-up can deliver health education in diverse formats such as text, PowerPoint, and video, which makes health education more engaging and enables patients to receive health guidance more intuitively.
The cloud follow-up platform can reflect patients' attention to different aspects of disease knowledge
Disease knowledge is closely related to patients' self-care ability, treatment compliance, and QoL . In this study, 13 structured health-education themes were designed based on expert consultation. The read rates showed that patients paid different levels of attention to each theme. Patients paid the highest attention to diet and nutrition, which also had the most active records. This reflects patients' interest in diet and nutrition knowledge, which affects their compliance with dietary and nutritional modifications and may be closely related to the prognosis of tumor patients receiving chemotherapy. The 2017 ESPEN guidelines on nutrition in cancer patients recommend enteral nutrition support as a treatment for cancer patients receiving chemotherapy . Therefore, patients pay more attention to diet and nutrition during home care . In addition, patients pay more attention to the management of CINV and constipation , since CINV and constipation are the most common adverse effects of chemotherapy and significantly affect patients' comfort and QoL. Patients paid less attention to fertility-related matters and to guidelines for adolescent gynecological patients receiving chemotherapy (read rates of 40.5% and 35.1%, respectively). The reason may be related to the age at which these diseases occur: the peak age of gynecological malignant tumors is mainly 40–65 years , which is consistent with the results of our study. Most people in this age group have passed adolescence and have given birth; therefore, they pay less attention to the relevant health education.
The cloud follow-up platform provided sufficient health education resources for patients and met the requirements of hospital infection management
Gynecological patients receiving chemotherapy have insufficient health education information, especially for home care . However, lack of knowledge is an independent risk factor for treatment non-compliance .
The 13 thematic health education materials pushed through the cloud follow-up platform improved the knowledge of gynecological patients receiving chemotherapy. Medical staff regularly logged in to the cloud follow-up platform to check patients' click-through rates, assess their active acceptance of health guidance, and provide targeted guidance to patients with low compliance. In addition, the health education materials pushed by the cloud follow-up platform allowed patients or their family members to repeatedly view the content anytime and anywhere through their mobile phones. The video content, combined with text, images, sound, and animation, makes it easier for patients to remember how to manage the side effects of chemotherapy, thereby improving their QoL. Furthermore, compared with those receiving manual follow-up, patients in the cloud follow-up group could more comprehensively consult medical staff online, obtain professional guidance and medication reminders, and receive disease knowledge. Cloud follow-up has no temporal or geographical limitations and provides a paradigm for other medical institutions to improve the management of patients in remote areas who are otherwise difficult to follow up. Information technology provides access to high-quality medical resources, reduces unnecessary outpatient follow-ups, avoids cross-regional patient mobility, and decreases the risk of cross-infection during the COVID-19 pandemic. This approach meets the requirements of hospital infection management.
Limitations
There are some limitations to the present study. First, in addition to online answers provided by the medical staff, other forms of interaction, such as chatbots, could be introduced. A chatbot is an artificial intelligence program that realizes human-computer interaction in the form of dialogue or text with the help of natural language processing and sentiment analysis. It is currently used in the diagnosis, treatment, and management of diseases , and it can also help patients with gynecological tumors solve problems commonly observed in the perioperative period . Therefore, future studies could develop chatbot or reference software suitable for gynecological patients receiving chemotherapy. Second, the cloud follow-up system used in this study was implemented only in its early stages, including early software testing, system installation training, and a hospital pilot. Therefore, the clinical operation time was short, and the system was not stable enough. These issues should be addressed in collaboration with a software engineer in the later stages of the operation process.
This study highlighted that the follow-up effect of the cloud follow-up group was not inferior to that of the manual follow-up group. Cloud follow-up supports COVID-19 prevention and control, improves the cost-effectiveness of follow-up, provides sufficient health education for patients, and reflects patients' attention to disease knowledge. Therefore, it can be widely used in clinical practice.
Proteomic Analysis of Midgut of Silkworm Reared on Artificial Diet and Mulberry Leaves and Functional Study of Three UGT Genes

The silkworm is not only an economically significant insect with favorable genetic traits but is also considered an ideal lepidopteran model for scientific research . As an oligophagous insect, the silkworm primarily feeds on fresh mulberry leaves, from which it derives all necessary nutrients and water . This relationship is a result of long-term co-evolution and natural selection between silkworms and mulberry trees. Although artificial diets for silkworms replicate the composition of mulberry leaves, the intake of artificial feed varies significantly among silkworm varieties compared with that of mulberry leaves . Consequently, issues related to weak silkworm physique, silk protein synthesis, and low silk yield remain unresolved . At the end of the 20th century, proteomics technology began to be applied to silkworm research, and with the advancement of the silkworm genome project, silkworm proteomics has become a focus of silkworm biology research . Proteomic analyses have revealed phosphorylation differences in the N-terminal sequence of the silk protein heavy chain and shown that Filippi's gland affects silk viscosity during spinning by regulating post-translational modification . The immune mechanism of the silkworm fat body against Bacillus cereus ZJ-4 has also been studied by proteomics: the differentially expressed proteins were mainly involved in stress response, biological regulation, and innate immunity, and B. cereus ZJ-4 can disrupt the innate immune pathway of silkworms and impair the normal immune function of fat body cells . UDP-glycosyltransferases (UGTs) are a superfamily of glycosylation-related enzymes that are ubiquitous in animals, plants, bacteria, and viruses . In insects, UGTs play an important role in many processes, including detoxification of substrates such as plant allelochemicals, cuticle formation, pigmentation, and olfactory function . Studies have shown that UGTs catalyze the conjugation of a range of lipophilic small compounds with sugars to produce glycosides, playing an important role in the detoxification of xenobiotics and the regulation of endogenous compounds in insects. Many UGTs are expressed in the fat body, midgut, and Malpighian tubules, indicating a role in detoxification, while some UGTs are expressed in the antennae, indicating a role in pheromone recognition . A total of 42 UGT genes have been found in the silkworm genome, and gene chip and RT-qPCR analyses showed that different UGT genes have distinct expression patterns. The BmUGT013829 gene can glycosylate flavonoids and also participates in olfactory and detoxification responses . Another study showed that the BmUGT10295 and BmUGT8453 genes were significantly expressed in the midgut and Malpighian tubules of silkworms infected by N. bombycis . After overexpression of the two genes, the microsporidian load in the samples was significantly reduced, whereas after RNAi-mediated knockdown of their expression the load was significantly increased, indicating that the two genes are induced upon infection and confer resistance to microsporidian (pébrine) disease .
Glycosylation catalyzed by UDP-glucosyltransferases (UGTs) is of great significance for controlling and eliminating endogenous and exogenous toxins. BmUGT10286 ( UGT86 ) directly affects the formation of green pigments in silkworm cocoons. UGT86 is expressed not only in the digestive tract and silk gland tissues but also in the Malpighian tubules, fat body, and gonads . UDP-glucosyltransferases and ABC transporters are involved in substance metabolism and detoxification processes . In this study, iTRAQ (Isobaric Tags for Relative and Absolute Quantitation) technology was utilized to investigate the proteome of the silkworm midgut, comparing specimens reared on an artificial diet with those fed mulberry leaves, and significantly differentially expressed proteins were identified. Through molecular docking, three anti-nutritional factors were identified that stably bind to the UGT40B4 , UGT340C2 , and UGT40A1 proteins. We further explored the impact of these anti-nutritional factors on the expression and activity of the three UGT genes. This research is anticipated to lay a theoretical foundation for future investigations into the functional roles of UGT genes within the silkworm midgut and to contribute to the development of optimized artificial diets for silkworm rearing.
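The differential screening described in the Results below (fold change > 1.2, Q value < 0.05) can be sketched as a simple table filter. Note that the file name, column names, and the reciprocal threshold used for down-regulated proteins in this sketch are assumptions for illustration, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical iTRAQ quantitation table: one row per identified midgut protein,
# with the artificial-diet / mulberry-leaf abundance ratio and its Q value.
quant = pd.read_csv("midgut_itraq_quant.csv")   # assumed file and columns

significant = quant["q_value"] < 0.05
up_regulated = quant[significant & (quant["fold_change"] > 1.2)]
down_regulated = quant[significant & (quant["fold_change"] < 1 / 1.2)]

print(f"up-regulated:   {len(up_regulated)}")    # 564 reported in this study
print(f"down-regulated: {len(down_regulated)}")  # 400 reported in this study
print(f"total DEPs:     {len(up_regulated) + len(down_regulated)}")
```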
2.1. Proteome Differential Expression Analysis
Midgut samples from silkworms reared on the artificial diet and from silkworms reared on mulberry leaves were screened using the criteria of fold change > 1.2 and Q value < 0.05. Compared with mulberry leaf-reared silkworms, 564 proteins were up-regulated and 400 proteins were down-regulated in the midgut of artificial diet-reared silkworms, giving a total of 964 significantly differentially expressed proteins.
2.2. Statistical Analysis of Differential Protein KEGG Classification
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a tool mainly used to study the interactions between metabolic pathways, genes, and proteins. Analysis of the KEGG pathways to which the differential proteins were assigned showed that they fall into six categories: cellular processes, environmental information processing, genetic information processing, human diseases, metabolism, and organismal systems . Within these six categories, the proteins are involved in transport and catabolism, signal transduction, transcription, viral diseases, carbohydrate metabolism, and endocrine system pathways.
2.3. KEGG Enrichment Analysis of Differential Proteins
The differential proteins were mainly enriched in metabolic pathways, protein digestion and absorption, lysosome, propanoate metabolism, galactose metabolism, valine, leucine and isoleucine degradation, phagosome, tryptophan metabolism, and other pathways. Among them, 197 differentially expressed proteins were enriched in metabolic pathways, 41 in the protein digestion and absorption pathway, 34 in the protein processing in endoplasmic reticulum pathway, 27 in the lysosome pathway, 24 in the PI3K-Akt pathway, and 24 in the oxidative phosphorylation pathway .
2.4. RT-qPCR Validation Analysis
Through proteomic analysis and RT-qPCR analysis, the UGT40B4 (AEW43167.1), UGT340C2 (AEW43159.1), and UGT40A1 (AEW43163.1) proteins of the UDP-glucosyltransferase family were screened out. They are involved in carbohydrate metabolism, lipid metabolism, metabolism of cofactors and vitamins, xenobiotic biodegradation and metabolism, and other pathways. At both the transcriptional and the protein level, the expression of these three genes was significantly higher in the midgut of artificial diet-reared silkworms than in that of mulberry leaf-reared silkworms .
2.5. UGT Gene Expression Profile Analysis
Analysis of the instar and tissue expression profiles of the three UGT genes showed that, in the instar expression profile, the relative expression of the UGT40B4 gene was higher in the 1st and 4th instars . The relative expression of the UGT340C2 gene increased gradually from the 1st to the 3rd instar, was significantly higher in the 2nd and 3rd instars, and was lower in the 4th and 5th instars. The relative expression of the UGT40A1 gene was low in the 1st to 3rd instars and high in the 4th and 5th instars, indicating that this gene is mainly expressed in the late larval (large silkworm) stage .
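The relative expression values described here were obtained by RT-qPCR. A common way to derive such values is the 2^-ΔΔCt method, sketched below with hypothetical Ct values; the exact calculation used by the authors is not stated in this excerpt, so the sketch is illustrative only.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ΔΔCt: expression of a target gene relative to a reference gene,
    normalized to a calibrator sample (e.g., the 1st instar or a control tissue)."""
    delta_ct_sample = ct_target - ct_ref
    delta_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values for UGT40B4 versus a housekeeping reference gene.
print(relative_expression(ct_target=24.1, ct_ref=18.3,
                          ct_target_cal=26.1, ct_ref_cal=18.3))  # ≈ 4-fold
```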
In the tissue expression profile, the relative expression of the UGT40B4 gene was higher in the Malpighian tubules and midgut, indicating that this gene may be related to digestion and metabolism in the silkworm. The relative expression of the UGT340C2 gene was higher in the epidermis and midgut, which may be related to digestion and metabolism and to construction of the epidermis. The UGT40A1 gene is highly expressed in the epidermis, Malpighian tubules, silk glands, and head of the silkworm and may have multiple functions .
2.6. Analysis of the Effect of Adding Soybean Isoflavones
After soybean isoflavones were added, the expression of the UGT40B4 gene first increased, then decreased, and then increased again with increasing isoflavone content, indicating that soybean isoflavones can induce the expression of this gene to a certain extent. The expression of the UGT340C2 gene in the experimental group given 0.1% soybean isoflavones was significantly higher than that in the control group . As the isoflavone content of the semi-synthetic feed increased, the expression of the UGT340C2 gene decreased gradually, indicating that soybean isoflavones within a certain content range can induce the expression of this gene, whereas higher isoflavone contents suppress this induction. The expression of the UGT40A1 gene increased with increasing isoflavone content , indicating that soybean isoflavones can significantly induce the expression of this gene. In the investigation of body weight during the full-eating period, the addition of 0.1% and 0.4% soybean isoflavones to the semi-synthetic feed had a significant negative effect on body weight , indicating that soybean isoflavones significantly affect the growth and development of fifth-instar silkworms. In the investigation of cocoon quality, the cocoon shell weight of the isoflavone-supplemented groups was improved to a certain extent compared with the control group, and the pupal weight of the experimental groups was significantly higher than that of the control group , indicating that the addition of soybean isoflavones to the feed can increase pupal weight.
2.7. Analysis of the Effect of Adding Tannic Acid
In the test group given semi-synthetic feed with 0.2% tannic acid, there was no significant difference in the expression of the UGT40B4 gene compared with the control group . As the tannic acid content increased, the expression of this gene showed a downward trend, indicating that tannic acid can inhibit its expression. The expression of the UGT340C2 gene in the experimental groups given 0.2% and 0.8% tannic acid was significantly higher than that in the control group, indicating that a low tannic acid content can induce the expression of this gene; its expression decreased as the tannic acid content of the semi-synthetic feed increased, indicating that higher tannic acid contents increasingly inhibit its expression. The expression of the UGT40A1 gene in the experimental groups was significantly lower than that in the control group .
With increasing tannic acid content in the experimental groups, the expression level decreased further, indicating that tannic acid can inhibit the expression of this gene. In the investigation of body weight during the feeding period, the body weight of silkworms decreased gradually as the tannic acid content of the feed increased, indicating that tannic acid adversely affects the growth and development of fifth-instar silkworms . In the investigation of cocoon quality, the addition of 0.2% tannic acid to the semi-synthetic feed significantly improved cocoon quality, whereas cocoon quality in the test group given 0.8% tannic acid deteriorated significantly. In the test group given 3.2% tannic acid, silkworms could not spin cocoons normally or spun only thin cocoons, which seriously affected cocoon quality , indicating that high levels of tannic acid in artificial feed or mulberry leaves adversely affect the growth and development of silkworms and the cocooning process.
2.8. Analysis of the Effect of Adding Arabinoxylan
The expression of the UGT40B4 gene first decreased, then increased, and then decreased again with increasing arabinoxylan content in the semi-synthetic feed, indicating that arabinoxylan can induce the expression of this gene to a certain extent . The expression of the UGT340C2 gene increased significantly with increasing arabinoxylan content, indicating that arabinoxylan can induce the expression of this gene; there was no significant difference in its expression between the experimental groups given 0.4% and 1.6% arabinoxylan. The expression of the UGT40A1 gene increased gradually between 0% and 0.4% arabinoxylan, indicating that arabinoxylan can induce its expression, whereas its expression decreased after the addition of 1.6% arabinoxylan , indicating that a high arabinoxylan content inhibits this induction to a certain extent. In the investigation of body weight during the feeding period, the addition of 0.1% and 0.4% arabinoxylan to the semi-synthetic feed was beneficial to the growth and development of fifth-instar silkworms, whereas the addition of 1.6% arabinoxylan had a significant adverse effect. In the investigation of cocoon quality, the addition of 0.1% arabinoxylan increased cocoon shell weight and pupal weight, and after the addition of 0.4% arabinoxylan cocoon quality was not significantly different from that of the control group . After the addition of 1.6% arabinoxylan, cocoon quality deteriorated significantly, indicating that an appropriate amount of arabinoxylan can improve the production performance of silkworms to a certain extent , whereas a high arabinoxylan content has a toxic effect on silkworms and affects their growth and development.
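Group comparisons such as the body-weight and cocoon-quality differences reported above are typically tested with a one-way ANOVA across supplementation levels; the statistical test actually used is not named in this excerpt, so the following is only a generic sketch with made-up weights.

```python
from scipy import stats

# Hypothetical full-eating body weights (g) for the control and three
# supplementation levels of an additive; values are illustrative only.
control = [4.12, 4.25, 4.18, 4.02, 4.21]
low     = [4.05, 3.96, 4.10, 4.01, 3.88]
medium  = [3.84, 3.72, 3.91, 3.65, 3.79]
high    = [3.31, 3.44, 3.25, 3.52, 3.36]

f_stat, p_value = stats.f_oneway(control, low, medium, high)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")
```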
In the midgut samples of artificial diet-rearing silkworms and mulberry leaf-rearing silkworms, we screened them with Fold change > 1.2 and Q value < 0.05. The results showed that there were 564 up-regulated proteins and 400 down-regulated proteins in the midgut of artificial diet-rearing silkworms compared with mulberry leaf-rearing silkworms. There were 964 significantly different proteins.
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a tool mainly used to study the interaction between metabolic pathways, genes and proteins. By analyzing the KEGG pathways involved in differential proteins, we found that they include six categories: cellular processes, environmental information processing, genetic information processing, human diseases, metabolism and organic systems . In these six categories, they are involved in transport and catabolism, signal transduction, transcription, viral diseases, carbohydrate metabolism and endocrine system pathways.
Through the analysis of differential proteins, it was found that they were mainly enriched in metabolic pathways, protein digestion and absorption, lysosome, propionic acid metabolism, galactose metabolism, valine, leucine and isoleucine degradation, phagosome, tryptophan metabolism and other pathways. Among them, 197 differentially expressed proteins were enriched in metabolic pathways, 41 differentially expressed proteins were enriched in protein digestion and absorption pathways, 34 differentially expressed proteins were enriched in protein processing pathways in endoplasmic reticulum, 27 differentially expressed proteins were enriched in lysosomal pathways, and 24 differentially expressed proteins were enriched in PI3K-Akt pathway. A total of 24 differentially expressed proteins were enriched in the oxidative phosphorylation pathway .
Through proteomics analysis and fluorescence quantitative analysis, UGT40B4 (AEW43167.1), UGT340C2 (AEW43159.1) and UGT40A1 (AEW43163.1) proteins in the uridine diphosphate glucose transferase family were screened out. They are involved in carbohydrate metabolism, lipid metabolism, cofactor and vitamin metabolism, biodegradation and metabolism of xenobiotics and other pathways. The expression levels of these three proteins in the midgut of artificial diet-rearing silkworms at the transcriptional level and protein level were significantly higher than those of mulberry leaf-rearing silkworms .
Through the analysis of the instar expression profile and tissue expression profile of the three UGT genes, it was found that in the instar expression profile, the relative expression of UGT40B4 gene was higher in the 1st and 4th instars of the silkworm . The relative expression level of UGT340C2 gene gradually increased from the 1st to 3rd instars of the silkworm, and the relative expression levels of the 2nd and 3rd instars were significantly higher, and the relative expression levels of the 4th to 5th instars were lower. The relative expression of UGT40A1 gene was low in the 1st to 3rd instars and high in the 4th to 5th instars, indicating that the gene was mainly expressed in the large silkworm period . In the tissue expression profile, the relative expression of UGT40B4 gene was higher in the Malpighian tube and midgut of the silkworm, indicating that the gene may be related to the digestion and metabolism of the silkworm. The relative expression of UGT340C2 gene in the epidermis and midgut of silkworms was higher, which may be related to the digestion and metabolism of silkworms and the construction of the epidermis. The UGT40A1 gene is highly expressed in the epidermis, Malpighian tubules, silk glands and head of the silkworm, and may have multiple functions .
After adding soybean isoflavones, the expression of UGT40B4 gene showed a trend of first increasing, then decreasing and then increasing with the increase of soybean isoflavones, indicating that soybean isoflavones could induce the gene expression to a certain extent. The expression of UGT340C2 gene was significantly higher than that of the control group in the experimental group with 0.1% soybean isoflavones . With the increase of soybean isoflavones in the semi-synthetic feed of the experimental group, the expression of UGT340C2 gene decreased gradually, indicating that soybean isoflavones in a certain content range could induce the expression of UGT340C2 gene. With the increase of soybean isoflavone content, the induced expression of UGT340C2 gene was inhibited. The expression of UGT40A1 gene increased with the increase of isoflavone content , indicating that soybean isoflavones could significantly induce the expression of this gene. In the investigation of the body weight of silkworms during the full-eating period, it was found that the addition of 0.1% and 0.4% soybean isoflavones in semi-synthetic feed had a significant negative effect on the body weight of silkworms during the full-eating period , indicating that soybean isoflavones had a significant effect on the growth and development of the fifth instar of the silkworm. In the investigation of silkworm cocoon quality, it was found that the cocoon shell weight of the experimental group with soybean isoflavones was improved to a certain extent compared with the control group, and the pupal weight of the experimental group was significantly increased compared with the control group , indicating that the addition of soybean isoflavones in the feed can increase the pupal weight.
The test of adding tannic acid to semi-synthetic feed was analyzed. In the test group of adding 0.2% tannic acid to semi-synthetic feed, there was no significant difference in the expression of UGT40B4 gene compared with the control group . With the increase of tannic acid content in the test group, the expression of this gene showed a downward trend, indicating that tannic acid could inhibit the expression of this gene. The expression of UGT340C2 gene in the experimental group with 0.2% and 0.8% tannic acid was significantly higher than that in the control group, indicating that low tannic acid content could induce the expression of the gene. The expression of the gene decreased with the increase of tannic acid content in the semi-synthetic feed of the experimental group, indicating that the increase of tannic acid content in the feed could strengthen the inhibition of the gene expression. The expression level of UGT40A1 gene in the experimental group was significantly lower than that in the control group . With the increase of tannic acid addition in the experimental group, the expression level decreased, indicating that tannic acid could inhibit the expression of this gene. In the investigation of the body weight of silkworms during the feeding period, it was found that the body weight of silkworms decreased gradually with the increase of tannic acid content in the feed, indicating that tannic acid would have an adverse effect on the growth and development of silkworm in the 5th instar . In the investigation of silkworm cocoon quality, it was found that the addition of 0.2% tannic acid to the semi-synthetic feed significantly improved the cocoon quality, and the cocoon quality in the test group with 0.8% tannic acid significantly deteriorated. In the test group with 3.2% tannic acid, silkworms could not normally cocoon or thin cocoons, which seriously affected the quality of cocoons , indicating that tannic acid in artificial feed and mulberry leaves had adverse effects on the growth and development of silkworms and the cocooning process of silkworms.
The expression of the UGT40B4 gene first decreased, then increased, and then decreased again as the arabinoxylan content of the semi-synthetic feed increased, indicating that arabinoxylan could induce the expression of this gene to a certain extent. The expression of the UGT340C2 gene increased significantly with increasing arabinoxylan content in the semi-synthetic feed, indicating that arabinoxylan could induce the expression of the UGT340C2 gene; there was no significant difference in its expression between the experimental groups with 0.4% and 1.6% arabinoxylan. The expression of the UGT40A1 gene increased gradually between 0% and 0.4% arabinoxylan, indicating that arabinoxylan could induce the expression of this gene, and it decreased after 1.6% arabinoxylan was added, indicating that a high arabinoxylan content inhibited the induced expression to a certain extent. When the body weight of silkworms during the feeding period was examined, the addition of 0.1% and 0.4% arabinoxylan to the semi-synthetic feed was beneficial to the growth and development of 5th instar silkworms, whereas 1.6% arabinoxylan had a significant adverse effect on their growth and development. When cocoon quality was examined, the addition of 0.1% arabinoxylan increased cocoon shell weight and pupal weight; after 0.4% arabinoxylan was added, cocoon quality did not differ significantly from that of the control group; and after 1.6% arabinoxylan was added, cocoon quality deteriorated significantly. These results indicate that an appropriate amount of arabinoxylan can improve the production performance of silkworms to a certain extent, whereas a high arabinoxylan content is toxic to silkworms and impairs their growth and development.
Uridine diphosphate-glycosyltransferases (UGTs) are pivotal multifunctional detoxification enzymes that participate in the metabolism of xenobiotic toxic substances. These enzymes play a crucial role in insects’ metabolism and elimination of plant-derived toxic compounds and exogenous noxious substances. Concurrently, insect UGTs also fulfill important functions in various physiological processes. Previous studies have shown that UGTs catalyze the conjugation of a range of different lipophilic small compounds to sugars to produce glycosides, and play an important role in the detoxification of xenobiotics and the regulation of insect endogenous compounds. Other researchers have reported important roles for UDP-glycosyltransferase (UGT) and phospholipase genes in flavonoid and glycerophospholipid metabolism. Upon supplementing the semi-synthetic feed with soybean isoflavones, tannins, and arabinoxylan, we observed differential expression of UGT40B4, UGT340C2, and UGT40A1. Notably, soybean isoflavones induced the expression of the UGT340C2 and UGT40A1 genes. Furthermore, tissue expression profiles revealed elevated expression of the UGT40B4 and UGT40A1 genes in the midgut of silkworms reared on the artificial diet. One study found that adding 20 mg/kg or 80 mg/kg of soybean isoflavones to the diet not only improved growth performance but was also beneficial to the immune response of poultry. In the present work, the addition of soybean isoflavones to the semi-synthetic feed exerted varied effects on silkworm growth, development, cocoon quality, and pupal mass, suggesting differential requirements for soybean isoflavones at various stages of silkworm development. By analyzing the transcriptome and proteome of salivary gland functional genes and changes in oral secretion (OS) proteins of Helicoverpa armigera fed an artificial diet (containing gossypol and tannin) or cotton leaves, it was found that cotton leaves, gossypol, and tannin can significantly up-regulate GST, UGT, hydrolase, and lipase genes of Helicoverpa armigera, which are involved in its detoxification and digestion. The tannic acid content of mulberry leaves is about 1.8–2.9%, whereas that of the M38 artificial diet is about 0.6–1.2%. The tannic acid content of mulberry leaves was therefore significantly higher than that of the artificial diet, and tannic acid could induce the expression of the UGT340C2 gene. It is speculated that the UGT340C2 gene may be involved in the detoxification function of silkworms reared on artificial diets and may also be related to tannic acid metabolism. Research findings indicate that the UGT013829 gene in silkworms enables flavonoids to undergo glycosylation; moreover, this gene is involved in both olfactory responses and detoxification processes. Among the 52 UGT genes of Spodoptera litura, the enzyme activity and transcription level of 77% of the UGT members were significantly up-regulated after flavonoid treatment. Bacteria co-expressing UGTs had a higher survival rate under flavonoid treatment, and flavonoids were significantly metabolized by UGT recombinant cells, indicating that UGTs are involved in flavonoid detoxification. Through molecular docking, we discovered potential interactions between the three UGT proteins in our experiment and soybean isoflavones. Consequently, we hypothesize that these three UGT proteins may glycosylate soybean isoflavones, thus participating in their metabolism and absorption processes.
Through proteomic analysis of the midgut in artificially reared and mulberry leaf-fed silkworms, we identified 964 significantly differentially expressed proteins. UGT40B4 , UGT340C2 , and UGT40A1 exhibited distinct expression patterns across developmental stages and tissues. Molecular docking techniques revealed that UGT40B4 , UGT340C2 , and UGT40A1 proteins demonstrated strong binding affinities for isoflavones, tannins, and arabinoxylans. The expression of three UGT genes in silkworms was significantly up-regulated by adding soybean isoflavones to semi-synthetic feed. Tannins induced the expression of the UGT340C2 gene, while UGT340C2 and UGT40A1 were significantly upregulated in the arabinoxylan-supplemented group. Tannin present in artificial diets and mulberry leaves adversely affected silkworm growth, development, and cocoon formation. Moderate amounts of arabinoxylans positively influenced the growth, development, and cocoon quality of fifth-instar silkworms. However, high concentrations of arabinoxylans exhibited toxic effects, impacting silkworm growth and development.
4.1. Test Silkworm Varieties and Feed
The silkworm used was strain No. 1 of the variety Youshi No. 1, bred by the laboratory. The mulberry leaves came from the Forestry Experimental Base of the Panhe Campus of Shandong Agricultural University, and the variety was Nongsang 14. The feed used was the M38 cooked artificial diet developed and processed by the laboratory. The main components of the M38 artificial feed used in this experiment were mulberry leaf powder 38%, soybean meal 30%, corn flour 24.3%, citric acid 3.0%, inorganic salt 2%, compound vitamin B 0.3%, vitamin C 2%, and preservative 0.4%. An appropriate amount of artificial feed powder was weighed, and purified water equal to 1.8 times the weight of the powder was added. After thorough mixing, the feed was cooked at 100 °C for 60 min, cooled to room temperature, and stored in a refrigerator at 4 °C until use.
4.2. Sample Processing Materials
The midguts of 5th instar female silkworms reared on the artificial diet or on mulberry leaves were collected at 72 h of the 5th instar, and the peritrophic membrane was removed. Food residue was rinsed away in 1 × PBS and the tissue was kept on ice. The washed midgut epithelium was cut longitudinally along the midline and quickly placed in a 1.5 mL RNase-free centrifuge tube. Three half midguts from different individuals were placed in each tube, and three replicates were set up for each group. The centrifuge tubes were snap-frozen in liquid nitrogen and stored at −80 °C until use.
4.3. Fluorescence Quantitative PCR
RNA was extracted from the samples using TransZol Up RNA extraction reagent (TransGen Biotech, Beijing, China). First-strand cDNA was synthesized using the EasyScript® One-Step gDNA Removal and cDNA Synthesis SuperMix kit (TransGen Biotech, Beijing, China). The qPCR instrument was a Bio-Rad CFX96, and the TransStart® TipGreen qPCR SuperMix kit (TransGen Biotech, Beijing, China) was used. The reaction conditions were as follows: pre-denaturation at 94 °C for 30 s, followed by 40 cycles of denaturation at 94 °C for 5 s, annealing at 55 °C for 15 s, and extension at 72 °C for 10 s.
4.4. Semi-Synthetic Artificial Feed Feeding Test Materials
A basic semi-synthetic feed formula was designed for the feeding test. The main components of the semi-synthetic feed were soybean protein (30%), cellulose powder (24.0%), corn starch (16%), sucrose (10%), and agar (10%), together with small amounts of vitamins and inorganic salts. The silkworm variety tested was the same as in Section 4.1. Females and males were identified when the larvae reached the 5th instar, and females at 72 h of the 5th instar were used in the experiment. The 1st to 4th instar silkworms were fed the M38 artificial diet. From the 5th instar onwards, semi-synthetic diets containing different amounts of soybean isoflavones, tannic acid, or arabinoxylan were used for feeding. Three replicate groups were set up for each of the three content gradients, with 50 female silkworms in each group. An appropriate amount of powdered semi-synthetic artificial feed was weighed, and pure water equal to 1.8 times the weight of the powder was added. After mixing, the feed was packed into bags, cooked at 100 °C for 60 min, pressed flat, and cooled to room temperature. The feed was used immediately or stored in a refrigerator at 4 °C.
4.5. Experiment on Adding Soybean Isoflavones to Semi-Synthetic Feed
In the isoflavone supplementation experiment, four experimental groups were set up.
The semi-synthetic diet of the control group was not supplemented with soybean isoflavones. The semi-synthetic diet of experimental group 1 was supplemented with 0.1% soybean isoflavones, that of experimental group 2 with 0.4% soybean isoflavones, and that of experimental group 3 with 1.6% soybean isoflavones. The body weight of silkworms at the peak feeding period of the 5th instar was measured, and the expression of UGT genes in midgut samples collected from each experimental group at 72 h of the 5th instar was quantitatively analyzed by fluorescence quantitative PCR.
4.6. Test of Adding Tannic Acid in Semi-Synthetic Feed
In the tannic acid experiment, the semi-synthetic feed of the control group contained no tannic acid; the semi-synthetic feed of experimental group 1 was supplemented with 0.2% tannic acid, that of experimental group 2 with 0.8% tannic acid, and that of experimental group 3 with 3.2% tannic acid. The body weight of silkworms during the 5th instar feeding period was measured, and the expression of UGT genes in the midgut samples of silkworms from each experimental group at 72 h of the 5th instar was quantitatively analyzed by fluorescence quantitative PCR.
4.7. Experiment on Adding Arabinoxylan to Semi-Synthetic Feed
In the arabinoxylan experiment, the semi-synthetic feed of the control group contained no arabinoxylan; the semi-synthetic feed of experimental group 1 was supplemented with 0.1% arabinoxylan, that of experimental group 2 with 0.4% arabinoxylan, and that of experimental group 3 with 1.6% arabinoxylan. The body weight of silkworms during the 5th instar feeding period was measured, and the expression of UGT genes in the midgut samples of silkworms from each experimental group at 72 h of the 5th instar was quantitatively analyzed by fluorescence quantitative PCR.
4.8. Statistical Analysis
Biological replicates were conducted a minimum of three times, and the results are presented as the mean ± SD. p-values were calculated using Student's t-test for two samples and one-way ANOVA (Tukey's HSD test) for comparisons involving more than two samples, using Prism 8 (GraphPad, San Diego, CA, USA) and SPSS Statistics 26.0 software (SPSS Inc., Chicago, IL, USA).
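For readers who want to reproduce this kind of analysis outside Prism or SPSS, the sketch below illustrates, under stated assumptions, how relative UGT expression could be derived from qPCR Ct values with the common 2^-ΔΔCt model and how group means could then be compared by one-way ANOVA with Tukey's HSD, as described in Section 4.8. The Ct values, body weights, group labels, and the choice of reference gene are all hypothetical; the quantification model itself is not specified in the text.

```python
# Minimal sketch: relative expression via the (assumed) 2^-ddCt model,
# followed by one-way ANOVA and Tukey's HSD as in Section 4.8.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Ct values (three biological replicates per group).
ct = {
    "control":         {"UGT340C2": [24.1, 24.3, 24.0], "reference": [18.2, 18.1, 18.3]},
    "isoflavone_0.1%": {"UGT340C2": [22.6, 22.8, 22.5], "reference": [18.3, 18.2, 18.1]},
}

def relative_expression(group, gene, calibrator="control", ref="reference"):
    """2^-ddCt relative to the calibrator group (assumed quantification model)."""
    d_ct = np.array(ct[group][gene]) - np.array(ct[group][ref])
    d_ct_cal = np.mean(np.array(ct[calibrator][gene]) - np.array(ct[calibrator][ref]))
    return 2.0 ** -(d_ct - d_ct_cal)

expr_control = relative_expression("control", "UGT340C2")
expr_treated = relative_expression("isoflavone_0.1%", "UGT340C2")
print("control mean ± SD:", expr_control.mean(), expr_control.std(ddof=1))
print("0.1% isoflavone mean ± SD:", expr_treated.mean(), expr_treated.std(ddof=1))

# One-way ANOVA followed by Tukey's HSD on hypothetical body-weight data
# from the four feeding groups.
weights = {
    "control": [4.10, 4.05, 4.20],
    "0.1%":    [3.80, 3.75, 3.90],
    "0.4%":    [3.70, 3.65, 3.78],
    "1.6%":    [3.95, 3.88, 4.00],
}
print(f_oneway(*weights.values()))
values = np.concatenate(list(weights.values()))
groups = np.repeat(list(weights.keys()), [len(v) for v in weights.values()])
print(pairwise_tukeyhsd(values, groups))
```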
Pharmacotherapeutic actions related to drug interaction alerts – a questionnaire study among Swedish hospital interns and residents in family medicine | 7af86921-632d-4077-8084-bd8185cb0225 | 11717818 | Family Medicine[mh] | The prevalence of polypharmacy has been reported to increase over time , and being treated with many drugs puts patients at risk of drug interactions . Drug interactions are usually categorised as either pharmacokinetic (PK), mediated by effects on the absorption, distribution, metabolism, and/or excretion of other drug(s) , or pharmacodynamic (PD), implying that one drug influences the effect of another, i.e., not primarily mediated via changes in drug concentration . Drug interactions may contribute to adverse drug reactions (ADRs) or diminished effect . The clinical consequences of drug interactions depend not only on the drugs, but also on patient characteristics, including the current clinical status . Older people may be particularly at risk since they are more vulnerable to ADRs, for instance due to the age-related decline in physiological compensatory mechanisms . In this age group, many people also have multiple or complex disorders requiring treatment with several drugs, further increasing the risk of ADRs and drug interactions . A substantial number of knowledge resources are available for clinical decision support concerning potentially problematic drug interactions . In Sweden, the national interaction database Janusmed is integrated in almost all electronic health record systems and is also accessible via the internet. Interaction alerts triggered by Janusmed are classified according to their clinical significance: D = a clinically significant interaction where the recommendation is to avoid the combination; C = a clinically significant interaction that can be handled, for instance, by a dose adjustment or separated intake; B = an alert where the clinical relevance is uncertain or varies; and A = a minor interaction without clinical relevance . Furthermore, the alerts are classified according to the level of documentation: 4 = controlled studies in relevant populations; 3 = studies among healthy volunteers and/or pilot studies among patients; 2 = well-documented case reports; 1 = incomplete case reports and/or in vitro studies; and 0 = extrapolation on the basis of studies with similar drugs . Janusmed alerts initially present some brief information about the expected consequence of the alerted drug combination as well as a recommendation for clinical management, along with the above-mentioned classifications of clinical significance and documentation. With a single mouse click, the user can get access to more detailed information. In all public and in most private primary care centres in Region Västra Götaland, alerts classified as D, C, or B, irrespective of level of documentation, appear integrated in the health record systems, with some differences in how they are presented. In the hospital setting, alerts classified as D or C are generally presented. The level of documentation is displayed in both settings. We have previously shown that many drug interaction alerts turn out to have been already addressed or not to be relevant for the specific patient . Investigating drug interaction alerts triggered by the medication lists of 274 older patients, we found that only 35 (9%) out of 405 presented alerts merited action according to two specialist physicians in consensus . 
In that study, the most commonly suggested actions were to switch omeprazole to pantoprazole to avoid problems related to omeprazole’s CYP2C19 inhibition , and to separate intake between levothyroxine and calcium or ferrous sulfate in order to avoid decreased absorption of levothyroxine . Furthermore, a recent systematic review reported that 90% of interaction alerts are overridden by physicians . However, little details are known about how physicians, in different stages of their career and in different settings, act on drug interaction alerts. In the present study, we aimed to explore how interns in a university hospital and resident physicians in primary care act on drug interaction alerts in a specific patient case as well as on drug interaction alerts in general. An anonymous questionnaire was distributed in print to pre-registration interns attending an educational day at Sahlgrenska University Hospital, Gothenburg, Sweden (December 2023), and to residents specialising in family medicine during an educational day in Borås, Region Västra Götaland, Sweden (November 2023). The questionnaire was filled in prior to a lecture concerning drug interactions and their alerts. The respondents were informed about the purpose of the study and that participation was voluntary. After completing the questionnaire, they could either choose to put it in a pile marked”research” or in the wastebin. The questionnaire was designed by the authors and piloted by two specialists in family medicine. The questionnaire is available in Online Resource . It consisted of three parts. The first part concerned the respondents’ actions related to drug interaction alerts in a clinical scenario with a fictional patient case. The case described a 73-year-old woman with 10 drugs in the medication list, triggering 11 drug interaction alerts in Janusmed, one classified as D, seven as C, and three as B alerts (Table ). The scenario was identical for both groups of respondents, except for the fact that the interns met the patient in the hospital setting where clopidogrel had been initiated the day before, whereas the residents met the patient at a follow-up visit in primary care shortly after the hospital stay where clopidogrel was added. Along with the questionnaire, each respondent received a complete printout of all available texts in Janusmed regarding the drug interaction alerts. In the first question (Q1) regarding the patient case, the respondent was instructed to decide on potential actions for each of the 10 drugs by ticking at least one of the following boxes: (i) no action, (ii) reduce dose, (iii) stop the drug, (iv) increase dose, or (v) other action; with space for comments. The context of this question was explained as “a regular day at work”, i.e., decision-making during time pressure requiring medical prioritising. In the second question (Q2), the respondent was to determine the importance of the actions suggested in Q1, using a scale with five steps, from 1 = not at all important, to 5 = very important. The option “no action” could be ticked for drugs with no action in Q1. The second part of the questionnaire focused on drug interaction alerts in general, and the extent to which the respondents click to access detailed information about alerted drug interactions including the background, the underlying mechanism, and cited references. 
In these questions, we also used a scale with five steps from 1 = never, to 5 = always, exploring if the classification of clinical significance and the level of documentation were related to extended reading. Finally, the third part of the questionnaire gathered background information about the respondent, including age, gender, and work experience.
Analyses
Descriptive analyses were performed using SPSS for Windows, version 24.0 (IBM Corp., Armonk, NY, USA) (C.T.), and R version 4.3.1 (Foundation for Statistical Computing, Vienna, Austria) (S.A.S.). When entering data in a spreadsheet for analyses, we noted that many respondents who marked (iii) “stop the drug” or (v) “other” on Q1, wrote a free text comment about switching to another drug. Therefore, we added a (vi) “switch drug” category. Responses matching this category, according to two authors in consensus (C.T. and S.A.S.), were categorised accordingly. In Q2, if the respondent ticked “no action” for a drug and had not filled in anything for the same drug in Q1, we adjusted the Q1 response to (i) “no action”. Respondents with missing data were included in the calculation of proportions, with 55 (interns) or 69 (residents) used as denominator unless indicated otherwise. Two questions with replies ranging from 1 to 5 were dichotomized. Thus, participants responding 4 or 5 were categorised as (i) considering their action important in the patient case, and (ii) choosing to access the detailed information in the knowledge resource. Differences between interns and residents concerning actions suggested for specific drugs, as well as respondents’ reading of alert texts depending on alert classification, were examined using Fisher’s exact test.
Ethics approval
This questionnaire study was assessed by the Swedish Ethical Review Authority. They determined that the Ethical Review Act was not applicable and had no ethical objections to the study (2023–02355-01).
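As an illustration of the dichotomisation and group comparison described in the analysis plan, the following sketch shows one way such questionnaire data could be handled in Python. The response vectors are randomly generated stand-ins; only the fixed denominators (55 interns, 69 residents), the 4-or-5 cut-off, and the use of Fisher's exact test are taken from the text above.

```python
# Minimal sketch: dichotomise 1-5 ratings (4 or 5 = "yes"), compute
# proportions with the fixed denominators, and compare groups with
# Fisher's exact test. All response values are invented for illustration.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
interns = rng.integers(1, 6, size=55)     # hypothetical 1-5 ratings, interns
residents = rng.integers(1, 6, size=69)   # hypothetical 1-5 ratings, residents

interns_yes = int(np.sum(interns >= 4))       # dichotomisation: 4 or 5 = yes
residents_yes = int(np.sum(residents >= 4))

print("proportion 'yes', interns:", interns_yes / 55)
print("proportion 'yes', residents:", residents_yes / 69)

# 2x2 table: rows = groups, columns = yes / no.
table = [[interns_yes, 55 - interns_yes],
         [residents_yes, 69 - residents_yes]]
odds_ratio, p_value = fisher_exact(table)
print("Fisher's exact test: OR =", odds_ratio, ", P =", p_value)
```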
A total of 55 out of 55 interns and 69 out of 71 residents participated in the study (response rate: 98%). Regarding the interns, 33% were ≥ 31 years of age, 58% were women, and 89% had completed ≤ 1 year of internship. Regarding the residents, 86% were ≥ 31 years of age, 48% were women, and 33% had completed ≤ 2 years of training in family medicine. Further characteristics are available in Online Resource 2. Almost all respondents, 55 (100%) interns and 68 (98%) residents stated that they would perform at least one action related to the drug interaction alerts. The median number of drugs that the respondents stated they would act on was 4 (range: 0‒8). Actions related to interaction alerts most often concerned repaglinide, a drug that together with clopidogrel elicited a D alert. Omeprazole was the second most common drug acted upon. This drug was included in three C alerts, with citalopram, clopidogrel, and levothyroxine, respectively, as well as in a B alert with calcium. Hydroxyzine was the third most common drug acted upon, included in a C alert with citalopram. Many respondents considered their actions important (Table ). For repaglinide and omeprazole, the most common action was a switch to another drug. For hydroxyzine, the most common action was either to stop the drug or to switch to another drug. All actions that were suggested for each drug, by interns and residents, are available in Online Resource . Regarding the extent to which the respondents usually access detailed information about alerted drug interactions classified as D or C, available one mouse click away, there were 95 (77%) replies. Of these, 56 (59%) respondents stated they do so for D alerts, versus 29 (31%) for C alerts ( P < 0.001; Table ). Of the 85 (69%) respondents who answered questions regarding whether the documentation level influenced their reading of more detailed information, 37 (43%) stated they accessed only documentation corresponding to studies on humans (levels 3 or 4), versus 18 (21%) who also accessed studies corresponding to documentation levels 0–2 ( P = 0.003). This study shows that nearly all physicians state that they act on one of the drugs within a drug pair alerted as a D interaction, and that 91% rate that action as important. Likewise, but to a somewhat smaller extent, drugs within drug pairs alerted as C interactions are often said to be acted upon. Whereas 59% of the respondents state that they usually access detailed information regarding alerted D interactions, only 31% do so for C alerts. The level of documentation seems to be associated with the use of the knowledge resource in a similar way; twice as many of the respondents state that they access detailed information for alerts with level 3/4 documentation, compared with other levels of documentation. Repaglinide and clopidogrel triggered an alert classified as D, with a medical consequence described as an increased risk of hypoglycaemia. The underlying mechanism for this PK interaction is described as the inhibition of CYP2C8, the main enzyme that metabolises repaglinide, by a clopidogrel metabolite . Consequently, the exposure to repaglinide may increase. Our finding that this D interaction alert was frequently acted upon is in line with a previous study that reported that the prevalence of D interactions was reduced when Janusmed was integrated into primary healthcare records . Regarding repaglinide, 47% of the interns and 68% of the residents suggested an action that included a switch to another drug. 
Interestingly, 22% of the residents suggested a specific drug, primarily a glucagon-like peptide-1 (GLP-1) agonist, a dipeptidyl peptidase-4 (DPP-4) inhibitor, or a sodium-glucose transport protein 2 (SGLT2) inhibitor, whereas only one intern suggested a specific drug. On the other hand, 18% of the interns described that one of their actions would be to consult a senior colleague or to refer to primary care for follow-up. One could speculate that these findings may illustrate differences in alert-related actions associated with the stage of career as well as the clinical setting. Interns, at the very beginning of their career and before obtaining a full licence to prescribe, may less readily make their own treatment decisions compared with residents. Furthermore, actions in the hospital setting may focus on the acute health condition necessitating in-hospital care, and other actions may not be medically prioritised. Omeprazole was involved in four PK interaction alerts, in drug pairs including either citalopram, clopidogrel, calcium, or levothyroxine. Overall, 77% of the respondents suggested an action related to omeprazole, but only 45% rated this action as important. Of those who acted on omeprazole, 78% suggested a switch to pantoprazole, i.e., consistent with the recommendation provided by Janusmed regarding the combined use of citalopram and clopidogrel. Interestingly, the action to stop omeprazole was suggested more than twice as often by residents compared with interns. Again, the divergent approach may be related both to the gradual development of professional autonomy regarding drug treatment decisions and to the clinical setting. Clearly, a switch to another proton pump inhibitor (PPI), described as less prone to enzyme inhibition than omeprazole, could be regarded as a minor treatment decision whereas stopping a drug may require more of the prescriber. Furthermore, as the withdrawal of a PPI has been associated with acid rebound effects, the primary care setting may be preferable as it allows follow-up. Another aspect worth noting is that three interns suggested stopping clopidogrel or switching it to another drug, whereas none of the residents suggested such an action. One may speculate that residents, having come further in the physician career, may be more aware of challenges related to cardio- and cerebrovascular prevention. Omeprazole is described as interacting with levothyroxine and calcium at the absorption level, with reduced uptake as the consequence. In the patient case, two respondents suggested to increase the calcium dose and 11 respondents to monitor thyroid-stimulating hormone (TSH), both actions consistent with the Janusmed recommendations. Few respondents, however, considered actions related to calcium or levothyroxine important. It could be speculated that the B classification of the omeprazole/calcium alert may contribute to the perception of low importance. In addition, the monitoring of TSH in relation to the omeprazole/levothyroxine alert has previously been reported as being in general adequate, not requiring further action. In Janusmed, the hydroxyzine/citalopram alert is described as a PD interaction. Both drugs may prolong the QT interval in a dose-related manner, and the risk could increase when two QT-prolonging agents are combined, which, in turn, may increase the risk of torsade de pointes.
The inclination for action regarding hydroxyzine, as well as the suggested actions, differed between interns and residents; 46% of the residents suggested to stop this drug whereas only 16% of the interns suggested such an action. A range of substitute drugs were suggested by both interns and residents, including melatonin, mirtazapine, oxazepam, promethazine, and zopiclone, representing common choices in the treatment of anxiety . The fact that the recommendation provided by Janusmed did not include a switch to a specific drug may have contributed to the diversity of proposed treatment strategies. Interestingly, only about half of the respondents rated their suggested action for hydroxyzine as important, although this drug is included in several sets of potentially inappropriate medications for older people . Our findings highlight the importance of the classifications in a drug interaction knowledge resource. Physicians at the early stages of the career seem prone to read more of the extended information when an alert is classified as having “higher” clinical significance. On the one hand, these results may be considered encouraging; alerts that are likely to have a greater impact on patients seem to receive more attention. On the other hand, this finding highlights the importance of trustworthy classifications. In this context, it must be mentioned that classifications differ somewhat between drug interaction knowledge resources . Furthermore, the level of documentation seems to guide physicians’ readiness to click for more information about a drug interaction alert. Thus, the presence of clinical studies, i.e., level 3 or 4 documentation, seems to encourage such access. Nevertheless, case reports and in vitro studies, as well as studies on similar drugs, could deserve more attention by physicians. Strengths and limitations An important strength of this study is that it contributes knowledge on how physicians at early stages of their career act on drug interaction alerts and how important they consider these actions to be. Furthermore, differences between hospital interns and residents in family medicine, representing the hospital and primary care setting, are explored, as well as the importance of classifications in a knowledge resource. It may also be considered a strength that the questionnaire was anonymous. This approach allowed the respondents to provide honest unbiased replies as there was no concern for repercussions. Furthermore, the results are based on a nearly 100% response rate, an aspect of importance for generalisability. The response rate for some of the general questions concerning information access, however, was lower. Another limitation of potential importance for the external validity is that our respondents represent Swedish healthcare and the national knowledge resource, Janusmed, used in this setting. Nevertheless, this resource has apparent similarities with internationally established well-renowned resources like Lexicomp, Micromedex, and Stockley’s . Indeed, they all provide interaction alerts that are classified regarding the clinical significance and the level of documentation, with recommendations for clinical management and references for further reading . An important limitation of this study is that the fictional patient case only included one alert classified as D, making comparisons with the more numerous C alerts difficult. 
Another limitation is that the respondents suggested actions per drug and not per drug pair eliciting an interaction alert, thereby precluding direct linking of actions to a specific alert. Furthermore, despite the fact that the respondents were instructed verbally and in writing to state any actions taken due to the risk of interactions , we cannot rule out that some actions may have been suggested for other reasons, such as switching to a drug considered more effective or less prone to evoke ADRs. However, the provided actions for each drug mirror the real-life situation when physicians prescribe drug treatment – prescribing concerns specific drugs in an entire medication list, not single drug pairs, and the context of the patient is taken into account. Finally, it must be stressed that this study does not evaluate the value of decision support regarding potentially problematic drug interactions.
This study shows that physicians act on drug interaction alerts with considerable variation, and that hospital interns in some respects differ from residents in family medicine, perhaps representing different stages of the career and different work settings. Recommendations for clinical management provided by the knowledge resource are quite often adhered to, and classifications of drug interaction alerts appear to guide physicians regarding whether to access more detailed information provided by the knowledge resource.
Below is the link to the electronic supplementary material. Supplementary file1 (PDF 260 KB) Supplementary file2 (DOCX 21.9 KB) Supplementary file3 (DOCX 28.7 KB)
Investigation of genetic diversity in the loach | 53a3ade8-3a07-4427-9e29-a7251681b972 | 11583728 | Forensic Medicine[mh] | Misgurnus anguillicaudatus , commonly known as “ginseng in water”, is a significant freshwater economic fish in China. Due to its delicious taste and rich nutritional value, loaches are highly favored by people in East Asia . These typical benthic freshwater fish primarily inhabit lakes, ponds, streams, paddy fields, and other shallow water and silt environments. They prefer to hide during the day and emerge at night, demonstrating a remarkable tolerance for anoxic conditions . However, the germplasm resources of loach are gradually declining and degrading due to habitat degradation and overfishing, the results of these phenomena will be further reflected in the genetic diversity of loach species, gradually reducing the genetic diversity, eventually leading to an increased extinction probability of this species. And, current research on the genetic diversity of loach species remains limited. Genetic diversity is the foundation of species adaptation, survival, and ecological resilience. Research on genetic diversity plays a critical role in the conservation, exploitation, breeding, and improvement of germplasm resources. For instance, Nan et al. conducted an in-depth analysis of the genetic diversity of 115 oats and related hexaploid species worldwide, revealing the independent domestication and breeding history of China's major wheat crops, including naked oat. They identified candidate genes related to environmental adaptation in the naked oat genome, providing new insights and methods for the genetic improvement and breeding of oat . Similarly, Shi et al. compared the genetic diversity and structural levels of Procapra gutturosa , a highly endangered species in China, in the Hulun Lake National Nature Reserve and the China-Mongolia Border Area. This species faces a significant survival crisis due to habitat destruction and hunting. Their findings offer a scientific basis for the conservation and genetic improvement of the species' germplasm resources . Currently, the tools of studying genetic diversity are abundant. With the continuous development of genetics and molecular biology, molecular markers have become essential tools for genome mapping, gene markers, genetic diversity, and phylogenetic analysis . Molecular markers are based on nucleotide sequence variations in genetic material among individual species. First-generation molecular markers include Restriction Fragment Length Polymorphism (RFLPs), Randomly Amplified Polymorphic DNA (RAPD), and Amplification Fragment Length Polymorphism (AFLP). Second-generation molecular markers include Simple Sequence Repeats (SSRs) and Inter-Simple Sequence Repeats (ISSR) . Although these first- and second-generation markers have been widely used in genome mapping, gene markers, population dynamics studies, and taxonomic relationship establishment, they have notable limitations: low accuracy, time-consuming processes, high costs, and complex operation steps . These limitations prompted the development of third-generation molecular markers—Single Nucleotide Polymorphisms (SNPs). SNPs are abundant in the genome sequences of organisms, characterized by their large quantity, wide distribution, high coverage density, and high genetic stability. Analyzing the genetic structure and diversity of populations using whole-genome SNP data is an effective method for investigating and evaluating population status . 
Several techniques are widely used to develop and detect SNPs for species genotyping, including Whole Genome Sequencing (WGS), Specific-Locus Amplified Fragment Sequencing (SLAF-seq), and DNA chips . WGS technology can explore genetic diversity, genetic structure, and traits by sequencing the whole genome of individual species at both the individual and population levels, and it has been widely used in studies on genetic diversity, QTL mapping, DNA fingerprint construction, evolution, and phylogeny . For example, Yao et al. analyzed the genetic diversity, population structure, and genomic regions of the hybrid BoHuai goat using WGS . The BoHuai goat, formed by crossing the Boer goat and the Huai goat (a breed from Henan Province, China), has superior meat production, reproduction rates, and meat quality compared to its parents. Identifying the excellent genes of these hybrid goats can aid breeding programs in creating higher economic benefits. Similarly, Xia et al. used WGS technology to analyze genomic genetic variations in Jiaxian Red cattle (Red Bull) in China . These local cattle breeds adapt well to the local environment and challenging feeding conditions, and genetic variation analysis is beneficial for the breeding and genetic improvement of beef cattle. WGRS is an applied form of WGS technology. In this study, based on WGRS technology, we assessed whole-genome SNP markers in 60 loaches from six regions: Xiangtan, Shaoyang, and Yueyang in Hunan; Guilin and Guiping in Guangxi; and Wuhan in Hubei, China. We aimed to analyze the population genetic structure and diversity and to construct DNA fingerprints for these 60 individuals, laying the foundation for the identification, protection, and genetically improved breeding of wild loach germplasm resources. In addition, we hope that our study helps to fill the gap in research on loach genetic diversity and provides a reference for the sustainable utilization of loach resources and for ecological restoration.
Sequencing and identification of SNPs
After whole genome resequencing of 60 wild loaches, a total of 1047.17 Gb of raw data was generated. Following filtering, 1046.98 Gb of clean data was obtained, with an average sequencing depth of 17.87×. The sequencing quality was high (Q30 ≥ 89.82%), and the GC content ranged between 38.17% and 39.00% (Table S1). The reference genome of loach is 1.10 Gb (NCBI accession number: GCF_027580225.1), and the average depth of coverage of the reference genome (excluding the N region) is greater than 11.83× (Table S2). Using the obtained clean sequence reads, we identified a total of 2,812,906 biallelic SNPs in the 60 samples, from which 10,022 core SNPs were selected with a minor allele frequency (MAF) > 0.05. Functional annotation of the polymorphic loci revealed that the majority of the 2,812,906 biallelic SNPs lie in intronic regions (46.82%) or intergenic regions (26.12%). Exons contain 17.35% of the total SNPs, including 131,084 non-synonymous SNPs and 354,111 synonymous SNPs (Table S3). Among the 10,022 core SNPs screened, most SNP sites were located in intronic regions (53.58%), with exon regions accounting for slightly less (35.66%) (Table ). Additionally, we calculated the distribution of the core SNP markers on the chromosomes. According to the statistical results (Table S4) and the SNP density distribution map (Fig. ), the core SNPs are evenly distributed across the wild loach chromosomes.
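As a minimal illustration of the core-SNP selection step described above (retaining biallelic sites with MAF > 0.05) and of the per-site heterozygosity quantities summarised later for each region, the sketch below operates on a toy genotype matrix. The real genotypes would come from the variant-calling pipeline, which is not detailed in this excerpt, so the matrix here is simulated purely for demonstration.

```python
# Minimal sketch: MAF-based core-SNP filtering and per-site heterozygosity
# from a biallelic genotype matrix (rows = 60 samples, columns = SNPs,
# values = alternate-allele counts 0/1/2). The matrix is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=(60, 10_000))            # simulated calls

alt_freq = genotypes.sum(axis=0) / (2 * genotypes.shape[0])  # allele frequency
maf = np.minimum(alt_freq, 1 - alt_freq)                     # minor allele frequency
core_sites = np.where(maf > 0.05)[0]                         # MAF > 0.05 filter
print("core SNPs retained:", core_sites.size)

# Per-site expected heterozygosity (He = 2pq) and observed heterozygosity
# (fraction of heterozygous genotypes) — the statistics later summarised
# per sampling region in the population-diversity results.
he = 2 * alt_freq * (1 - alt_freq)
ho = (genotypes == 1).mean(axis=0)
print("mean He over core SNPs:", he[core_sites].mean())
print("mean Ho over core SNPs:", ho[core_sites].mean())
```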
Genetic structure analysis of Misgurnus anguillicaudatus The genetic diversity of 60 samples was analyzed using the selected 10,022 core SNP markers. First, the population structure was analyzed under the assumption that the number of clusters (K) ranged from 2 to 8 (Fig. ). When the CV error was at its minimum, the detected K value was 3, indicating that there are three clusters and suggesting that all our samples may have originated from three distinct ancestors (Fig. ). The 60 samples can be roughly divided into three groups from three different branches: Subgroup A includes the XT, SY, and XY groups; Subgroup B consists only of the WH group; and Subgroup C comprises the GL and GP groups. When K is equal to 2, five population were grouped, GP was separated. When K is equal to 4, five populations were further divided into four groups, XY and XT were grouped, SY was separated. Although the geographical straight-line distance was equal between the three regions, SY was still divided into another group. Besides this, GL and WH were grouped. When K is equal to 5, XY and XT were separated. Similarly, When K is equal to 6, WH and GL were separated. When K is equal to 7 or 8, the ancestral sequences of GL and SY are separated, and when K is equal to 8, the ancestral sequences of GP are separated. Based on the differences in individual genomic SNPs among the samples, we clustered the individuals into different subgroups according to their trait characteristics. The first principal component (PCA 1), the second principal component (PCA 2), and the third principal component (PCA 3) constitute a three-dimensional PCA cluster diagram (Fig. ). Principal component analysis shows that the genetics of the XT and XY populations are closest, indicating similar germplasm resources. In contrast, the germplasm resources of the GL population are significantly different from those of other groups. In the PC1/PC2, PC1/PC3, and PC3/PC2 plots, the PCA classification results were consistent: the XT, XY, GP, and WH groups each formed distinct clusters, reflecting the geographical relationships of the wild loach samples. The SY population formed a loose cluster (Fig. ). The phylogenetic analysis indicated that the 60 accessions investigated fell into three distinct groups: (1) accessions from the XT, SY, and XY groups; (2) accessions from the WH group; and (3) accessions from the GL and GP groups (Fig. ). Construction of the DNA fingerprint By comparing SNPs across different samples, 12 core differential markers were identified, creating a barcode that effectively and clearly distinguishes all materials (Fig. ). The graph displays specific genotype combinations of 60 loach samples, where each row represents a sample and each column represents an SNP genotype. For example, the first row in the figure provides the following information: Name: GL-M1; fingerprint code: G/C, T/T, G/A, C/C, T/T, T/C, C/C, C/C, G/C, G/G, T/C, C/T. Analysis of genetic distance The genetic distance matrix of 60 samples was calculated using genotyping data from 12 core SNP markers (Table S5). Genetic distances between populations sampled from six regions ranged from 0.0833 to 1, with an average genetic distance of 0.5494. The genetic distance values are predominantly distributed within the range of 0.5 to 0.6 (Fig. ), indicating that most samples exhibit moderate genetic distances and high genetic similarity, though not identical genetic profiles. 
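The choice of K = 3 at the minimum cross-validation (CV) error suggests an ADMIXTURE-style model-based clustering, although the software is not named in this excerpt. The sketch below shows one way the CV errors for K = 2–8 could be collected from per-K log files and the best-supported K selected; the file naming and log-line format are assumptions made for illustration.

```python
# Minimal sketch: pick the number of ancestral clusters K by minimum
# cross-validation (CV) error, assuming ADMIXTURE-style log files
# (e.g., K2.log ... K8.log) each containing a line such as
# "CV error (K=3): 0.41627".
import re
from pathlib import Path

cv_pattern = re.compile(r"CV error \(K=(\d+)\): ([0-9.]+)")

def read_cv_errors(log_dir="admixture_logs"):
    """Collect (K, CV error) pairs from per-K log files in log_dir."""
    errors = {}
    for log_file in Path(log_dir).glob("K*.log"):
        match = cv_pattern.search(log_file.read_text())
        if match:
            errors[int(match.group(1))] = float(match.group(2))
    return errors

if __name__ == "__main__":
    cv_errors = read_cv_errors()
    if cv_errors:
        best_k = min(cv_errors, key=cv_errors.get)
        print("CV errors by K:", dict(sorted(cv_errors.items())))
        print("best-supported K (minimum CV error):", best_k)
```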
Population genetic diversity The mean nucleotide diversity (Pi) of SNP markers across the six regions ranges from 0.065 to 0.156, with a mean value of 0.130. Observed heterozygosity (Ho) ranges from 0.071 to 0.205, averaging 0.140. Expected heterozygosity (He) ranges from 0.061 to 0.148, with an average of 0.123. The average inbreeding coefficient (Fis) ranges from 0.369 to 0.793, with a mean of 0.552 (Table ). Specifically, Pi distribution was similar in the SY and GL regions, with Pi values of 0.148 and 0.146, respectively, both higher than the mean value (mean Pi = 0.130). The expected heterozygosity (He) in SY is comparable to the observed heterozygosity (Ho), suggesting that genotype distribution is close to Hardy–Weinberg equilibrium. At the population level, most observed heterozygosity (Ho) values exceeded expected heterozygosity (He), indicating a higher inbreeding coefficient (Fis). The WH region exhibited the highest Ho and Pi values among the six regions, indicating high population heterozygosity and abundant genetic diversity. The GP region showed the highest Ho and Fis values, suggesting close kinship among individuals within the population. After whole genome resequencing of 60 wild loaches, a total of 1047.17 Gb of raw data was generated. Following filtering, 1046.98 Gb of clean data was obtained, with an average sequencing depth of 17.87x. The sequencing quality was high (Q30% ≥ 89.82%), and the GC content ranged between 38.17% and 39.00% (Table S1). The reference genome of loach is 1.10 Gb (NCBI accession number: GCF_027580225.1), and the average depth of coverage for the reference genome (excluding the N region) is greater than 11.83x (Table S2). Using the obtained clean sequence reads, we identified a total of 2,812,906 biallelic SNPs in the 60 samples, from which 10,022 core SNPs were selected with a minor allele frequency (MAF) > 0.05. Functional annotation of polymorphic loci revealed that the majority of the 2,812,906 biallelic SNPs exist in intronic regions (46.82%) or intergenic regions (26.12%). Exons represent 17.35% of the total SNPs, including 131,084 non-synonymous SNPs and 354,111 synonymous SNPs (Table S3). Among the 10,022 core SNPs screened, most of the SNP sites were present in intronic regions (53.58%), with exon regions accounting for slightly less (35.66%) (Table ). Additionally, we calculated the distribution of core SNP markers on chromosomes. According to the statistical results (Table S4) and the SNP density distribution map (Fig. ), core SNPs are evenly distributed across the wild loach chromosomes. Misgurnus anguillicaudatus The genetic diversity of 60 samples was analyzed using the selected 10,022 core SNP markers. First, the population structure was analyzed under the assumption that the number of clusters (K) ranged from 2 to 8 (Fig. ). When the CV error was at its minimum, the detected K value was 3, indicating that there are three clusters and suggesting that all our samples may have originated from three distinct ancestors (Fig. ). The 60 samples can be roughly divided into three groups from three different branches: Subgroup A includes the XT, SY, and XY groups; Subgroup B consists only of the WH group; and Subgroup C comprises the GL and GP groups. When K is equal to 2, five population were grouped, GP was separated. When K is equal to 4, five populations were further divided into four groups, XY and XT were grouped, SY was separated. 
Although the geographical straight-line distance was equal between the three regions, SY was still divided into another group. Besides this, GL and WH were grouped. When K is equal to 5, XY and XT were separated. Similarly, When K is equal to 6, WH and GL were separated. When K is equal to 7 or 8, the ancestral sequences of GL and SY are separated, and when K is equal to 8, the ancestral sequences of GP are separated. Based on the differences in individual genomic SNPs among the samples, we clustered the individuals into different subgroups according to their trait characteristics. The first principal component (PCA 1), the second principal component (PCA 2), and the third principal component (PCA 3) constitute a three-dimensional PCA cluster diagram (Fig. ). Principal component analysis shows that the genetics of the XT and XY populations are closest, indicating similar germplasm resources. In contrast, the germplasm resources of the GL population are significantly different from those of other groups. In the PC1/PC2, PC1/PC3, and PC3/PC2 plots, the PCA classification results were consistent: the XT, XY, GP, and WH groups each formed distinct clusters, reflecting the geographical relationships of the wild loach samples. The SY population formed a loose cluster (Fig. ). The phylogenetic analysis indicated that the 60 accessions investigated fell into three distinct groups: (1) accessions from the XT, SY, and XY groups; (2) accessions from the WH group; and (3) accessions from the GL and GP groups (Fig. ). By comparing SNPs across different samples, 12 core differential markers were identified, creating a barcode that effectively and clearly distinguishes all materials (Fig. ). The graph displays specific genotype combinations of 60 loach samples, where each row represents a sample and each column represents an SNP genotype. For example, the first row in the figure provides the following information: Name: GL-M1; fingerprint code: G/C, T/T, G/A, C/C, T/T, T/C, C/C, C/C, G/C, G/G, T/C, C/T. The genetic distance matrix of 60 samples was calculated using genotyping data from 12 core SNP markers (Table S5). Genetic distances between populations sampled from six regions ranged from 0.0833 to 1, with an average genetic distance of 0.5494. The genetic distance values are predominantly distributed within the range of 0.5 to 0.6 (Fig. ), indicating that most samples exhibit moderate genetic distances and high genetic similarity, though not identical genetic profiles. The mean nucleotide diversity (Pi) of SNP markers across the six regions ranges from 0.065 to 0.156, with a mean value of 0.130. Observed heterozygosity (Ho) ranges from 0.071 to 0.205, averaging 0.140. Expected heterozygosity (He) ranges from 0.061 to 0.148, with an average of 0.123. The average inbreeding coefficient (Fis) ranges from 0.369 to 0.793, with a mean of 0.552 (Table ). Specifically, Pi distribution was similar in the SY and GL regions, with Pi values of 0.148 and 0.146, respectively, both higher than the mean value (mean Pi = 0.130). The expected heterozygosity (He) in SY is comparable to the observed heterozygosity (Ho), suggesting that genotype distribution is close to Hardy–Weinberg equilibrium. At the population level, most observed heterozygosity (Ho) values exceeded expected heterozygosity (He), indicating a higher inbreeding coefficient (Fis). The WH region exhibited the highest Ho and Pi values among the six regions, indicating high population heterozygosity and abundant genetic diversity. 
Optimization of molecular markers based on WGRS and SNPs in loach

Genetic diversity plays a crucial role in the conservation, exploration, and breeding improvement of species germplasm, particularly for populations facing resource degradation and decline. Single nucleotide polymorphisms (SNPs) are prevalent and stable in the genome, making them highly valuable for studying genetic diversity and population structure in loach species. In this study, we employed whole genome resequencing (WGRS) to effectively identify SNP markers in 60 samples. WGRS is a form of whole genome sequencing (WGS) used to compare different individuals of a species against a known reference genome. Compared with RAD-seq and SLAF-seq methods, WGRS offers significant advantages in accuracy, efficiency, and reduced sequencing costs, establishing it as a standard in biological genome research. Historically, whole genome sequencing milestones include the completion of the medaka ( Oryzias latipes ) genome in 2007 and the publication of zebrafish ( Danio rerio ) genome information in 2013, providing foundations for studying gene function, genetic evolution, and vertebrate ecological protection. Similarly, the first completion of the Atlantic cod ( Gadus morhua ) genome sequencing in 2011 enabled insights into disease resistance mechanisms, aiding in disease prevention and control strategies for Atlantic cod. Genome sequencing technologies have evolved from first- to third-generation sequencing, with second-generation sequencing notably offering high throughput, rapid speed, and cost-effectiveness, and it is widely used in whole genome sequencing. In our study, employing second-generation WGRS technology, we identified 2,812,906 SNPs in Misgurnus anguillicaudatus , surpassing previous SNP counts obtained using the SNaPshot technique. Our findings serve as a genetic variation reference, facilitating functional gene discovery, natural resource protection, and selective breeding of wild loach. Notably, the SNP markers are evenly distributed across chromosomes at the reference genome level of Misgurnus anguillicaudatus (Fig. ). These highly specific molecular markers can also be utilized for genetic mapping and gene localization in loach. Previously, SNP markers have proven effective in DNA fingerprinting of species such as sheep, southern catfish ( Silurus meridionalis ), sugarcane, and gourd, enhancing species identification, variation detection, resource utilization efficiency, and germplasm conservation.

Phylogenetic analysis based on SNPs of loach

Loach is a significant freshwater economic fish found in Asian coastal countries such as China, Japan, and Korea, valued for its high protein content and low lipid levels. Due to its unique physiological characteristics and natural variability in ploidy levels, loach has become a focal point in scientific research, particularly in exploring biological origins, evolutionary pathways of polyploidy, and parthenogenetic reproduction. Furthermore, the distinctive hindgut respiratory system of loach makes it a potential model species for studying the molecular mechanisms of assisted air respiration in fish. Despite its economic and nutritional importance, the supply of loach for human consumption largely relies on wild populations. Our study revealed strong familial ties within wild loach groups closely associated with their geographic locations.
Phylogenetic analysis of 60 samples identified three genetically distinct branches: one in Hunan province, China, and the other two in Hubei and Guangxi provinces, China. We hypothesize that this clustering is influenced by the benthic habits of loach, which favor mating and breeding within or near the same areas, facilitating frequent gene flow between loach groups. Additionally, the similar natural environments of adjacent sampling sites likely limit significant genetic mutations through natural selection, maintaining close genetic relationships among wild loach groups within the same region. Among the 60 wild samples examined, the highest observed (Ho) and expected (He) heterozygosity values were 0.205 and 0.148, respectively, observed in the Wuhan region of Hubei, China. Genetic distance values predominantly fell within the range of 0.5–0.6, indicating high genetic similarity but not identical genetic profiles among most samples. Studies confirm that under natural conditions, the genetic diversity of wild loach in different regions closely correlates with their ecological environments. For instance, in Wuhan, located in the eastern Hubei province along the middle reaches of the Yangtze River, the intricate network of rivers, lakes, and harbors enhances gene communication among different subgroups of the wild loach population, thereby enriching its genetic diversity.
In this study, we collected wild loach specimens from Xiangtan, Shaoyang, and Yueyang in Hunan Province, Guilin and Guiping in Guangxi Province, and Wuhan in Hubei Province. Using whole genome resequencing (WGRS) techniques, we identified 2,812,906 population-specific SNPs and 10,022 core SNPs. Based on these data, we conducted comprehensive analyses of population structure and genetic diversity across six regions. Despite their genomic similarity, loaches from different regions exhibit distinct morphological and behavioral traits. As part of this research, we constructed high-density molecular marker linkage maps for the population. These maps are pivotal for advancing bioeconomic initiatives leveraging loach genetic resources. Our findings will also inform future genetic enhancements aimed at improving the economic and quality traits of loach species.
Sample collection

The 60 wild loaches used in this study were collected from 6 regions in China, namely, Xiangtan (XT), Shaoyang (SY), and Yueyang (XY) in Hunan; Wuhan (WH) in Hubei; and Guiping (GP) and Guilin (GL) in Guangxi (Fig. ) (Table ). Ten loaches were taken from each area for the subsequent experiments, all of them wild individuals. All loaches were euthanized prior to sampling. Loaches were fasted for 12–24 h before euthanasia and then immersed in 350 mg/L MS222 (tricaine methanesulfonate); the environment was kept quiet and free of stimulation (red lighting was used), ventilation, dissolved oxygen, and pH were kept appropriate, and the water temperature was approximately 25 °C. Euthanasia was considered successful when a loach showed no reaction to any stimulus for 15 min. Tail fins were then immersed in absolute alcohol and stored at 4 °C.

DNA extraction

Genomic DNA was extracted from the samples using the phenol–chloroform method. A 1–2 cm (approximately 5 g) block of loach fin tissue was removed, treated with liquid nitrogen, placed into a mortar, and ground with an appropriate amount of DNA lysis buffer. After grinding, 200 µL of tissue homogenate was transferred to a centrifuge tube. An equal volume of phenol/chloroform/isoamyl alcohol solution was added, the tube was vortexed vigorously for 1 min, and the tube was centrifuged at high speed for 5 min. Then, 180 µL of the upper aqueous phase was transferred to a new tube. This step was repeated 2 to 3 times to extract as much genomic DNA as possible. Next, 75% NH4OAc was added to the final extract, followed by 1 µL of glycogen (20 µg) and 2.5 volumes of 100% ethanol, and the mixture was mixed well. The samples were incubated at 20 °C and centrifuged at top speed for 20 min at 4 °C. The pellet was washed by adding 300 µL of 80% EtOH and vortexing 3 times, centrifuged at top speed for 15 min at 4 °C, and then washed once more with 80% EtOH. Residual EtOH was removed with a P20 pipette, and the pellet was air-dried for 1–2 min. The samples were resuspended in an appropriate volume of elution buffer. The purity and integrity of each DNA sample were assessed using agarose gel electrophoresis, and the DNA concentration was precisely quantified using a Qubit fluorometer.

DNA library construction and data quality control

A total of 0.1 μg of DNA from each loach sample was used for DNA library preparation. Genomic DNA was randomly fragmented to 350 bp by sonication with a Covaris instrument, and the DNA fragments were end-polished, A-tailed, and ligated with full-length adapters for Illumina PE150 sequencing, followed by further size selection and PCR amplification. The PCR products were purified with the AMPure XP system (Beverly, MA, USA). Libraries were constructed using the TruSeq Library Construction Kit; after library construction, initial quantification was performed using a Qubit 2.0 fluorometer, and the library concentration and insert size were subsequently examined using an Agilent 2100 bioanalyzer. After these tests, qRT‑PCR (at effective concentrations greater than 2.0 nM) was performed to ensure library quality.
After passing the library test, high-throughput sequencing was performed on an Illumina HiSeq PE150, and the raw fluorescence image files obtained from the Illumina platform were converted into raw sequence data by CASAVA base calling. These short reads were recorded in FASTQ format, which contains the sequence information and the corresponding sequence quality information. After checking the sequencing quality distribution and sequencing error rate distribution, low-quality reads (Q < 20), reads with more than 10% uncertain bases (N), and reads containing sequencing adapters were removed from the raw data to obtain high-quality valid data (clean data) for subsequent genetic analysis.

SNP and InDel detection and annotation

Burrows–Wheeler Aligner (BWA) software (parameters: mem -t 4 -k 32 -M -R) was used to map the clean data to the loach reference genome (NCBI accession number: GCF_027580225.1) to obtain the raw mapping results in BAM format. PCR duplicates introduced during library construction were removed from the alignments with SAMtools (parameters: sort, rmdup), and single nucleotide polymorphisms (SNPs) and indels were then called with the UnifiedGenotyper module of GATK (3.8) and filtered with VariantFiltration (parameters: clusterWindowSize 4, filterExpression "QD < 4.0 || FS > 60.0 || MQ < 40.0", -G_filter "GQ < 5"). Finally, the sample SNPs and InDels were annotated using ANNOVAR software.

Screening of the core SNP markers

Since the average depth of the sequencing samples was greater than 11×, we first filtered out low-depth SNP sites with a sample depth of less than 7×. Next, we selected SNP loci with genotypes covering 100% of the individuals from all populations and filtered out sites with a minor allele frequency (MAF) lower than 0.3, a polymorphism information content (PIC) lower than 0.4, or a SNP heterozygosity rate (Het) greater than 0.8. Then, based on the functional annotation of the SNP loci, we filtered out loci located in intergenic regions and retained SNP sites located upstream, downstream, or within genes. Finally, SNP sites on scaffolds were removed, and only SNP sites on chromosomes were retained as the core SNPs for sample marking.

Genetic differentiation

Prior to the population genetic analysis, we calculated the heterozygosity (He, Ho), nucleotide polymorphism (Pi), and average inbreeding coefficient (Fis) values for the population. The expected heterozygosity (He) was calculated according to the formula provided by Nei (1978):

(1) $$He=\frac{1}{m}\sum_{i=1}^{m}\left(1-\sum_{j=1}^{n}p_{ij}^{2}\right)$$

where m is the number of loci, n is the number of alleles per locus, and p_ij is the frequency of the j-th allele at locus i.

The observed heterozygosity (Ho) was calculated according to the following formula:

(2) $$Ho=S/A$$

where S is the observed number of heterozygous individuals and A is the total number of sampled individuals.

The nucleotide polymorphism (Pi) was calculated according to the following formula:

(3) $$\pi=\sum_{j=1}^{S}\frac{N\left(1-\sum_{i=1}^{n}P_{i}^{2}\right)}{N-1},\qquad \theta_{\pi}=\frac{\pi}{S}$$

where S is the number of segregating sites, N is the number of sequences in the samples (e.g., for 10 diploid samples in a population, N = 20), n is the number of alleles, and P_i is the frequency of the i-th allele.
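As a hedged illustration of Equations (1) and (2), the short Python sketch below computes He and Ho from a toy diploid genotype matrix. The genotype codes and sample sizes are invented for demonstration, and the per-locus averaging simply mirrors the 1/m factor in Equation (1); this is not the script used in the study.

```python
import numpy as np

# Toy genotype matrix: rows = individuals, columns = loci,
# each entry is a pair of allele codes (diploid). Values are invented.
genotypes = [
    [(0, 1), (0, 0), (1, 1)],
    [(0, 0), (0, 1), (0, 1)],
    [(1, 1), (0, 0), (0, 1)],
    [(0, 1), (0, 1), (1, 1)],
]
n_ind, n_loci = len(genotypes), len(genotypes[0])

def allele_freqs(locus):
    """Allele frequencies p_ij at one locus, pooled over all individuals."""
    alleles = [a for ind in genotypes for a in ind[locus]]
    _, counts = np.unique(alleles, return_counts=True)
    return counts / counts.sum()

# Eq. (1): He = mean over loci of (1 - sum of squared allele frequencies)
he = np.mean([1.0 - np.sum(allele_freqs(l) ** 2) for l in range(n_loci)])

# Eq. (2): Ho = heterozygous individuals / total individuals, averaged over loci
ho = np.mean([sum(ind[l][0] != ind[l][1] for ind in genotypes) / n_ind
              for l in range(n_loci)])

print(f"He = {he:.3f}, Ho = {ho:.3f}")
```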
The average inbreeding coefficient (Fis) of the local groups was calculated according to the following formula:

(4) $$F_{IS}=\frac{H_{S}-H_{I}}{H_{S}}$$

where H_I is the average observed heterozygote frequency in the whole population and H_S is the expected heterozygote frequency of the corresponding ideal (random-mating) local population. Finally, statistical processing was performed using Python scripts to obtain the He, Ho, Pi, and Fis values for the population.

Analysis of population genetic structure

We used the neighbor-joining method to construct the phylogenetic tree. First, the pairwise distances between samples were calculated from the core SNPs, as described in formula (5). After the calculation was completed, Treebest software (1.9.2) was used to construct the distance matrix, based on which the phylogenetic tree was built with the neighbor-joining method. Principal component analysis (PCA) of the SNP data from the 60 samples was performed using GCTA software. Population structure was analysed using PLINK (1.07) and ADMIXTURE (1.23) software and K-means cluster analysis with K values between 2 and 8. Visualization and annotation of the phylogenetic tree were performed on the iTOL v6 platform ( https://itol.embl.de/#itolPromo ).

(5) $$D_{xy}=\frac{1}{L}\sum_{l=1}^{L}d_{xy}(l)$$

where L is the number of high-quality SNP sites in the region and d_xy(l) is the per-site dissimilarity between individuals x and y. Taking a site whose alleles are A/C as an example, d_xy(l) takes four values: 0 if the genotypes of the two individuals are AA and AA; 0.5 if they are AA and AC; 0.5 if they are AC and AC; and 1 if they are AA and CC.

Construction of DNA fingerprints

Based on the SNP site information obtained from the above analysis, SNPs were compared to find core differential markers among the 60 samples. SNP sites with an individual depth greater than 7×, an MAF above 0.1, and a completeness of 1 were first selected; then, SNP sites with an average depth greater than 10×, a quality score ≥ 999, and a Pi ≥ 0.48 were further selected, and the remaining sites were sorted from largest to smallest Pi value. During this process, new sites were added and the number of distinguishable samples was calculated until the 60 samples were completely distinguished. Sequence alignment of the selected SNP sites using BLAST confirmed that the 300 bp sequence spanning each site (150 bp upstream and downstream) was not duplicated elsewhere in the genome. A total of 12 core SNP sites were obtained, allowing complete discrimination of the 60 samples.

Genetic distance analysis

Using the 12 core SNPs selected from the above analysis, the genetic distance (GD) between samples was calculated to obtain the genetic distance matrix. The calculation formula is shown in formula (6). The genetic distance results were then summarized and plotted.

(6) $$GD=b/(a+b)$$

where a is the number of identical genotypes between two samples and b is the number of different genotypes between two samples.
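The per-site scoring behind Equation (5) can be written compactly. The sketch below is a minimal rendering of the 0 / 0.5 / 1 rule and its average over L sites, applied to two hypothetical individuals typed at four sites; it is illustrative only and is not the distance code used to build the matrix.

```python
# Minimal sketch of the pairwise dissimilarity in Eq. (5):
# identical homozygotes score 0, opposite homozygotes score 1,
# and any comparison involving a heterozygote scores 0.5.
def site_dissimilarity(g1, g2):
    s1, s2 = frozenset(g1), frozenset(g2)
    if len(s1) == 2 or len(s2) == 2:      # at least one heterozygote
        return 0.5
    return 0.0 if s1 == s2 else 1.0       # same vs. opposite homozygotes

def pairwise_distance(ind_x, ind_y):
    """D_xy = (1/L) * sum of per-site dissimilarities over L shared sites."""
    assert len(ind_x) == len(ind_y)
    L = len(ind_x)
    return sum(site_dissimilarity(a, b) for a, b in zip(ind_x, ind_y)) / L

# Hypothetical individuals typed at four biallelic sites (alleles as letters).
x = [("A", "A"), ("A", "C"), ("G", "G"), ("T", "T")]
y = [("A", "C"), ("A", "C"), ("T", "T"), ("T", "T")]
print(pairwise_distance(x, y))   # (0.5 + 0.5 + 1.0 + 0.0) / 4 = 0.5
```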
Additional file 1. Table S1. Whole-genome sequencing data and quality control information for 60 samples.
Additional file 2. Table S2. The alignment information of 60 sample sequences.
Additional file 3. Table S3. The screening results and annotation information for 60 sample SNP sites.
Additional file 4. Table S4. The screening results and annotation information for 60 sample core SNP sites.
Additional file 5. Table S5. Genetic distance information for 60 samples.
Experimental Verification for Numerical Simulation of Thalamic Stimulation-Evoked Calcium-Sensitive Fluorescence and Electrophysiology with Self-Assembled Multifunctional Optrode

Neurons interact largely by exchanging neurotransmitters between axons and dendrites to deliver signals. Membrane potential alterations are also transmitted to neighboring neurons or distant brain areas via electrical signals. Immunohistochemistry (IHC) staining is commonly used to detect changes in specific neurotransmitters and to monitor chemical signal transmission between neurons. Despite the high spatial resolution of IHC staining, identifying changes in immediate neural activity is challenging when analyzing neurotransmitter transmission between neurons, and the three-dimensional (3D) structure of the neurons may be destroyed while producing neural slice specimens. To investigate neural activity, a neural probe is usually implanted into a specific brain area to observe the electrical signal transmission between neurons and to directly detect action potential changes in the neural membrane. Although this method may not yield excellent spatial data, temporal resolution can be significantly improved. Owing to advances in probe fabrication, neural probes can now detect smaller changes in membrane potential and may be modified to have a reduced inflammatory response. Neural electrophysiology can detect in vivo changes in neural membrane potential in response to various stimuli, including electrical stimulation and other sensory stimulus inputs. Electrophysiology can also reveal neural firing characteristics such as spikes (high-frequency signals released by a single neuron) and local field potentials (LFP) (signals emitted by a specific neural population). Notably, LFP has been used to study various brain phenomena, including the neural basis of perception, attention, and memory, as well as the neural basis of various neurological and psychiatric disorders. LFP is thought to reflect neuron activity that significantly contributes to the regulation of excitatory neuron activity and shaping of the brain's overall activity patterns. Previous studies have shown that LFP comprises inhibitory and excitatory postsynaptic potentials (IPSP and EPSP, respectively). Ca 2+ influx across the neuron cell membrane can induce EPSP, which then triggers neurotransmitter release. However, during electrical stimulation, the electrical current, the size and shape of the electrodes used to deliver the current, and the brain's tissue properties may generate artifacts during electrophysiological recording, making it difficult to interpret the LFP and obscuring the underlying neural activity. Therefore, to minimize the effect of electrical stimulation artifacts on LFP, researchers have used techniques such as the template subtraction method, which attempts to isolate the LFP produced by the underlying neural activity from the electrical artifacts. As previously stated, because Ca 2+ regulates neurotransmitter release when neurons generate LFP, neurologists have concentrated on Ca 2+ studies. The development of fluorescent Ca 2+ indicators has accelerated Ca 2+ research in the field of neuroscience. Bioluminescent calcium-binding photoproteins, including aequorin, were the first calcium indicators utilized for cellular Ca 2+ signaling.
Aequorin was microinjected into cells to track rapid changes in intracellular Ca 2+ by observing changes in luminosity . Although Ca 2+ indicators can offer spatiotemporal data on neural activity, they are incapable of determining whether the Ca 2+ fluorescence changes occur in neurons because neurons and astrocytes in the brain also cause alternations in Ca 2+ concentration when these cells are activated , causing the fluorescent signals of Ca 2+ indicators to coincide with the activity of neurons and other glia cells. Notably, Ca 2+ indicators can now be expressed in a specific cell type by regulating gene promoters using gene engineering. In recent years, the green fluorescent protein calcium indicator (GCaMP) family has emerged as the most widely used genetically encoded calcium indicator (GECI) for studying Ca 2+ signals in neurons . GCaMP is typically expressed in neurons using genetic engineering techniques, such as viral vectors or transgenesis. Once expressed, GCaMP fluoresces when bound to Ca 2+ , enabling researchers to measure changes in Ca 2+ concentration using microscopy or other imaging techniques . Since GECIs have been developed, technologies that measure brain activity through alterations in Ca 2+ fluorescence signals have advanced rapidly. An optrode is an emerging tool that it is used to implant in the brain to acquire Ca 2+ fluorescent signals and simultaneously monitor neural electrical activity in the deep brain area. The current utilities of the optrodes are classified according to integrated optical fiber, waveguide, and micro-light-emitting diode (microLED), which were highlighted in opsin technologies of optogenetic modulations and fluorescence sensing. Previous studies have successfully demonstrated the combination of genetics and optics for simultaneous control and monitoring of neural activity with the implantation of the optical-fiber-based optrode . To reduce the brain injury caused by the fiber-based optrode, an alternative to optical optrodes for light delivery is to use an integrated optical waveguide with a small cross-sectional area . However, there is a comparatively large propagation loss for a small-dimension waveguide; the input power of light must be increased, resulting in more energy loss and heat damage . To address the waveguide propagation loss and dispersion of the implantable optrode , recent studies have demonstrated the microfabrication of the optrode with an integrated microLED array , which provided optical stimuli with high extraction efficiency and spatial resolution. The microLED-based optrode was performed without an external light source and appropriate coupling system, but the requirement for a power supply and heat dissipation in tissue caused some additional problems . For long-term implantation, the microLED-based optrode easily suffered from electrical leaks and short-circuits caused either by degradation in the semiconducting properties or by electrochemical oxidation of the internal metal layer from the ingress of moisture, liquid, and/or ionic species . The recent development of photometric applications in optogenetics has created an increased demand for advancing engineering tools of optrodes for fluorescence sensing in vivo. An implantable neural probe with integrated waveguide based on semiconductor technology has attracted attention because of its high sensitivity in fluorescence and minimization of neuronal loss . 
The technique of integrated waveguide needs to overcome its high cost, high fabrication complexity, and lower integrity with the light source with coupling issue , leading to its low popularity in the field of neuroscience. This presents an opportunity to develop a robust manufacturing process for assembling optical fibers and flexible neural probes for photometric applications. To create a cost-effective optrode for detecting fluorescence, we developed a rapid, uncomplicated, and efficiency method for assembling a polyimide-based neural probe with an optical fiber. In this way, the position of the optical fiber can be freely altered and the process of optrode assembly is simpler and lower-consuming. Furthermore, to the best of our knowledge, an optical-fiber-based multichannel optrode for in vivo testing of electrical stimulation concurrently combined with fluorescence detection and electrophysiological recording has rarely been reported. Deep brain stimulation (DBS) has been used extensively in neuroscience research and clinical therapy and is an efficient method for regulating synaptic plasticity accompanied by alterations in the Ca 2+ concentration in neurons . To unravel the relationship between DBS-evoked LFP and DBS-evoked Ca 2+ fluorescent activity, our self-assembled optrode capable of concurrently recording LFP and Ca 2+ fluorescence was proposed in this study. Before the in vivo experiment, the volume of tissue activated (VTA) was estimated to verify the DBS-evoked regions. Notably, VTA is used extensively in DBS research due to its many clinical applications: it can aid in assessing the best stimulation sites , selecting postoperative stimulation parameters , and directing the presurgical planning for DBS lead-insertion surgery . We also investigated the Ca 2+ fluorescence activity under DBS. Monte Carlo (MC) simulation was used to simulate the evoked Ca 2+ fluorescence signal . In the field of optical modeling, MC simulation has been the gold standard and most effective technique, particularly for modelling light propagation in biological tissue . To apply it to our fabricated fiber-based multichannel optrode, we modified the standard voxel-based MC simulation, and both the excitation and emission wavelengths were considered. To further verify the changes between LFP and Ca 2+ fluorescent signals evoked by DBS, AAV-GCaMPs were transfected into the ventral posteromedial thalamic nuclei (VPM) of rats, and DBS was performed using the predetermined parameters in the VTA and MC simulation. 2.1. Fabrication and Design of an Optrode The proposed optrode comprised 16 microelectrodes and a reference electrode. The microelectrode array can be used for electrophysiological recording and electrical stimulation. Three quartz masks were designed to fabricate a probe. The first mask (Mask #1) was utilized in the internal wiring structure of a chip to implement a high-density redistribution layer (RDL) on the substrate of the microelectrode array and reference electrode, as well as its accompanying wire-bonding package metal substrate and connecting wires. The second mask (Mask #2) was used to create a three-dimensional microelectrode array, reference electrode, and metal bonding pad for wire bonding. The third mask (Mask #3) was used to form the essential features of the neural probe, such as long-axis electrodes, a tip with the appropriate angle for neural implantation, and its latter portion that carried the wire bonding. 
shows the flow diagram for fabricating a probe with three microelectrode arrays and a reference electrode. To prepare for polyimide film removal (PI-2611, HD Microsystems, Parlin, NJ, USA), a 6-inch glass wafer was first coated with a 200 nm thick chromium (Cr) layer and a 700 nm thick Cu layer using vapor deposition. After covering, the sacrificial layer was covered with a 30 μm thick polyimide film using a spin coater and baked at 300 °C, and an impact-resistant layer was created by depositing a 200 nm thick Cr layer onto the polyimide to increase its strength. This layer was then coated with a second polyimide layer with 30 μm thickness and cured as previously mentioned. Furthermore, wet etching was performed to finish the impact-resistant layer and preserve the shape of the neural probe. The second polyimide layer was then vapor-deposited with a 700 nm thick copper (Cu) layer and 100 nm thick Cr layer. The 16 microelectrodes, one reference electrode, and interconnecting traces were lithographically patterned using Mask #1 as shown in A(a–d). Subsequently, the metal circuits were shielded with a third 3.2 μm thick polyimide layer, which was spun onto a trace layer. On this layer, 3.2 mm thick windows with lithographic patterns were made using O 2 plasma etching with Mask #2 ( A(e)). To avoid cracks in the microelectrodes and bonding pads of the neural probe caused by the Kirkendall effect at the interface between gold and copper , which limits the utility of gold-copper coatings in high-temperature environments and when a long lifetime is sought, palladium was electroplated on the copper, which was used to suppress the voids formed at the boundary interface in the gold–copper coatings . Following palladium electroplating, the process of gold plating was performed to form 3D-structured microelectrodes and bonding pads ( A(f)). To remove the optrode from the glass wafer, the three polyimide layers were lithographically patterned with Mask #3, which was etched with O 2 plasma, and the Cu-based sacrificial layers were removed with a metal etchant as shown in A(g,h). After aligning the tip of an optical fiber (200 μm of diameters, 0.48 NA, Inper, Hangzhou, China) with Channel #5 of the neural probe and affixing it using UV resin (NOA65, Norland Products Inc., East Windsor, NJ, USA), the optrode was completed as shown in B. 2.2. Computational Modeling of the VTA in Thalamic DBS A 3D finite element method (FEM) model of the optrode was constructed using a commercial software package (Maxwell ® , ANSYS, Inc., Canonsburg, PA, USA) to assess the effects of DBS. A 2 × 2 × 2-mm 3 cube of homogeneous and isotropic brain tissue was modeled as an axisymmetric volume conductor surrounding the DBS microelectrodes. The electrical resistivities of the gold microelectrode and polyimide substrate of our self-assembled optrode were 2.439 × 10 −8 and 1.667 × 10 16 μm, respectively. To simulate the actual environment of the brain tissue in vivo, in vivo impedance was measured using a pair of microelectrodes (Channels #1 and #4) on our self-assembled optrode with a sinusoidal voltage source (20 mV, <150 nA, at 1 kHz) generated with an LCR meter (Model: 4263B, Agilent Technologies Inc., Santa Clara, CA, USA). 
Subsequently, the corresponding in vivo conductivity was calculated using Equation (1):

(1) $$K\ [\mathrm{S/m}]=\frac{1}{R}\times\frac{D}{S}$$

where R [Ω] is the in vivo resistance measured between Channel #1 and #4, D, the distance between the two channels, was 450 μm, and S, the microelectrode area for a radius of 8 μm, was 200.960 μm2. R was 0.543 ± 0.005 MΩ, and K was calculated to be 4.120 ± 0.041 S/m. K was then introduced into Equation (2) to describe the current density and electric field:

(2) $$\vec{J}=K\vec{E}=-K\nabla V_{e}$$

where J is the current density, E is the electric field, and V_e is the extracellular potential (so that the electric field is its negative gradient). With a point current source I_v in a medium assumed to be infinite, the divergence of J can be presented as Equation (3):

(3) $$\nabla\cdot\vec{J}=I_{v}=-K\,\nabla\cdot\left(\nabla V_{e}\right)$$

Under a homogeneous condition, Poisson's equation for V_e is described as Equation (4):

(4) $$\nabla^{2}V_{e}=-\frac{I_{v}}{K}$$

Owing to this homogeneous condition, the conservation of current requires that ∇·J = 0. Simultaneously, V_e must satisfy a partial differential equation called the Laplace equation, which is presented as Equation (5):

(5) $$\nabla^{2}V_{e}=0$$

A solution for V_e in Poisson's equation can be described as Equation (6):

(6) $$V_{e}=\frac{1}{4\pi\varepsilon}\int\frac{\sigma\,dV}{r}$$

where ε is the dielectric permittivity, σ is the permittivity, and r is the distance between two points in the tissue. Based on these equations, the distribution of V_e could be calculated. In this FEM model, the mesh size was set to 10 μm, the in vivo conductivity, K, was set to 4.120 S/m for a homogeneous and isotropic tissue medium, as mentioned earlier, and the conductivities of the gold microelectrode and polyimide substrate of the self-assembled optrode were set to 4.100 × 10 7 and 5.998 × 10 −17 S/m, respectively. σ was set to 80 in the homogeneous and isotropic tissue medium. In this study, VTA was estimated using an activating function, defined as the second spatial derivative of the extracellular voltage along an axon, which can be described as Equation (7):

(7) $$f(n)=\frac{\Delta^{2}V_{e}}{\Delta x^{2}}=\frac{\left[V_{e}(n+1)-V_{e}(n)\right]-\left[V_{e}(n)-V_{e}(n-1)\right]}{L^{2}}=\frac{V_{e}(n-1)-2V_{e}(n)+V_{e}(n+1)}{L^{2}}$$

where n is the position in the homogeneous and isotropic tissue medium and L is the grid spacing used in the FEM model, which is 10 μm. Regions were assumed to be activated when f(n) > 0.

2.3. Simulation of the Thalamic-DBS-Evoked Calcium Signal

The purpose of the MC simulation is mainly to investigate the changes in Ca 2+ fluorescence emission and intensity from the VTA volume in response to varying DBS intensities. Our MC simulation was performed using lab-designed MATLAB ® software (2020R, MathWorks Inc., Natick, MA, USA) and C code. A single simulation was conducted concurrently with 465 nm excitation light and 525 nm emitted fluorescence. The voxel size was set to 10 μm according to the mesh size of the FEM model as described in . Our MC simulation modeled the optical fiber (200 μm in diameter, 0.48 NA) inserted into a 2 × 2 × 2 mm 3 volume of homogeneous brain tissue. To estimate the photon trajectories launched by the fiber, the light source of the optical fiber was assumed to be a defocused, uniformly dispersed beam. The position of the light source was located at Channel #5 of the optrode, corresponding to its position in the FEM model. The launch angle was 20.5° based on the NA of the optical fiber.
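Before the optical properties are detailed, the following Python sketch re-derives the arithmetic of Section 2.2: it recomputes the in vivo conductivity from Equation (1) with the values reported above and applies the discrete activating function of Equation (7) to a stand-in potential profile. The potential array is purely hypothetical, since the real V_e comes from the FEM solution; the sketch only illustrates the VTA criterion f(n) > 0.

```python
import numpy as np

# --- Eq. (1): in vivo conductivity from the measured impedance -------------
R = 0.543e6            # ohm, measured between Channel #1 and #4
D = 450e-6             # m, electrode separation
S = 200.960e-12        # m^2, microelectrode area
K = (1.0 / R) * (D / S)
print(f"K = {K:.3f} S/m")           # ~4.12 S/m, matching the reported value

# --- Eq. (7): discrete activating function along one grid axis -------------
L = 10e-6              # m, grid spacing used in the FEM model
# Hypothetical extracellular potential profile Ve(n) sampled on the grid;
# in the paper this comes from the FEM solution, here it is just a stand-in.
x = np.arange(0, 2e-3, L)
Ve = 1e-3 * np.exp(-x / 2e-4)        # arbitrary decaying potential [V]

f = (Ve[:-2] - 2 * Ve[1:-1] + Ve[2:]) / L**2    # second spatial difference
activated = f > 0                               # VTA criterion f(n) > 0
print(f"{activated.sum()} of {f.size} grid nodes satisfy f(n) > 0")
```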
Furthermore, the absorption (μ_a) and scattering (μ_s) coefficients of white matter were measured using an integrating sphere optical system (Ophir IS6, Ophir Optronics Solutions Ltd., Jerusalem, Israel) under both 465 nm and 525 nm laser wavelengths in order to fit the MC simulation environment to real brain conditions. To determine the fluorescence signal profile of a single fiber implanted in tissue, MC simulations of 10 M photon packets were emitted from the fiber, with the initial energy of each photon set to 1 weight (W). The starting coordinates and initial photon direction were selected based on the position of the optical fiber. As the optical fiber light source, a defocused, uniformly distributed beam was employed to predict the photon exit trajectories. The step size (ΔS) of a photon after it leaves the fiber must be less than the mean free path length of a photon in tissue, which is the reciprocal of the total attenuation coefficient. A function of a random variable (ξ) was used to efficiently generate different step sizes for each photon step, as shown in Equation (8):

(8) $$\Delta S=\frac{-\ln\xi}{\mu_{a}+\mu_{s}}$$

where μ_a and μ_s were 4.642 cm −1 and 257.631 cm −1 for the 465 nm excitation light and 4.873 cm −1 and 224.074 cm −1 for the 525 nm emission fluorescence, respectively. After each propagation step, a fraction of the photon packet was absorbed. The fraction of the absorbed photon weight, ΔW, was calculated using Equation (9):

(9) $$\Delta W=\left(\frac{\mu_{a}}{\mu_{a}+\mu_{s}}\right)W$$

The updated weight (W′) representing the fraction of the scattered packet was given by Equation (10):

(10) $$W^{\prime}=W-\Delta W$$

The fluorescence quantum yield (QY) of a fluorophore is the fraction of absorbed photons resulting in fluorescence emission. Therefore, fluorescent photon emission occurs when an excitation photon propagates into a voxel where the fluorescence QY exceeds 0. When the absorbed photon weight was ΔW, the initial weight of the fluorescent photon packet W_f was calculated using Equation (11):

(11) $$W_{f}=\Delta W\times QY$$

As a result, QY was set to 1 if the activated region exhibited normal Ca 2+ indicator expression. Finally, the energy of the emission fluorescence was denoted as φ and could be calculated by Equation (12):

(12) $$\varphi=\frac{W}{\mathrm{cm}^{2}}=W_{F}\times\frac{1}{\mu_{a}V}$$

where μ_a is the absorption coefficient of the emission light and V is the voxel size of 10 −3 × 10 −3 × 10 −3 cm 3 .

2.4. Setup of Fiber Photometry System

A fluorescence minicube (FMC5, Doric, Québec, Canada) was used as the light path of the photometry system. A 405 nm violet LED (CLED_405, Doric, Québec, Canada) and a 465 nm blue LED (CLED_465, Doric, Québec, Canada) were the light sources of the system. The lens collimated the two sources into dichroic mirrors such that the sources were collinearly oriented into an optical fiber for lighting. Then, the light from these two sources traveled through the optical fiber separately. The 465 nm blue LED was used to activate GCaMP6s, and the evoked Ca 2+ -dependent signals were measured in the 500–550 nm spectral window. The 405 nm violet LED was used to evoke Ca 2+ -independent signals, which were autofluorescence in brain tissue and were measured in the 420–450 nm spectral window. Two optical fibers were connected to an avalanche photodiode array (APD) (S8550-02, Hamamatsu Photonics, Hamamatsu, Japan) to limit light decay and to explore the fluorescence signals, as shown in . The signals of GCaMP6s and autofluorescence were recorded at Channel #9 and #25 on the APD array, respectively.
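Returning to the photon-packet bookkeeping of Equations (8)–(11), the sketch below walks one packet through repeated absorption and scattering events using the 465 nm white-matter coefficients quoted above. Position and direction updates, boundary handling, and voxel lookups are deliberately omitted, so this is only a schematic of the weight accounting, not the MATLAB/C simulator used in the study.

```python
import random
import math

# Optical properties reported for white matter at the 465 nm excitation band.
MU_A = 4.642     # absorption coefficient [1/cm]
MU_S = 257.631   # scattering coefficient [1/cm]
QY   = 1.0       # fluorescence quantum yield inside GCaMP-expressing voxels

def propagate_packet(w_init=1.0, w_min=1e-4):
    """Random-walk one photon packet; return total fluorescence weight spawned."""
    w, fluo = w_init, 0.0
    while w > w_min:
        xi = random.random() or 1e-12          # uniform (0, 1], avoids log(0)
        step = -math.log(xi) / (MU_A + MU_S)   # Eq. (8), step size [cm]
        dw = w * MU_A / (MU_A + MU_S)          # Eq. (9), absorbed fraction
        w -= dw                                # Eq. (10), surviving weight
        fluo += dw * QY                        # Eq. (11), fluorescence weight
        # A full simulation would also move the packet by `step`, resample its
        # direction, and check tissue boundaries; omitted here for brevity.
    return fluo

# Average fluorescence weight spawned per launched packet (toy estimate).
print(sum(propagate_packet() for _ in range(1000)) / 1000)
```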
The fluorescence signals were transmitted using a multichannel data acquisition system (PhotoniQ Model IQSP480, Vertilon Corp., Westford, MA, USA) for further analysis.

2.5. Animal Preparation and Surgery

To validate the relationship between the evoked LFP and the evoked Ca 2+ fluorescence, in vivo tests were conducted on 8-week-old Sprague–Dawley (SD) adult rats (N = 5) weighing 250–350 g. The rats were housed and fed ad libitum in an animal facility (12:12 light/dark cycle; light on at 7 a.m.; 20 ± 3 °C). All animal experimental designs and procedures were reviewed and approved by the Institutional Animal Care and Use Committee of Taipei Medical University (IACUC approval number: LAC-2020-0210), and the rats were handled following the accepted standards and regulations. GCaMP6s was the Ca 2+ indicator in the in vivo experiment, which was obtained from Douglas Kim and the GENIE Project (Addgene plasmid # 100843; RRID: Addgene_100843). The rats received 0.25 μL of GCaMP6s virus, which was injected at a rate of 0.050 μL/min for 5 min into the right ventral posteromedial thalamic nuclei (VPM) (AP: −3.48 mm, ML: 2.70 mm, DV: −6.80 mm) while the rats were under isoflurane anesthesia (induction 4%; maintenance 1.5%). Two weeks after viral injection, the self-assembled optrode was implanted into the rat thalamic VPM nuclei (AP: −3.48 mm, ML: 2.50 mm, DV: −6.80 mm) under the same isoflurane anesthesia procedure. The whole skull was coated with dental cement to strengthen its attachment to the optrode. When the optrode was firmly fixed to the skull, the holder could be released, allowing the scalp to be stitched over the dental cement mound.

2.6. Thalamic-DBS-Induced Neuronal Activity Recording: Ca 2+ Fluorescence Signals and Electrophysiology Recordings

Under isoflurane anesthesia (induction 4%; maintenance 1.5%), the rats were mounted on a stereotaxic device, and acute fluorescence and LFP were concurrently recorded for 40 s. The first 10 s of Ca 2+ fluorescence signals were recorded to calculate the baseline. DBS was triggered with an isolated pulse stimulator (S48, Grass Technologies, West Warwick, RI, USA), providing stimulation pulses with a 0.4 ms duration, a frequency of 3 Hz, and different DBS intensities (50 μA, 100 μA, 200 μA, and 300 μA) at 10–30 s during the recording. To determine the maximum DBS intensity, the total electrical energy delivered to the tissue (TEED) was used to calculate the amount of energy transferred by the DBS to the brain tissue using Equation (13):

(13) $$E_{DBS}=\frac{(I\times R)^{2}\times pw\times f}{R}\ (1\ \mathrm{s})$$

where E_DBS is the electrical energy delivered within 1 s; f = frequency of 3 [Hz]; I = current [A]; pw = pulse width of 0.4 × 10 −3 [s]; and R = in vivo impedance of 5.43 × 10 5 [Ω] between Channel #1 and #4 on the neural probe. In this study, the maximum TEED under the DBS intensity of 300 μA was 5.864 × 10 −5 J, and 20 s of DBS was applied; therefore, the energy received by the tissue was 1.172 × 10 −3 J. According to Deep Brain Stimulation Management, when considering the safety concerns for DBS, the upper limit for charge capacity is 30 μC/cm 2 , which, converted to energy, is 1.12 × 10 −2 J. To ensure biosafety, we considered the DBS intensity of 300 μA a proper upper limit for DBS in this study. Based on our previous study, the lowest DBS intensity of 50 μA could induce stable neural responses.
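As a quick numerical check of Equation (13) before moving on, the snippet below plugs in the stimulation parameters stated above; it simply reproduces the reported energy figures and is not part of the experimental pipeline.

```python
# Worked check of Eq. (13) with the parameters reported in the text.
I  = 300e-6     # A, maximum DBS intensity
R  = 5.43e5     # ohm, in vivo impedance between Channel #1 and #4
pw = 0.4e-3     # s, pulse width
f  = 3          # Hz, stimulation frequency

teed_per_s = (I * R) ** 2 * pw * f / R   # J delivered per second
total      = teed_per_s * 20             # J over the 20 s stimulation period

print(f"TEED = {teed_per_s:.3e} J/s, 20 s total = {total:.3e} J")
# -> ~5.864e-05 J/s and ~1.17e-03 J, both well below the ~1.12e-02 J safety bound
```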
In addition, DBS-evoked neural responses were found to increase in a linear fashion as the stimulus intensity was gradually increased through 50 μA, 100 μA, and 200 μA. DBS-induced Ca 2+ fluorescence intensities were collected using our optical-fiber-based optrode. The total output power at the optical fiber tip was adjusted to 0.2 mW. The last 10 s of the recording served as the rest period after DBS. Electrophysiological recordings were also performed simultaneously using a multichannel acquisition processor (Open Ephys). Neuronal LFP activity was sampled at 1 kHz and digitally filtered with a bandpass filter at 0.3–300 Hz. A graphical user interface controlled with LabVIEW (LabVIEW 2017, National Instruments, Austin, TX, USA) served as the main controller for the entire data acquisition system. Offline data were retrieved using the LabVIEW interface. The in vivo experimental setup is shown in (Fig. ). After electrical stimulation, the rats were sacrificed, and their brains were extracted to confirm GCaMP6s expression. This was conducted to ensure the transfection of the adeno-associated virus (AAV). Owing to the presence of a green fluorescent protein (GFP) gene segment located in the plasmid transported by the AAV, infected neurons could be observed using a fluorescence microscope (BX61, Olympus, Tokyo, Japan) at a 475 nm excitation wavelength.

2.7. Data Analysis

To create an averaged evoked LFP, the LFP amplitudes were clipped every 333 ms over the 15–25 s recording period in response to fluorescence signal emergence. Then, the absolute value of the evoked response amplitudes at 30 ms post-stimulus (denoted as ∑LFP) was calculated by summing the averaged evoked LFP. ∑LFP changes were used to evaluate the stability of the evoked responses induced by the thalamic stimulation. The raw Ca 2+ fluorescence intensity was mixed with high-frequency noise caused by the photometric recording instrument, which was filtered out with a 100 Hz low-pass filter. The change in Ca 2+ concentration is expressed as ΔF/F = (F s − F 0 )/F 0 , where F s is the fluorescence intensity during electrical stimulation and F 0 is the mean fluorescence intensity before stimulation. The ΔF/F ratio was then averaged over the 15–25 s recording period because the fluorescence signals increased about 5 s later. Both LFP and Ca 2+ fluorescence intensities were analyzed using MATLAB ® .

2.8. Statistical Analysis

To verify the relationship between LFP and Ca 2+ fluorescence intensity in vivo, a linear data fit with a corresponding coefficient of determination (R 2 ) was used to determine the relationships between VTA volume and simulated Ca 2+ fluorescence intensities, VTA volume and ∑LFP, simulated Ca 2+ fluorescence intensities and Ca 2+ fluorescence intensity in vivo, and ∑LFP and Ca 2+ fluorescence intensity in vivo. The higher the R 2 value, the stronger the positive relationship between the paired items across stimulus intensities. The linear curve fitting was conducted with SPSS version 26.0 (SPSS Inc., Chicago, IL, USA). The significance level was set at p < 0.05. The results were expressed as mean values and standard error of the mean (mean ± SEM).
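One plausible reading of the ΔF/F and ∑LFP definitions above is sketched in Python below; the 333 ms epoching, the 30 ms post-stimulus window, and the synthetic input arrays are assumptions made for illustration, and the actual analysis was performed in MATLAB.

```python
import numpy as np

FS_LFP = 1000                 # Hz, LFP sampling rate used in the study
STIM_PERIOD = 0.333           # s, inter-pulse interval at 3 Hz stimulation

def delta_f_over_f(fluo, fs, baseline_s=10.0):
    """ΔF/F = (Fs - F0) / F0 with F0 = mean fluorescence before stimulation."""
    f0 = np.mean(fluo[: int(baseline_s * fs)])
    return (fluo - f0) / f0

def evoked_lfp_sum(lfp, fs=FS_LFP, start_s=15.0, stop_s=25.0, post_ms=30):
    """Average the stimulus-locked LFP every 333 ms over 15-25 s and sum the
    absolute averaged response within 30 ms post-stimulus (a reading of ∑LFP)."""
    seg_len = int(STIM_PERIOD * fs)
    onsets = np.arange(int(start_s * fs), int(stop_s * fs) - seg_len, seg_len)
    epochs = np.stack([lfp[o:o + seg_len] for o in onsets])
    avg_evoked = epochs.mean(axis=0)
    return np.sum(np.abs(avg_evoked[: int(post_ms * fs / 1000)]))

# Toy demonstration with synthetic data (random noise stands in for recordings).
rng = np.random.default_rng(0)
print(evoked_lfp_sum(rng.standard_normal(40 * FS_LFP)))
print(delta_f_over_f(rng.random(40 * 30) + 1.0, fs=30)[:5])
```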
The proposed optrode comprised 16 microelectrodes and a reference electrode. The microelectrode array can be used for electrophysiological recording and electrical stimulation. Three quartz masks were designed to fabricate the probe. The first mask (Mask #1) was used for the internal wiring structure of the chip, implementing a high-density redistribution layer (RDL) on the substrate of the microelectrode array and reference electrode, as well as the accompanying wire-bonding package metal substrate and connecting wires. The second mask (Mask #2) was used to create a three-dimensional microelectrode array, reference electrode, and metal bonding pads for wire bonding. The third mask (Mask #3) was used to form the essential features of the neural probe, such as the long-axis electrodes, a tip with the appropriate angle for neural implantation, and the rear portion that carried the wire bonding. shows the flow diagram for fabricating a probe with three microelectrode arrays and a reference electrode. To prepare for later removal of the polyimide film (PI-2611, HD Microsystems, Parlin, NJ, USA), a 6-inch glass wafer was first coated with a 200 nm thick chromium (Cr) layer and a 700 nm thick copper (Cu) layer using vapor deposition. This sacrificial layer was then covered with a 30 μm thick polyimide film using a spin coater and baked at 300 °C, and an impact-resistant layer was created by depositing a 200 nm thick Cr layer onto the polyimide to increase its strength. This layer was then coated with a second, 30 μm thick polyimide layer and cured as described above. Furthermore, wet etching was performed to finish the impact-resistant layer and preserve the shape of the neural probe. The second polyimide layer was then vapor-deposited with a 700 nm thick Cu layer and a 100 nm thick Cr layer. The 16 microelectrodes, one reference electrode, and interconnecting traces were lithographically patterned using Mask #1, as shown in A(a–d). Subsequently, the metal circuits were shielded with a third, 3.2 μm thick polyimide layer, which was spun onto the trace layer. On this layer, 3.2 μm deep windows with lithographic patterns were opened using O2 plasma etching with Mask #2 (A(e)). To avoid cracks in the microelectrodes and bonding pads of the neural probe caused by the Kirkendall effect at the gold–copper interface, which limits the utility of gold–copper coatings in high-temperature environments and when a long lifetime is sought, palladium was electroplated onto the copper to suppress the voids that form at the boundary interface of gold–copper coatings. Following palladium electroplating, gold plating was performed to form the 3D-structured microelectrodes and bonding pads (A(f)). To remove the optrode from the glass wafer, the three polyimide layers were lithographically patterned with Mask #3 and etched with O2 plasma, and the Cu-based sacrificial layers were removed with a metal etchant, as shown in A(g,h). After aligning the tip of an optical fiber (200 μm diameter, 0.48 NA, Inper, Hangzhou, China) with Channel #5 of the neural probe and affixing it using UV resin (NOA65, Norland Products Inc., East Windsor, NJ, USA), the optrode was completed, as shown in B. A 3D finite element method (FEM) model of the optrode was constructed using a commercial software package (Maxwell®, ANSYS, Inc., Canonsburg, PA, USA) to assess the effects of DBS. A 2 × 2 × 2 mm³ cube of homogeneous and isotropic brain tissue was modeled as an axisymmetric volume conductor surrounding the DBS microelectrodes. The electrical resistivities of the gold microelectrodes and polyimide substrate of our self-assembled optrode were 2.439 × 10⁻⁸ and 1.667 × 10¹⁶ Ω·m, respectively.
To simulate the actual environment of the brain tissue in vivo, the in vivo impedance was measured using a pair of microelectrodes (Channels #1 and #4) on our self-assembled optrode with a sinusoidal voltage source (20 mV, <150 nA, at 1 kHz) generated with an LCR meter (Model 4263B, Agilent Technologies Inc., Santa Clara, CA, USA). Subsequently, the corresponding in vivo conductivity was calculated using Equation (1):

(1) Conductivity (K) [S/m] = (1 / in vivo resistance (R)) × (distance (D) / area (S))

where R [Ω] was measured between Channels #1 and #4, D, the distance between the two channels, was 450 μm, and S, the microelectrode area for a radius of 8 μm, was 200.960 μm². R was 0.543 ± 0.005 MΩ, and K was calculated to be 4.120 ± 0.041 S/m. K was then introduced into Equation (2) to describe the current density and electric field:

(2) J⃗ = K E⃗ = −K ∇V_e

where J⃗ is the current density, E⃗ is the electric field, and V_e is the extracellular potential, whose negative gradient gives the electric field. With a point source of current, I_v, in a medium assumed to be infinite, J⃗ could be described through its divergence and presented as Equation (3):

(3) ∇·J⃗ = ∇·(−K ∇V_e) = I_v

Under a homogeneous condition, Poisson's equation for V_e is described as Equation (4):

(4) ∇²V_e = −I_v / K

In source-free regions, conservation of the current requires that ∇·J⃗ = 0; there, V_e must satisfy a partial differential equation called the Laplace equation, presented as Equation (5):

(5) ∇²V_e = 0

A solution for V_e in Poisson's equation can be written as Equation (6):

(6) V_e = (1 / 4πε) ∫ (σ / r) dV

where ε is the dielectric permittivity, σ is the source density, and r is the distance between two points in the tissue. Based on these equations, the distribution of V_e could be calculated. In this FEM model, the mesh size was set to 10 μm, the in vivo conductivity, K, was set to 4.120 S/m for a homogeneous and isotropic tissue medium, as mentioned earlier, and the conductivities of the gold microelectrode and polyimide substrate of the self-assembled optrode were set to 4.100 × 10⁷ and 5.998 × 10⁻¹⁷ S/m, respectively. The relative permittivity of the homogeneous and isotropic tissue medium was set to 80. In this study, VTA was estimated using an activating function, defined as the second spatial derivative of the extracellular voltage along an axon, which can be described as Equation (7):

(7) f(n) = Δ²V_e/Δx² = {[V_e(n+1) − V_e(n)] − [V_e(n) − V_e(n−1)]} / L² = [V_e(n−1) − 2V_e(n) + V_e(n+1)] / L²

where n is a node position in the homogeneous and isotropic tissue medium, and L is the grid spacing used in the FEM model, which is 10 μm. The regions were assumed to be activated when f(n) > 0.
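The conductivity estimate in Equation (1) and the discrete activating function in Equation (7) are simple enough to verify numerically. The Python sketch below is illustrative only; the measured values come from the text, while the sample potential profile is hypothetical.

```python
import numpy as np

# Equation (1): K = (1/R) * (D/S), using the measured in vivo values
R = 0.543e6        # impedance between Channels #1 and #4 [ohm]
D = 450e-6         # distance between the two channels [m]
S = 200.960e-12    # microelectrode area for an 8 um radius [m^2]
K = (1.0 / R) * (D / S)
print(f"K = {K:.3f} S/m")          # ~4.12 S/m, close to the reported 4.120 S/m

# Equation (7): activating function as the second spatial difference of V_e
L = 10e-6                                          # FEM grid spacing [m]
x = np.arange(0.0, 500e-6, L)
V_e = 0.1 * np.exp(-((x - 250e-6) / 80e-6) ** 2)   # hypothetical potential profile [V]
f_n = (V_e[:-2] - 2.0 * V_e[1:-1] + V_e[2:]) / L ** 2
activated = f_n > 0                                # nodes counted toward the VTA estimate
print(f"{activated.sum()} of {f_n.size} interior nodes flagged as activated")
```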
The purpose of the MC simulation was mainly to investigate the changes in Ca2+ fluorescence emission and intensity from the VTA volume in response to varying DBS intensities. Our MC simulation was performed using lab-designed MATLAB® (2020R, MathWorks Inc., Natick, MA, USA) software and C code. A single simulation was conducted concurrently for the 465 nm excitation light and the 525 nm emitted fluorescence. The voxel size was set to 10 μm according to the mesh size from the FEM model as described in . The MC simulation modeled the optical fiber (200 μm in diameter, 0.48 NA) inserted into a 2 × 2 × 2 mm³ block of homogeneous brain tissue. To estimate the photon trajectories launched by the fiber, the light source of the optical fiber was assumed to be a defocused, uniformly dispersed beam. The position of the light source was located at Channel #5 of the optrode, corresponding to its position in the FEM model. The launch angle was 20.5°, based on the NA of the optical fiber. Furthermore, the absorption (μa) and scattering (μs) coefficients of the white matter were measured using an integrating sphere optical system (Ophir IS6, Ophir Optronics Solutions Ltd., Jerusalem, Israel) under both 465 nm and 525 nm laser wavelengths in order to fit the MC simulation environment to real brain conditions. To determine the fluorescence signal profile of a single fiber implanted in tissue, MC simulations of 10 million photon packets were emitted from the fiber, with the initial energy of each photon set to a weight (W) of 1. The starting coordinates and initial photon direction were selected based on the position of the optical fiber. As an optical fiber light source, a defocused, uniformly distributed beam was employed to predict the photon exit trajectories. The step size (ΔS) of a photon after it leaves the fiber must be less than the mean free path length of a photon in tissue, which is the reciprocal of the total attenuation coefficient. A function of a random variable (ξ) was used to efficiently generate a different step size for each photon step, as shown in Equation (8):

(8) ΔS = −ln(ξ) / (μa + μs)

where μa and μs were 4.642 cm⁻¹ and 257.631 cm⁻¹ for the 465 nm excitation light and 4.873 cm⁻¹ and 224.074 cm⁻¹ for the 525 nm emission fluorescence, respectively. After each propagation step, a fraction of the photon packet was absorbed. The fraction of the absorbed photon weight, ΔW, was calculated using Equation (9):

(9) ΔW = (μa / (μa + μs)) × W

The updated weight (W′) representing the fraction of the scattered packet was given by Equation (10):

(10) W′ = W − ΔW

The fluorescence quantum yield (QY) of a fluorophore is the fraction of absorbed photons resulting in fluorescence emission. Therefore, fluorescent photon emission occurs when an excitation photon propagates into a voxel where the fluorescence QY exceeds 0. When the absorbed photon weight was ΔW, the initial weight of the fluorescent photon packet, Wf, was calculated using Equation (11):

(11) Wf = ΔW × QY

As a result, QY was set to 1 if the activated region exhibited normal Ca2+ indicator expression. Finally, the energy of the emission fluorescence was denoted as φ and could be calculated by Equation (12):

(12) φ [W/cm²] = Wf × 1 / (μa V)

where μa is the absorption coefficient of the emission light, and V is the voxel volume of 10⁻³ × 10⁻³ × 10⁻³ cm³.
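The photon-weight bookkeeping in Equations (8)–(12) can be illustrated with a deliberately simplified loop. The Python sketch below is not the authors' MATLAB/C Monte Carlo code: it tracks only depth, ignores scattering direction, boundaries, and the launch geometry, uses far fewer photons, and relies only on the coefficients quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_a_ex, mu_s_ex = 4.642, 257.631   # 465 nm excitation coefficients [1/cm]
mu_a_em = 4.873                      # 525 nm emission absorption [1/cm], used in Eq. (12)
QY = 1.0                             # quantum yield assumed for the activated region
V = (10e-4) ** 3                     # 10 um voxel volume in cm^3

n_photons = 10_000                   # the study launched 10 million packets
total_fluo_weight = 0.0
for _ in range(n_photons):
    W, z = 1.0, 0.0                               # initial weight and depth [cm]
    while W > 1e-4 and z < 0.2:                   # stop at negligible weight or 2 mm depth
        xi = rng.random()
        dS = -np.log(xi) / (mu_a_ex + mu_s_ex)    # Eq. (8): random step size [cm]
        z += dS
        dW = (mu_a_ex / (mu_a_ex + mu_s_ex)) * W  # Eq. (9): absorbed fraction
        W -= dW                                   # Eq. (10): surviving weight
        total_fluo_weight += dW * QY              # Eq. (11): fluorescence emission weight

phi = total_fluo_weight / (mu_a_em * V)           # Eq. (12): emitted fluorescence energy density
print(f"phi ~ {phi:.3e} (arbitrary units for this simplified batch)")
```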
A fluorescence minicube (FMC5, Doric, Québec, Canada) was used as the light path of the photometry system. A 405 nm violet LED (CLED_405, Doric, Québec, Canada) and a 465 nm blue LED (CLED_465, Doric, Québec, Canada) were the light sources of the system. The lens collimated the two sources into dichroic mirrors such that the sources were collinearly oriented into an optical fiber for lighting. Then, the light from these two sources traveled through the optical fiber separately. The 465 nm blue LED was used to activate GCaMP6s, and the evoked Ca2+-dependent signals were measured in the 500–550 nm spectral window. The 405 nm violet LED was used to evoke Ca2+-independent signals, which were autofluorescence in brain tissue and were measured in the 420–450 nm spectral window. Two optical fibers were connected to an avalanche photodiode array (APD) (S8550-02, Hamamatsu Photonics, Hamamatsu, Japan) to limit light decay and to explore the fluorescence signals, as shown in . The signals of GCaMP6s and autofluorescence were recorded at Channels #9 and #25 on the APD array, respectively.
3.1. Estimation of VTA Volume

For thalamic DBS from the optrode, the effects of all four different DBS intensities (50 μA, 100 μA, 200 μA, and 300 μA), with comparable spatial patterns of activation in the XY, YZ, and XZ planes, are shown in A. Stimulus current density modulated the effects of DBS from the optrode, and the affected regions were observed to concentrate on Channels #1 and #4 (current source and reference, respectively).
Stronger effects of the stimulus current density were also observed to be concentrated on the current-source and reference microelectrodes. Subsequently, the VTA volume was estimated from the voxels where f(n) (Equation (7)) exceeded 0, which represented the activated area. Increasing the stimulus current density resulted in a larger VTA volume ( B). Because the optrode was simulated in the same homogeneous brain tissue, the simulation results suggest that a change in stimulus current density could mediate a change in VTA volume.

3.2. Estimation of Simulated Ca2+ Fluorescence Intensity

A shows the fluorescence distributions evoked by thalamic DBS. The activated regions were derived from the VTA, and QY was set to 1 in these voxels, assuming that GCaMPs were fully expressed in the activated regions. The simulated Ca2+ fluorescence intensity was likewise concentrated around the current source and the reference. Stronger simulated Ca2+ fluorescence intensity also corresponded to stronger DBS intensity. To validate the value of the simulated Ca2+ fluorescence, φ in every voxel was summed in response to the different DBS intensities ( B). A larger φ was found under stronger DBS intensity. In the MC simulation, because the activated regions were predetermined by the VTA volume, a similar distribution of VTA volume and Ca2+ fluorescence signal was observed.

3.3. Acute Ca2+ Fluorescence and LFP Recordings under In Vivo Thalamic Stimulation

A shows the GCaMP expression and optrode position. The optrode was implanted at an anteroposterior (AP) level of −3.48 mm relative to Bregma. Using the rat brain atlas, the position of the optrode was determined to be in the VPM (AP: −3.48 mm, ML: 2.50 mm, DV: −6.80 mm). B shows an enlarged fluorescence image. The expression of GCaMP6s–GFP was examined with the fluorescence microscope using 475 nm excitation light. Acute Ca2+ fluorescence and LFPs were recorded while the rats were exposed to varying DBS intensities. A shows the in vivo recordings of acute Ca2+ fluorescence and the LFP. The first 10 s of the recording indicated the Ca2+ fluorescence baseline, whereas the last 10 s indicated stimulation rest. DBS was delivered between 10 and 30 s of the recording. Stimulation artifacts were observed in the LFP recording, corresponding to the DBS frequency. Increasing the DBS intensity produced an observable increase in both acute Ca2+ fluorescence and LFP. In addition, increased Ca2+ fluorescence signals were observed to rise about 5 s after DBS initiation. Increasing the DBS current density induced a stronger response in both LFP and Ca2+ fluorescence signals, consistent with the simulated results for VTA volume and the MC simulation. B shows the first 30 ms of the average amplitudes from the elicited LFP measurements taken every 333 ms. Similar LFP amplitude patterns were observable even when the DBS intensity was varied. Subsequently, the absolute values of the first 30 ms of average amplitudes under different DBS intensities were summed to quantify the effects of varying DBS intensities, denoted as ∑LFP, as shown in C. To determine the relationship between DBS-evoked LFP and DBS-evoked Ca2+ fluorescence signals under varying stimulus current densities, ∑LFP was further verified against VTA volume and simulated Ca2+ fluorescence signals with linear curve fitting.
3.4. Linear Relationship among VTA Volume, Simulated Ca2+ Fluorescence Intensity, Ca2+ Fluorescence Intensity In Vivo, and ∑LFP

A shows the linear curve fitting between the VTA volume and simulated Ca2+ fluorescence intensity. The coefficient of determination (R²) between the VTA volume and simulated Ca2+ fluorescence intensity was 0.976. The significant linearity indicated that the two experiments produced similar results. B shows the linear curve fitting between ∑LFP and VTA volume for a tissue conductivity of 4.120 S/m. An optimal fit (R² = 0.995) between ∑LFP and VTA volume was obtained. C depicts the correlation between the simulated Ca2+ fluorescence intensity and the Ca2+ fluorescence intensity in vivo. The optimal coefficient of determination (R² = 0.956) between simulated Ca2+ fluorescence intensity and Ca2+ fluorescence intensity in vivo was obtained for a QY of 1. D shows the linear curve fitting between ∑LFP and Ca2+ fluorescence intensity in vivo. The optimal coefficient of determination (R² = 0.997) between ∑LFP and Ca2+ fluorescence intensity in vivo was obtained. This result indicated an ideal sensitivity to predict the evoked Ca2+ fluorescence intensity in vivo based on the evoked LFP. Therefore, significant linear relationships existed among the VTA volume, simulated Ca2+ fluorescence intensity, Ca2+ fluorescence intensity in vivo, and ∑LFP.
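For readers who want to reproduce this kind of comparison, the fit between any two of the paired measures reduces to an ordinary least-squares line over the four stimulus intensities. The Python snippet below is illustrative only; the four data points are placeholders rather than the study's values, and the actual analysis was performed in SPSS.

```python
import numpy as np
from scipy.stats import linregress

dbs_intensity = np.array([50, 100, 200, 300])   # uA
vta_volume = np.array([0.8, 1.5, 2.9, 4.1])      # hypothetical VTA volumes (a.u.)
sum_lfp = np.array([0.6, 1.2, 2.4, 3.5])         # hypothetical sum-LFP values (a.u.)

fit = linregress(vta_volume, sum_lfp)
print(f"slope = {fit.slope:.3f}, R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.3f}")
```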
4.1. The Advance of Self-Assembled Optrode

In this study, our self-assembled optrode was capable of concurrently recording Ca2+ fluorescence signals and electrophysiology. The optical fiber was aligned with Channel #5 of the optrode, whereas Channels #1, #2, and #4 served as the current source, recording site, and reference, respectively. In addition, UV resin was employed as the adhesive owing to its transparency and ability to solidify when exposed to UV light.
The former characteristic minimized light absorption loss, whereas the latter facilitated the optrode-assembly process. Optrode assembly has been demonstrated in previous studies. Stocke and Samuelsen used a 3D-printed, self-made fixture to firmly attach an optical fiber to the optrode. The design of the self-made fixture could be modified to accommodate various diameters and lengths of optical fibers, and the firmness of the 3D-printed fixture was shown to withstand long-term in vivo recording (12 days). Therefore, a 3D-printed, self-made fixture that complies with the probe and optical fiber specifications may be an improved method for chronic in vivo recording. Sileo et al. adhered a tapered optical fiber to a multichannel probe using UV resin. The tapered optical fiber provided homogeneous illumination of the recording site, thereby reducing the loss of photonic energy by suppressing photon propagation in the brain tissue. Although the tapered optical fiber was demonstrated to be less invasive in the in vivo experiment, the tapered-optical-fiber-based optrode could only capture the electrophysiological signal evoked by the excitation light. Unlike the tapered-optical-fiber-based optrode, our self-assembled optrode could simultaneously record fluorescence and electrophysiological signals evoked by electrical stimulation. In addition, the microelectrode array of the optrode enabled accurate electrical stimulation and recording of small brain regions, indicating that the self-assembled optrode is a promising biosensor for investigating acute neuronal activity in vivo under varying DBS intensities.

4.2. The Correlation between Simulated Results and In Vivo Experiments

Due to the assumed normal expression of GCaMPs in the VTA regions, the linear correlation data showed a strong positive association between VTA and simulated Ca2+ fluorescence intensity. The QY of the VTA region was assumed to be 1, so simulated fluorescence would be emitted once the region was activated by the excitation light, indicating that the VTA volume had a strong positive correlation with the simulated Ca2+ fluorescence. The most remarkable finding was that the VTA volume corresponded to the ∑LFP under varying DBS intensities, indicating that the predicted VTA volume was significantly correlated with the summation of the LFP amplitude. VTA was judged to be the most important benchmark for measuring the effects of the stimulation, whereas LFP was the neuronal response generated by different DBS parameters. The present study effectively connected the simulation to the in vivo state by demonstrating that changes in VTA volume maintained a significant positive correlation with varying DBS intensities, indicating that VTA volume estimation using FEM with a homogeneous and isotropic medium may be appropriate in the rodent model. However, the correlation between the simulated Ca2+ fluorescence intensity and the Ca2+ fluorescence intensity in vivo was weaker, probably because QY was set to 1 in the MC simulation, which did not reflect the real condition of GCaMP expression in the activated regions and led to a bias in the MC simulation model compared to the in vivo experiment. Based on previous studies, the efficiency of GCaMP expression should be considered in order to simulate the real conditions of an animal model. For instance, the GCaMP expression ratio varied at distinct post-injection time points.
Because several factors, such as the use of transgenic animals, different strains of AAV-mediated expression, and the plasmid design, affect the ratio of efficient GCaMP expression, the QY of the emitted fluorescence should be dynamically adjusted for these factors in order to satisfy the necessary conditions for MC simulation. The linear curve fitting results showed that both VTA volume and simulated Ca2+ fluorescence significantly and positively correlated with in vivo signals under varying DBS intensities, although the MC simulation at this stage could not match the actual condition of the animal model. Therefore, unnecessary in vivo experiments and optrode fabrication can be reduced by incorporating VTA and MC simulation into the optrode design phase.

4.3. Comparison of Ca2+ Photometry and Electrophysiology

The linear curve fitting results indicated that Ca2+ fluorescence intensity in vivo correlated with LFP under varying DBS intensities, indicating a link between Ca2+ and neural activity. Without a doubt, calcium facilitates the neuronal transmission of neurotransmitters and the subsequent generation of action potentials. Therefore, investigating alterations in the Ca2+ signal in brain tissue is an appropriate method for confirming neural activity. Ca2+ photometry has gained widespread application in neurology owing to its ability to provide recordings free of electrical artifacts. Electrophysiological investigations, notably those aiming to understand how DBS affects the neurons in the target structure, have met major challenges owing to artifacts; hence, several strategies for artifact removal have been developed. However, unlike electrophysiology, the photometry method for measuring calcium indicators suffers from poor temporal resolution. GCaMP calcium measurement provides a signal integrated over many spikes, whereas conventional single-unit electrophysiology detects individual action potentials. Furthermore, the decay time and the rise time of GCaMPs exceeded the electrical response and the electrical stimulation periods in this study, making it difficult to observe neural activity during the electrical stimulation period. Fiber photometry has additional drawbacks because it further averages signals across a population of neurons and from several neuronal compartments (dendrites and soma). A rise in fiber photometry signals might suggest either an increase in overall firing or an increase in population synchronization. Recent research in the striatum comparing neuronal firing and fiber photometry (using simultaneous multi-unit electrophysiology and fiber photometry) revealed that although fiber photometry signals were longer and may represent dendritic Ca2+ influx related to back-propagation, an initial phase of the calcium signal correlated well with firing, suggesting that the Ca2+ signal reflects both neuronal firing and the dendritic Ca2+ influx phenomenon. Despite the limitations of the fiber photometry technique, Ca2+ remains an important biomarker for observing neural reactions, owing to its role in mediating the release of neurotransmitters. Especially in some disease models, such as Parkinson's disease, Alzheimer's disease, and depression, fluorescence signals of Ca2+ have helped to reveal abnormal brain circuits and networks. With the development of photometry techniques and genetic engineering, the limitations of the measurement and of the Ca2+ indicators may be overcome in the future.
5. Conclusions

In this study, the optrode was successfully fabricated with a low-cost, simple assembly process that provides spatial flexibility in attaching the optical fiber onto the microfabricated neural probe. Furthermore, our self-assembled optrode was capable of performing the multiple functions of electrical/optical stimulation and concurrent recording of LFPs and Ca2+ fluorescence signals, which was used to investigate the relationship between Ca2+ signaling and electrophysiological performance. In addition, the simulated data for different intensities of DBS, including the estimated evoked VTA volumes and the corresponding changes in MC-simulated Ca2+ fluorescence signals, were confirmed to have a strongly positive correlation with in vivo recordings made using our self-assembled optrode. These data suggest that the electrophysiological performance was consistent with the phenomenon of Ca2+ influx at neural synapses, corresponding to the role of Ca2+ as a neurotransmitter mediator that subsequently induces postsynaptic potentials. The developed optrode was successfully used to validate our numerical results on the DBS-evoked VTA estimation and the corresponding Ca2+ fluorescence excitation from the Monte Carlo modeling. It is therefore a promising tool for investigating the coupling between electrophysiology and cellular Ca2+ signals in the neuroscience research field.
Testing verbal quantifiers for social norms messages in cancer screening: evidence from an online experiment | d73d6312-c8af-43d7-aa07-7676e5800ebc | 6542069 | Health Communication[mh] | Individual decision making, and behaviour is often influenced by the perception of other people’s behaviour (descriptive social norms) and what behaviour is approved by other important people and society (injunctive norms) . Social norms provide people with a standard behaviour for a specific situation from which they do not want to deviate . Social norms can therefore be defined as rules that are understood by members of a group . Various studies have shown that social norms positively influence health behaviours . Therefore, there is growing interest in communicating normative information to encourage more people to engage in preventive health behaviours . While some studies have looked at the influence of social norms on cancer screening attendance or intentions only few have tried to influence screening behaviour by communicating normative information . Two studies have failed to encourage screening behaviour by communicating high uptake and preferences for a specific screening test, but they used relatively low social norms messages [15;16]. In Sieverding and colleagues’ experimental study with men aged 45 or older, they compared intentions following either a high (65%), low (18%) or no prevalence message . They found that men in the low-prevalence group reported less intention to undergo cancer screening and were less likely to leave their name and address to receive further information about cancer screening by mail. Similarly, a recent study by Schwartz and colleagues, that used verbal information about people’s choice of bowel cancer screening tests, such as many people, did not find any effect on intention, test preference, or uptake . In a recent experimental study by von Wagner and colleagues it was shown that correcting an initial belief about colorectal cancer screening uptake upwards (i.e. stating the correct answer was initial belief plus 30%) increased screening intentions among previously screening disinclined men and women . In their study, they initially asked participants to estimate how many people out of 10 they believe do the test and then provided them with a social norms messages that stated that uptake is higher than estimated or correct. Importantly, the messages used in their study were specifically designed to prove that, in principle, correcting normative beliefs increases intentions. For this purpose, they used messages that mapped on to the participants’ pre-conceived hypothesis rather than actual uptake of 43% of the English Bowel Scope Screening (BSS), which consists of an invasive flexible sigmoidoscopy test that is offered to 55 years old men and women . Specifically, they communicated to disinclined participants that uptake was either what was expected, or 30 percentage points higher (e.g. 70% instead of 40%) or that uptake was 80%. They found that a social norms message stating that 80% participate in the screening programme yielded the highest impact on intention. Personalised feedback by referring to the person’s own belief did not influence this effect. 
Based on the results by Sieverding and colleagues, one would expect a demotivating effect of communicating an uptake of 43% for the overall population, but as beliefs about uptake are positively correlated with one's own screening status, the information could still be motivating for non-intenders who originally believed that fewer than 43% participate. Moreover, information about descriptive norms can be provided in the form of concrete numbers (e.g. 43% of all eligible people do the test) or in the form of verbal quantifiers (e.g. many eligible people do the test). Until now, most studies that aim to address health-related intentions or behaviours have used exact numbers [5–9; 15; 17]. So far only two studies have tested the use of verbal quantifiers to communicate normative behaviour in the context of cancer prevention [16; 19]. While Schwartz and colleagues did not find that communicating that many people choose the test influenced preferences or screening uptake, their study design of combining the social norms message with four additional messages does not allow us to determine whether the social norms message alone would have any effect. Similarly, Zikmund-Fisher and colleagues looked at whether telling women that 'most women' or 'a few women' take adjuvant chemotherapy following breast cancer surgery had a similar effect on intentions as telling them that 60% or 5% choose it. They found that the exact numerical norms messages about the popularity of chemotherapy had a greater effect on intentions than the less precise verbal quantifier. They concluded that verbal quantifiers are less effective because they are less precise. While these two studies do not suggest that verbal quantifiers are effective ways to communicate social norms, other studies argue that their vague meaning and subjective interpretation could make them more or less motivating than exact numbers. In Bocklisch and colleagues' study, participants believed that a 'possible' event had an average likelihood rating of 51.4 out of 100, with a standard deviation of 21.6. The large standard deviation suggests considerable variance between individuals in interpreting the verbal probability expression. Similar effects were found for frequency estimates in Wänke's study. The individual variance in verbal interpretations has primarily been assessed in terms of the problems it creates for survey research, such as the misinterpretation of Likert scales. Yet the vagueness of interpretations could be harnessed to influence perceptions of normative behaviour. Specifically, using verbal quantifiers for screening programmes with low uptake could mitigate the risk of demotivation, as some people may believe that the quantifier implies a higher uptake.

The current research

In this study, we set out to test whether verbal quantifiers could be used to increase intentions to have bowel scope screening (BSS), a test for 55-year-olds offered as part of the English Bowel Cancer Screening Programme. Specifically, we wanted to compare verbal quantifiers to a precise numerical norms message and a control condition without any information about normative screening behaviour. In line with previous experimental studies, we focused on individuals who initially expressed little or no interest in BSS, to minimise ceiling and social desirability effects often associated with self-reported intention measures. We also wanted to simulate a targeted intervention aimed at non-attenders, who are most in need of an effective behavioural intervention.
Furthermore, by using only disinclined study participants, we mitigate the problem of demotivating participants, as expectations about uptake are positively linked with screening behaviour [13;17]. In Sieverding and colleagues' study, non-attenders estimated that only 28% of other men participate in the German CRC programme, whereas irregular and regular attendees estimated that between 36 and 45% do the screening test. For the purpose of testing the hypothesis of whether verbal quantifiers are better or worse than numerical norms in communicating low-prevalence information, in terms of motivating disinclined men and women to attend BSS, we conducted two separate studies. In Study 1, participants were presented with eight quantifiers for 43%, the current uptake of BSS in England. The quantifiers are listed in Table but were presented to participants in a random order. Participants were asked to translate each quantifier into a proportion and then to indicate how misleading they perceived each quantifier to be after being debriefed about the true uptake of 43%. Study 2 then compared the motivational impact of two of these quantifiers with a control message that did not contain any prevalence information, and a message which communicated actual uptake as a proportion (43%). Thus, while Study 1 looked at the effect of using quantifiers on interpretation, Study 2 looked at whether descriptive norms can be used to increase intentions to participate in BSS. Comparing the numerical norms message with the control condition also revealed whether an uptake of 43% is perceived as demotivating or motivating in the context of BSS. Furthermore, we also investigated in Study 2 the impact of our three normative messages on interest in reading more about the benefits and harms of the screening test. This active interest, demonstrated by a study participant wanting to read further information, was used as a proxy for real behaviour in line with the literature on the intention-behaviour gap. Additionally, we also used this question to gain a better understanding of how this nudge would facilitate or undermine people's ability to make an informed choice about screening. Nudge-type interventions such as social norms interventions have been criticized in terms of informed decision making. As interventions should avoid being manipulative or paternalistic, to enable people to make an informed choice based on knowledge of the harms and benefits of cancer screening, it is important to know whether nudges influence information-seeking behaviour. As informed choice is typically measured through relevant knowledge consistent with the decision maker's values, we also measure comprehension of the additional information. We report all measures, manipulations, and exclusions in these studies. Sample size for Study 2 was calculated prior to data collection, based on estimates obtained from a pilot study, so that it was sufficiently powered to detect differences of at least 10% between experimental conditions in the proportion of participants choosing to do the screening test, with a power of 80% and an alpha value of 0.05. All statistical analysis was conducted with Stata/SE version 15.1 (StataCorp LP, College Station, TX).

Study 1

The primary aim of Study 1 was to identify quantifiers that translated into the highest uptake and that were not perceived as misleading, with a view to including them in Study 2. No hypotheses were made about Study 1 due to its exploratory nature.
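For illustration, the kind of a priori power calculation described above for Study 2 can be reproduced as follows. The 35% baseline proportion is a hypothetical pilot estimate chosen purely for the example (the paper does not report the pilot figure here), and the actual calculation was not necessarily performed this way.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.35        # assumed pilot estimate of choosing the screening test
p_intervention = 0.45   # smallest difference of interest: +10 percentage points

effect = proportion_effectsize(p_intervention, p_control)   # Cohen's h
n_per_group = NormalIndPower().solve_power(effect_size=effect, power=0.80,
                                            alpha=0.05, alternative="two-sided")
print(f"~{n_per_group:.0f} participants needed per condition")
```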
We report all measures, manipulations, and exclusions in these studies. The sample size for Study 2 was calculated prior to data collection, based on estimates obtained from a pilot study, so that the study was sufficiently powered to detect differences of at least 10% between experimental conditions in the proportion of participants choosing to do the screening test, with a power of 80% and an alpha value of 0.05. All statistical analyses were conducted with Stata/SE version 15.1 (StataCorp LP, College Station, TX).
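For readers who want to see how a comparable power calculation can be reproduced, the sketch below runs a two-proportion power analysis in Python with statsmodels. The 8% baseline proportion is an illustrative assumption, not a figure taken from the pilot study, and the published analysis was performed in Stata rather than with this code.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative assumption: ~8% of disinclined participants intend to screen in the
# control condition; we want 80% power to detect an absolute increase of 10 percentage
# points (i.e. 18% in an experimental condition) at alpha = 0.05.
p_control = 0.08
p_experimental = p_control + 0.10

effect_size = proportion_effectsize(p_experimental, p_control)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, power=0.80, alpha=0.05, ratio=1.0, alternative="two-sided"
)
print(f"Required participants per condition: {n_per_group:.0f}")
```

The required sample size is sensitive to the assumed baseline proportion, which is why pilot estimates were used before data collection.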
Study 1

The primary aim of Study 1 was to identify quantifiers that translated into the highest uptake and that were not perceived as misleading, with a view to including them in Study 2. No hypotheses were made about Study 1 due to its exploratory nature.
Participants

We recruited 915 men and women aged 35–54 from a survey panel (Survey Sampling International); those with a previous diagnosis of bowel cancer or who had had part of their bowel removed were excluded. Similar to previous studies, we presented eligible respondents with a description of BSS and asked them to correctly identify the test as invasive before stating their intentions to attend BSS [17;23]. For this within-person analysis, only those who stated that they would definitely (N = 49; 24.3%) or probably (N = 153; 75.7%) not do the test when invited were included (see Additional file : Figure S1 for flow through Study 1). Details of the respondents’ age, ethnicity, marital status, education and employment were collected at the end of the survey (see Additional file : Table S1 for details about participants’ characteristics in Study 1). Most people in our sample were aged between 45 and 54 (45.5%), female (54.5%), married or cohabiting (61.9%), White British (82.7%), in paid employment (75.2%) and had A-level or higher education (62.4%).
Procedures and measures

Eligible participants were presented with eight verbal quantifiers of BSS uptake (see Table ) in a random order and asked to translate each of them into uptake from 0 to 100%. Each participant was then asked to indicate how misleading each expression was, on a scale from 0 (not misleading at all) to 100 (very misleading), with reference to the true uptake of 43%. Note that before participants were asked how misleading they perceived the messages to be, they were first asked to translate the percentage of 43% into a proportion out of 1000 to reduce the risk of misunderstanding. We further asked participants to compare the quantifiers based on their accuracy and whether they should be used for communication to the public, with the questions: ‘Which of the following statements most accurately describes 43% participation?’ and ‘Which of the following statements should be used by the screening programme to describe 43% participation?’ Participants’ numeracy and cancer health literacy were assessed using three questions adapted from Lipkus and colleagues and the six questions from Dumenci and colleagues’ CHLT-6 questionnaire. For both measures, scores were calculated.
Statistical analysis

As answers to the translation and misleadingness questions were not normally distributed (see Additional file : Figure S2 and Figure S3 for the distribution of answers), we used medians as measures of central tendency and calculated confidence intervals for each quantifier using nonparametric bootstraps. Friedman and Wilcoxon signed rank tests were used to compare the quantifiers.
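As an illustration of this analysis pipeline, the sketch below uses Python (numpy and scipy) with simulated placeholder data rather than the survey data; the variable names, the 202-by-8 data layout and the 10,000 bootstrap resamples are assumptions for the example only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data: rows = 202 participants, columns = the eight quantifiers,
# values = translated uptake (0-100%). A real analysis would load the survey data instead.
translations = rng.integers(0, 101, size=(202, 8)).astype(float)

def bootstrap_median_ci(x, n_boot=10_000, alpha=0.05):
    """Nonparametric bootstrap confidence interval for the median."""
    resamples = rng.choice(x, size=(n_boot, x.size), replace=True)
    boot_medians = np.median(resamples, axis=1)
    lower, upper = np.percentile(boot_medians, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(x), (lower, upper)

median_q1, ci_q1 = bootstrap_median_ci(translations[:, 0])

# Friedman test: do translations differ across the eight repeated quantifier ratings?
chi2_stat, p_friedman = stats.friedmanchisquare(*(translations[:, j] for j in range(8)))

# Wilcoxon signed-rank test: does one quantifier's translation differ from the true uptake (43%)?
w_stat, p_wilcoxon = stats.wilcoxon(translations[:, 0] - 43)

print(median_q1, ci_q1, p_friedman, p_wilcoxon)
```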
Table , as well as Figs. and , summarises the median values of uptake and perceived misleadingness ascribed to each quantifier. A Friedman test indicated that translations differed significantly between quantifiers (χ²(7) = 31.118, p < 0.001). Wilcoxon signed rank tests showed that for all but two quantifiers (‘numerous’ and ‘nearly half’), the translations of uptake differed significantly from the true uptake value of 43%. ‘A large number’ and ‘a great number’ had the highest median translation (50.5%). All quantifiers, except for ‘nearly half’, were perceived as similarly misleading (χ²(7) = 48.326, p < 0.001). Looking at which quantifier respondents perceived as most accurate and ideal for communication to the public, Fig. reveals that ‘nearly half’ was the most popular choice (57.7% named it most accurate and 55.2% thought that it should be used for communication). There was a strong positive correlation between perceived accuracy and the quantifier rated best for public communication (r(202) = .716, p < 0.001), indicating that participants thought that public information campaigns should communicate accurate information. The similarity between the accuracy and communication ratings (Fig. ) further underlines the importance of accurate information. From an ethical standpoint, communicating accurate information is essential, though it is equally important to avoid adverse effects. The results of this study confirm that individuals interpret quantifiers for screening uptake differently. Almost all quantifiers were thought to represent uptake of more than 43%. Importantly, the most popular and least misleading quantifier (‘nearly half’) was perceived as 43%, suggesting that using it in a normative message should not be different from communicating it as a proportion. The next step to better understand the use of quantifiers for normative messages would be to examine whether they exert any effect on screening intentions in an experiment that features a between-subjects design. To this end, Study 2 compared the most popular quantifier ‘nearly half’ and ‘a large number’, which, together with ‘a great number’, elicited the highest uptake but had a slightly lower misleadingness rating than the latter, with a more traditional normative message that states the proportion of people having the test. This approach allowed us to test both a high descriptive norm message and a low, but accurate, descriptive norm message.

Study 2
The primary aim of Study 2 was to compare the effects of different normative messages on screening intentions among a group of previous non-intenders. Specifically, we compared the two normative quantifiers (‘ nearly half’ and ‘ a large number ’) with a numerical description of uptake and a message without any uptake information. Furthermore, in line with the discussion about the ethics of using normative information in cancer screening, we tested whether the messages undermine people’s ability to make an informed choice about screening and reduce the likelihood that they would decide to read further facts and figures about BSS. Finally, using comprehension checks, we looked at whether the messages affected information processing.
Participants

The sampling method was identical to that in Study 1 but used a different pool of participants. A total of 5484 people who had not participated in Study 1 started the survey. The 1294 eligible respondents (see Additional file : Figure S4 for flow through the survey), who indicated that they would definitely not (N = 270; 20.9%) or probably not (N = 1024; 79.1%) do the test, were randomised to one of four experimental conditions with equal probability. Sociodemographic characteristics of the final sample of 1245 (96.2%) were similar to those in Study 1, and variables were balanced between the four experimental conditions (see Additional file : Table S2 for descriptive statistics of the study sample). Most respondents were aged 45–54 (56.5%), female (53.1%), White British (80.0%), married or cohabiting (59.6%), in paid employment (75.8%) and had A-level or higher education (62.2%).
Procedures and measures

Each participant received a paragraph of information about what happens during the screening test. For those in one of the three experimental conditions, an additional norms message (in bold) was added at the end of the paragraph: ‘Currently, 43% …’, ‘Currently, nearly half …’, and ‘Currently, a large number … of men and women who are eligible to participate do so.’ Similar to Sieverding and colleagues, we subsequently asked participants about their intentions and whether they wished to read further facts and figures about BSS, termed active interest. The post-exposure intention question was measured in a similar way to the filter question, simply adding the prefix ‘Given the previous information …’ to “Would you take up the offer of bowel scope screening?” and featured the same fully labelled 4-point Likert scale response options (‘definitely not’, ‘probably not’, ‘probably yes’ and ‘definitely yes’). Active interest was operationalised as the decision to read further facts and figures about BSS, rather than skipping that section. The question was adapted from a previous study and featured the response options ‘read information on next page before continuing with survey’ and ‘skip information on next page and continue with survey’. Those who opted to read the information were asked three additional multiple-choice comprehension questions to measure engagement. “Based on what you have just read …” was followed by (1) “… does bowel scope screening have any physical risks?”, (2) “… does bowel scope screening detect all potential cancer?” and (3) “… how many people think that the test is painful?” Before debriefing, participants in all conditions were asked, based on the information they had read, how many people they thought participated in BSS (0–100%). This question was used to measure comprehension for participants in the numerical condition, interpretation of the verbal quantifiers in the ‘nearly half’ and ‘a large number’ conditions, and beliefs about uptake in the control condition. Finally, respondents completed the CHLT-6, the numeracy skills test and the demographic questions as in Study 1.
Statistical analysis

We used Chi-square tests of independence and logistic regressions adjusted for baseline intentions and sociodemographic variables to investigate the effect of the normative messages on dichotomised post-exposure intentions to participate in BSS. Intentions were reclassified (‘yes, probably’ and ‘yes, definitely’ versus ‘probably not’ and ‘definitely not’) due to low frequencies in some answer categories. Active interest in reading about the screening test and engagement with the information were analysed using Chi-square tests of independence and Kruskal-Wallis tests. Due to the non-normal distribution of the answers about the beliefs, understanding and comprehension of uptake, we used medians as measures of central tendency.
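A minimal sketch of how such an unadjusted and adjusted comparison could look in Python is given below; the data frame, variable names and coding are hypothetical stand-ins, since the published analysis was run in Stata with the study's own variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 1245

# Hypothetical coding of the Study 2 variables; names and categories are stand-ins.
df = pd.DataFrame({
    "condition": rng.choice(["control", "numerical", "nearly_half", "large_number"], size=n),
    "baseline_intention": rng.choice(["definitely_not", "probably_not"], size=n),
    "age_group": rng.choice(["35-44", "45-54"], size=n),
    "female": rng.integers(0, 2, size=n),
    "intends_post": rng.integers(0, 2, size=n),  # dichotomised post-exposure intention
})

# Unadjusted comparison: chi-square test of independence between condition and intention
chi2_stat, p_value, dof, _ = chi2_contingency(pd.crosstab(df["condition"], df["intends_post"]))

# Adjusted comparison: logistic regression with the control condition as reference level
model = smf.logit(
    "intends_post ~ C(condition, Treatment(reference='control'))"
    " + C(baseline_intention) + C(age_group) + female",
    data=df,
).fit(disp=False)
adjusted_odds_ratios = np.exp(model.params)  # exponentiated coefficients vs control
print(adjusted_odds_ratios)
```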
While Table and Fig. show that only the ‘nearly half’ message significantly increased screening intentions in the univariate analysis compared with the control message (14.3% vs 7.8% χ 2 (1, 649) =7.15, p = 0.008), and no other message significantly increased the proportion of intenders ( ‘numerical’ : 9.9%, p = 0.326; ‘a large number’ : 12.5%, p = 0.051), the multivariate analysis revealed that, after adjusting for baseline intentions and sociodemographic variables, both ‘ nearly half’ (aOR 2.02, 95% CI 1.20–3.38, p < 0.01) and ‘a large number’ (aOR 1.72, 95% CI 1.00–2.96, p < 0.05) were associated with a significantly greater proportion of intenders compared to the control condition. The fFull model is included in the Supplementary file (see Additional file : Table S3). Note that due to the low number of study participants initially indicating that they would definitely not have the screening test when invited ( N = 256), we could not analyse the effect of the social norms messages separately for those who answered ‘ definitely not ’ and ‘ probably not ’ at the initial intention question. Looking at whether the normative messages influenced information seeking and engagement, Table and Fig. reveal that, independent of experimental condition, around 38% of respondents stated that they wanted to read more (36.2–42.2%, χ 2 (3, N = 1245) =4.41, p = 0.220). Furthermore, while most participants who read the additional information about the risk and benefits of the screening test got around 2 out of 3 comprehension questions right, a Kruskal-Wallis test did not reveal any differences in BSS knowledge across the conditions ( χ 2 = 2.59, p = 0.274, df = 2). Additional adjusted linear regression confirmed that there was no difference across the three experimental conditions ( ‘numerical’ : Beta − 0.075, 95% CI -0.289–0.139; ‘a large number’ : Beta − 0.091, 95% CI -0.318–0.137 and ‘ nearly half’ : Beta − 0.161, 95% CI -0.370–0.119; see Additional file : Table S4 for the full linear regression model). Thus, our normative messages did not negatively affect information seeking and processing. The results of the Study 2 show that normative quantifiers can be used to increase interest in screening programmes with low uptake. In contrast to what we expected, the low but more accurate descriptive norm message was as motivating as the high normative quantifier. A closer look at how respondents interpreted the quantifiers only partially confirmed the findings of Study 1 (see Additional file : Figure S5 for distribution of answers). Respondents thought that both quantifiers referred to uptake significantly greater than 43%. As in Study 1, ‘ a large number ’ was translated into the highest uptake ( Median = 51%, SD: 21.91); however, in contrast to Study 1 ‘ nearly half’ was translated as half ( Median = 50, 17.76). Interestingly, both those who were provided with information about uptake in proportions and those who didn’t receive any normative message indicated at the end of the experiment that they thought uptake would be close to half (numerical condition: Median = 46%, SD: 17.77; control condition: Median = 49%, SD: 22.23). The result of the control condition is in contrast with Sieverding and colleagues’ study which suggested that non-attenders estimated that only 28% undergo CRC screening .
To our knowledge, this was the first study that compared verbal quantifiers to numerical information in the context of colorectal cancer screening. In two online surveys, we identified and tested promising verbal quantifiers for a cancer screening uptake of 43%. In contrast to Zikmund-Fisher and colleagues, we found that communicating that ‘nearly half’ or ‘a large number’ of men and women eligible for the test participate in the screening programme increased intentions to do the test among previously disinclined men and women. Using an exact numerical norms message did not affect intentions. Interestingly, the quantifier ‘nearly half’, which was rated as least misleading and most accurate in Study 1, worked as well as the more misleading quantifier ‘a large number’. While this suggests that the quantifiers could be used to improve the perception of low compliance rates, the vagueness of verbal quantifiers did not appear to fully explain this, as there was no difference with regard to how participants reacted to and interpreted the two quantifiers. Thus, information campaigns do not need to exaggerate the number of people who have already participated through vague and potentially misleading quantifiers but rather should correctly inform people. Furthermore, the numerical description did not decrease motivation as seen in previous experiments and campaigns, as the communicated numerical social norms message was in line with beliefs about uptake. Importantly, we demonstrated that paraphrasing uptake using quantifiers did not negatively influence information seeking and engagement. The use of normative messages, therefore, did not seem to undermine informed decision making in the current study. Our study has several limitations. Firstly, we only assessed intentions to participate in cancer screening and willingness to read more about the test. Therefore, the utility of verbal descriptive norms in changing screening behaviour cannot be determined. Intention does not necessarily translate into behaviour, an effect commonly referred to as the ‘intention-behaviour gap’. Additional strategies may be required to build on motivational changes to aid actual screening attendance, such as implementation intentions. Secondly, we only tested verbal quantifiers for a single value (43%), while a wider range of values would be needed to check the generalizability of the findings. A further limitation was that the respondents’ first language was not controlled for. Non-native English speakers may have interpreted the verbal quantifiers differently. The issue of language may have been exacerbated by using a survey vendor that does not carry out prior language skill checks. Future work should include a language check. Moreover, the influence of perceived accuracy and credibility of normative messages on intentions and subsequent behaviours warrants further investigation. Study 1 identified a strong correlation between perceived accuracy and the quantifier rated best for public communication, echoing previous research where accuracy was considered the most important characteristic of informational messages. Finally, the above suggestions exemplify how the results of the current study could be incorporated into an evidence-based leaflet or document.
This study highlighted the potential of using verbal quantifiers for social norms interventions. While our systematically identified verbal quantifiers increased screening intentions among previously disinclined men and women, a traditional numerical norms message did not affect intentions. The effectiveness of using verbal quantifiers in social norms messages should be tested in other contexts and in a randomised controlled trial.
Additional file 1: Figure S1. Flow chart of participant participation through Study 1. Figure S2. Distributions of the translations of verbal quantifiers in Study 1; reference line depicts true uptake (43%). Figure S3. Distributions of misleadingness of verbal quantifiers in Study 1 ( N = 202). Figure S4. Flow chart of participant participation through Study 2. Figure S5. Distributions of beliefs, comprehension and interpretation of social norms messages in Study 2. Table S1. Descriptive statistics of the study population in Study 1 ( N = 402). Table S2. Descriptive statistics of the study population in Study 2 ( N = 1245). Table S3. Logistic regression models on screening intentions displaying odds ratios and 95% confidence intervals (CI) – Study 2. Table S4. Regression models on engagement with the additional information about the screening test in Study 2. (DOCX 2744 kb)
Pediatric neuro-oncology: Highlights of the last quarter-century | 452ee8fb-5bf6-40a4-b106-db88ebcd2af7 | 11664148 | Internal Medicine[mh] | The last 25 years in pediatric neuro-oncology have been transformative, marked by significant advancements in the understanding of brain tumor biology, the development of novel therapies, and collaborative research efforts. Herein, we attempt to summarize the key discoveries and developments that have defined this era and highlight the ongoing challenges for the field . Central nervous system (CNS) tumors are the most common pediatric solid tumor and second most common pediatric malignancy overall . With an incidence rate of 6.23 per 100,000, over 5000 cases of pediatric CNS tumors are diagnosed in the United States each year . The incidence of pediatric CNS tumors is increasing overall, likely related in part to improvements in diagnostic imaging techniques and detection of otherwise asymptomatic lesions . Whilst childhood cancer mortality has significantly decreased over the last 50 years, this is in large part driven by dramatic improvements in leukemia outcomes . Conversely, mortality rates from pediatric CNS tumors have remained static since 2007 and consequently, CNS tumors are now the leading cause of childhood cancer-related death . Globally, the majority of children presenting with CNS tumors each year live in low- and middle-income countries (LMICs) and the data on true incidence and mortality in these settings is limited . WHO 2021 classification The World Health Organization (WHO) published the first edition of the CNS tumor classification in 1979, and since then has released sequential categorization schemes incorporating evolving clinical, histopathological and immunohistochemical developments to further refine tumor diagnoses. With the discovery of the molecular drivers of many diseases and the advent of sophisticated diagnostic techniques, the most recent 2021 WHO classification system marks a fundamental shift towards hybrid histopathological-molecular diagnoses, which aim to better delineate and describe disease entities, improving the accuracy of diagnosis and hopefully translating to better prognostication and more informed clinical practice . Following this format, twenty-two tumor types were newly defined across the adult and pediatric disease spectra in the WHO 2021 edition . Emblematic of the shift toward molecularly-defined entities are the new delineations within the pediatric-type diffuse high-grade glioma category. High-grade midline tumors, previously radiographically defined as Diffuse Intrinsic Pontine Glioma (DIPG), were discovered to be epigenetically driven by histone mutations in landmark genomic discoveries in 2012 . This was reflected in the 2016 guideline with the new diagnostic category of H3K27M- mutant Diffuse Midline Glioma (DMG), which in the 2021 classification has now expanded as H3K27- altered Diffuse Midline Glioma, recognizing tumors lacking the canonical H3 mutations but still exhibiting loss of H3K27-trimethylation (driven instead by alterations in EGFR or EZHIP) and thus a similar mechanism of cell proliferation . As well as H3K27-altered DMGs, three further pediatric-type diffuse high-grade glioma subtypes were defined in the 2021 edition; H3G34R-mutant diffuse hemispheric gliomas, H3-wildtype and IDH-wildtype diffuse high-grade gliomas, and infant-type hemispheric gliomas. 
The latter is now known to harbor distinct driver fusions and exhibit a significantly improved outcome to other pediatric high-grade glioma diagnoses, thus demonstrating the profound clinical and therapeutic implications of hybrid molecular tumor classification . Molecular advances Genetic sequencing Advances in genetic sequencing have unveiled the molecular underpinnings of many pediatric brain tumor types over the last two decades. One of the most clinically impactful examples is in the case of pediatric low-grade glioma (pLGG). The most common type of brain tumor in children, pLGG is an umbrella diagnosis for a range of low-grade histologic entities that make up around 30-40 % of all pediatric brain tumors . pLGGs were discovered in several landmark genetic profiling efforts to be almost universally driven by single alterations within the MAPK pathway, such that pediatric low-grade gliomas are now considered a ‘single pathway disease’ . The sentinel discovery of the tandem duplication in the BRAF gene in pilocytic astrocytomas in 2008, identified the fusion of the uncharacterized KIAA1549 protein with the 3′ terminal of the BRAF kinase, causing loss of its inhibitory domain and subsequent constitutive activation . Following this, sequential mapping projects went on to describe other recurrent alterations converging on the MAPK pathway; most commonly somatic BRAF or germline NF1 alterations, as well as alterations involving FGFR1/2/3, NTRK2, RAF1, ALK and ROS1, and also non-MAPK alterations (such as MYB and MYBL1) . Understanding pLGGs as a single driver disease, and the identification of the pathway involved, has allowed for the development and implementation of effective targeted therapeutics which are now established treatment modalities, (as discussed in further detail below). In pediatric high-grade gliomas, the defining biologic breakthrough of the last two decades was the discovery of the role of driver histone mutations and epigenetic modification in tumorigenesis. Pioneering sequencing studies demonstrated that DIPGs were driven by recurrent mutations in genes encoding histone 3 variants (namely, H3F3A encoding H3.3, or less frequently HIST1H3B and HIST1H3C encoding H3.1) . These mutations lead to a lysine to methionine substitution at critical locations within the histone tail (p.K27M), which are involved in key regulatory post-translational modifications . Subsequent work went on to demonstrate the pathogenic effects of these mutations; namely that H3K27M results in suppression of polycomb repressive complex 2 (PRC2) function, leading to global reduction of repressive H3K27 trimethylation (H3K27me3) . Several other recurrent mutations (in EGFR and EZHIP) are now known to cause similar PRC2 inhibition and loss of H3K27me3 in a small subset of these tumors (around 4 %), now encompassed within the molecularly defined H3K27-altered Diffuse Midline Glioma diagnosis . Unfortunately, the identification of these epigenetic drivers of pediatric high grade gliomas has not yet meaningfully impacted survival outcomes in these very aggressive tumors, with the median survival in DIPG remaining <12 months . Medulloblastoma is another pediatric brain tumor that has been newly understood in the era of genetic sequencing. Advancements in transcriptional analysis combined with DNA sequencing led to ground-breaking studies describing four distinct molecular subgroups: WNT-driven, SHH-driven, Group 3 and Group 4 medulloblastoma . 
Large-scale molecular analyses subsequently confirmed the biologic and clinical heterogeneity of these subgroups, and have further delineated 12 subtypes; the clinical implications of these subtypes are an area of active investigation . Importantly, the four major subgroups can be distinguished by immunohistochemistry, meaning that subgroup-based diagnosis and clinical recommendations could have widespread implementation . With the increasing understanding of the heterogeneity within medulloblastoma, subgroup-specific characteristics are now being incorporated into modern medulloblastoma trial design, risk stratification algorithms, and treatment protocols. Finally, genetic sequencing efforts have revealed the significant biologic heterogeneity of ependymomas. These tumors have traditionally been defined by their histologic appearance and grade, but the latter has long being the subject of controversy given the high degree of variability in interpretation among pathologists . Genetic sequencing has unveiled the distinct molecular features of ependymoma, which are now used as a more definitive tool for classification. As such, there are now 10 different ependymal tumor subtypes, including supratentorial ependymoma, ZFTA fusion-positive, supratentorial ependymoma, YAP1 fusion-positive, posterior fossa group A ependymoma, posterior fossa group B ependymoma, spinal ependymoma MYCN-amplified, myxopapillary ependymoma and subependymoma . These molecular subgroups are now known to correlate with clinical behavior and prognosis . Additionally, several cytogenetic patterns within subgroups have become prognostically significant; for example, posterior fossa ependymomas with 1q gain have a poorer prognosis than those with a balanced profile . Recognition of the molecular and clinical heterogeneity within ependymal tumors has allowed for more accurate diagnosis and prognostication while continuing to inform better risk-stratified treatment approaches. Whilst the insights afforded by genomic sequencing listed above have had profound diagnostic, prognostic, and therapeutic implications for many pediatric brain tumor types, it is important to recognize that the technology, equipment, and expertise for genomic analysis is not universally available, especially for patients in LMICs. Innovative and collaborative strategies are needed to reduce the widening gap between care in high-income countries (HICs) and LMICs . DNA methylation profiling In addition to molecular sequencing, DNA methylation profiling, a method which classifies tumors based on their epigenetic signature, has emerged as a key tool in solid tumor diagnosis and classification at large over the last two decades. Broadly, methylation of CpG islands (regions of DNA with a high frequency of cytosine and guanine nucleotides) in promoter regions of genes causes suppression of transcription . The particular epigenetic DNA methylation signature of cancer cells has been seen to reflect both the tumor cell of origin and genetic changes acquired during tumor formation, thus differentiating individual cancer types and subtypes . These characteristic methylation patterns have since been utilized for tumor classification and diagnosis, initially in medulloblastoma and subsequently in a number of other brain tumor types. Methylation profiles have been shown to be a reproducible and accurate diagnostic tool across a range of sample types, including archival samples and tissues with scarce or low purity tumor tissue . 
These efforts were followed by the creation of a DNA methylation-based CNS tumor reference cohort and subsequent algorithmic machine-learning classifier by the DKFZ group in 2018, which allowed the prospective evaluation of new samples . In addition to increasing diagnostic accuracy, DNA methylation has allowed for rare and novel tumor types to be recognized as biologically distinct entities. For example, the term ‘Primitive Neuroectodermal Tumor’ has been abolished as methylation has unveiled multiple distinct tumor types within this previous umbrella diagnosis . Importantly, DNA methylation testing is not currently routinely accessible worldwide and treatment decisions are still based on histopathologic diagnosis in many countries. However, recent developments in long-read sequencing and methylation have created tools for ultra-fast molecular tumor characterization, which may lead to more easily accessible point of care methylation testing and even real-time intraoperative tumor DNA methylation classification . Radiology advances Parallel to the progress in molecular diagnostics, neuro-oncologic imaging techniques have experienced significant advancements over the last 25 years. Magnetic resonance imaging (MRI) has evolved to become the gold-standard imaging technique for diagnosis and monitoring of brain tumors. However, there are key radiographic differences between adult and pediatric tumors; in response to this, the Response Assessment in Pediatric Neuro-Oncology (RAPNO) international working group has published imaging criteria designed to allow a more standardized approach to pediatric brain tumor diagnosis, surveillance, and particularly objective trial response assessment that can be universally applied . Advanced MR techniques have also become important and widely used tools in modern pediatric neuro-oncology. MR perfusion weighted imaging can be helpful in distinguishing true tumor progression from radiation effect or pseudo progression, a commonly encountered clinical challenge with increasing relevance in this era of immunotherapy . Functional MRI can ‘map’ areas of the brain used in specific tasks, allowing optimization of surgical planning . Diffusion tensor imaging (DTI) tractography can similarly be applied preoperatively to identify important white matter tracts to guide surgery and predict motor outcomes . MR spectroscopy, which measures metabolite signals in tissue, has been used to help inform tumor grading and there is increasing promise in expanding its use to differentiate between molecular disease subtypes and predict treatment responses . Finally, positron emission tomography (PET) also has an evolving role in pediatric neuro-oncology for identification of CNS neoplastic lesions and also prognostication . Overall, there has been rapid expansion in both the knowledge and implementation of advanced imaging techniques over the last several decades, which is set to continue, particularly with the incorporation of artificial intelligence tools and machine learning algorithms. However, accessibility to these newer modalities and the expertise for interpretation limits the application in many clinical settings, particularly in LMICs. The World Health Organization (WHO) published the first edition of the CNS tumor classification in 1979, and since then has released sequential categorization schemes incorporating evolving clinical, histopathological and immunohistochemical developments to further refine tumor diagnoses. 
Parallel to the progress in molecular diagnostics, neuro-oncologic imaging techniques have experienced significant advancements over the last 25 years. Magnetic resonance imaging (MRI) has evolved to become the gold-standard imaging technique for diagnosis and monitoring of brain tumors. However, there are key radiographic differences between adult and pediatric tumors; in response to this, the Response Assessment in Pediatric Neuro-Oncology (RAPNO) international working group has published imaging criteria designed to allow a more standardized approach to pediatric brain tumor diagnosis, surveillance, and particularly objective trial response assessment that can be universally applied. Advanced MR techniques have also become important and widely used tools in modern pediatric neuro-oncology. MR perfusion-weighted imaging can be helpful in distinguishing true tumor progression from radiation effect or pseudoprogression, a commonly encountered clinical challenge with increasing relevance in this era of immunotherapy. Functional MRI can ‘map’ areas of the brain used in specific tasks, allowing optimization of surgical planning. Diffusion tensor imaging (DTI) tractography can similarly be applied preoperatively to identify important white matter tracts to guide surgery and predict motor outcomes. MR spectroscopy, which measures metabolite signals in tissue, has been used to help inform tumor grading, and there is increasing promise in expanding its use to differentiate between molecular disease subtypes and predict treatment responses. Finally, positron emission tomography (PET) also has an evolving role in pediatric neuro-oncology, both for identification of CNS neoplastic lesions and for prognostication. Overall, there has been rapid expansion in both the knowledge and implementation of advanced imaging techniques over the last several decades, which is set to continue, particularly with the incorporation of artificial intelligence tools and machine learning algorithms. However, accessibility to these newer modalities and the expertise for interpretation limits their application in many clinical settings, particularly in LMICs.

Targeted therapies

The dramatic increase in understanding of molecular disease drivers over the last two decades has led to the development of many novel therapeutics to target aberrantly functioning cellular pathways. Nowhere has the clinical effect of these targeted therapies been more profound than in the pediatric low-grade glioma setting. The majority of patients with pLGGs survive well into adulthood, and as such, pLGG is effectively a chronic disease. The emphasis of treatment has thus shifted to focus on functional outcomes and maintaining quality of life whilst minimizing toxicity for these patients. The mainstay of therapy remains surgical resection, which can be curative in over 90 % of these tumors. Low-dose metronomic chemotherapy approaches remain the widely accepted standard of care for patients requiring further treatment for surgically inaccessible tumors, or those with residual or recurrent disease. These regimens generally achieve 5-year progression-free survival rates on the order of 45-55 %, meaning that around 50 % of patients will experience progression and require further therapy. However, these chemotherapy regimens are associated with significant short- and long-term toxicities, including immunosuppression, neuropathy, ototoxicity, allergic reactions, and renal and hepatic dysfunction.
Leveraging the new biologic understanding of pLGG tumorigenesis, numerous inhibitors targeting the culprit MAPK/ERK and mTOR pathways have since been developed and tested in phase 1 and 2 trials, with multiple phase 3 randomized controlled trials now underway. The MEK inhibitors selumetinib, trametinib, and binimetinib have all demonstrated early-phase responses in the recurrent/progressive pLGG setting, ranging from 15-56 %. The type I RAF inhibitors vemurafenib and dabrafenib have also shown early-phase safety and efficacy as single agents in recurrent BRAF V600E-mutant pLGG, as has the combination of dabrafenib and trametinib. The type II RAF inhibitor tovorafenib was seen in the recent PNOC026/FIREFLY-1 phase II trial to induce profound responses in BRAF-altered recurrent or progressive pLGG, with an overall response rate (ORR) of 64 %, leading to its FDA approval for this indication in 2024. These impressive response rates have prompted investigation of these agents in the upfront treatment setting, where they represent a potential true shift in the treatment paradigm. The recent prospective phase II trial in children with untreated BRAF V600E-mutant pLGG comparing combination dabrafenib-trametinib therapy to carboplatin-vincristine demonstrated an ORR and median progression-free survival (PFS) of 47 % and 20.1 months in the dabrafenib-trametinib group compared to 11 % and 7.4 months in the carboplatin-vincristine group, with notably less toxicity. This finding led to the 2023 FDA approval of dabrafenib-trametinib combination therapy in the upfront setting for BRAF V600E-mutant pLGGs, changing the standard-of-care treatment for this select group of patients. It is important to note that aside from the above dabrafenib-trametinib combination for BRAF V600E-mutant pLGGs, selumetinib for NF1-associated plexiform neurofibromas, and everolimus for tuberous sclerosis complex-associated subependymal giant cell astrocytomas (SEGAs), the role of targeted inhibitors in the upfront setting in pLGG remains uncertain and is the subject of ongoing investigation in multiple prospective trials. Presently, conventional chemotherapy remains the most widely accepted standard treatment whilst these investigations are ongoing, and upfront targeted inhibitor use is reserved for the clinical trial setting. This is important as there remain many unanswered questions about targeted inhibitor use in pLGG. Whilst the acute toxicity profiles have generally been favorable, little is known about the long-term side effects of these medications. The optimal duration of treatment is also unclear; whilst most trials used a treatment duration of 24 months, this was not based on any scientific rationale. Furthermore, it has been observed that a proportion of pLGGs will exhibit rapid ‘rebound’ growth after cessation of targeted therapy, but the clinical and biologic factors underpinning this mechanism are incompletely understood. In summary, targeted therapies are changing the treatment paradigm for some diseases, particularly pLGGs, which benefit from being primarily single-driver entities. The role of targeted therapies in other diseases is still evolving, but there is hope that similar therapeutic leaps will soon be realized in other pediatric CNS tumor types.

Immunotherapy

Over the last quarter century, great progress in the understanding of the immune mechanisms involved in cancer has led to significant advancements in immune-based therapies, and these are being increasingly explored as therapeutic strategies for CNS tumors.
T-cells express several proteins on their cell surface (such as PD-1 and CTLA-4), known as ‘checkpoint regulators’, which act to downregulate T-cell activity when they bind to specific ligands (such as PD-L1 and CD80/86, respectively) on antigen-presenting and other cells in the body. Blocking the checkpoint receptor-ligand interaction tips the balance in favor of T-cell stimulation and supports T-cell activation and engagement. This strategy, immune checkpoint inhibition (ICI), was first clinically employed in adult melanoma, where remarkable responses were achieved in previously treatment-resistant advanced disease. Importantly, ICI sensitivity has been seen to relate to tumor mutational burden (TMB) or microsatellite instability, a surrogate marker of TMB. A higher number of tumor mutations drives an increased burden of neoantigens and a greater likelihood of recognition by the patient's T-cells, thus enhancing the efficacy of immune checkpoint inhibition. This concept has underscored work in patients with biallelic replication repair deficiency (RRD), whose tumors harbor a high mutation burden. Historically, RRD-associated high-grade glioma (RRD-HGG) is seen to progress rapidly, with a median post-relapse survival of 2.6 months, but a recent prospective pediatric trial using nivolumab for refractory nonhematologic cancers harboring a high TMB and/or mismatch repair deficiency (MMRD) demonstrated a best overall response of 50 %, with several sustained complete remissions including patients with refractory malignant gliomas. This work has shifted the treatment paradigm for this small group of patients and led to the FDA approval of pembrolizumab in 2020 for pediatric patients with relapsed solid tumors with a high TMB. Adoptive cellular therapies, which use modified lymphocytes (usually T-cells or NK cells) to target tumor cells, have been the subject of much excitement and investigation over the last decade, particularly since the profound clinical impact of CD19-directed chimeric antigen receptor (CAR) T-cell therapy in high-risk hematologic malignancies. CAR T-cell therapy uses cytolytic T-cells that have been engineered to express a receptor that recognizes a particular surface antigen on target tumor cells. These CARs are composed of an antigen-binding domain and a cell-signaling domain, and they bestow MHC-unrestricted antigen specificity to the involved T-cells. Several phase I studies have tested multiple CAR constructs against several antigens that demonstrate differential expression between tumor and normal tissue. Pediatric phase I clinical trial data have been published for GD2, HER2, and B7H3 CAR T-cell therapy in diffuse midline glioma and other relapsed/refractory pediatric brain tumors including ependymoma and medulloblastoma. It should be noted that these phase I trials have unveiled several significant toxicities associated with CAR T-cell therapy for CNS tumors; primarily on-target, on-tumor toxicity that can cause significant tumoral/peritumoral swelling, leading to CSF obstruction and/or neural dysfunction. Additionally, these studies have revealed significant challenges facing CAR efficacy in brain tumors, including limitations in CAR T-cell expansion and persistence, uncertainty in the optimal delivery route and the role of lymphodepletion, and antigen-loss recurrence. However, several radiographic and clinical responses have also been reported among these trials in traditionally treatment-resistant pediatric CNS tumors, highlighting the promise of this approach. Multiple phase I trials are ongoing.
Several other important immunotherapeutic approaches developed over the last several decades include therapeutic cancer vaccines, oncolytic viral therapy, and other cellular therapies such as cytotoxic T-lymphocytes and engineered T-cell receptors, all of which are being investigated in clinical trials for various pediatric CNS tumor types.

Radiation therapy

Radiation therapy (RT) has long been a mainstay of pediatric CNS tumor treatment; however, it is known to be associated with a range of significant short- and long-term side effects. Over the last 25 years, the field of pediatric radiation oncology has witnessed significant advancements that have largely been aimed at improving outcomes whilst minimizing the significant long-term side effects associated with radiation. A key development over this time has been the implementation of, and greater access to, proton beam radiation therapy (PRT). In contrast to traditional photon radiation therapy, which irradiates a target using multiple x-ray beams (and deposits radiation in tissues beyond the target area), PRT directs protons towards the tumor target, depositing them with minimal residual radiation beyond the target tissue. This is an attractive feature particularly in the pediatric population, where RT-related damage to surrounding structures during childhood development can have significant long-term consequences. In pediatric CNS tumors, the use of protons for medulloblastoma has been a major focus over the last few decades, as most children with medulloblastoma require irradiation of the entire craniospinal axis under standard-of-care treatment. Comparative dosimetric modelling showed that protons are able to not only eliminate exit radiation dosing to the chest, abdomen and pelvis of children, but also reduce the dose to the normal brain and critical CNS structures including the hearing apparatus, pituitary, optic pathway and hypothalamus. Additionally, there is now clinical follow-up data demonstrating the favorable long-term toxicity profile of PRT in pediatric medulloblastoma patients, specifically demonstrating advantages in intellectual and endocrine sparing. Importantly, the disease control and patterns of failure in PRT-treated patients have been comparable to historical controls in these studies. Of note, given these studies were not randomized and instead utilized historical photon-treated controls, differences in median age, RT technique, dose, volume and follow-up time prevent any definitive comparative conclusions. Also, whilst the role of PRT in other entities continues to be explored, photon beam RT remains the preferred modality in several pediatric CNS tumors, including high-grade glioma. Finally, despite the rapid increase in the number of proton radiation centers around the world, proton therapy remains inaccessible for many children, particularly in LMICs. In addition to the expanding use of PRT, many other areas of pediatric radiation oncology continue to advance, including the integration of advanced imaging techniques, machine-based learning approaches, and the incorporation of molecular and biomarker-driven RT plans, all largely focused on minimizing long-term sequelae to improve treatment outcomes and quality of life for young CNS tumor patients.

Surgery

Pediatric neurosurgery has been shaped over the last quarter century by developments that have improved procedural precision, safety and outcomes.
Whilst craniotomies remain a pivotal workhorse for many types of tumor resection, endoscopic and other minimally invasive techniques have been used to increase precision and reduce surgical morbidity. Stereotaxis, the process of using a 3-dimensional coordinate system in combination with CT or MRI to locate CNS targets, has allowed the use of minimally invasive techniques in a greater number of tumor types and locations. A pertinent example of this is in the setting of brainstem biopsy in diffuse intrinsic pontine glioma. Previously a solely radiographic diagnosis, DIPGs were considered too risky for tissue sampling given their intricate location within the brainstem. Stereotactic techniques have allowed the safe biopsy of these lesions, first pioneered in 2007 and subsequently shown in several large series to be feasible and safe, with low incidence of transient morbidity (<5 %), and the majority of procedures yielding sufficient tissue for molecular sequencing. Brainstem biopsy for DIPG has now become widely accepted and adopted practice, and has facilitated a monumental shift in the understanding of the molecular underpinnings of this disease and consideration for clinical trials that utilize targeted therapies. Whilst these advancements are unfortunately yet to translate to any meaningful improvement in the dismal prognosis of DIPG, it is hoped that greater understanding of tumor biology, facilitated by tissue sampling, will eventually lead to effective treatments. Other novel surgical therapeutic techniques have focused on improving delivery of drugs into tumor tissue, either through direct delivery or disruption of the blood brain barrier (BBB). Convection enhanced delivery (CED) involves surgical placement of a cannula directly into the brain or tumor to facilitate infusion of a drug or treatment via a pressure gradient, thereby circumventing the BBB. Another technique being explored is focused ultrasound (FUS), which entails trans-cranial delivery of low-frequency waves, temporarily disrupting the BBB, and can be visualized in real time on MRI by contrast extravasation in the area of interest. This technique is enhanced by the intravenous injection of lipid-encased perfluorocarbon microbubbles, which are hypothesized to aid in mechanical disruption of the BBB through US-induced oscillation; they have been shown to lower the US frequency threshold for BBB disruption. Following preclinical demonstration of safety and potential efficacy, this technique is now under active investigation in several trials for pediatric DIPG, using FUS with doxorubicin administration (NCT05615623), etoposide administration (NCT05762419), or aminolaevulinic acid (NCT05123534). Both CED and FUS have been shown safe in phase I trials and have high potential to improve drug delivery to the most challenging to treat pediatric brain tumors, though trials require significant resources and specialized equipment, so will likely be limited to select tertiary or quaternary cancer centers.
A fundamental key to the successful translation of the innovation detailed above has been the evolution of pediatric clinical trial medicine over the last 25 years. Firstly, adaptive trial designs have now been widely adopted in pediatric phase I trials. These models, such as the ‘Rolling 6’ design first published in 2008, allow more efficient trial enrolment whilst upholding safety. This is of particular benefit in pediatric oncology, where trial medicine is significantly impacted by the rarity and heterogeneity of pediatric cancers, ethical considerations of using experimental therapies in minors, regulatory hurdles, and funding constraints. Many pediatric oncology trials are now molecularly stratified, which allows for better understanding, interpretation, and applicability of trial results, as well as potentially increased efficacy of trial agents when applied to specific molecularly selected targets. Finally, underpinning the ability to apply translational therapeutics, implement clinical trials, and ultimately effect tangible change in the field over the last 25 years has been the development of pediatric neuro-oncology consortia. Given the rarity of pediatric brain tumors, collaboration is vital to pool knowledge and resources and particularly to action experimental trials. Various consortia have collectively transformed the landscape of pediatric neuro-oncology over the last 25 years, fostering collaboration, advancing research, and improving outcomes for children with brain tumors. Whilst the field of pediatric neuro-oncology has witnessed remarkable strides forward over the last 25 years, significant challenges remain to further improve the outcomes of children with brain tumors. In the preclinical setting, generation of accurate preclinical models is important for faithful testing of new drugs and therapies against a replica of the tumor and its microenvironment; however, this remains challenging and costly. In the realm of diagnostics, despite a wealth of new knowledge about the molecular drivers of various tumors, molecular testing modalities are neither standardized nor universally available, which can limit accurate diagnosis, access to molecularly targeted treatments, and trial enrolment; unified approaches are needed. In addition, targeted therapies have been impactful only in carefully selected patient populations (mostly in the minority of tumors that have a single genetic driver), and the differential responses seen in seemingly identical histologic and molecular tumors are not yet well understood. Clinical trials in pediatric neuro-oncology face many ongoing challenges given the rare and heterogeneous nature of childhood brain tumors as well as resource and personnel constraints. For novel therapies that are changing the treatment paradigms in several disease entities, the potential late effects of these therapies remain unknown.
Finally, it should be acknowledged that a major global pediatric neuro-oncology challenge is ensuring equity of access; many of the advancements described in this review are not yet able to benefit patients and families in LMICs, where diagnostic and therapeutic opportunities can be limited. Overall, however, the remarkable progress made over the last 25 years in pediatric neuro-oncology heralds a promising future with even greater potential for breakthroughs in the next quarter-century. Phoebe Power: Writing – original draft, Investigation, Conceptualization. Joelle P Straehla: Writing – review & editing. Jason Fangusaro: Writing – review & editing. Pratiti Bandopadhayay: Writing – review & editing. Neevika Manoharan: Writing – review & editing, Supervision, Conceptualization. The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Jason Fangusaro serves on the Educational Speaker's Bureau for Day One Biopharmaceuticals. Pratiti Bandopadhayay has served on paid advisory boards for QED Therapeutics and Day One Biopharmaceuticals, and she currently serves on the Board of the Justice Resource Institute as a Trustee. Her laboratory has also received grant funding from the Novartis Institute of Biomedical Research.
Social determinants of health and disparities in pediatric trauma care: protocol for a systematic review and meta-analysis

Social determinants of health (SDH) refer to the social, economic, and environmental factors that influence individuals' health and well-being. Inequities in SDH, including inequitable distributions of resources, opportunities, and power among different population groups, result in health disparities, defined as "health differences that are closely linked with social or economic disadvantage". Research has shown that inequities in SDH not only shape disparities in health outcomes but also contribute to the exacerbation of these disparities through barriers to high-quality healthcare services, such as limited access to resources and discriminatory practices. Populations facing these disparities experience heightened barriers to receiving timely, appropriate, and high-quality care, with increased health disparities and poorer health outcomes. Addressing disparities in healthcare delivery is a critical step toward reducing health disparities and improving health outcomes for all individuals, particularly marginalized and underserved populations. Injury is the leading cause of mortality and morbidity in children worldwide, with more than two-thirds of children reporting at least one injury by the age of 16 in the United States and 31% of Canadian adolescents reporting an injury serious enough to limit their normal activities or require medical care in 2016. Extensive evidence, including systematic reviews with meta-analyses, supports the significant influence of SDH on the risk of childhood injury and subsequent health outcomes. Studies have also investigated the impact of SDH-related inequities on healthcare delivery for injured children, recognizing that the accessibility and quality of care provided strongly shape health outcomes in this population. These studies have consistently identified socioeconomic status (SES), race, ethnicity, insurance status, geographic location, and language barriers as key factors associated with disparities in the delivery of healthcare following pediatric injury. However, this evidence has not been systematically reviewed. Our objective was therefore to synthesize current evidence on the influence of SDH on the delivery of acute healthcare for children and adolescents following injury using the PROGRESS-Plus framework. This systematic review will be conducted according to Cochrane methodology, and the protocol is reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses-Protocols (PRISMA-P) statement (Additional file 1). The protocol has been registered in the International Prospective Register of Systematic Reviews (PROSPERO CRD42023408467). This protocol was developed in collaboration with our project advisory committee, including pediatric trauma physicians (emergency department (ED), intensive care, trauma surgery, orthopedic surgery), pediatric nurse practitioners, ED physicians in referral hospitals, and equity, diversity, and inclusion experts. We defined our eligibility criteria using the Population, exposure, comparator, outcomes, and study design (PECOS) approach.

Populations

We will consider studies on children and adolescents (≤ 19 years of age) who present to the ED or are admitted to the hospital following injury.
We will include studies on the following injury mechanisms: motor vehicle collisions, falls, struck by/against, other transport, firearm, and cut/pierce. As is common in injury research, and because their risk factors, presentation, clinical management, and prognosis are distinct, we will not include studies on injuries due to burns, foreign objects, poisoning, or late effects of injury. Therefore, studies in which more than 20% of the population is injured by these mechanisms will be excluded.

Exposures

We have defined children's SDH using the PROGRESS-Plus framework. The PROGRESS-Plus framework is a conceptual tool used in public health research and policy to systematically analyze and address health disparities. The framework is developed and endorsed by the Campbell and Cochrane Equity Methods Group. Using this framework, we will include studies that assess healthcare delivery according to at least one of the following factors: children's place of residence (e.g., geographical location, urbanicity); race/ethnicity/culture/language; occupation; gender/sex; religion; education; socioeconomic status (e.g., family income level, insurance status); and social capital. The "Plus" stands for other factors associated with discrimination, exclusion, marginalization, or vulnerability, such as personal characteristics (e.g., language barriers); relationship-related barriers to accessing care (e.g., children in a household with migrant or homeless parents, parents' occupation, or education); or environmental situations that provide limited control of opportunities for health (e.g., attending public school, neighborhood environment).

Comparators

Children in the non-exposed group as defined by the authors (this will depend on the exposure group under evaluation).

Outcomes

We will consider studies that assess healthcare delivery (e.g., access to appropriate care and adherence to best practices) for children with injury. We will evaluate healthcare provided in the acute setting (i.e., pre-hospital, emergency department, and in-patient care). Studies on post-acute rehabilitation services will be considered in a separate review. Studies reporting on the influence of SDH inequities only for clinical outcomes (e.g., mortality, disabilities, morbidity) or resource utilization (e.g., length of stay in hospital, costs), without assessing healthcare delivery, will not be considered.

Study designs

We will include observational (i.e., retrospective and prospective cohorts, case–control studies) and experimental (i.e., randomized controlled trials, quasi-experimental studies) designs. We will exclude reviews, editorial articles, or reports if they do not present original data on the exposure-outcome associations of interest. Systematic reviews will be used to identify eligible studies not found by our search strategy. There will be no language or date restrictions. Articles in languages other than English or French will be translated using online translation tools for study selection and by translators for data extraction.
We will systematically search the following six databases: PubMed, Excerpta Medica database (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, Web of Science, and Academic Search Premier, from their inception to a maximum of 6 months prior to submission for publication. We will also manually screen the references of identified studies to find potentially relevant articles not retrieved using our search strategy. We will search the grey literature using Google Scholar. We will develop our search strategies in collaboration with a scientific librarian using an iterative process according to the Peer Review of Electronic Search Strategies (PRESS) guidelines (Additional file 2).
PubMed will be searched first to revise and improve the preliminary search strategy. The approved search strategy will then be applied to EMBASE, CINAHL, PsycINFO, Web of Science, and Academic Search Premier. We will search for articles comprehensively, avoiding keywords for specific SDH-related factors, to ensure inclusivity and avoid unduly restricting the search. The search strategy will be limited to combinations of keywords and controlled vocabulary on the themes of disparities ("disparity", "health disparity", "inequity", and "health inequity"); trauma ("injuries", "fractures", and "trauma"); and pediatrics ("pediatric", "child", "infant", "adolescent", "youth", and "young"). We will limit our search to articles that clearly identify SDH-related differences in access to and quality of care as disparities or inequities. The articles from the various databases will be imported and merged into EndNote 20 software (Version X9.3.3, Thomson Reuters, New York City, 2018). All duplicates between databases will be either automatically or manually removed and the most recent version retained. The list of unique articles will be exported to Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) for study screening. Two content experts will first independently screen 5% of the identified unique articles to pilot selection based on the eligibility criteria described above. The pilot phase will be repeated until an acceptable agreement is reached (kappa > 0.7). The two reviewers will then independently screen all the unique articles based on titles and abstracts. Studies that both reviewers agree should not be included will be excluded by default. Studies selected for inclusion by at least one of the reviewers, or for which the title and abstract do not provide sufficient information for evaluation, will be considered potentially eligible. The two reviewers will then independently evaluate the full texts of potentially eligible studies to determine eligibility for final inclusion. We will contact the authors of studies with insufficient or unclear information for final decision-making at this stage. We will exclude studies whose authors we are unable to contact after three attempts. For studies excluded at this stage, reasons for exclusion will be documented. In the event of any disagreement, the two reviewers will attempt to reach a consensus and, if necessary, a third reviewer will be called upon to arbitrate. Data will be extracted by two independent reviewers using a standard data extraction form along with a detailed instruction manual developed and pilot-tested by our research team. We will retrieve information on study characteristics (first author, year of publication, country of study population, data sources and period covered, settings, and study design); the characteristics of the population (total sample size, age range, and injury types and mechanisms); the PROGRESS-Plus factors studied; the characteristics of the exposed and comparison groups (type and frequency); the characteristics of the outcomes studied (type; frequency in the exposed and comparison groups; type of effect measure (e.g., odds ratio, relative risk, mean difference); and crude and adjusted effect measures with their 95% confidence intervals); and adjustment variables. We will conduct a pilot extraction phase using three studies and repeat it iteratively on further studies until an acceptable agreement is reached.
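For orientation, the agreement thresholds used in the screening and extraction pilots (kappa > 0.7) refer to a chance-corrected agreement statistic. The protocol does not spell out the formula, so the usual two-rater (Cohen's) form is reproduced here purely for reference:

$$ \kappa = \frac{p_o - p_e}{1 - p_e} $$

where $p_o$ is the observed proportion of decisions on which the two reviewers agree and $p_e$ is the agreement expected by chance from each reviewer's include/exclude rates. As a hypothetical worked example, if both reviewers include 20% of records and agree on 92% of decisions, then $p_e = 0.2 \times 0.2 + 0.8 \times 0.8 = 0.68$ and $\kappa = (0.92 - 0.68)/(1 - 0.68) = 0.75$, which would meet the 0.7 threshold.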
In case of disagreement, we will attempt to reach a consensus among reviewers or consult a third reviewer when necessary. The authors of studies with missing or unclear information will be contacted as described above. If this fails, the data will be considered missing. Two reviewers will independently assess the risk of bias of the included studies using the appropriate risk-of-bias assessment tool for each study design. We will use the Risk Of Bias In Non-randomized Studies of Exposure (ROBINS-E) tool to assess the risk of bias in observational studies. The ROBINS-E tool comprises seven domains of bias: confounding, selection of participants into the study, classification of exposures, departures from intended exposures, missing data, measurement of outcomes, and selection of the reported result. Each of these bias domains, and the overall risk of bias, will be rated as low, moderate, or high risk of bias, or as providing no information. If we identify any experimental studies, we will use the revised Cochrane Risk-of-Bias Tool (RoB 2). This tool covers five bias domains: bias arising from the randomization process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in measurement of the outcome, and bias in selection of the reported result. Both tools will be piloted on a random sample of 5% of the included studies to ensure consistency among reviewers. Any disagreements will be resolved by discussion between the two reviewers or by arbitration with a third party when necessary. We will describe the study selection using a PRISMA flowchart. The extracted data will be synthesized in narrative form first, describing the studies and the PECOS elements (i.e., populations, exposures, comparators, outcomes, and study designs). For each outcome of interest, we will synthesize risk-of-bias assessments graphically according to each domain of bias and the overall risk of bias, separately for experimental and observational studies. If sufficient and appropriate data are available from at least three retained studies, we will conduct meta-analyses for each outcome of interest using R version 4.2.1. Pooled effect estimates and 95% confidence intervals will be calculated using random-effects models. Publication bias will be explored using funnel plots. We will assess heterogeneity using the I² index.
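For reference, the pooling and heterogeneity statistics named above can be written out explicitly. The sketch below uses the common inverse-variance, DerSimonian-Laird formulation; it is shown only for orientation, since the protocol does not prespecify a particular between-study variance estimator. Given $k$ study-level estimates $\hat{\theta}_i$ with variances $v_i$ and fixed-effect weights $w_i = 1/v_i$:

$$ Q = \sum_{i=1}^{k} w_i \left( \hat{\theta}_i - \frac{\sum_j w_j \hat{\theta}_j}{\sum_j w_j} \right)^{2}, \qquad I^{2} = \max\!\left(0, \frac{Q - (k-1)}{Q}\right) \times 100\% $$

$$ \hat{\tau}^{2} = \max\!\left(0, \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^{2} / \sum_i w_i}\right), \qquad \hat{\theta}_{RE} = \frac{\sum_i \hat{\theta}_i / (v_i + \hat{\tau}^{2})}{\sum_i 1 / (v_i + \hat{\tau}^{2})} $$

with an approximate 95% confidence interval of $\hat{\theta}_{RE} \pm 1.96 \left( \sum_i 1/(v_i + \hat{\tau}^{2}) \right)^{-1/2}$. For ratio measures such as odds ratios, pooling is performed on the log scale and back-transformed.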
To explore unexplained heterogeneity, if the number of studies is sufficient, we will conduct subgroup analyses for the following factors, identified by our project advisory committee: age (0–5, 6–14, and 15–19 years; categories defined in consultation with advisory committee members); last year of data collection; geographical region (North America, South America, Europe, Asia, Africa, and Australia); type and mechanism of injury; World Bank income categories (lower-middle, upper-middle); and risk of bias (low, medium, and high). Mechanisms of injury will be based on the International Classification of Diseases, tenth revision (ICD-10) criteria used by the American College of Surgeons, i.e., motor vehicle collisions, falls, struck by/against, other transport, and firearm/cut-pierce. Types of injuries will be based on the American College of Surgeons Trauma Quality Improvement Program cohorts and our previous work: blunt multisystem injuries (Abbreviated Injury Scale (AIS) ≥ 3 in at least two body regions); traumatic brain injuries (intracranial lesions and Glasgow Coma Scale (GCS) 13–15 (mild), GCS 9–12 (moderate), or GCS 3–8 (severe)); spinal cord injuries (AIS codes 640200.3–640276.6, 640400.3–640468.5, 640600.3–640668.5, and 630600.3–630638.4, except 630612.2 and 630614.3); solid organ injuries (blunt or penetrating injuries of the liver, spleen, kidney, or pancreas); and orthopedic fractures (fractures of the upper or lower extremities, pelvic ring, or spine, not including the spinal cord). However, if information in the included articles is lacking or deviates from the above definitions, we will form subgroups according to the authors' definitions. The findings of this review will advance knowledge on SDH-related inequities in pediatric injury care, which clinicians and policymakers can use to design better care systems that offer equitable access to high-quality care to all children and adolescents after injury. However, this review has some limitations. Despite our intention to conduct a comprehensive review by including all the PROGRESS-Plus framework factors, we anticipate that we will not be able to conduct meta-analyses for some factors because of insufficient studies. Similarly, we expect heterogeneity in inclusion criteria and in definitions of exposures and outcomes across studies, and there may be too few studies to fully assess heterogeneity in results. We will disseminate our findings through infographic summaries distributed to clinical organizations, presentations to clinicians, healthcare administrators, and researchers (e.g., conferences, seminars, clinical rounds), and publication in a peer-reviewed journal.

Additional file 1. PRISMA-P 2015 Checklist.
Additional file 2: Table 1. Search strategy in PubMed. Table 2. Search strategy in EMBASE. Table 3. Search strategy in CINAHL. Table 4. Search strategy in PsycINFO. Table 5. Search strategy in Web of Science. Table 6. Search strategy in Academic Search Premier.
Additional file 3: Table 1. Inclusion and exclusion criteria.
GSH-responsive poly-resveratrol based nanoparticles for effective drug delivery and reversing multidrug resistance

Introduction

Cancer severely threatens human life in all countries of the world. In 2020, over 19.3 million new cases of cancer and nearly 10.0 million cancer-related deaths were reported across the globe (Sung et al., ). Chemotherapy is a widely used treatment for cancer (Yang et al., ). Attention is being paid to the development of nanomedicines to enhance the therapeutic effect and minimize the side effects of chemotherapeutics. It is believed that the use of nanomedicines can result in increased drug efficacy (Au et al., ). The use of nanocarrier-based delivery systems can increase the solubility of hydrophobic drugs (Tan et al., ), enhance the bioavailability of drugs (Rosenblum et al., ), improve the accumulation of drugs at tumor tissues through the enhanced permeability and retention (EPR) effect (Tee et al., ), and result in decreased side effects (Raj et al., ). Developing suitable drug carriers is the first step in preparing a nano-drug delivery system. Various biomaterials, such as liposomes, polymers, exosomes, cell membranes, and peptides, are being used to construct nano-drug delivery systems (Hu et al., ). Polymers have been widely used in the field of anticancer drug delivery (Pottanam Chali & Ravoo, ). However, complex synthetic processes, the absence of biological activity, and a high degree of toxicity have significantly limited the practical applications of many polymeric drug carriers in clinical settings (Shi et al., ; Ma et al., ). Hence, it is important to develop polymers that can be easily synthesized and used during treatment (Zheng et al., ; Ou et al., ). To date, few studies have been conducted on such polymer-based drug carriers. 3,4′,5-Trihydroxy-trans-stilbene, also known as resveratrol (RES), is a natural polyphenolic phytoalexin found in 185 plant species. It is found in red wine, soybeans, peanuts, berries, etc. (Jhaveri et al., ). It exhibits a wide range of biological activities, such as anticancer, anti-carcinogenic, cardioprotective, neuroprotective, immunomodulatory, anti-inflammatory, and antioxidant effects (Santos et al., ). RES is a potential anticancer molecule that suppresses the proliferation of various cancer cells, such as breast, stomach, prostate, skin, colon, lung, and liver cancer cells (Huminiecki & Horbańczuk, ). It has been reported that RES can restore the lost sensitivity of cells toward drugs such as paclitaxel (PTX), doxorubicin (DOX), and methotrexate (MTX) (Alamolhodaei et al., ). Thus, RES can be used to reverse multidrug resistance (MDR) to some extent. Poor water solubility, chemical instability (photosensitivity and auto-oxidation), rapid clearance, and low tumor-targeting ability have significantly limited the application of RES (Jangid et al., ). Various nanosized systems have been developed to deliver RES and address these problems (Jhaveri et al., ; Jangid et al., ). These delivery systems developed for RES can be classified into two types. One type encapsulates RES to form nanocarriers, such as polymeric micelles, protein-based nanoparticles, liposomes, and inorganic nanoparticles (Jhaveri et al., ; Shen et al., ; Singh et al., ; Santos et al., ; Zhao et al., ). Jangid et al.
prepared a novel amphiphilic polymer by functionalizing Pluronic F68 with lipid (stearic acid) and polysaccharide (inulin) that could function as a drug carrier (Jangid et al., ). RES could be loaded into the developed carrier and used to treat colon cancer. The process resulted in an increase in the blood circulation time and enhanced in vitro antitumor effect. As RES bears multiple active hydroxyl groups, another type of delivery system is developed following the process of covalent modification of RES. The process of covalent modification results in a change in the chemical and physical properties of RES. RES could be conjugated with polyethylene glycol (PEG) to increase the solubility, extent of blood circulation, and antitumor effects (Wang et al., ). These strategies could be used to significantly increase the blood circulation time and bioavailability of RES. However, the methods are characterized by low drug loading and uncontrolled drug release, resulting in unsatisfactory therapeutic effects. To the best of our knowledge, RES or RES-based materials have not been developed for use as drug carrier materials to date. RES or RES-based materials can be potentially used to develop drug delivery systems and for treating diseases as these exhibit good biological activities. RES cannot be directly used for drug delivery, but it can be polymerized via the three hydroxyl groups with a suitable linker to form polymers that can be used for the development of drug delivery systems. The RES-based polymers can be readily assembled into nanocarriers. As a proof of concept, herein, we synthesized a redox-responsive polymer from RES (PRES) following a simple condensation–polymerization reaction involving RES and 3,3′-dithiodipropionic acid (DTPA). The disulfide bond in PRES was stable in blood but could be efficiently degraded by intracellular reduction agents, such as glutathione (GSH). PRES could self-assemble into nanoparticles and control drug release. It could also be used as a nanocarrier for delivering chemotherapeutic agents. We hypothesized that the nano-platform formed using PRES may also enhance the antitumor effect of drugs and help overcome MDR. To confirm this hypothesis, we chose the widely used anticancer drug PTX and loaded it into the NPs fabricated using PRES (PTX@PRES NPs). The prepared PTX@PRES NPs could be used to improve the biocompatibility of RES, achieve high drug loading, and realize GSH-responsive drug release. The released RES could improve the sensitivity of the drug-resistant cancer cells toward PTX.
Materials and methods 2.1. Materials PTX, RES, DTPA, 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC), N,N’-dimethylformamide (DMF), dichloromethane (DCM), dimethyl sulfoxide (DMSO), and 4-dimethylaminopyridine (DMAP) were obtained from Aladdin Reagents (Shanghai, China). 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-3000] (DSPE-PEG3k) was purchased from Xi’an Ruixi Biological Technology Co., Ltd. (Xi’an, China). 2.2. Instruments ¹H nuclear magnetic resonance (¹H NMR) spectra were recorded on a Varian U500 (300 MHz) spectrometer. PRES was analyzed by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS; UltrafeXtreme, Bruker Daltonics, USA). Particle size and polydispersity (PDI) were determined using the dynamic light scattering (DLS, ZetaPlus, USA) technique. The morphology of the nanoparticles was observed using the transmission electron microscopy (TEM, JEM, Japan) technique. The high-performance liquid chromatography (HPLC) technique was used to analyze RES; the mobile phase consisted of a mixture of 0.5% (v/v) acetic acid in methanol and water (1:1, v/v), the flow rate was 1 mL/min, and the UV–vis detection wavelength was 303 nm. The mobile phase used for PTX analysis by HPLC consisted of methanol/H₂O (6.5:3.5, v/v); the flow rate was maintained at 1 mL/min, and the detection wavelength was 227 nm. 2.3. PRES synthesis PRES was synthesized by an esterification reaction between DTPA and RES. RES (228.0 mg, 1.0 mmol), DTPA (182.2 mg, 1.0 mmol), EDC (401.1 mg, 2.1 mmol), and DMAP (256.2 mg, 2.1 mmol) were dissolved in 100 mL of DCM, and the solution was stirred at room temperature under an atmosphere of nitrogen. After 72 h, the reaction mixture was concentrated to 10 mL, and 40 mL of cold ethyl acetate was added to it. The solution was stored at 4 °C overnight. The precipitate formed was collected by centrifugation and washed thrice with ethyl acetate. Subsequently, the product was dissolved in DMSO and placed into a dialysis bag [molecular weight cutoff (MWCO): 3500 Da]. The solution was dialyzed against DMSO over a period of 48 h, followed by dialysis against distilled water for 24 h. Finally, PRES was obtained by freeze–drying under vacuum. The structure and average molecular weight of PRES were determined using ¹H NMR and MALDI-MS. 2.4. Redox-responsive behavior of PRES The redox sensitivity of PRES was studied using the gel permeation chromatography (GPC) and HPLC techniques. PRES (1.0 mg) was dissolved in a solvent system consisting of DMF and water (DMF:water = 8:1). Subsequently, GSH was added to the mixture to a final concentration of 10.0 mM. A fraction of the solution (100 μL) was withdrawn from the system and analyzed using the HPLC and GPC techniques after a 6-h incubation period. 2.5. Preparation and characterization of the NPs The classical nanoprecipitation method was used to prepare the PRES NPs and PTX-loaded NPs. For the preparation of the PRES NPs, 100.0 μL of the PRES solution (20 mg/mL in DMSO) and 100 μL of the DSPE-PEG3k solution (20 mg/mL in DMSO) were mixed under ultrasonication. Following this, the mixture was added dropwise to distilled water (4.0 mL) under vigorous stirring (stirring time: 1 h).
Subsequently, the mixture was transferred to an ultrafiltration device (MWCO: 10 kDa) and centrifuged at 5000 rpm for 10 min. The system was washed thrice with distilled water, following which the NPs were dispersed in 2 mL of PBS (pH 7.4) to obtain the PRES NPs. To prepare the PTX-loaded NPs, 30 μL of the PTX solution (20 mg/mL in DMSO), 150 μL of the PRES solution (20 mg/mL in DMSO), and 180 μL of the DSPE-PEG3k solution (10 mg/mL in DMSO) were mixed, and the mixture was added dropwise to 2 mL of deionized water. The NPs were washed following the protocol described previously and then dispersed in 2 mL of PBS (pH 7.4) to obtain the PTX@PRES NPs. The drug loading capacity (DLC) and encapsulation efficiency (DEE) of PTX in the PTX@PRES NPs were determined using the HPLC technique. The DLC and DEE were calculated as follows: (1) DLC (%) = (weight of the drug in NPs / weight of NPs) × 100; (2) DEE (%) = (weight of the drug in NPs / weight of the drug added) × 100. 2.6. Stability of NPs The changes in the size of the NPs in PBS (pH 7.4; with or without 10% FBS) were detected using the DLS technique to study the stability of the NPs. Freshly prepared NPs were dispersed in PBS or PBS containing 10% FBS at a final concentration of 3 mg/mL. The dispersions were stored at 37 °C under shaking at 100 rpm. At predetermined intervals (0, 4, 8, 12, 24, 36, and 48 h), 1.0 mL of the NP dispersion was withdrawn and analyzed using the DLS technique. 2.7. In vitro drug release The release profiles of RES and PTX from the NPs were studied at 37 °C in PBS (pH 7.4) containing 0.5% Tween80 (m/v) with GSH (20 µM or 10 mM) as the release medium. Freshly prepared NPs (10.0 mg, equivalent to 4.3 mg of PRES and 0.7 mg of PTX) were dispersed in 2.0 mL of Tween80 in the absence of a release medium, and the solution was transferred into a dialysis bag (MWCO: 3.5 kDa). Subsequently, the dialysis bag was immersed into 48 mL of the release medium, and the temperature was maintained at 37 °C under shaking. At predetermined time intervals, 2 mL of the release medium outside the dialysis bag was withdrawn and replenished with the same volume of fresh release medium. The amounts of RES and PTX released were determined using the HPLC technique. 2.8. Cell and animal studies Human lung cancer cells (A549) and the corresponding PTX-resistant cells (A549/PTX) were purchased from KeyGEN Biotechnology Co., Ltd. (Nanjing, China). A549 cells were cultured in F12K containing 10% fetal bovine serum (FBS) and 100 units/mL of streptomycin and penicillin. A549/PTX cells were cultured in RPMI 1640 containing 10% fetal bovine serum (FBS) and 100 units/mL of streptomycin and penicillin. The culture medium was supplemented with 20 ng/mL of Taxol to maintain the resistance of the A549/PTX cells. Male BALB/c nude mice (4–5 weeks old) were purchased from the Laboratory Animal Center of the USTC. All animal-based experiments were performed in accordance with the guidelines outlined by the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and the protocol followed for animal-based studies was approved by USTC. 2.9. Cellular uptake Cellular uptake of the NPs was determined by confocal laser scanning microscopy (CLSM) using A549 and A549/PTX cells. Cells (3 × 10⁴) were seeded in round disks and cultured over a period of 24 h.
Subsequently, the FBS-free medium containing coumarin-6-loaded NPs was used to replace the medium, and the cells were incubated for another 4 h. Following this, the cells were washed with PBS and fixed using 4% paraformaldehyde. After staining the nuclei with DAPI, the cells were observed using the CLSM technique. 2.10. In vitro cytotoxicity The A549 or A549/PTX cells were seeded in a 96-well plate (density: 5000 cells per well). The cells were incubated over a period of 24 h, following which the medium was replaced by 150 μL of NP dispersions or free drugs at different concentrations. The cells were incubated for another 48 h. Subsequently, cell viability was analyzed following the CCK-8 assay technique using a Bio-Rad 680 microplate reader at a wavelength of 450 nm. The cell viability was calculated from the data obtained from six parallel wells using the following formula (PBS was used as the negative control): (3) Cell viability (%) = (absorbance value of sample / absorbance value of PBS control) × 100. The half-maximal inhibitory concentration (IC50) of each formulation was calculated from the recorded data using Origin 2021b (OriginLab, Northampton, MA, USA). The resistance index (RI) was calculated following the method presented in literature reports using the following equation: (4) RI = IC50 of resistant cells / IC50 of sensitive cells. The half-maximal combination index (CI50) was calculated to evaluate the synergistic effect of PTX and RES following the method presented in literature reports (Li et al., ) using the following equation: (5) R = D1/D1x + D2/D2x, where D1x and D2x represent the IC50 values of PTX and RES, respectively, and D1 and D2 represent the molar ratio of the two drugs in the combination group at IC50. R < 1 represents synergy, R = 1 represents equivalence, and R > 1 represents antagonism. 2.11. Pharmacokinetics and biodistribution assay Sprague Dawley (SD) rats were employed as the animal model to investigate the pharmacokinetic properties of the different formulations. In brief, the rats were treated with a single dose of PTX (5 mg/kg), RES (15 mg/kg), or PTX@PRES NPs (equal to 5 mg/kg of PTX) via tail vein injection. At pre-set time points, 0.5 mL of blood was collected from the orbital vein and immediately centrifuged at 1000 rpm for 3 min to obtain the plasma. Thereafter, 0.2 mL of plasma was mixed with 0.4 mL of acetonitrile/water (1:1, v/v) and sonicated for 5 min. After centrifuging at 3500 g for 15 min, the supernatant was collected and analyzed by HPLC. Moreover, the biodistribution of the drug formulations in A549/PTX tumor-bearing mice was also investigated. The murine model was established by subcutaneous injection of 5 × 10⁶ cells into the back region of BALB/c nude mice. Ten days after implantation, mice were treated intravenously with PTX or PTX@PRES NPs (equal to 5.0 mg/kg of PTX) via the tail vein. After 24 h of treatment, six treated mice were euthanized by rapid cervical dislocation, and the tumors and organs (spleen, kidney, lung, liver, and heart) were collected, weighed, and pulverized. The PTX concentration in each organ was measured by HPLC. 2.12. In vivo antitumor assay When the volume of the tumor xenograft reached ∼50 mm³, the mice were randomly divided into six groups (n = 6). Following this, the mice were treated with PBS, PTX (5 mg/kg), RES (15 mg/kg), PTX + RES (5 mg/kg of PTX and 15 mg/kg of RES), PRES NPs (at an amount equal to the RES dose in PTX@PRES NPs), or PTX@PRES NPs (containing 5 mg/kg of PTX).
The samples were injected every two days through the tail vein, and three treatment cycles were conducted over a period of 14 days. The body weight of the mice and the length and width of the tumors were monitored every three days. The tumor volume was calculated as follows: (6) V = (L × W²)/2, where L and W are the tumor length and width, respectively. At the experimental endpoint, the mice were sacrificed, and the tumors were harvested and weighed. The tumor inhibitory rate (TIR) was calculated as follows (based on the weight of the excised tumor): (7) TIR (%) = (1 − tumor weight of test group / tumor weight of PBS group) × 100. 2.13. Statistical analysis Data are presented as the mean ± standard error of three independent experiments. Statistical Product and Service Solutions (SPSS; version 17.0) was used for statistical analysis. Differences between groups were assessed using Student’s t-test, and significance levels are indicated as *p < .05, **p < .01, or ***p < .001.
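As a worked reference for the formulation metrics defined above, the following Python sketch implements Eqs. (1) and (2) together with a cumulative-release calculation for the dialysis protocol of Section 2.7. The correction for the 2 mL of medium withdrawn and replenished at each sampling point is a standard convention assumed here (the authors do not state their calculation), and all input numbers are illustrative rather than the study's data.

def dlc(drug_in_nps_mg, nps_mg):
    """Eq. (1): drug loading capacity (%) = drug in NPs / total NP mass x 100."""
    return drug_in_nps_mg / nps_mg * 100.0

def dee(drug_in_nps_mg, drug_added_mg):
    """Eq. (2): encapsulation efficiency (%) = drug in NPs / drug added x 100."""
    return drug_in_nps_mg / drug_added_mg * 100.0

def cumulative_release(sample_conc_mg_ml, medium_ml=48.0, sample_ml=2.0, loaded_mg=0.7):
    """Cumulative release (%) from the PTX concentrations measured in the withdrawn
    2 mL samples, correcting for drug removed at earlier sampling points (assumed convention)."""
    percentages, removed = [], 0.0
    for c in sample_conc_mg_ml:
        in_medium = c * medium_ml          # drug currently in the 48 mL outside the dialysis bag
        percentages.append((in_medium + removed) / loaded_mg * 100.0)
        removed += c * sample_ml           # drug taken out with this 2 mL sample
    return percentages

# Illustrative use: 0.7 mg PTX in 10 mg of NPs gives a DLC of 7%, consistent in scale
# with the reported 7.2 +/- 0.4% at a PTX/PRES ratio of 1:5; the release values are hypothetical.
print(dlc(0.7, 10.0), dee(0.7, 0.95), cumulative_release([0.002, 0.006, 0.010]))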
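Similarly, the cell- and animal-level endpoints in Eqs. (3)–(7) are simple ratios; the sketch below collects them in one place. The example inputs are placeholders chosen only to show the calculation, not results from the study.

def cell_viability(abs_sample, abs_pbs):
    """Eq. (3): viability (%) relative to the PBS-treated control."""
    return abs_sample / abs_pbs * 100.0

def resistance_index(ic50_resistant, ic50_sensitive):
    """Eq. (4): RI of A549/PTX versus A549 cells for one formulation."""
    return ic50_resistant / ic50_sensitive

def combination_index(d1, d1x, d2, d2x):
    """Eq. (5): R = D1/D1x + D2/D2x; R < 1 indicates synergy."""
    return d1 / d1x + d2 / d2x

def tumor_volume(length_mm, width_mm):
    """Eq. (6): V = (L x W^2) / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2.0

def tumor_inhibitory_rate(tumor_wt_test, tumor_wt_pbs):
    """Eq. (7): TIR (%) based on excised tumor weight."""
    return (1.0 - tumor_wt_test / tumor_wt_pbs) * 100.0

# Placeholder inputs: an RI near 50 mirrors the scale reported for free PTX,
# and an R value of 0.3 would be scored as synergistic by the R < 1 criterion.
print(resistance_index(10.0, 0.2), combination_index(1.0, 5.0, 2.0, 20.0),
      tumor_volume(8.0, 5.0), tumor_inhibitory_rate(0.2, 1.1))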
Results and discussion 3.1. Synthesis and characterization of PRES As mentioned previously, RES is a natural ingredient present in plants, and its use results in almost no side effects. Hence, it was chosen as a model drug and copolymerized with a redox-cleavable disulfide linker to prepare the poly-prodrug (defined as PRES herein). The PRES-based NPs could be easily degraded in the intracellular environment of tumor cells. The synthetic route and structure of PRES are shown in . The samples were characterized using the ¹H NMR, GPC, MALDI-MS, and HPLC techniques. Analysis of the ¹H NMR spectrum recorded for PRES revealed that the hydrogen protons corresponding to DTPA were present in the region of 3.2–3.5 ppm. The broad peaks appearing in the range of 6.0–8.3 ppm could be attributed to the hydrogen protons present in RES. The extended peak shapes suggested the successful polymerization of RES (Zheng et al., ). The results obtained using the GPC technique revealed that the molecular weight (Mw) of PRES was 8706 Da (Table S1), with a narrow PDI of 1.2 ( and Table S1). Moreover, the MALDI-MS result showed that the average molecular weight of PRES was 8131 Da (Table S1). Because MALDI-MS is more accurate, the molecular weight of PRES was taken as 8131 Da. Analysis of the chromatogram recorded using the HPLC technique revealed the presence of a broad peak at 32.1 min (attributable to PRES). No extra peaks were observed, demonstrating the high purity of PRES. The GPC and HPLC techniques were used to investigate the redox-responsive ability of PRES. When PRES was incubated with 10 mM of GSH for 6 h, the molecular weight of PRES decreased significantly to ∼300 Da, comparable to the molecular weight of free RES. It was also observed that, following the treatment with 10.0 mM of GSH over 6 h, the retention time for the peak corresponding to PRES in the HPLC profile decreased significantly, and a new peak corresponding to free RES appeared, indicating the degradation of PRES into several fragments and free RES. The results suggest that the degradation of PRES is driven by the GSH-triggered cleavage of the disulfide bond linkages. The GSH-triggered PRES degradation mechanism has been presented in . The nucleophilic attack by GSH initiated the degradation of PRES and resulted in the formation of RES-SH and RES-S-SG (Zuo et al., ). GSH also reacts with RES-S-SG, resulting in the production of oxidized GSH (GSSG) and RES-SH. The nucleophilic -SH group could easily react with the adjacent ester bond, resulting in the rapid hydrolysis of the hydrophilic RES-SH units and the release of RES. 3.2. Preparation and characterization of NPs The size and morphology of NPs dictate their applicability in biological and biomedical fields. These properties also dictate the physiological and pathological conditions required for the efficient treatment of diseases (Zheng et al., ). We further investigated whether PRES could be used to construct redox-responsive NPs for on-demand drug release. It is well-known that PEGylated nanomedicines are highly stable and can be used for prolonged blood circulation (Zhao et al., ). The biocompatible DSPE-PEG3k was therefore used to achieve good stability and long systemic circulation. A series of NPs containing DSPE-PEG3k (10–50 wt.%; Table S2) was synthesized following a simple nanoprecipitation method.
The results revealed that PRES could co-assemble with DSPE-PEG3k (50 wt.%) to form spherical NPs characterized by a narrow PDI and an appropriate average hydrodynamic size (∼90 nm; , and Table S1). Thus, the optimal DSPE-PEG3k content was found to be 50 wt.%, which was used for further studies. Under these conditions, the RES loading level of the PRES NPs was calculated to be ∼33.5 wt.%. Additionally, the NPs were found to be highly stable in PBS or PBS containing FBS (10%): their sizes did not change significantly when the NPs were incubated in these media for more than 48 h. We hypothesized that PRES could be used as a drug carrier, and PTX was chosen as a model drug to confirm this. PTX is widely used in clinical settings as an anticancer drug, and it has been used for the treatment of breast cancer, lung cancer, and prostate cancer (Sofias et al., ). A series of PTX@PRES NPs was prepared at varying PTX-to-PRES mass ratios (PTX/PRES = 1:1, 1:3, 1:5, 1:7, or 1:9) to identify preparation conditions that provide good stability and high drug loading ability. The particle sizes of the prepared NPs ranged from 100 to 190 nm (Table S3), small enough to allow excellent tumor accumulation via the EPR effect-based passive targeting pathway (Kang et al., ). Interestingly, we observed that the NPs were characterized by the maximum DLC (7.2 ± 0.4%) and DEE (73.4 ± 4.6%) when the PTX/PRES ratio was 1:5. Hence, the PTX@PRES NPs with a PTX/PRES ratio of 1:5 were used for further studies. Analysis of the TEM images revealed that the PTX-loaded NPs were uniformly distributed and appeared spherical. The PTX@PRES NPs were highly stable, and significant changes in the particle sizes were not observed when they were treated with PBS (with or without 10% FBS) over a period of 48 h. This could help reduce the extent of undesired exposure of the parent drug in the blood circulation system and normal cells, thereby reducing the systemic toxicity of PTX. 3.3. In vitro drug release The concentrations of intracellular GSH (2–10 mM) and extracellular GSH (2–20 µM) are different. This redox diversity is an ideal stimulus that can be used to trigger the rapid release of intracellular drugs (Wang et al., ). PRES-based NPs could be easily degraded in the intracellular environments of tumors. To investigate this, the drug release abilities of the PTX@PRES NPs at various GSH concentrations were determined following the dialysis method. We used PBS (pH 7.4) containing GSH (20 µM or 10 mM) to simulate the blood circulation and intracellular environments, respectively, and 0.5% Tween80 (m/v) was added to increase the solubility of the drugs. As presented in , only trace amounts of PTX (∼6%) and RES (∼4%) were released from the PTX@PRES NPs following the incubation (time: 60 h) of the NPs in the medium simulating the blood circulation system (20 µM GSH). The results indicated the high colloidal stability of the NPs in the blood circulation system and normal cells. The stable nanostructure of the PTX@PRES NPs could alleviate the reduction and oxidation of the disulfide bonds, resulting in a decrease in the systemic toxicity of PTX. Interestingly, when the GSH concentration was increased to 10 mM (intracellular condition), more than 86 and 79% of PTX and RES, respectively, were released following incubation (time: 60 h). These results demonstrated the pronounced GSH-responsiveness of the NPs.
The results also revealed that the cargo was not prematurely released during blood circulation, whereas rapid and effective drug release could be realized in cancer cells in the presence of high levels of GSH. 3.4. Cellular uptake of NPs The NPs should effectively enter cancer cells to achieve efficient intracellular drug delivery. Coumarin-6, instead of PTX, was loaded into the PRES NPs as a fluorescence probe to determine cellular uptake. A549/PTX cells were treated with coumarin-6-loaded NPs at 37 °C (treatment time: 2 or 4 h), and the images were acquired using the CLSM technique. The nuclei were stained with DAPI (blue) for subcellular observation, and the green fluorescence from coumarin-6 was analyzed to visualize the location of the NPs following internalization by the A549/PTX cells. In the NP-treated group, time-dependent cellular accumulation was observed: the green fluorescent signal recorded at 4 h was significantly stronger than that recorded at 2 h. 3.5. In vitro cytotoxicity of NPs The in vitro cytotoxicity of each formulation was determined following the CCK-8 method, and the IC50 values were calculated. The RI value corresponding to PTX (against the A549/PTX and A549 cells) was ∼50.87, indicating the strong PTX resistance of the A549/PTX cells. The RI value corresponding to PTX + RES was 4.42; this decrease confirmed that RES could effectively reverse the resistance of the cells toward PTX. The RI value recorded for the PTX@PRES NPs was 4.28-fold less than that of free PTX, suggesting that the PTX@PRES NPs could effectively inhibit the growth of the drug-resistant cells. The PTX@PRES NPs exhibited the maximum cytotoxicity. The lower cytotoxicity recorded for PTX + RES (compared to that of the PTX@PRES NPs) can be attributed to the poor water solubility of the free drugs, while the cytotoxicity of the PRES NPs was higher than that exhibited by free RES, which could be attributed to the better water solubility of the NPs. The CI50 values calculated to estimate the synergistic effect were 0.32 and 0.21 for PTX + RES and PTX@PRES NPs against A549/PTX cells, respectively. This result demonstrated the synergistic effects of RES and PTX: the combined concentrations of the two drugs were significantly lower than the IC50 concentrations of the drugs used alone. 3.6. Pharmacokinetics and biodistribution Long blood circulation enables nanomedicines to accumulate at the tumor tissue through the EPR effect, increasing pharmacological activity (Tee et al., ). Therefore, PTX@PRES NPs were expected to markedly prolong the blood circulation of RES and PTX. The pharmacokinetics of PTX and PTX@PRES NPs were studied using SD rats as the animal model. The plasma PTX concentration vs. time curves after intravenous administration of PTX or PTX@PRES NPs are shown in . In the free PTX group, the maximum plasma PTX concentration was 17.1 µg/mL and decreased quickly. In contrast, the maximum PTX concentration in the PTX@PRES NPs group was 21.2 µg/mL, 1.2-fold higher than that of free PTX. Moreover, the PTX@PRES NPs prolonged the half-life of PTX from 4.5 to 10.2 h, demonstrating the longer blood circulation of the PTX@PRES NPs compared with free PTX. Additionally, an in vivo biodistribution study was further performed on A549/PTX tumor-bearing mice to explore the tumor-targeting ability of PTX@PRES NPs.
As exhibited in , at 24 h after injection, the concentration of PTX in the tumor tissue of the PTX@PRES NPs group was 3.5-fold higher than that of the free PTX group, providing evidence supporting the tumor-targeting ability of PTX@PRES NPs. 3.7. In vivo antitumor efficacy The antitumor efficacy was studied in mice bearing A549/PTX tumors to evaluate whether the use of PTX@PRES NPs could result in enhanced therapeutic efficacy. The mice were treated with saline, free PTX, free RES, free PTX + RES, PRES NPs, or PTX@PRES NPs at a PTX dose of 5 mg/kg. The formulations were administered four times through the tail vein. Tumors in the saline-treated group grew rapidly within 14 days. The three free-drug treatment groups (PTX, RES, and PTX + RES) exhibited moderate antitumor efficacy, and the TIR values of RES, PTX, and PTX + RES were 13.2, 15.7, and 16.4%, respectively. The maximum inhibition of tumor growth was observed in the mice treated with PTX@PRES NPs. The TIR of the PTX@PRES NPs was 82.3%, which was 1.3-, 1.4-, and 1.5-fold higher than that of the PTX-, PTX + RES-, and PRES NPs-treated groups, respectively. The PRES NPs also exhibited a moderate tumor suppression effect, with a TIR of 50.3% (when compared to the effect exhibited by the saline-treated group), suggesting that a high concentration of RES could suppress tumor growth to some extent. Further, the body weight of the mice in each group did not change significantly during the period of therapy, suggesting that the drug formulations did not exhibit severe systemic toxicity. Thus, the PTX@PRES NPs can potentially be used to develop an alternative strategy for treating MDR cancer cells.
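The half-lives quoted in Section 3.6 (4.5 h for free PTX versus 10.2 h for PTX@PRES NPs) summarize the plasma concentration–time curves, but the fitting procedure is not described. A common approach is log-linear regression of the terminal phase, sketched below in Python with hypothetical sampling points; this illustrates the general method only and is not a reconstruction of the authors' analysis.

import numpy as np

def terminal_half_life(time_h, conc_ug_ml):
    """Fit ln(C) = ln(C0) - k*t over terminal-phase points and return t1/2 = ln(2)/k."""
    slope, _ = np.polyfit(np.asarray(time_h, float),
                          np.log(np.asarray(conc_ug_ml, float)), 1)
    return np.log(2.0) / -slope   # slope of the log-linear fit equals -k

# Hypothetical terminal-phase samples (time in h, plasma PTX in ug/mL):
print(terminal_half_life([4, 8, 12, 24], [8.0, 4.6, 2.6, 0.5]))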
Conclusion In summary, a novel GSH-responsive polymer based on RES, termed PRES, was successfully prepared. PRES can self-assemble into nanoparticles for antitumor drug delivery (PTX@PRES NPs) and can also be used to reverse MDR. Results from in vitro and in vivo studies revealed that the PTX@PRES NPs were stable in blood and rapidly released their drugs under conditions of high GSH concentrations, and PRES effectively enhanced the drug sensitivity of drug-resistant cells. The RES-based polymer may therefore have potential applications in cancer therapy.
|
Dynamics of rice seed-borne bacteria from acquisition to seedling colonization | ba5ea21c-35f5-4323-ad49-f9ee0623c189 | 11613804 | Microbiology[mh] | Seeds serve as carriers, facilitating the transfer of microorganisms from parent plants to their offspring . As pioneer colonizers, seed bacteria likely impact microbiome assembly and function in subsequent generations . However, the distribution, origination, and transmission dynamics of rice seed-associated bacteria are not well-studied, especially their quantitative profiles. The seeds of some gramineous plants, such as rice and wheat, are covered by a pericarp that is closely connected and inseparable from the seed coat. This unique fruit structure of gramineous plant seeds is termed the caryopsis . The caryopsis is covered by two glumes, the lemma and palea, which are connected by rachilla. Below the glumes are two smaller glumes, the 1st and 2nd sterile lemmas. These accessory structures, along with the caryopsis, make up the rice grain . In agricultural production, rice crops are cultivated by planting rice grains instead of caryopses, making these accessory structures a potential bacterial reservoir for the next generation of plants . Bacteria have been identified to reside in various compartments of seeds, such as the seed coat, embryo, endosperm, and perisperm . 16S ribosomal RNA gene sequencing of rice seed compartments has shown that the outer surface of husks possesses higher richness and diversity of bacterial communities than the caryopsis . Studies isolating fungi, bacteria, and yeasts from the lemma and palea of wheat and barley reveal a higher abundance of filamentous fungi in these structures compared to the caryopsis . Bacterial seed-to-seedling transmission has been studied by allowing seedlings to grow in axenic culture, where seedlings may harbor microbial communities originating solely from seeds. A recent study, which involved planting hulled and unhulled rice seeds under axenic conditions, revealed that the seed coat acts as a microbial niche, limiting the taxonomic composition and diversity of bacterial communities in seeds and seedlings . Currently, our understanding of bacterial abundance, origination, and transmission associated with seed accessory structures remains limited. There are three proposed routes for the source and transmission of seed bacteriome : 1) the external route (horizontal transmission), where bacteria acquired from the environment colonize on and/or within seeds; 2) the internal route (vertical transmission), where endophytic bacteria enter seeds via the xylem or nonvascular tissue; 3) the floral route, where both environmentally-acquired and endophytic bacteria colonize on and/or within seeds via the flower. These routes, however, have largely yet to be experimentally validated. Seed-to-seed transmission of bacterial communities across generations has been reported in Setaria viridis , Crotalaria pumila , rice, radish and tomato . These studies compared community dissimilarity between the harvested/sown parent seeds and the progeny seeds, suggesting vertical transmission of seed-borne microbes across two or three generations. For example, previous studies on the longitudinal transmission of the bacterial community from seed to seed in rice sequenced the microbiota composition of developing and developed rice seeds in two consecutive years . 
The authors found that the bacterial compositions of progeny seeds were similar to those of parent seeds, identifying the parental seeds and stem endosphere as major sources of progeny seed microbial communities. However, microbiota overlap alone does not prove transmission . To confirm which route primarily contributes to the establishment of seed-borne bacteria, methods to differentiate these routes are needed. Tracking studies using fluorescent-labeled bacteria offer one of the most precise methods for determining transmission routes . For example, a constitutively eGFP-expressing bacterial pathogen, Clavibacter michiganensis subsp. michiganensis GCMM-22, was used to demonstrate that bacteria could access tomato seeds through two routes: an internal route via the xylem and an externally route via tomato fruit lesions . Moreover, further quantification of microbiota population sizes may enhance our understanding of how these vertically transmitted bacteria assemble and contribute to the plant microbiota. In comparison to vegetative organs, seeds generally harbor bacterial communities with less diversity . Proteobacteria is the dominant phylum found in a variety of plant seeds, with common genera including Bacillus , Pseudomonas , Paenibacillus , Micrococcus , Pantoea , Enterobacter , Stenotrophomonas , Xanthomonas , Cellulomonas, and Acinetobacter . The genera Pantoea , Methylobacterium , Sphingomonas , and Pseudomonas were identified as the main members of rice seeds and can stably exist in both developing and mature seeds . Many bacteria residing in seeds are challenging to cultivate and may be inactive or dormant , making detection methods at the nucleic acid level, that are independent of culture, more effective in evaluating the structure and population size of the microbiota . However, bacterial 16S rRNA gene sequencing has a limitation: it co-amplifies plant organellar DNAs, especially in plant tissues that contain limited numbers of bacteria. Recently, several improved methods have been developed to specifically amplify bacteria-specific 16S rRNA gene sequences from plant whole genomic DNAs , enhancing the use of 16S rRNA gene sequencing in deciphering seed bacterial communities. In this study, we found that rice seeds harbor abundant bacteria within their accessory structures, while caryopses themselves are almost bacteria-free. Rice seeds primarily acquire bacteria through external routes during panicle heading and flowering, and transfer these bacteria to different organs of the seedlings upon germination. In contrast, bacteria originating solely from internal routes are unable to establish a diverse bacterial community in the corresponding seedlings. These novel insights into seed-borne bacteria will promote the development of next-generation breeding strategies from the perspective of engineering the seed microbiome. Abundant seed-associated bacteria concentrated between caryopsis and glumes in rice grain Rice cultivation involves planting the whole grain rather than just the caryopsis. Consequently, we investigated the bacterial community within the entire rice grain (referred to as ‘internal grain’) and compared it to the community within the isolated caryopsis (referred to as ‘internal caryopsis’). We quantified bacterial abundance in rice seeds from different cultivars and planting areas using quantitative PCR (q-PCR). 
Q-PCR revealed that internal grains harbored an average bacterial abundance of 1.88 × 10^4, while the internal caryopsis harbored undetectable levels of bacteria (Fig. A). This trend of higher bacterial abundance in the internal grain compared to the internal caryopsis was consistent across different locations and cultivars (Fig. B). To further illustrate the distribution of bacteria in rice seeds, we quantified bacterial abundance in different grain compartments using Nipponbare seeds six weeks after harvest. Aligning with the above results, the internal grain contained an average of 9.86 × 10^3 bacteria, while most caryopses had undetectable levels (Fig. C). Removing the caryopsis from the internal grain yielded the remaining seed accessory structures, which contained a bacterial abundance of 6.61 × 10^3, comparable to the level in the internal grain (Fig. C). Further separation and measurement within the accessory structures revealed an average abundance of 4.68 × 10^2 bacteria (Fig. C). These results indicated that most bacteria reside in the region between the caryopsis and glumes (Fig. D). Culture-dependent methods confirmed the presence of culturable bacteria in this niche (Fig. S ). 16S rRNA gene sequencing was utilized to analyze the bacterial community structure within the internal grain; grains six weeks after harvest were used for this analysis. At the phylum level, Proteobacteria predominated the community (99.6%, Fig. E). At the genus level, the dominant members included Pantoea (66.4%) and Pseudomonas (30%), followed by Sphingomonas (2%) and Aureimonas (0.6%) (Fig. F). These findings align with previous characterizations of seed-associated bacteria. The observed Ace index at the operational taxonomic unit (OTU) level was under 30 (Fig. G), significantly lower than reported for rice leaves and roots, indicating relatively low species richness in the rice seed microbiome. Collectively, our findings demonstrate an abundance of bacteria carried by rice grains, emphasizing the crucial role of the entire grain, including both the caryopsis and seed accessory structures, in housing the seed-borne bacterial community. Notably, the caryopsis itself contains minimal bacteria, underscoring the importance of the structures surrounding the caryopsis in nurturing this vital bacterial community. Seed-borne bacterial community remains stable during development After confirming the bacterial community carried by mature rice seeds during storage, we further investigated the abundance and composition of the seed-associated bacterial community throughout seed development. Grains were examined at 0, 7, 15, 24, and 40 days after pollination (DAP), representing flowering to physiological maturity. Q-PCR revealed consistent bacterial abundance within internal grains across developmental stages, reaching a peak of 7.34 × 10^5 cells per seed at physiological maturity (Fig. A). Conversely, internal caryopses exhibited negligible bacteria throughout development (Fig. A). 16S rRNA gene sequencing analysis revealed remarkable similarity in taxonomic compositions across stages. Proteobacteria dominated (> 90%), followed by Firmicutes and Actinobacteria (Fig. B). Community size standardized to 1.00 at 0 DAP showed coefficients of 1.22, 3.38, 3.02, and 7.48 at 7, 15, 24, and 40 DAP, respectively (Fig. C). The dominant genera, including Pantoea, Pseudomonas, and Sphingomonas, remained prevalent throughout seed development, collectively comprising over two-thirds of the bacterial community.
The remaining third was made up of genera such as Agrobacterium, Xanthomonas, Methylobacterium, and Aureimonas. Notably, the abundance of Pantoea increased dramatically, exceeding 50% by the physiological maturity stage (Fig. C). Venn analysis showed that 78% of OTUs were shared among all stages, with no unique OTUs (Fig. D), indicating minimal developmental impact on bacterial taxa. The bacterial taxonomic composition and abundance throughout development were similar to those of storage-stage seeds (Fig. F-G). The Ace index at the OTU level remained low throughout (Fig. E), consistent with the storage-stage seeds (Fig. H). These findings demonstrate that the seed-borne bacterial community maintains its composition and abundance throughout development and storage, with minor fluctuations in population size across stages. Seed bacteria predominantly originate from the external environment during panicle heading and flowering To determine the primary sources of rice seed-borne bacteria, we established a control condition in which seeds from greenhouse-grown plants were not inoculated with bacteria. In this scenario, rice seeds at the panicle heading and flowering stages had minimal to no detectable bacteria on their surface (Fig. A). Subsequent greenhouse cultivation revealed very low internal grain bacterial levels (~ 100 cells/seed) across developmental stages (Greenhouse, Fig. B). These results provide evidence that, if an internal route for bacterial acquisition exists in rice seeds, it plays only a minor role in shaping the abundant bacterial community typically found within mature seeds. We also examined the bacterial abundance in seeds at the panicle heading and flowering stages harvested from paddy field-grown plants. SEM observation revealed high abundances of bacteria residing on their surface (Fig. C). Subsequent cultivation in the paddy field revealed high levels of bacterial colonization in the internal grains (Paddy field, Fig. B). These results strongly support the external environment as a major source of the rice seed-borne microbiota. To further rule out the internal route’s role under paddy field cultivation conditions, rice plants were transplanted from the paddy field to a greenhouse before the booting stage and grown until seed harvest. No bacteria were observed on the exterior surface of the seeds of the transplanted plants when they reached the flowering stage (Fig. D). Seeds harvested from the transplanted plants also exhibited significantly lower bacterial levels than those grown in the paddy field (Fig. B), further supporting the link between abundant seed-borne microbiota and bacterial acquisition from the external environment. Notably, the consistent presence of abundant bacterial communities in seeds grown under natural field conditions suggests that specific environmental factors in the field play a crucial role in bacterial acquisition. However, it remains unclear why the transplanted plants harbored a seed bacterial abundance tenfold higher than that of the greenhouse-grown plants. We then used SEM to detect bacteria on the surface of the panicles during the late booting stage, just before the panicle becomes visible outside the stem. Very few bacteria resided on the panicle surface at this stage (Fig. E). However, bacterial colonization increased significantly during the panicle heading and flowering stages (Fig. E). Q-PCR analysis corroborated these findings (Fig. F), indicating that seeds primarily acquire bacteria during the panicle heading to flowering stages.
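Returning to the Venn analysis of shared OTUs described earlier in this section, the comparison reduces to simple set operations on the OTUs detected at each developmental stage: how many OTUs are shared by every stage, and whether any OTU is confined to a single stage. The sketch below illustrates that logic with fabricated OTU identifiers; the real analysis used the full OTU table from the 16S sequencing data.

```python
# Fabricated OTU presence/absence sets per developmental stage (illustrative only).
stage_otus = {
    "0DAP":  {"OTU1", "OTU2", "OTU3", "OTU4"},
    "7DAP":  {"OTU1", "OTU2", "OTU3", "OTU5"},
    "15DAP": {"OTU1", "OTU2", "OTU3", "OTU4", "OTU5"},
}

shared = set.intersection(*stage_otus.values())  # OTUs detected at every stage
union = set.union(*stage_otus.values())          # OTUs detected at any stage
print(f"shared across all stages: {len(shared)}/{len(union)} ({len(shared)/len(union):.0%})")

# OTUs unique to a single stage (the study reports none for the real data)
for stage, otus in stage_otus.items():
    others = set.union(*(o for s, o in stage_otus.items() if s != stage))
    print(stage, "unique OTUs:", sorted(otus - others) or "none")
```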
To definitively identify the stages at which bacteria colonize the seeds, we created a synthetic bacterial community (SynCom) composed of six dominant seed-borne bacteria isolated from paddy field grains. The SynCom included three strains of the genus Pantoea and one strain each from Pseudomonas, Sphingomonas, and Agrobacterium. This SynCom reflected the major taxa found in grain bacterial communities (Fig. ). The six strains were cultured to log phase and mixed in equal amounts at 10^8–10^9 CFU/mL. Under greenhouse conditions, this SynCom was then sprayed onto rice panicles at three different developmental stages: flowering, grain filling, and physiological maturity (Fig. A). SynCom inoculation at the flowering stage yielded abundant bacterial colonization throughout seed development, with levels reaching 10^6–10^7 bacteria per seed at physiological maturity (Fig. B). However, SynCom application during the grain filling or physiological maturity stages resulted in only a temporary increase in bacterial abundance within 3 days post-inoculation. Following this period, bacterial levels remained low (around 10^2 bacteria per seed) (Fig. B). These results further demonstrate that inoculation no later than the flowering stage is necessary for rice seeds to acquire bacteria from the external environment and establish their bacterial communities. Seed-borne bacteria establish abundant communities in the seedlings To investigate the transmission of seed-borne bacteria to seedlings, we designed an axenic cultivation system for rice plants (Fig. A). Both caryopses and grains were surface-sterilized and grown for 10 days, followed by bacterial abundance quantification using qPCR. Seedlings grown from caryopses harbored a small number of bacteria (0–10^4 bacteria per gram fresh weight, g^-1 FW) (Fig. B). In stark contrast, seedlings grown from grains exhibited significantly higher bacterial levels, reaching 10^6–10^7 g^-1 FW in shoots and 10^7–10^8 g^-1 FW in roots (Fig. B). These findings demonstrate the successful transmission of bacteria from the internal grain to the next-generation seedling. Any bacteria present in the internal caryopsis cannot establish abundant bacterial communities in either the shoot or the root. To further verify this transmission, a grain microbiota transplantation experiment was conducted. Surface-sterilized caryopses and grains were planted alternately in the axenic system. The results revealed a marked increase in the bacterial abundance of caryopsis-derived seedlings when planted adjacent to grains. The bacterial population in the shoot reached 10^6–10^7 bacteria g^-1 FW, while roots harbored around 10^8 bacteria g^-1 FW, indicating the successful acquisition of bacteria from adjacent grains (Fig. B). This further confirms that seed-borne bacteria can be transmitted to seedlings from the internal grains via the external environment. Furthermore, surface-sterilized caryopses were planted in the axenic system. Prior to planting, 40 mL of the SynCom suspension (10^6 CFU/mL) was added to the axenic cultivation system containing approximately 50 mL of soil. Ten days after planting, qPCR analysis revealed that the bacterial abundance in seedling shoots and roots was at levels comparable to those found in seedlings germinated from grains (Fig. B). These findings indicate that externally supplied bacteria can be successfully acquired by germinating rice seeds to establish a robust bacterial community.
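The dose delivered in the axenic-system inoculation above follows directly from the suspension volume and concentration. The short calculation below reproduces that arithmetic; the figure of eight sown caryopses per pot is taken from the Methods and is applied here only for illustration.

```python
suspension_ml = 40        # SynCom suspension added to the pot (mL)
conc_cfu_per_ml = 1e6     # working concentration stated in the text (CFU/mL)
soil_ml = 50              # approximate soil volume in the axenic system (mL)
seeds_per_pot = 8         # caryopses sown per pot (from the Methods)

total_cfu = suspension_ml * conc_cfu_per_ml
print(f"total inoculum: {total_cfu:.1e} CFU")                  # 4.0e+07 CFU
print(f"per mL of soil: {total_cfu / soil_ml:.1e} CFU")        # 8.0e+05 CFU
print(f"per sown seed:  {total_cfu / seeds_per_pot:.1e} CFU")  # 5.0e+06 CFU, within 10^6-10^7
```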
Finally, to visualize the spread of bacteria from seeds to seedlings, we introduced an eGFP gene into one of the seed-borne bacterial strains, Pantoea eucrina SY1 (Fig. C). This strain was designated PeSY1-GFP (Fig. C). We utilized SEM and confocal laser scanning microscopy (CLSM) to observe the distribution of PeSY1-GFP in seedlings. The strain was inoculated into the soil of the axenic system, and surface-sterilized caryopses were planted (Fig. D). SEM confirmed the absence of bacteria on the surface of shoots and roots of the seedlings cultured in the axenic system, verifying bacteria-free conditions (Fig. S ). In contrast, PeSY1-GFP inoculation resulted in bacterial colonization on the surface of both seedling shoots and roots (Fig. S ). Notably, CLSM revealed the presence of PeSY1-GFP not only on the root surface but also within the internal spaces (Fig. E). Bacterial communities adapt and diversify from seed to seedling 16S rRNA gene sequencing was performed to investigate the dynamics of bacterial communities in rice grains, seedling shoots, and seedling roots. Venn analysis revealed that 19 out of 20 OTUs from rice grains successfully colonized the seedling shoots or roots (Fig. A), indicating that most seed-borne bacteria established themselves in the seedlings. While the overall taxonomic composition remained similar, the relative abundance of specific genera displayed remarkable changes. Notably, dominant grain bacteria like Pantoea and Pseudomonas declined significantly in seedlings, while Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Stenotrophomonas, Sphingomonas, Acidovorax, and Aureimonas emerged as the major members (Fig. B, Fig. S A). Ace index values indicated that community richness was unchanged from seeds to seedlings (Fig. C). However, Shannon index values revealed that seed microbiome diversity was significantly lower than in seedling organs (Fig. D). This indicates that although community richness was maintained from seeds to seedlings, community diversity increased markedly. PCoA based on Bray–Curtis distances further confirmed significant divergence in community structure between the three compartments (Fig. E and S3B-G). Here, PC1 captured the highest variance between grains and seedlings, while PC2 reflected the distinction between seedling shoots and roots. To gain deeper insights into functional shifts, we used the BugBase algorithm to predict high-level phenotypes in the different microbiome samples. Remarkably, 6 phenotypes were significantly different between the grain and seedling microbiomes. The grain microbiome exhibited a higher proportion of stress-tolerant, biofilm-forming, potentially pathogenic, facultatively anaerobic, and mobile element-containing bacteria compared to the seedling microbiome, which showed a higher proportion of Gram-negative and aerobic bacteria (Fig. F and S4). Since most seedling bacteria originate from seeds (Fig. A), we further compared bacterial profiles between rice seeds and leaves throughout the seed developmental stages: flowering, grain filling, and maturity. Under paddy field conditions, leaves and seeds acquire their microbiota through different routes. 16S rRNA gene sequencing revealed distinct bacterial compositions between seeds and leaves at each stage (Fig. S A-D). Both Ace and Shannon indices were significantly different between seeds and leaves at each stage, denoting divergent community richness and diversity (Fig. G-H). Moreover, 30% of leaf bacterial genera were not shared with seeds (Fig. I).
This represents substantial differences in taxa between plant leaves and developing seeds. Comparing phenotype differences during seed development, we found seeds throughout their development harbored a higher proportion of stress-tolerant, biofilm-forming, potentially pathogenic, facultatively anaerobic, and mobile element-containing bacteria, while leaves consistently harbored more Gram-negative, aerobic, and Gram-positive bacteria (Fig. J and S5E-F). These results largely mirror those obtained from the seeds vs. seedlings comparison (Fig. F and J). These findings collectively demonstrate significant reshaping and diversification of bacterial community composition and function during the seed-to-seedling transition. This dynamic adaptation likely reflects the bacterial response to changing environmental conditions faced within the developing plant.
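The ordination reported above relies on Bray–Curtis dissimilarities followed by principal coordinates analysis (PCoA). The sketch below performs both steps on a toy genus-level count table; the counts are fabricated to mimic a grain/shoot/root contrast and are not data from this study.

```python
import numpy as np

# Toy count table: rows = samples (grain, shoot, root), columns = genera (fabricated).
counts = np.array([
    [660, 300,  20,  10,   5,   5],   # grain
    [ 50,  40, 200, 300, 250, 160],   # shoot
    [ 30,  20, 250, 280, 300, 120],   # root
], dtype=float)

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two count vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

n = counts.shape[0]
D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)] for i in range(n)])

# Classical PCoA: double-centre the squared distance matrix, then eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1e-12
coords = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # sample coordinates on PCo axes
explained = eigvals[keep] / eigvals[keep].sum()

print("variance explained per axis:", np.round(explained, 2))
print("sample coordinates:\n", np.round(coords, 3))
```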
This study elucidates the distribution and composition of bacterial communities associated with rice seeds, and reveals the dynamics of their acquisition, maintenance, and transmission between generations (Fig. ). We clarified that while the rice grain contains abundant bacteria residing in the interspace between the caryopsis and glumes, the caryopsis itself is not the main reservoir. Therefore, the entire grain structure is essential for harboring the seed-borne bacterial community. Grains mainly acquire bacteria during panicle heading and flowering. The bacterial taxonomic composition remains relatively stable throughout seed development and storage, while population sizes vary severalfold. Upon germination, these seed-borne bacteria can spread into different organs of the newly formed seedlings. Depending on the shoot or root environments, the bacterial community structures are reshaped. It is not conclusively confirmed whether all rice caryopses contain endobacteria. However, it is clear that even if present, these endobacteria are unable to form abundant bacterial communities in germinated seedlings. Regarding the relationship between seed-borne bacteria and seedling bacterial communities, previous studies have shown some differing results. For example, surface-sterilized wheat seeds did not yield any colonies, yet colonies were recovered from seedlings derived from surface-sterilized seeds grown under axenic conditions. Five days of growth increased the endophyte densities to a range between 8.0 × 10^3 and 1.6 × 10^5 CFU g^-1 FW.
The endophytes from maize kernels increased to 10^1–10^2 CFU g^-1 FW and 10^5–10^8 CFU g^-1 FW in day-2 and day-7 seedlings, respectively, indicating significant amplification of seed-associated bacteria during plant development. Multiple lines of evidence confirm the external origin of seed-borne bacteria and demonstrate how the planting environment and the timing of bacterial inoculation are critical factors for the acquisition and establishment of seed-borne bacterial communities. 1) External origin: Under greenhouse conditions where seeds are not inoculated with bacteria, seeds remain essentially bacteria-free (Fig. A-B). This indicates that potential internally transmitted bacteria are incapable of establishing significant seed-borne communities. This finding strongly suggests the external environment as the primary source of seed-borne bacteria. 2) Inoculation during panicle heading and flowering: Spraying the SynCom onto panicles at the flowering stage results in successful bacterial acquisition and establishment of robust seed-borne communities (Fig. ). This directly demonstrates the efficacy of external inoculation in shaping the seed-borne microbiome. 3) Field-grown seeds: Seeds grown under natural field conditions consistently harbor abundant bacterial communities (Figs. , , and ), suggesting that specific environmental factors present in the field are crucial for bacterial acquisition. 4) Limited inoculation window: SynCom inoculation after flowering fails to establish significant seed-borne microbiota (Fig. B). This indicates that a specific developmental window of opportunity exists for successful bacterial colonization, emphasizing the importance of timing external inputs. It is interesting to investigate whether seeds contain core bacterial taxa and whether these pioneer taxa drive priority effects influencing the assembly of the plant microbiota. In this study, through 16S rRNA gene sequencing analysis, we found that rice grains at different developmental stages contained similar bacterial compositions, with the genera Pantoea, Pseudomonas, and Sphingomonas showing high relative abundance (Fig. G, C, and S5B). These results are consistent with previous studies, suggesting rice plants may exert significant selection on the seed microbiota. In a previous study analyzing bacterial communities from 99 cultivated rice varieties by 16S rRNA gene sequencing, 15 generalist core taxa were found to be shared among all samples collected from rice seeds, root endospheres, and rhizospheres. Seven of these core taxa were annotated as Pantoea, Pseudomonas, and Sphingomonas. These three genera have also been identified as core taxa in the seed bacterial communities of other plants, such as Salvia miltiorrhiza, maize, bean, Triticum spp., and Brassica spp. Therefore, although seed microbiota composition is affected by plant species, genotype, and seed developmental stage, some core taxa at low taxonomic levels appear to exist across the seed microbiota of various plants. As a niche-specific microhabitat with limited freely available nutrients and high osmotic pressure, seeds can impose strong selective pressures on their microbial inhabitants. Studies have also revealed certain shared characteristics among endophytic bacteria residing in seeds, including tolerance to high osmotic pressure, endospore formation capabilities, and amylase activity – traits not typically found in other plant microhabitats.
BugBase predicted 6 significantly different organism-level microbiome phenotypes between seeds and seedlings (Fig. and S4-S5), reflecting the environment-dependent spread of bacteria from seeds to seedlings. Moreover, BugBase predicted all 7 phenotypes as significantly different between developing seeds and leaves, with difference patterns largely mirroring the seed vs. seedling comparison (Fig. and S4-S5). This further supports a tight linkage between microbiome phenotypes and tissue environments. Internal grains contained more facultatively anaerobic bacteria, while both seedlings and leaves contained more aerobic bacteria (Fig. and S4-S5). This may be because seeds primarily depend on anaerobic respiration during development and storage, while leaves support more aerobic bacteria due to atmospheric oxygen exposure. Seedling roots also harbored more aerobic bacteria given the shallow soil depth during germination. The higher abundance of biofilm-forming bacteria residing in the caryopsis–glume interspace may facilitate colonization and protect bacteria from desiccation and harsh conditions. The functions of these differential phenotypes warrant further investigation. The extent to which the seed microbiota contributes to the plant microbiota community composition has been less investigated. Most studies have found that the seed microbiota contributes to root endosphere community composition, but the covariation between the seed and endosphere communities tends to be relatively weak; notably, most seed-borne bacterial taxa are absent from root samples. One hypothesis is that during seed germination, small population sizes and abundant inactive or dormant bacteria struggle to adapt to the rapidly changing habitat, fail to efficiently colonize new niches, and thus minimally impact plant microbiota assembly. Finding that rice leaves in the reproductive phase harbor microbiome phenotypes similar to those of seedlings germinated from surface-sterilized grains under axenic cultivation lends some support for plant tissue as a determinant of microbiome phenotypes. Using an axenic cultivation system whereby the microbiota originate solely from the planted seeds, we demonstrated that seed-borne bacteria can spread to various seedling tissues, though with dramatically altered relative abundances. Using rice seeds as a model system, we have elucidated the major niches of the seed bacterial community, revealed the dynamics of community acquisition, maintenance, and transmission between generations, and demonstrated dramatic reshaping of the community in seedling tissues. To our knowledge, this is the first study to establish an axenic cultivation system to quantify bacterial dispersal and transmission in plant seeds. These novel findings substantially improve our understanding of microbial community dynamics in plant ecosystems. Further investigation of beneficial microorganisms in crops will enable optimized management practices to promote plant health, an essential step toward improving agricultural sustainability. Rice seeds and cultivation The seeds of six rice varieties—three japonica (Longjing 31, Jijing 88, and Kongyu 131) and three indica (Huanghuazhan, Shuangkangmingzhan, and 9311)—were harvested from paddy fields in Lingshui, Hainan and Wuqing, Tianjin. Additionally, Nipponbare seeds were harvested from Shunyi, Beijing. These harvested seeds were all kept in the dark and stored under dry conditions until their DNA was extracted.
The Nipponbare seeds planted in Shunyi rice paddy fields were collected at 0, 7, 15, 24, and 40 days after pollination (DAP) and stored at -80 °C immediately after collection until DNA extraction. The 40-DAP seeds were mature and were harvested and stored in the dark under dry conditions until DNA extraction. All these rice seeds are listed in Supplemental Table 1. The rice plants in the field were grown in paddy fields in Lingshui, Wuqing, and Shunyi, each with over 10 years of rice planting history. The greenhouse used for rice planting was a room with a glass roof. Rice was planted in cement ponds measuring 2 m long, 1.5 m wide, and 1 m deep. The light source was sunlight, and the greenhouse was naturally ventilated through skylights. Rice plants were watered with tap water through a water pipe directly into the cement pond. Rice plants transplanted from the paddy field to the greenhouse were grown in Shunyi, Beijing for the entire vegetative growth stage, were then transplanted to the greenhouse right before the booting stage, and completed subsequent reproductive development until seed maturity in the greenhouse. Surface sterilization of rice seeds and DNA extraction from plant tissues Rice grains, caryopses, and seed accessory structures were placed into a 50-ml Falcon tube, washed in 70% ethanol for 1 min, washed in 1.5% sodium hypochlorite (prepared in ddH2O) on a shaking platform at 30 rpm for 40 min, and then washed four times in sterile ddH2O. The ethanol, sodium hypochlorite, and sterile ddH2O were all handled in a biological safety cabinet. After surface sterilization, four rice grains, caryopses, or seed accessory structures were added to a 2.0 ml centrifuge tube and ground into a fine powder using a SPEX Sample Prep (Geno Grinder, USA). Separately, shoot and root samples of offspring seedlings were collected at two and three weeks after planting in the axenic system. The root samples were washed in sterile ddH2O to remove any attached soil. Both the shoot and root samples were cut into pieces, put into 2.0 ml centrifuge tubes, and ground to a fine powder using a SPEX Sample Prep. DNA was extracted according to a previously described method. DNA was dissolved in ddH2O and stored at -20 °C until further analysis. Quantitative PCR to determine bacterial abundance in rice tissues The primer pair 322F-A (5’-ACGGHCCARACTCCTACGGAA-3’) and 796R (5’-CTACCMGGGTATCTAATCCKG-3’) was used to amplify the bacterial 16S rRNA gene from total plant genomic DNA to determine the abundance of bacteria in rice tissues. Based on the standard curve of bacterial abundance in rice tissues, quantitative PCR (qPCR) was performed to determine the absolute bacterial abundance of rice tissues, including shoots, roots, caryopses, grains, and seed accessory structures. SYBR-Green qPCR (25 µl) was performed according to the manufacturer’s instructions (Bio-Rad, Hercules, CA, USA). 16S rRNA gene amplification The primer set 322F-A/796R was used to amplify the bacterial 16S V3–V4 region. PCR mixtures were made in a volume of 30 µl containing 200 µM dNTPs, 0.2 µM each primer, 2 U Platinum Taq DNA polymerase (Invitrogen, USA), and 1.5 mM MgCl2. The concentration of the template DNA depended on the sample type, with 120 ng/30 µl for rice seed and shoot DNA and 30 ng/30 µl for root DNA.
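The absolute per-seed and per-gram abundances reported in the Results were derived from these qPCR readings via the bacterial-abundance standard curve cited above. As a minimal illustration of that conversion, the sketch below turns hypothetical Ct values into 16S rRNA gene copies per seed; the slope, intercept, elution volume, and template volume are placeholder assumptions rather than values reported in this study, while the pooling of four grains per DNA extraction follows the surface-sterilization protocol above.

```python
# Hypothetical standard-curve parameters relating Ct to log10(16S copies per reaction).
SLOPE = -3.32      # ~ -3.32 corresponds to ~100% amplification efficiency (assumed)
INTERCEPT = 38.0   # Ct expected for a single 16S copy (assumed)

def copies_per_reaction(ct: float) -> float:
    """Convert a qPCR Ct value into 16S rRNA gene copies in one reaction."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def copies_per_seed(ct: float, elution_ul: float = 50.0,
                    template_ul: float = 2.0, seeds_per_extraction: int = 4) -> float:
    """Scale reaction-level copies up to the whole DNA extract, then divide by the
    number of seeds pooled per extraction (four grains per tube, as described above)."""
    per_extract = copies_per_reaction(ct) * (elution_ul / template_ul)
    return per_extract / seeds_per_extraction

if __name__ == "__main__":
    for ct in (24.5, 27.1, 33.0):
        print(f"Ct = {ct:>4}: ~{copies_per_seed(ct):.2e} 16S copies per seed")
```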
The following PCR conditions were used: initial denaturation at 94 °C for two min, followed by 34 (for seed and shoot) or 30 (for root) cycles consisting of denaturation (94 °C for 30 s), annealing (56 °C for 30 s), and extension (72 °C for 30 s), followed by a final extension step at 72 °C for five min. Each sample was amplified in triplicate. The PCR products were run on a 1% agarose gel to ensure successful amplification. No visible amplification was observed from the negative control (no template added). Illumina next generation sequencing The primer barcode sequences were synthesized by Majorbio Bio-pharm Technology Co., Ltd. (Shanghai, China). PCR products were purified and amplicon libraries were constructed at Majorbio. Paired-end (2 × 300 bp) sequencing was conducted using the Illumina MiSeq PE300 platform. The paired reads were merged into single sequences based on a 10 bp overlap. 16S rRNA gene sequences were processed in QIIME 1.9.1 ( http://qiime.org/install/index.html ). Sequences with quality scores < 20 were discarded. Tags were merged with Fast Length Adjustment of Short Reads (v. 1.2.11). All samples had > 30,000 effective sequences. Effective tags were clustered into 97% identity operational taxonomic units (OTUs) using the Usearch program v. 7.0.1090. All samples had a coverage index > 0.97. Representative sequences for each OTU were selected by UPARSE. OTUs aligned to chloroplast and mitochondrial sequences were removed during clustering. OTUs with ≥ 5 sequences in at least 3 samples were retained. Taxonomic classification of representative sequences was performed using the Ribosomal Database Project’s classifier in QIIME with the default parameters. Statistical analysis All bioinformatics analyses of 16S rRNA gene sequencing data were performed on the online Majorbio I-Sanger Cloud Platform ( http://cloud.majorbio.com ). To reveal the richness and diversity of the bacterial communities in the different amplicon libraries, both ACE and Shannon indices were calculated using Mothur ( https://mothur.org/wiki/calculators/ ). Student’s t-test was performed to compare the values of the alpha diversity indices ( P < 0.05). To visualize the beta diversity pattern of bacterial communities, principal coordinate analyses were conducted using the prcomp function based on the Bray–Curtis distance algorithm. Analysis of similarity (ANOSIM) was performed to test the significance of bacterial community dissimilarity. Venn diagrams at various taxonomic levels were generated using the lapply function. To account for differences in microbial community size among sample types, each experiment included 12 to 32 replicate samples, and each seed (caryopsis or grain) sample contained 3–4 caryopses or grains. To predict high-level phenotypes present in our microbiome samples, BugBase was run on the 16S rRNA sequencing data. BugBase analysis focused on seven common traits of most prokaryotic organisms, including Gram-negative and Gram-positive delineation, biofilm formation, pathogenic potential, presence of mobile elements, oxygen utilization, and stress tolerance. We applied the non-parametric Kruskal–Wallis test to compare the results from BugBase.
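Two of the filtering rules above — discarding OTUs assigned to plant organelles and retaining only OTUs with at least 5 sequences in at least 3 samples — map onto straightforward table operations. A minimal pandas sketch on a fabricated OTU table is shown below; it is not the pipeline used in the study, only an illustration of the same filters.

```python
import pandas as pd

# Fabricated OTU table: rows = OTUs, columns = samples, values = read counts.
otu = pd.DataFrame(
    {"S1": [120, 4, 0, 9], "S2": [80, 6, 1, 7], "S3": [95, 2, 0, 11], "S4": [60, 0, 0, 5]},
    index=["OTU1", "OTU2", "OTU3", "OTU4"],
)
taxonomy = pd.Series(
    {"OTU1": "Pantoea", "OTU2": "Chloroplast", "OTU3": "Mitochondria", "OTU4": "Pseudomonas"}
)

# 1) Drop OTUs assigned to plant organelles.
organelle = taxonomy.str.contains("Chloroplast|Mitochondria", case=False)
otu = otu.loc[~organelle]

# 2) Keep OTUs with >= 5 reads in at least 3 samples (the threshold used above).
otu = otu.loc[(otu >= 5).sum(axis=1) >= 3]
print(otu)
```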
Scanning electron microscope (SEM) After surface sterilization, the rice tissues were cut into several pieces under axenic conditions to expose the internal profile. For sample fixation, the tissue pieces were washed twice with ddH2O, fixed in 4% paraformaldehyde at 4 °C for 3–7 days, and then washed with ddH2O three times for 7, 8, and 9 min, respectively. The fixed samples were dehydrated in a graded ethanol series (50%, 70%, 85%, 95%, and 100%; 16 min immersion at each concentration). The 100% ethanol dehydration step was repeated three times. After dehydration, the specimens were critical-point dried with liquid CO2 (Leica EM CPD300; Leica, Hanau, Germany) and sputter coated with gold–palladium (E-1045 ion sputter; Hitachi, Tokyo, Japan). The samples were observed under a Quanta200 scanning electron microscope (FEI, Hillsboro, OR, USA). Axenic culture system and grain microbiota transplantation system Flower nutrient soil and vermiculite were mixed thoroughly at a ratio of 1:3. The mixed soil was put into a pot, and the pot was sealed with a filter membrane. The entire device was autoclaved at 121 °C for 60 min. When rice seeds were planted, the entire operation was conducted in a biosafety cabinet, and all relevant instruments were disinfected in advance. Sterile distilled water was added to moisten the mixed soil, and the surface-sterilized grains or caryopses were sown in the soil. Finally, the pot was sealed again, the entire device was placed in a climate incubator, and the seedlings were cultured at 25 °C under a 16-h light/8-h dark cycle. Microbiota were transplanted from grains to caryopses in the above axenic culture system. After surface sterilization, rice caryopses and grains were alternately sown in the axenic system. Four caryopses and four grains were cultured in each pot. The distance between each caryopsis/grain and the adjacent grains/caryopses was about 1 cm. The grain microbiota transplantation system was placed in a 25 °C climate incubator and cultured under a 16-h light/8-h dark cycle. Preparation of synthetic community (SynCom) Nipponbare rice grains, harvested from paddy fields, were surface-sterilized. Subsequently, in a biosafety cabinet, they were separated into caryopses and seed accessory structures. These components were then individually placed on three different culture media: Nutrient Agar (1% peptone, 0.3% beef paste, 1.5% agar, 0.5% sodium chloride, pH 7.0), Reasoner’s 2A Agar (0.05% proteose peptone, 0.05% casamino acids, 0.05% yeast extract, 0.05% glucose, 0.05% soluble starch, 0.03% dipotassium phosphate, 0.005% magnesium sulfate, 0.03% sodium pyruvate, 1.5% agar), and Tryptic Soy Broth (1.7% pancreatic digest of casein, 0.3% peptic digest of soybean, 0.5% sodium chloride, 0.25% dipotassium phosphate, 0.25% glucose, 1.5% agar). The samples were cultured for 7 days at 28 °C. Six bacterial strains were isolated from the surface-sterilized rice grains, including three strains annotated to Pantoea and one strain each annotated to Pseudomonas, Sphingomonas, and Agrobacterium. Each bacterial strain was cultured in Tryptic Soy Broth (TSB) medium until the OD600 was between 0.6 and 0.8. After collecting the bacteria by centrifugation, each bacterial strain was washed twice with ddH2O and resuspended in ddH2O at OD600 = 1.0. The six bacterial strains were then mixed in equal proportions to create the SynCom of rice grain bacteria. When spraying the SynCom onto rice grains at different seed developmental stages in the greenhouse, the bacterial concentration of the SynCom was 10^8–10^9 CFU·mL^-1.
When the SynCom was inoculated into the axenic cultivation system at the time of sowing rice caryopses, the bacterial dose of the SynCom was 10^6–10^7 CFU per seed. To achieve this dose, the OD600 = 1.0 suspension was serially diluted to 10^6 CFU/mL. Subsequently, 40 mL of this bacterial suspension was added to the axenic cultivation system containing approximately 50 mL of soil. Finally, eight seeds were planted into the inoculated soil. eGFP-labelling for visualizing bacterial strains using a confocal laser scanning microscope (CLSM) An eGFP gene was cloned into the plasmid pEHLYA2-SD, and the recombinant plasmid was transformed into P. eucrina SY1 to create the eGFP-labelled P. eucrina SY1 strain, named Pe-SY1-eGFP. To examine the colonization of Pe-SY1-eGFP, the bacteria were cultured to the log phase, washed twice with ddH2O, and resuspended in ddH2O at OD600 = 1.0. The bacterial suspension was inoculated into the axenic cultivation system with rice seedlings. After co-incubation, the rice roots were collected and rinsed three times with sterile ddH2O to remove surface-attached bacteria. CLSM was carried out to observe the seedling samples. To create cross-sections, the roots were cut into small segments and deposited on silylated glass slides (Sigma cat. no. S4651; St. Louis, MO, USA). The samples were examined using a Leica TCS SP8 confocal microscope. The eGFP-labelled bacteria were detected using 488-nm excitation for the green channel.
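The working suspension used above is reached by diluting the OD600 = 1.0 stock down to 10^6 CFU/mL. The sketch below estimates how many ten-fold transfers that requires; equating OD600 = 1.0 with roughly 10^9 CFU/mL is an assumption made only for illustration and was not measured in this study.

```python
import math

stock_cfu_per_ml = 1e9    # assumed cell density at OD600 = 1.0 (illustrative only)
target_cfu_per_ml = 1e6   # working concentration used for soil inoculation

fold = stock_cfu_per_ml / target_cfu_per_ml
steps = math.ceil(math.log10(fold))
print(f"{fold:.0e}-fold dilution ≈ {steps} serial 1:10 transfers")
```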
These components were then individually placed on three different culture media, Nutrient Agar (1% peptone, 0.3% beef paste, 1.5% agar, 0.5% sodium chloride, pH 7.0), Reasoner's 2A Agar (0.05% proteose peptone, 0.05% casamino acids, 0.05% yeast extract, 0.05% glucose, 0.05% soluble starch, 0.03% dipotassium phosphate, 0.005% magnesium sulfate, 0.03% sodium pyruvate, 1.5% agar) and Tryptic Soy Broth (1.7% pancreatic digest of casein, 0.3% peptic digest of soybean, 0.5% sodium chloride, 0.25% dipotassium phosphate, 0.25% glucose, 1.5% agar). The samples were cultured for 7 days at 28 °C.

Six bacterial strains were isolated from surface-sterilized rice grains, including three strains annotated to Pantoea and three further strains annotated to Pseudomonas, Sphingomonas, and Agrobacterium, respectively. Each bacterial strain was cultured in Tryptic Soy Broth (TSB) medium until the OD600 was between 0.6 and 0.8. After collecting the bacteria by centrifugation, each bacterial strain was washed twice with ddH2O and resuspended in ddH2O at OD600 = 1.0. The six bacterial strains were then mixed in equal proportions to create the SynCom of the rice grain bacteria. When spraying the SynCom on rice grains at different seed developmental stages in the greenhouse, the bacterial concentration of the SynCom was 10^8–10^9 CFU·ml^-1. When the SynCom was inoculated into the axenic cultivation system with the sowing of rice caryopses, the bacterial content of the SynCom was 10^6–10^7 CFU per seed. To achieve this bacterial content, the OD600 = 1.0 suspension was serially diluted to 10^6 CFU/mL. Subsequently, 40 mL of this bacterial suspension was added to the axenic cultivation system containing approximately 50 mL of soil. Finally, eight seeds were planted into the inoculated soil.

An eGFP gene was cloned into the plasmid pEHLYA2-SD, and the recombinant plasmid was transformed into P. eucrina SY1 to create the eGFP-labelled P. eucrina SY1 strain, named Pe-SY1-eGFP. To examine the colonization of Pe-SY1-eGFP, the bacteria were cultured to the log-phase stage, washed twice with ddH2O, and resuspended in ddH2O at OD600 = 1.0. The bacterial suspension was inoculated into the axenic cultivation system with rice seedlings. After co-incubation, the rice roots were collected and rinsed three times with sterile ddH2O to remove the surface-attached bacteria. CLSM was carried out to observe the seedling samples. To create cross-sections, the roots were cut into small segments and deposited on silylated glass slides (Sigma cat. no. S4651; St. Louis, MO, USA). The samples were examined using a Leica TCS SP8 confocal microscope. The eGFP-labelled bacteria were detected using 488 nm excitation in the green channel.

Supplementary Material 1: Fig S1. Culture-dependent experiments showing bacteria between caryopsis and glumes. A. Surface-sterilized grain contains bacteria internally (left), not from the internal accessory structures (middle) or internal caryopsis (right). B. Surface-sterilized grain separated into caryopsis and accessory structures, both contain culturable bacteria. Supplementary Material 2: Fig S2. SEM showing colonization of Pantoea eucrina SY1 on surfaces of germinated seedling organs. Images examined at 14 days and 28 days after planting using a Quanta200 scanning electron microscope. Supplementary Material 3: Fig S3. A. Different genera among bacterial communities of rice seeds (grain) and offspring seedling organs (shoot, root) by Kruskal-Wallis H test. *, p < 0.05; **, p < 0.01. B-G.
PCoA of bacterial communities at species (B), genus (C), family (D), order (E), class (F), and phylum (G) levels. Significant differences by ANOSIM. Supplementary Material 4: Fig S4. BugBase-predicted different microbiome phenotypes among seeds and seedling organs by Kruskal-Wallis H test. **, p <0.01. Supplementary Material 5: Fig S5. A. Relative abundance of bacterial taxa in rice seeds or leaves during development at genus level. Taxa > 0.1% shown. B-C. Relative abundance at genus (B) and phylum (C) levels. Taxa > 0.1% shown. D. PCoA of bacterial communities in seeds and leaves during development. Significant differences by ANOSIM. E-F. BugBase prediction of microbiome phenotypes between seeds and leaves during development. B-D and F, seeds represent 5 stages (0, 7, 15, 24, and 40 DAP), 5 samples per stage. Leaves represent 4 stages (0, 7, 15, and 24 DAP), 5 samples per stage. Supplementary Material 6: Supplemental Table 1. The rice seeds used in this study. |
Differentiated Papillary NUT Carcinoma: An Unexpected, Deceptively Bland Presentation of a Sinonasal Carcinoma | 51752bae-574e-46aa-a91f-6e5ea26ca085 | 10513967 | Anatomy[mh] | Over the past decade, the spectrum of malignant tumors of the head and neck has expanded, with many entities characterized by distinct molecular alterations. For example, carcinomas comprise conventional squamous cell carcinomas, NUT carcinomas, DEK::AFF2 carcinomas, EBV- and HPV-associated carcinomas, undifferentiated as well as SWI/SNF complex deficient sinonasal carcinomas, highlighting the variety of different morphologies and molecular pathogeneses . However, morphological overlap between different entities should be considered during the process of histopathological diagnosis. Here we report the case of a 32-year-old male patient, who presented with a non-healing lesion of the upper alveolar ridge after tooth extraction, leading to an oro-antral fistula. The histological features of the initial biopsy appeared deceptively bland, prompting the differential diagnosis of reactive inflammatory changes. However, an external histopathological consultation accompanied by molecular work-up with the detection of a NSD3::NUTM1 fusion, yielded the unexpected diagnosis of a sinonasal NUT carcinoma originating from the maxillary sinus. Clinical Presentation A 32-year-old, otherwise healthy, actively smoking male patient, presented with a 5-month history of pain in the left upper jaw. He was referred to a dentist, and after treatment, including extraction of tooth 25, wound healing delay was accompanied by a persistent oro-antral fistula (Fig. ). On examination, the fistula was localized at the site of the extracted tooth and the alveolar ridge of the posterior part appeared enlarged. The first biopsy showed chronic-active inflammatory changes. A post-biopsy CT scan of the paranasal sinuses demonstrated a large osseous defect and bone erosion at the site of the extracted tooth with complete opacification of the left maxillary sinus (Fig. A). Despite the inflammatory changes noted in the first biopsy, the CT scan was interpreted as suspicious for malignancy. A second, larger biopsy was performed one month later, revealing a non-keratinizing squamous cell carcinoma. For staging purposes and resection planning, a whole body FDG-PET/CT was performed, showing metabolically enhanced osseous destruction in the left maxillary sinus (Fig. B). The patient was discussed at our multidisciplinary tumor board with the consensus that disease was staged as cT2 cN0 cM0 (UICC/TNM 8 th edition), requiring primary resection. A hemimaxillectomy with wide margins was performed. In addition, the patient underwent a selective neck dissection level I-III on the left side followed by reconstruction of the defect with a superficial circumflex iliac artery-based iliac bone-free flap. Pathology The initial biopsies revealed an exophytic-papillomatous (Fig. A), partly inverted tumor with squamous differentiation without unequivocal evidence of invasion. Based on the clinical context of a prior tooth extraction with persistent oro-antral fistula, differential diagnostic considerations encompassed prominent reactive inflammatory changes as well as an exophytic-papillomatous, well-differentiated carcinoma. Mucocytes were not detected in the Alcian-Blue-PAS-stain. Therefore, despite abundantly admixed granulocytes, a sinonasal papilloma (which could have aided the diagnosis as a possible precursor lesion) could not be confirmed. 
In light of the relatively mature squamous differentiation, minimal cytologic atypia and prominent inflammation, a clear diagnosis was hampered. In the second biopsy, small, discohesive collections of epithelial cells infiltrating the stroma with focal transformation into larger, basaloid aggregates without clear demarcation by a basement membrane, militated against the diagnosis of a reactive process (Fig. B–D). Additionally, the time course and clinico-radiological features favored a malignant process. The interpretation as a reactive squamous epithelial proliferation was revised with the descriptive diagnosis of an exophytic-papillomatous and partly endophytic growing carcinoma. In the ensuing external pathologic consultation, a diagnosis of a non-keratinizing squamous cell carcinoma (NKSCC) was rendered, assuming that the lesion originated from the sinonasal tract rather than the mucosa of the oral cavity, based on the latest WHO classification of Head & Neck Tumours 5th edition (beta version) . HPV DNA testing as well as EBV-RNA in situ hybridization and p16 immunohistochemistry were negative. In order to address the differential diagnosis of a DEK::AFF2 fusion-associated carcinoma, molecular profiling was performed using the FoundationOne ® Heme test. DEK::AFF2 fusion-associated carcinomas have been described recently as an emerging entity in the sinonasal tract, with the majority showing a strikingly bland histologic appearance and overlap with so-called low-grade papillary Schneiderian carcinomas . Importantly, the detection of a DEK::AFF2 gene fusion would allow for more accurate classification and prognostic assessment. Surprisingly, no DEK::AFF2 , but a NUT::NSD3 gene fusion was detected, leading to the diagnosis of a NUT carcinoma. A subsequently performed NUT immunohistochemistry (Fig. E) showed a matching “speckled type” positivity in the majority of the carcinoma cells (almost 100% in both, basaloid and more differentiated components), corroborating the diagnosis and visualizing the fusion product. In concordance with the morphology lacking mucocytes, no EGFR mutation was detected, which are very common in inverted sinonasal papilloma and their carcinoma ex papilloma . The macroscopy of the following left-sided hemimaxillectomy showed the main tumor originating in the maxillary sinus and breaking through the bone into the oral cavity. Together with the neck dissection specimen level I-III the final pathologic tumor staging (according to carcinomas of the nasal cavity and paranasal sinuses) was pT2 pN0 (0/57) L0 V0 Pn1, high-grade, R0 (UICC/TNM 8 th edition, 2017). Extensive perineural spread was noted. Clinical Follow-up Adjuvant local radiotherapy was recommended at our multidisciplinary tumorboard. One year after diagnosis and six months after completion of treatment (at the time of the case report submission), the patient showed no evidence of disease, neither clinically nor on PET/CT.
The histological and clinical features of the current case represent a highly unusual constellation. Typical NUT carcinoma is characterized by a more undifferentiated monomorphic morphology with small squamous islets and abrupt keratinization. These features were not present in our case. Nevertheless, the molecular profile, the NUT::NSD3 gene fusion, has been recurrently described in NUT carcinomas and confirms the diagnosis, especially in association with a squamous phenotype. Accordingly, this case can be regarded as part of the spectrum of NUT carcinomas and emphasizes the importance of considering this differential diagnosis in mature and well-differentiated squamous cell carcinoma. Such atypical features as well as the lack of awareness of this entity suggest an under-diagnosis and -reporting of NUT carcinomas . The partly prominent squamous epithelial differentiation and the growth pattern are highly unusual and to the best of our knowledge have not been described in NUT carcinomas. In this regard, NUT carcinomas are characterized by translocation-associated fusion oncoproteins that interfere with cell differentiation and cell growth. The majority of NUT-fusions involves BRD4 (bromodomain containing protein 4), leading to an epigenetically induced block of cell differentiation and promotion of cellular growth. NSD3 encodes a histone lysine methyltransferase that binds the extraterminal domain of BRD. In cases harboring the NUT::NSD3 fusion, this alteration probably leads to similar functional oncogenic consequences. However, as presented in this case, the level of interference with cell differentiation might be different in NUT::NSD3 fusion than in NUT::BRD4 fusion . This could explain why NUT::NSD3 fusion positive carcinomas outside the thorax appear to have a significantly better prognosis than their NUT::BRD4 positive counterparts . An additional diagnostic challenge are the reactive, inflammatory squamous epithelial changes, which can be prominent after an intervention such as a tooth extraction. The relatively young patient age and the unusual morphology led to the consideration of an HPV-associated carcinoma, which could not be substantiated, as immunohistochemistry for p16 and molecular analysis for HPV DNA were negative. A carcinoma with DEK::AFF2 gene fusion was considered as the primary differential diagnosis on morphologic grounds. These carcinomas have recently been described and exhibit similar morphologic features to the current case . Importantly, this case presented significant morphological overlap with other head and neck carcinomas. The NUT::NSD3 gene fusion has recently been described in a subset of thyroid carcinomas without classical features, so that there is a rationale for NUT immunohistochemistry and/or molecular testing in unusual cases. In particular, there is increasing evidence that NUT gene fusions can occur in tumors with different underlying cell types (other than squamoid-like cells), such as thyroid follicle cells. Additional data are needed for accurate classification of these increasingly detected neoplasms . The concept of tumoral-mucosal fusion as a potential pitfall of processes underlying the surface mucosa is recognized in minor salivary gland neoplasia . However, the observation that the majority of bland squamous cells in the mucosa were NUT IHC positive in our case, suggests that maturation may be involved. 
This case further demonstrates that highly sensitive and specific NUT immunohistochemistry is useful in identifying cases with unusual morphology, thus enabling accurate classification. Future studies on larger numbers of cases are needed for comparing the biological behavior and other features of “differentiated NUT carcinoma” with the classical type.
|
The relationship between frequency content and representational dynamics in the decoding of neurophysiological data | 6f6cb59a-a6a3-4ec4-8f54-a8ff9dfed4aa | 10565838 | Physiology[mh] | Introduction The field of representational dynamics uses temporal patterns in decoding accuracy timecourses to test hypotheses about how the brain processes information . By decoding different experimental stimuli from recorded brain activity at high temporal resolution, researchers use information theoretic measures to quantify what features of a stimulus are explicitly represented in neural data as a function of time from stimulus onset . An emerging question in neuroscience is how these representational dynamics relate to the brain's underlying neurophysiology . Such analyses seek to go beyond merely answering what is represented in recorded brain activity, by also characterising the neural mechanisms explaining how that information is represented . This commonly involves a decoding paradigm we will refer to as instantaneous signal decoding , where classifiers are trained and tested on the raw broadband signal recorded over all sensors at each timepoint following a stimulus , and the representational dynamics interpreted (often with reference to activity in canonical frequency bands). This can be used for example to study the phase-locking of information content to canonical oscillations , the dynamics of memory , or the direction of information flow . A closely related paradigm, we will refer to as narrowband signal decoding , applies the same procedure after filtering the data into a narrowband of interest. This explicitly links observed patterns with canonical frequency bands . Unfortunately, however, the fundamental relationship between the frequency content of the stimulus evoked signal and the inferred information content is not widely recognised. Whilst many decoding approaches aim to be agnostic about the specific data characteristics over time that drive their results, there is a considerable risk of misinterpretation when this relationship is not considered. In this paper we draw attention to this relationship, highlighting that the spectral content of the evoked response is translated to double its original frequency in associated decoding accuracy metrics when using the instantaneous signal decoding or narrowband signal decoding paradigms most typically used in the literature . From this, we identify two problems: the first is the presence of artefacts due to representational aliasing; the second is the broader challenge of how we should interpret information theoretic metrics that systematically oscillate at double the frequency of the evoked response spectrum. We argue that these problems arise from a narrow focus on information content in the instantaneous signal at a single moment in time, which ignores information stored in the signal's gradient or higher moments. Conceptually, this is analogous to analysing a simple pendulum by measuring only its displacement at a single instant in time – not its velocity or acceleration, which would together fully define the dynamic system. As illustrated in , such a narrow focus only on the pendulum's displacement leads to inferred information content that peaks at the pendulum's extrema (i.e. the peaks and troughs of the oscillation); taking a broader view of the information contained in both the displacement and velocity leads to a measure of information content that is stable over time. 
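As an illustration of the pendulum analogy above, the following Python sketch simulates a single noisy 10 Hz "displacement" channel under two conditions and decodes it timepoint-by-timepoint, either from the displacement alone or from the displacement together with its temporal gradient. All parameter values are arbitrary and the in-sample linear classifier is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n_trials = 250, 10.0, 200            # sampling rate (Hz), oscillation frequency, trials
t = np.arange(0, 0.5, 1 / fs)

# Condition 1 trials contain a 10 Hz "pendulum" oscillation, condition 0 trials do not.
labels = rng.integers(0, 2, n_trials)
disp = labels[:, None] * np.cos(2 * np.pi * f0 * t) + rng.normal(0, 1.0, (n_trials, t.size))
vel = np.gradient(disp, 1 / fs, axis=1)      # temporal gradient ("velocity")

def timepoint_accuracy(features, labels):
    """In-sample LDA accuracy at each timepoint (no cross-validation; illustration only)."""
    acc = np.zeros(features.shape[-1])
    for i in range(features.shape[-1]):
        X = features[..., i].reshape(len(labels), -1)
        m1, m0 = X[labels == 1].mean(0), X[labels == 0].mean(0)
        cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
        w = np.linalg.solve(cov, m1 - m0)
        acc[i] = np.mean(((X @ w) > ((m1 + m0) @ w / 2)) == labels)
    return acc

acc_disp = timepoint_accuracy(disp[:, None, :], labels)           # displacement only
acc_both = timepoint_accuracy(np.stack([disp, vel], 1), labels)   # displacement + velocity

# acc_disp oscillates at roughly 2 * f0 (peaking at the extrema of the 10 Hz cycle),
# whereas acc_both stays approximately constant across the trial.
```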
We extend the same logic to the dynamic trajectory of neural activity evoked by a stimulus. This motivates a third decoding paradigm that we refer to as complex spectrum decoding , which is one way of including such temporal gradient information. Returning to our example above, if we applied a Fourier decomposition to the pendulum's displacement over time, we would obtain a single complex frequency component with a real part (tracking the displacement) and an imaginary part (tracking the velocity). This concept generalises to neural activity, where we would expect more complex Fourier dynamics played out simultaneously over multiple frequency bands and spatial channels. When this complex spectrum information is included as features to a classifier, we show that this results in inferred representational dynamic patterns that have higher accuracy, are more stable over time, and which we believe to provide a better characterisation of the brain's representational architecture. How the spectrum of the evoked response determines the signal information content We first ask: what is the fundamental relationship between frequency-specific features of the stimulus-evoked response and the resulting timecourse of decoding accuracy? We address this question using a generative modelling approach, where we model the neural data recorded on individual trials as a Fourier series with bandlimited Gaussian noise. From a probabilistic modelling perspective, the mutual information is the theoretical quantity analogous to decoding accuracy that we can then derive. This allows us to characterise how the information content of a signal varies as a function of time and frequency . 2.1 Generative model of stimulus evoked responses We wish to model epoched electrophysiological data recorded from P channels under two different experimental conditions. Let us denote by x n , t the [ P × 1 ] vector of data recorded at time t ∈ { 1 , 2 , … T } on trial n ∈ { 1 , 2 , … N } , where y n ∈ { 1 , − 1 } denotes the experimental condition for that trial. We model x n , t as comprising a condition-independent evoked response term μ t of dimension [ P × 1 ] , and residual terms that are decomposed into a sum of [ P × 1 ] Fourier components z n , t , ω : (1) x n , t = μ t + ∑ ω = 0 Ω z n , t , ω We henceforth refer to x n , t as the ‘broadband signal’ , and the multiple z n , t , ω terms as the ‘narrowband signals’ . If we assume each narrowband signal z n , t , ω has a multivariate Gaussian distribution with mean conditioned on the stimulus (see for full details), we obtain the following expression for the distribution of the broadband signal: (2) P ( X t | Y ) ∼ N ( μ t + Y ∑ ω = 0 Ω A ω cos ( ω t + ϕ ω ) , ∑ ω = 0 Ω Σ ω ) Each A ω term is a diagonal [ P × P ] matrix, where the i th diagonal entry, denoted by a ω , i , reflects the magnitude of the component at frequency ω on channel i . Both ω and t are scalar indices reflecting the frequency and time respectively; ϕ ω is a [ P × 1 ] vector, each entry of which contains the phase offset of the oscillation at frequency ω across the P channels. Finally, we model induced effects (i.e. narrowband power that is not phase aligned to the stimulus) independently in each frequency band, where Σ ω is the [ P × P ] covariance matrix modelling the spatial variance and correlations expressed at frequency band ω . 
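The generative model of Eqs. (1) and (2) can be simulated directly; the sketch below draws multichannel trials with a condition-dependent evoked Fourier series and band-specific Gaussian noise. The channel count, frequencies, amplitudes and covariances below are arbitrary illustrative choices rather than values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
P, T, fs, n_trials = 4, 125, 250, 100               # channels, samples, sampling rate, trials
t = np.arange(T) / fs
freqs = [10.0, 15.0]                                 # frequencies (Hz) in the evoked Fourier series

mu = 0.5 * np.sin(2 * np.pi * 2.0 * t)[None, :] * np.ones((P, 1))   # condition-independent term

# Condition-dependent evoked amplitudes A_w (diagonal) and phase offsets phi_w per channel,
# plus band-specific noise covariances Sigma_w (diagonal here for simplicity).
A = {w: np.diag(rng.uniform(0.2, 1.0, P)) for w in freqs}
phi = {w: rng.uniform(0, 2 * np.pi, P) for w in freqs}
Sigma = {w: np.diag(rng.uniform(0.2, 0.5, P)) for w in freqs}

def simulate_trial(y):
    """Draw one trial x_{n,t} = mu_t + sum_w z_{n,t,w}, with class label y in {-1, +1}."""
    x = mu.copy()
    for w in freqs:
        evoked = y * A[w] @ np.cos(2 * np.pi * w * t[None, :] + phi[w][:, None])
        noise = rng.multivariate_normal(np.zeros(P), Sigma[w], T).T
        x = x + evoked + noise
    return x

labels = rng.choice([-1, 1], n_trials)
X = np.stack([simulate_trial(y) for y in labels])    # shape (trials, channels, time)
```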
Note that this corresponds to an assumption that only the evoked response, not the induced response, differs over the two conditions – this is a simplifying assumption that we later relax in . We can now characterise the mutual information between the broadband signal X t or its constituent narrowband signals Z t , ω and the class labels Y . 2.2 Information content available to narrowband signal decoding We wish to explore how the spectrum of the evoked response determines the representational dynamics inferred from the decoding paradigms that are most typically used in the literature (T. ; ; ). We start by considering instantaneous decoding of narrowband signals Z t , ω , which we refer to as narrowband signal decoding. Given a probabilistic model, we can calculate the mutual information I ( Z t , ω , Y ) , which expresses the amount of information shared between the signal and the condition label time courses. This measure of information content in the signal that pertains to the condition labels can be thought of as a surrogate measure of decoding accuracy were one to do narrowband signal decoding . Starting with a single Fourier component of the evoked response at frequency ω , the mutual information is itself a sinusoidal function that has been translated to double the original frequency, 2 ω : (3) I ( Z t , ω , Y ) = f ( c ω + r ω cos ( 2 ω t + ξ ω ) ) Where f is a monotonic, concave function (see for proof and for plot of the function); and c ω , r ω and ξ ω are all scalar values that are constant with respect to time (see for their exact values, and and for proof of the above result). The intuition for this is based on what was discussed in : if Z t , ω were the displacement of a pendulum oscillating at frequency ω, a decoder will perform best at the peaks and troughs of that oscillation and poorly in between these points. We illustrate this relationship in example 1 , where we simulate an evoked response under two conditions. Suppose that one condition (in blue) contains information content at 10Hz across both channels, and the second condition (in black) does not. The information content associated with this narrowband component is itself a sinusoidal function oscillating at 20Hz. 2.3 Information content available to instantaneous signal decoding Realistic neural signals are not expressed in a single component frequency across all spatial areas, but are rather comprised of a number of spatially distinct components at multiple frequencies. How then does the entire frequency spectrum of the evoked response determine the frequency spectrum of the associated information content? This equates to the paradigm of instantaneous signal decoding that is most widely performed in the literature . For the broadband signal X t given in our model, the information content is given by: (4) I ( X t , Y ) = f ( c B + ∑ ω Ω r B , ω cos ( 2 ω t + ξ B , ω ) + h ( t ) ) Where c B , r B , ω and ξ B , ω are scalar values that are constant over time, and h ( t ) refers to additional sinusoidal harmonics distributed across the frequency spectrum between zero and 2 Ω (see for their exact values along with proof of this result). Importantly, if the highest frequency component of the evoked response on any channel is Ω , it follows that the highest frequency in the associated information spectrum will be 2 Ω . 
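This doubling can also be verified numerically in a few lines: because the class-mean separation of a narrowband component is proportional to cos²(ωt + φ) = (1 + cos(2ωt + 2φ))/2, the separability timecourse (and hence any monotonic information measure derived from it) contains a component at exactly twice the signal frequency. The parameter values in the sketch below are arbitrary.

```python
import numpy as np

fs, f0 = 250.0, 10.0                      # sampling rate and narrowband frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
a, phi, sigma2 = 1.0, 0.3, 1.0            # amplitude, phase offset, noise variance

# Squared standardised class separation of z_{t,w}: proportional to cos^2(wt + phi),
# i.e. a constant plus an oscillation at 2 * f0.
separation = (a * np.cos(2 * np.pi * f0 * t + phi)) ** 2 / sigma2

spectrum = np.abs(np.fft.rfft(separation - separation.mean()))
freq_axis = np.fft.rfftfreq(t.size, 1 / fs)
peak = freq_axis[spectrum.argmax()]
print(f"Dominant frequency of the separability timecourse: {peak:.1f} Hz")   # ~2 * f0 = 20 Hz
```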
We illustrate this point with example 2 in ; for simplicity we simulate an evoked response comprising just 2 spectral components under each condition at 10Hz and 15Hz; the associated information content displays multiple peaks over time, represented in its Fourier spectrum by frequency components distributed between 0Hz and 30Hz. As we will explore further in , this also means that commonly used anti-aliasing filters are insufficient to stop representational aliasing, i.e. alias artefacts in the inferred information content dynamics. In order to further illustrate the representational dynamics of instantaneous signal decoding, we created Representational Dynamics Simulator, a web application analogous to , where the user can interactively change the parameters of the evoked spectrum and see the resulting information content (hosted at https://representational-dynamics.herokuapp.com ). 2.4 Modelling induced effects It is important to consider the degree to which these findings are specific to our chosen modelling assumptions. We have specifically limited our discussion to that of evoked effects by assuming the noise distribution was invariant over conditions. In the frequency domain, this means that we have limited our analysis to the part of the signal that is phase aligned to stimulus onset. When we introduce condition-specific induced effects – i.e. to model the case where one condition induces an increase in bandlimited power that has random phase alignment with the stimulus onset – we can no longer derive an exact analytic expression for the mutual information; however, we can derive an upper bound on the information content. This upper bound is a function of components at the same frequencies specified in equations 3 (for the narrowband case; see and for proof) and 5 (for the instantaneous signal case; see for proof). This result is not mathematically trivial, but may nonetheless be intuitive to some readers on the basis that the information content of a signal containing both evoked and induced effects must be less than the combined information content of each of those effects assessed independently; and the information content of induced effects assessed independently is constant with respect to time (owing to the uniform phase distribution that defines induced effects). Thus, we are able to generalise our findings to the case where induced effects are present. Technical and Interpretational issues raised The relationship we have characterised above between the stimulus-evoked signal spectrum and the spectrum of the information content raises several issues with commonly used instantaneous signal decoding pipelines.
On a technical level, there is a risk of high frequency artefacts which we refer to as representational aliasing. On a broader level, this raises questions about how certain features of decoding accuracy timecourses should be interpreted. 3.1 Representational aliasing The Nyquist frequency defines the highest frequency component that can be correctly resolved from data that has been digitally sampled at a specified sampling rate. It is standard practice to apply a low pass anti-aliasing filter prior to sampling which ensures no signal components are above the Nyquist frequency and that all signal components can therefore be correctly resolved. However, this only applies to the signal components, not their associated information spectrum, which we have shown contains spectral content at double the highest frequency of the signal spectrum. It follows that representational aliasing artefacts will be present in instantaneous signal decoding accuracy metrics unless the following condition is met: (5) F s ≥ 4 Ω Where Ω is the highest frequency component of the evoked response and F s is the sampling rate. Thus, instantaneous signal decoding pipelines need to use low pass filters with cut-off no higher than one quarter of the sampling rate – before training classifiers – in order to eliminate representational aliasing effects. illustrates this graphically. 3.2 How should we interpret oscillatory information content? The oscillatory nature of information content associated with sinusoidal components of the evoked response is, we argue, interpretationally problematic. Features resembling multiple successive peaks in the timecourse of classification accuracy are quite commonly reported in the literature ; in some cases, the dynamics of these successive peaks have been interpreted as evidence for complex cognitive phenomena such as phase-locked memory reactivation . As we have shown in , successive peaks arise naturally from an evoked response containing sinusoidal components. We argue that a simpler explanation for their common appearance in the literature could merely be that the typical evoked response is characterised by a succession of peaks and troughs (e.g. the N70, P100 and N175) that resemble a transient sinusoidal waveform. We believe a fuller picture of information content should include the information stored in the dynamic gradient of the signal that is not available using instantaneous signal decoding pipelines. In we explore a third paradigm that includes such information, and show that this results in narrowband information content that is stable over time. However, as these methods will not always be practical for reasons given in the discussion, we would more generally argue that representational dynamics obtained using instantaneous signal decoding and representing the 'double peak' feature shown in (and widely characterised in the literature) should first be assumed to correspond merely to peaks and troughs of an evoked sinusoidal signal, rather than more complex cognitive phenomena. Obtaining measures of sinusoidal information content that are stable over time We contend that the profile of information content obtained by instantaneous signal decoding is potentially misleading, as it suggests the brain's representational dynamics are much faster than the actual evoked spectrum from which they are derived. Whilst instantaneous signal decoding pipelines are the most popular way to apply decoding to neural data at high temporal resolution, alternative methods exist that overcome these limitations. We focus our attention on Fourier analysis (for continuity with our modelling approach and because these methods are well-established in neural data analysis), but emphasise these benefits are not specific to Fourier analysis per se – rather, they arise whenever methods include information in a dynamic signal's higher temporal derivatives (e.g. its gradient and rate of change) as features for classification. 4.1 Complex spectrum decoding We previously characterised the information content between stimulus labels Y and the narrowband Fourier series components Z t , ω .
These narrowband components do not in fact include all the information that is returned by a Fourier signal decomposition; they reflect only the real component of a complex number representation. The imaginary components of these narrowband components reflect the instantaneous gradient information of each narrowband signal; we here characterise the information content associated with the full complex signal representation of each narrowband component, analogous to the decoding accuracy that would be obtained when both the narrowband signal and its local gradient are used as features for classification as in ; . 4.1.1 Real and complex components of a Fourier decomposition Fourier decompositions provide a complex representation of the underlying signal that includes both a real signal component and an orthogonal imaginary component, which we omitted from our model outline in for simplicity. Including this complex-valued information, the same model can equivalently be written: (6) x n , t = μ t + ∑ ω = 0 Ω z n , t , ω (7) z n , t , ω = w n , t , ω + w n , t , ω * 2 (8) w n , t , ω = y n A ω e i ( ω t + ϕ ω ) + ε n , ω e i ω t (9) ε n , ω = N ( 0 , Σ ω ) + i N ( 0 , Σ ω ) Where w n , t , ω * denotes the complex conjugate of w n , t , ω . This is exactly equivalent to the model of , however it includes the complex spectral representation w n , t , ω of each narrowband Fourier series component. It includes a condition-dependent evoked term y n A ω e i ( ω t + ϕ ω ) (i.e. the component of the response that is phase-locked to the stimulus), and a condition-independent residual term (i.e. the residual component with randomly drawn phase and amplitude on each trial; note that the values for the phase and amplitude are respectively the angle and magnitude of the complex valued ε n , ω converted to polar coordinates). 4.1.2 Information content available to complex spectrum decoding An alternative to decoding on the raw signal at each point in time is to use both the real and imaginary parts of the complex-valued Fourier coefficients as features/inputs to a classifier . We will refer to this decoding paradigm as complex spectrum decoding . When all this information is included as features for classification, then the resulting information content in each frequency band is given by: (10) I ( W t , ω , Y ) = f ( 2 c ω ) Where c ω is the average value of the sinusoidal expression associated with the real information content in (see for proof). Importantly, this expression is no longer sinusoidal; it is stable over time, and greater or equal to the peak information content that can be obtained using only the real spectrum (see ). Consequently, this overcomes both the problematic interpretational issues associated with instantaneous signal decoding discussed above, as well as the risk of representational aliasing that would otherwise require low-pass filtering with cut-off one quarter of the sampling rate. 4.2 Practical considerations for non-stationary and non-oscillatory signals We emphasise the generality of these results, deriving from the fact that any arbitrary time series can be mapped into the frequency domain by a Fourier decomposition. Whilst we have so far simulated quite simplified evoked responses comprising only a few frequency components, our approach generalises to those that contain non-stationary and/or non-oscillatory components. In this section we demonstrate this with some more complex simulations. 
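Before turning to the sliding-window simulations, the sketch below illustrates the complex spectrum decoding idea on a single simulated channel: the narrowband signal is band-pass filtered and its complex (analytic) representation obtained via the Hilbert transform, after which either the real part alone or the real and imaginary parts together are supplied to a linear classifier at each timepoint. The filter band, noise level, and in-sample classifier are illustrative assumptions rather than the pipeline used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs, f0, n_trials = 250, 10.0, 300
t = np.arange(0, 1.0, 1 / fs)
labels = rng.integers(0, 2, n_trials)
x = labels[:, None] * np.cos(2 * np.pi * f0 * t) + rng.normal(0, 1.5, (n_trials, t.size))

# Narrowband complex representation: band-pass around f0, then the analytic signal.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
analytic = hilbert(filtfilt(b, a, x, axis=1), axis=1)

def lda_accuracy(X, y):
    """In-sample LDA accuracy (no cross-validation; for illustration only)."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(cov, m1 - m0)
    return np.mean(((X @ w) > ((m1 + m0) @ w / 2)) == y)

acc_real, acc_complex = [], []
for i in range(t.size):
    re = analytic[:, i].real[:, None]
    both = np.column_stack([analytic[:, i].real, analytic[:, i].imag])
    acc_real.append(lda_accuracy(re, labels))        # narrowband signal decoding
    acc_complex.append(lda_accuracy(both, labels))   # complex spectrum decoding

# acc_real oscillates at roughly 2 * f0, while acc_complex is approximately flat over the epoch.
```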
4.2.1 Sliding window Fourier decompositions Real evoked responses are more complex than the illustrative examples we have simulated so far and in particular do not have spectral profiles that are constant over the whole trial epoch. We therefore anticipate that the methods introduced above will be most informative when combined with sliding window methods, e.g. where separate Fourier decompositions are applied to each window within a trial epoch rather than a single Fourier decomposition applied to the whole epoch. There are numerous methods for estimating spectral properties over sliding windows, which are typically similar in motivation but different in implementation. Perhaps the most important factor is how the trade-off between time and frequency resolution is handled. Given our focus on characterising representational dynamics over time, we prefer methods that use a fixed temporal resolution, such as the Short-Time Fourier Transform (STFT). This provides complex-valued Fourier coefficients in each frequency band at each timepoint within a trial, allowing decoding accuracy to then be computed timepoint-by-timepoint without the interpretational problems previously discussed. 4.2.2 Non-stationary oscillatory signals To test these methods on evoked signals characterised by transient spectral properties, we simulated a signal over two channels using a combination of frequency chirp functions and unit step functions (example 1 in ). To maintain simplicity only one of the two conditions has this profile, the other is a null condition of stationary Gaussian noise. As shown by the time-frequency diagram on A, the frequency distribution of the signal varies over time and over the two channels. For this signal, we then computed: (i) The broadband information content; This corresponds to the information content available to instantaneous signal decoding , i.e. the timepoint-by-timepoint decoding approaches that are most typically used in the literature . (ii) The complex spectrum information content ; this corresponds to the information content available to complex spectrum decoding as we have proposed. In this case however we have estimated the complex spectral features using a sliding window (specifically using a STFT with 50ms sliding Hamming window). As shown in B, the broadband information content (analogous to the decoding accuracy obtained by instantaneous signal decoding ) contains fast dynamics that do not clearly relate to the evoked signal shown in A. Applying a similar STFT analysis to this information content ( B, right hand side) shows it reflects components at up to double the frequency of the corresponding signals (i.e. it contains components at up to 100Hz, double the frequencies identified in A). In contrast, the complex spectrum information content provides frequency band specific measures of information content that more closely reflect the spectral distribution of information at each moment in time over the course of the trial (i.e. B, lower plot reflects the combined contributions of the channel power spectral density plots in A). From the perspective of representational dynamics, such information is at least complementary, and we would argue more informative than that available to instantaneous signal decoding . 4.2.3 Non-oscillatory evoked signals In we showed that consecutive peaks in decoding accuracy timecourses could arise due to a simple oscillatory signal, even if this oscillatory signal is itself stable over time. 
We argued that these peaks should not be interpreted as representing discrete events or cognitive phenomena. This begs the question, how do our methods perform if the underlying signals do derive from discrete temporal events, where the underlying signals cannot be parsimoniously represented using sinusoidal components? To test this, we simulated an evoked response deriving from two spatially and temporally distinct "activations", and repeated the analysis described above to compare the broadband and narrowband information content. To simulate non-oscillatory signals, each activation was characterised by a Gaussian kernel function ( C). As shown in D, the broadband information content (i.e. that available when doing instantaneous signal decoding ) produces two distinct peaks corresponding to each activation. Notably, this profile is replicated in the complex spectrum information content ( D, lower panel) showing that this method does not obscure such phenomena – provided the sliding window width is less than the period between these activations. Wider window lengths progressively include more information from both activations and the peaks become much less pronounced (see Supplementary Information, and Figure S2). We therefore conclude that, subject to appropriate sliding window sizes, complex spectrum decoding can eliminate the fast dynamics associated with sinusoidal components of the evoked response, whilst not eliminating the structure associated with spatially distinct, potentially non-oscillatory evoked activations.
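For completeness, the following sketch shows one way of constructing sliding-window complex spectral features of the kind used in these simulations, via a short-time Fourier transform with a 50 ms Hamming window applied to a toy transient (Gaussian-kernel) evoked response. The window length matches the example above, but the signal, trial counts, and feature handling are illustrative assumptions rather than the exact analysis pipeline.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(5)
fs = 250
t = np.arange(0, 1.0, 1 / fs)
n_trials = 100
labels = rng.integers(0, 2, n_trials)

# Toy non-oscillatory evoked response: two Gaussian-kernel "activations" on one channel,
# present only in condition 1 (condition 0 is noise alone).
activation = (np.exp(-((t - 0.25) ** 2) / (2 * 0.02 ** 2))
              + np.exp(-((t - 0.60) ** 2) / (2 * 0.02 ** 2)))
x = labels[:, None] * activation + rng.normal(0, 1.0, (n_trials, t.size))

# Sliding-window complex spectral features: 50 ms Hamming windows with a one-sample hop.
nperseg = int(0.05 * fs)
freqs, win_times, Z = stft(x, fs=fs, window="hamming",
                           nperseg=nperseg, noverlap=nperseg - 1, boundary=None)

# Z has shape (n_trials, n_freqs, n_windows); the real and imaginary parts of every
# frequency bin (and channel, in the multichannel case) can be concatenated into the
# feature vector supplied to a classifier at each window.
features = np.concatenate([Z.real, Z.imag], axis=1)
print("frequency bins (Hz):", freqs)
print("feature array shape (trials, features, windows):", features.shape)
```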
4.1.2 Information content available to complex spectrum decoding An alternative to decoding on the raw signal at each point in time is to use both the real and imaginary parts of the complex-valued Fourier coefficients as features/inputs to a classifier. We will refer to this decoding paradigm as complex spectrum decoding . When all this information is included as features for classification, then the resulting information content in each frequency band is given by:

(10) $I(W_{t,\omega}, Y) = f(2 c_\omega)$

where $c_\omega$ is the average value of the sinusoidal expression associated with the real information content in (see for proof). Importantly, this expression is no longer sinusoidal; it is stable over time, and greater than or equal to the peak information content that can be obtained using only the real spectrum (see ). Consequently, this overcomes both the problematic interpretational issues associated with instantaneous signal decoding discussed above, as well as the risk of representational aliasing that would otherwise require low-pass filtering with cut-off one quarter of the sampling rate.
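The stability result in (10) is easy to probe numerically. The sketch below is our own illustration rather than anything from the published analysis: it simulates a single condition-dependent 10 Hz evoked component of the form assumed in the model above, then computes timepoint-by-timepoint decoding accuracy using either the real narrowband signal alone or the signal together with its quadrature (imaginary) part. All parameter values are arbitrary, and the quadrature component is obtained here with a Hilbert transform rather than an explicit Fourier decomposition, which is adequate for a single frequency band.

```python
# Minimal numerical check of the stability result in (10); illustrative only.
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, f0, n_trials = 100, 10, 200                  # sampling rate (Hz), carrier (Hz), trials per class
t = np.arange(0, 1, 1 / fs)                      # 1 s epoch
y = np.repeat([0, 1], n_trials)                  # condition labels
amp = np.where(y == 1, 1.0, 0.3)                 # condition-dependent amplitude

# Phase-locked 10 Hz evoked component plus Gaussian noise (one channel)
x = amp[:, None] * np.cos(2 * np.pi * f0 * t)
x += 0.5 * rng.standard_normal((2 * n_trials, t.size))
xa = hilbert(x, axis=1)                          # analytic signal: real + quadrature parts

def accuracy_per_timepoint(features):
    """features: (trials, timepoints, n_features) -> 3-fold CV accuracy per timepoint."""
    clf = SVC(kernel="linear")
    return np.array([cross_val_score(clf, features[:, i, :], y, cv=3).mean()
                     for i in range(t.size)])

acc_real = accuracy_per_timepoint(x[:, :, None])                          # real part only
acc_cplx = accuracy_per_timepoint(np.stack([xa.real, xa.imag], axis=-1))  # real + imaginary

# acc_real rises and falls at ~2*f0 (20 Hz); acc_cplx stays near its peak throughout the epoch.
```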
Practical considerations for non-stationary and non-oscillatory signals We emphasise the generality of these results, deriving from the fact that any arbitrary time series can be mapped into the frequency domain by a Fourier decomposition. Whilst we have so far simulated quite simplified evoked responses comprising only a few frequency components, our approach generalises to those that contain non-stationary and/or non-oscillatory components. In this section we demonstrate this with some more complex simulations. 4.2.1 Sliding window Fourier decompositions Real evoked responses are more complex than the illustrative examples we have simulated so far and in particular do not have spectral profiles that are constant over the whole trial epoch. We therefore anticipate that the methods introduced above will be most informative when combined with sliding window methods, e.g. where separate Fourier decompositions are applied to each window within a trial epoch rather than a single Fourier decomposition applied to the whole epoch. There are numerous methods for estimating spectral properties over sliding windows, which are typically similar in motivation but different in implementation. Perhaps the most important factor is how the trade-off between time and frequency resolution is handled. Given our focus on characterising representational dynamics over time, we prefer methods that use a fixed temporal resolution, such as the Short-Time Fourier Transform (STFT). This provides complex-valued Fourier coefficients in each frequency band at each timepoint within a trial, allowing decoding accuracy to then be computed timepoint-by-timepoint without the interpretational problems previously discussed. 4.2.2 Non-stationary oscillatory signals To test these methods on evoked signals characterised by transient spectral properties, we simulated a signal over two channels using a combination of frequency chirp functions and unit step functions (example 1 in ). To maintain simplicity only one of the two conditions has this profile, the other is a null condition of stationary Gaussian noise. As shown by the time-frequency diagram on A, the frequency distribution of the signal varies over time and over the two channels. For this signal, we then computed: (i) The broadband information content; This corresponds to the information content available to instantaneous signal decoding , i.e. the timepoint-by-timepoint decoding approaches that are most typically used in the literature . (ii) The complex spectrum information content ; this corresponds to the information content available to complex spectrum decoding as we have proposed. In this case however we have estimated the complex spectral features using a sliding window (specifically using a STFT with 50ms sliding Hamming window). As shown in B, the broadband information content (analogous to the decoding accuracy obtained by instantaneous signal decoding ) contains fast dynamics that do not clearly relate to the evoked signal shown in A. Applying a similar STFT analysis to this information content ( B, right hand side) shows it reflects components at up to double the frequency of the corresponding signals (i.e. it contains components at up to 100Hz, double the frequencies identified in A). In contrast, the complex spectrum information content provides frequency band specific measures of information content that more closely reflect the spectral distribution of information at each moment in time over the course of the trial (i.e. 
B, lower plot reflects the combined contributions of the channel power spectral density plots in A). From the perspective of representational dynamics, such information is at least complementary, and we would argue more informative than that available to instantaneous signal decoding . 4.2.3 Non-oscillatory evoked signals In we showed that consecutive peaks in decoding accuracy timecourses could arise due to a simple oscillatory signal, even if this oscillatory signal is itself stable over time. We argued that these peaks should not be interpreted as representing discrete events or cognitive phenomena. This begs the question, how do our methods perform if the underlying signals do derive from discrete temporal events, where the underlying signals cannot be parsimoniously represented using sinusoidal components? To test this, we simulated an evoked response deriving from two spatially and temporally distinct "activations", and repeated the analysis described above to compare the broadband and narrowband information content. To simulate non-oscillatory signals, each activation was characterised by a Gaussian kernel function ( C). As shown in D, the broadband information content (i.e. that available when doing instantaneous signal decoding ) produces two distinct peaks corresponding to each activation. Notably, this profile is replicated in the complex spectrum information content ( D, lower panel) showing that this method does not obscure such phenomena – provided the sliding window width is less than the period between these activations. Wider window lengths progressively include more information from both activations and the peaks become much less pronounced (see Supplementary Information, and Figure S2). We therefore conclude that, subject to appropriate sliding window sizes, complex spectrum decoding can eliminate the fast dynamics associated with sinusoidal components of the evoked response, whilst not eliminating the structure associated with spatially distinct, potentially non-oscillatory evoked activations.
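To reproduce the flavour of these simulations, the following sketch builds a comparable two-channel non-stationary trial from chirp and step functions and extracts sliding-window complex spectral features with a Short-Time Fourier Transform. The chirp ranges, gating times, noise level and window settings are illustrative guesses rather than the values used to generate the published figures.

```python
# Illustrative non-stationary trial plus sliding-window STFT features.
import numpy as np
from scipy.signal import chirp, stft

fs = 100                                        # Hz
t = np.arange(0, 2, 1 / fs)                     # 2 s epoch
rng = np.random.default_rng(1)

# Condition A: gated chirps on two channels; the null condition would be noise only.
ch1 = chirp(t, f0=5, f1=20, t1=2) * ((t > 0.25) & (t < 1.0))   # early rising chirp
ch2 = chirp(t, f0=30, f1=10, t1=2) * (t > 1.0)                 # late falling chirp
trial = np.stack([ch1, ch2]) + 0.3 * rng.standard_normal((2, t.size))

# 50 ms Hamming window with a hop of one sample, so complex coefficients are
# available (almost) timepoint-by-timepoint; note the coarse frequency
# resolution that such a short window implies.
freqs, times, Z = stft(trial, fs=fs, window="hamming", nperseg=5, noverlap=4)
features = np.concatenate([Z.real, Z.imag], axis=1)   # per-timepoint classifier inputs
```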
Evidence from MEG data The results we have presented are fundamentally theoretical and supported by simulated data from models of evoked activity. We therefore wanted to test how these findings extend to real data, and so tested our main predictions on a MEG dataset of visual image decoding. 5.1 Methods We took a publicly available dataset comprising 15 subjects viewing 118 different visual stimuli . This data had been acquired on an Elekta Neuromag scanner with 306 channels (204 planar gradiometers and 102 magnetometers) at 1kHz sampling rate, with filtering applied at acquisition with bandpass 0.03Hz to 300Hz. We downsampled the data to 100 samples per second with an anti-aliasing filter with cut-off at 50Hz and extracted the 0.5 second epochs immediately following stimulus presentation. The data was then mapped into a complex time-frequency decomposition using an STFT with Hamming window length of 100ms. The epoched data was then decoded to predict the trial condition labels using the three paradigms:
(i) Instantaneous signal decoding : decoding the raw broadband signal timepoint-by-timepoint as widely performed in the literature (T. ; ; ).
(ii) Narrowband signal decoding : sliding window decoding using the time-frequency estimates from the STFT, but only using the real coefficients across all sensors as a set of features. This method is analogous to decoding on data filtered into specific frequency bands of interest.
(iii) Complex spectrum decoding : sliding window decoding using the time-frequency estimates from the STFT, using both the real and imaginary coefficients across all sensors as a set of features.
Each approach fitted linear support vector machine classifiers using three-fold cross validation. This was applied to each pair of the 118 images in a mass pairwise classification paradigm as originally implemented by . In cases (ii) and (iii), classifiers were trained separately on each frequency band. The decoding used three-fold cross-validation to obtain independent classification accuracy metrics as a function of time and frequency for each pair of images and each participant. Finally, to test the hypothesis that different frequency bands contained complementary information, we trained an aggregate classifier to estimate the aggregate information distributed over all frequency bands. We did this through a nested cross validation procedure. An inner cross validation loop simply consisted of the complex spectrum decoding estimates described above. The outer cross validation loop then partitioned all of the stimuli into two equally sized groups and applied two-fold cross validation to obtain accuracy estimates. This outer loop consisted of a random forest ensemble classifier with 100 trees, trained to predict the class label from the outputs of the complex spectrum decoding classifiers on each trial. This outer loop was run ten times for each subject, randomly sampling a different subset of stimuli with replacement on each cross validation fold.
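The sketch below summarises how the features for the three paradigms could be assembled; it is a schematic reconstruction with hypothetical array names and shapes and generic scikit-learn classifiers, not the authors' analysis code.

```python
# Schematic of the three decoding pipelines; `epochs` is assumed to be
# (n_trials, n_channels, n_times) at 100 Hz and `labels` holds the condition
# of each trial for one pair of images.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def accuracy_timecourse(features, labels):
    """features: (trials, times, dims) -> 3-fold CV accuracy at every timepoint."""
    clf = SVC(kernel="linear")
    return np.array([cross_val_score(clf, features[:, i, :], labels, cv=3).mean()
                     for i in range(features.shape[1])])

def decode_three_paradigms(epochs, labels, fs=100, win_s=0.1):
    # (i) instantaneous signal decoding: broadband amplitudes at each timepoint
    inst = accuracy_timecourse(np.transpose(epochs, (0, 2, 1)), labels)

    # sliding-window STFT, ~100 ms Hamming window, hop of one sample
    nper = int(win_s * fs)
    _, _, Z = stft(epochs, fs=fs, window="hamming", nperseg=nper, noverlap=nper - 1)
    Z = np.moveaxis(Z, -1, 1)                     # (trials, times, channels, freqs)

    narrow, cplx = [], []
    for k in range(Z.shape[-1]):                  # one classifier per frequency band
        # (ii) narrowband signal decoding: real coefficients only
        narrow.append(accuracy_timecourse(Z.real[..., k], labels))
        # (iii) complex spectrum decoding: real and imaginary coefficients together
        cplx.append(accuracy_timecourse(
            np.concatenate([Z.real[..., k], Z.imag[..., k]], axis=-1), labels))
    return inst, narrow, cplx
```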
5.2 Decoding accuracy vs time in different decoding paradigms A plots the decode accuracy derived from decoding under the three identified paradigms. As paradigms (ii) and (iii) provide accuracy in each frequency band independently, for ease of visualisation they are each plotted separately against paradigm (i). Averaged over all pairs of stimuli and all subjects, this identifies a systematic variation in the information content at different frequencies as a function of time. The earliest detectable information appears in higher frequencies, but these peak quite transiently at relatively low values and are quickly surpassed by information content in lower frequencies, which rise to higher values and are then sustained for a longer duration. Notably, the information in either the 10Hz or the 0Hz band exceeds that obtained by instantaneous signal decoding for nearly the entire period analysed; the accuracy averaged over all timepoints is higher for both measures (paired t-test, p<0.001 Bonferroni corrected for multiple comparisons over frequency bands), corresponding to higher accuracy over a majority of timepoints in these frequency bands (see Supplementary Information, and Figure S4). From the perspective of representational dynamics, this establishes first and foremost that Fourier decompositions can improve decoding accuracy over instantaneous signal decoding methods whilst retaining a profile of how the representational content evolves in both time and frequency. 5.3 Complex spectrum decoding accuracy exceeds narrowband signal decoding accuracy B compares the average classification accuracy in each frequency band, averaged over all subjects and pairs of stimuli, when either the complex spectrum decoding or narrowband signal decoding is applied (it follows from the definition of the discrete Fourier transform that the imaginary coefficients in the 0Hz and 50Hz frequency bands are always zero, so in these bands the two paradigms are in fact equivalent). In all cases the classification accuracy obtained using complex spectrum decoding exceeds that obtained using narrowband signal decoding ; this information gap can be interpreted as the information stored in the gradient of these sinusoidal components. 5.4 Narrowband signal decoding produces spectral peaks at double their original frequency in inferred decoding accuracy metrics Our models predict that the information content associated with evoked spectral components at a given frequency is itself oscillatory at double that frequency, unless complex spectrum decoding is applied. We have so far plotted the average over all subjects and all comparisons, therefore obscuring some of the temporal dynamics evident in each comparison. For example, in C we plot the timecourse obtained for one subject and one pair of stimuli; the accuracy timecourse obtained from complex spectrum decoding appears to follow the envelope of the equivalent timecourse obtained by narrowband signal decoding , which appears to show sinusoidal dynamics. If we take the PSD of these accuracy timecourses, we observe a peak at double the frequency band being analysed (i.e. the 10Hz and 20Hz bands are associated with a 20Hz and 40Hz spectral peak respectively). If we take the PSD of the timecourse for every pair of stimuli and every subject and average, we see the PSD is significantly higher in the 10Hz and 20Hz bands at approximately 20Hz and 40Hz, respectively. Given a sampling rate of 100Hz, we expect representational aliasing linked to any evoked spectral content above 25Hz.
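The doubling-plus-aliasing arithmetic behind this prediction can be checked directly; the values below are illustrative and assume the 100 Hz sampling rate used here.

```python
# Where a narrowband accuracy timecourse should peak, given evoked content at f_signal.
fs, nyquist = 100, 50
for f_signal in (10, 20, 30, 40):                           # Hz
    f_info = 2 * f_signal                                   # information content oscillates at 2f
    f_seen = f_info if f_info <= nyquist else fs - f_info   # folded back if above Nyquist
    print(f"{f_signal} Hz evoked component -> accuracy spectrum peak at {f_seen} Hz")
# 10 -> 20, 20 -> 40, 30 -> 60 (aliased to 40), 40 -> 80 (aliased to 20)
```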
Specifically, given evoked spectral content at 30Hz or 40Hz, we expect representational aliasing artefacts at 40Hz and 20Hz, respectively (for example, a 30Hz component is translated to 60Hz in the accuracy timecourses; as this is 10Hz above the Nyquist frequency, it is aliased to 10Hz below the Nyquist frequency, i.e. to 40Hz). For both of these narrowband signals, we see peaks at these locations, confirming the presence of representational aliasing. We stress that this aliasing effect must also be present in the instantaneous signal decoding results; it simply cannot be explicitly resolved there, as we have no knowledge of the frequencies at which such artefacts would be expected. Finally, in these plots we note that spectra are significantly more weighted towards the lower end of the frequency spectrum for complex spectrum decoding vs narrowband signal decoding , whilst the opposite relationship is the case towards the upper end of the frequency spectrum. This means that the higher accuracies obtained by complex spectrum decoding in B are a result of increased low frequency content, or representational dynamics that are more stable over time. In , we made the argument that complex spectrum decoding reflects the true frequencies at which information is present in the original signal. This point is now reinforced with real data. In A, we plot the accuracies per frequency alongside the PSD of the accuracy timecourses obtained using instantaneous signal decoding. The former is interpretable and reveals an information profile with both spectral and temporal structure. In contrast, it follows from that the PSD of the instantaneous signal decoding timecourse does not correspond to the frequencies of actual information. The representational aliasing effects characterised above, and harmonics created when multiple carrier frequencies are combined, have contaminated the spectral profile such that no prominent structure can be easily observed. 5.5 Complex spectrum decoding accesses information content that is complementary over frequencies Having established that complex spectrum decoding accesses information content that is not available to instantaneous signal decoding , one final question arises: is the complex spectral information across different frequencies overlapping, or complementary? That is to say, if we aggregate the information over frequency bands, do we obtain performance that is merely equivalent to the best individual frequency band, or exceeding it? B plots the performance of the aggregate classifier against the complex spectrum decoding accuracies achieved in each frequency band, and that obtained by instantaneous signal decoding . The aggregate classifier significantly outperforms the instantaneous signal decoder , reaching a peak accuracy of 67.6% vs 61.6%. As plotted in C, this difference quantifies the total amount of information that is inadvertently being omitted by the insensitivity of instantaneous signal decoding paradigms to information stored in signal gradients. However, the aggregate decoder accuracy also peaks at a level higher than that obtained in any individual frequency band. As in D, over the period between 70msec and 190msec following stimulus presentation, the aggregate classification accuracy significantly exceeded the information content in any individual frequency.
This coincides with the time over which significant information content was distributed across multiple frequency bands, especially higher frequency bands, proving that these different frequency bands contain information content that is complementary. The performance is quite different for timesteps more than 370msec after stimulus onset, with the ensemble classifier underperforming slightly relative to the best narrowband classifiers (albeit still outperforming standard broadband methods). Over this period, the classifiers trained on higher frequencies output chance level predictions, and only lower frequency bands contain meaningful information content. We conclude that over this period, all meaningful information is concentrated in lower frequency bands, and the inclusion of high frequency bands that only contain noise is in fact detrimental to classifier performance.
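The band-aggregation step can be pictured as a simple stacking classifier. The sketch below illustrates that idea with synthetic stand-in scores; it is not the exact nested cross-validation procedure described in the Methods.

```python
# Stacking per-band classifier outputs with a random forest (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)

# Stand-in for cross-validated per-band classifier scores at one timepoint:
# two bands carry signal, three contain only noise.
informative = labels[:, None] + 0.8 * rng.standard_normal((200, 2))
noise = rng.standard_normal((200, 3))
band_scores = np.hstack([informative, noise])           # (n_trials, n_bands)

stacker = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(stacker, band_scores, labels, cv=2).mean())
# Adding pure-noise bands can also drag the aggregate down, as observed above
# for late timepoints where only low frequencies remain informative.
```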
Discussion We have outlined a widely overlooked problem in decoding pipelines: that frequency components in the evoked response produce corresponding components at double their original frequency content in the resulting accuracy metrics. Where researchers are not aware of this fundamental relationship, there is a considerable risk of misinterpreting results and, in particular, of inferring relationships with canonical frequency bands that are in fact trivial representations of the evoked response spectrum. We have argued that including a signal's higher temporal derivatives in decoding better reflects the full picture of information content available to downstream brain regions, such that the more stable temporal profiles obtained by complex spectrum decoding (and related methods) are a better depiction of the true information content compared to instantaneous signal decoding . It is notable that neural circuits – being fundamentally conductance-based at the cellular level – are perfectly placed to compute such higher temporal derivatives, such that this information is readily available for any further computation. This certainly does not mean that the brain does not encode information to a particular phase of an underlying oscillation, just that commonly used instantaneous signal decoding methods may mistakenly suggest so. In particular, studies investigating memory reactivation have interpreted the oscillatory dynamics of the classification accuracy as evidence that reactivation is functionally phase-locked to canonical frequency bands. ) analysed the spectrum of the classification accuracy timecourse, much as we have done in C, and interpreted the peak at 7Hz as evidence for theta phase locking. Our results suggest this may rather be the result of a 3.5Hz sinusoidal component in the evoked response. In a similar vein, ) found that classification accuracy was modulated by theta phase. Crucially, this work derived decoding accuracy by using wavelet power estimates as inputs to the classifier, a measure which is theoretically phase invariant and therefore could circumvent the effects that we have characterised above. Nonetheless, in practice this becomes largely dependent on the parameters controlling the resolution of time and frequency when power is estimated, such that the relationships we have characterised remain a potential confound (see Supplementary Information, and Figure S3).
We therefore argue broadly for caution in interpreting oscillatory dynamics in classification accuracy timecourses, and – where authors wish to make stronger interpretations such as those discussed here – recommend rigorous parameter testing with suitably defined non-parametric tests of significance to prove such characteristics could not arise trivially. Our results are fundamentally mathematical, and should be interpreted as such; they derive from the expected Fourier spectrum of the evoked response, not from the fundamental frequency of a canonical neural oscillation. For example, a 40Hz Fourier component can be produced by a vast range of underlying neural sources, only a subset of which would be considered ‘gamma oscillations’. Our results apply regardless; thus the recommendation to low-pass filter with a cut-off frequency of one quarter of the sampling rate applies to any researcher doing instantaneous signal decoding , irrespective of the frequencies of neural activity they may be interested in or expecting. As an information theoretic result, if our modelling assumptions hold then these results are fundamental and apply to any instantaneous signal decoding approach regardless of methodological choices on the part of the researcher; they cannot be overcome by use of nonlinear classifiers, machine learning tools, or by analysing different accuracy metrics. The result similarly applies more broadly beyond our focus of decoding wherever unsigned statistics are used – for example, applying timepoint-by-timepoint F-tests in an ANOVA analysis to ascertain when a univariate sensor signal significantly differs over conditions would exhibit the same behaviour. In our analysis we have derived the spectrum of the information content up to an arbitrary monotonic scaling denoted by the function f . It follows that other widely used metrics to assess decoding accuracy (such as classification accuracy, distance from the classification hyperplane etc.) are each a different monotonic scaling of this quantity (see and SI for further details). We therefore argue that our results are universally applicable to instantaneous signal decoding pipelines (and indeed many other pipelines that utilise unsigned statistics) regardless of any variations in methodological choices. We have characterised three major decoding paradigms but do not claim these to be exhaustive with respect to the literature. A very common approach involves the application of classifiers not to a recorded signal itself but to a set of Fourier features derived from a signal, which in most applications will be equivalent to the narrowband or complex spectrum decoding paradigms such that all our results remain applicable. A related area of research uses the recorded signal and its central-difference gradient as features, similarly obtaining enhanced accuracy as a result . However an emerging area of research infers nonlinear time-domain features, for example through the training of temporal convolutional networks or recurrent neural networks, that are then used as inputs for classification . These methods typically offer a greater ability to separate conditions, however the accompanying barriers to interpretability have to date limited their direct application in the study of representational dynamics. We hope that such interpretability barriers will be challenged and overcome in future work, and that the relationships we have outlined here may aid this endeavor. 
Finally, we have shown that complex spectrum decoding overcomes the problem of representational aliasing whilst also presenting other benefits; specifically, leading to higher accuracies that are more stable over time. We are not the first to use complex features for decoding and find they achieve greater classification accuracy , nor more generally to use gradient information as features for decoding ; however, the theoretical principles for the underlying relationship were not previously established. That said, complex spectrum decoding presents its own challenges. The significant increase in dimensionality associated with a feature vector that varies simultaneously over time, space and frequency may present computational challenges. Furthermore, whilst we see interpretational benefits to having results that are resolved in both frequency and time, in some circumstances (such as the non-sinusoidal signal example simulated in ) this additional complexity may not harbour any new insights. We have spoken broadly of Fourier analysis, again to stress that these results apply generically to STFTs, wavelet decompositions, or any other such method; however, each of these applies different assumptions that mostly result in different trade-offs of time and frequency resolution. These trade-offs are likely to be especially pertinent in the context of high temporal resolution decoding. Nonetheless, the benefits can be quite substantial and well justified by the results. Conclusion We have characterised the relationship between the stimulus evoked spectrum and the information content spectrum, which is commonly used to investigate the brain's representational dynamics. Understanding how these two quantities relate is crucial to interpreting results obtained via decoding pipelines. By establishing these relationships under three different decoding paradigms, this work opens the door to much stronger interpretation of decoding results by linking the question of what is being represented with the neural mechanisms explaining how it is being represented. We hope this will enable more targeted scientific enquiry to uncover the true mechanisms by which the brain processes diverse forms of information. C.H.: conceptualisation, methodology, software, formal analysis, investigation, writing – original draft, project administration; M.V.E.: methodology, software, validation, resources, writing – review and editing, visualisation; A.Q.: validation, writing – review and editing; D.V.: validation, writing – review and editing; M.W.: validation, writing – review and editing, supervision, funding acquisition. The data used in Results is from a previously published work ; it is publicly available for download at http://userpage.fu-berlin.de/rmcichy/fusion_project_page/main.html . The code to perform all the analysis and example simulations published in this paper is publicly available at https://github.com/OHBA-analysis/RepresentationalDynamicsModelling . The interactive web application accompanying is published at https://doi.org/10.5281/zenodo.6579997 and is hosted at https://representational-dynamics.herokuapp.com/ . The data used in is from a previously published work ; as established in the original publication, the study was conducted in accordance with the Declaration of Helsinki and approved by the local ethics committee (Institutional Review Board of the Massachusetts Institute of Technology). The authors have no interests to declare.
Thiazolidinediones: An In-Depth Study of Their Synthesis and Application to Medicinal Chemistry in the Treatment of Diabetes Mellitus Introduction to TZD Heterocyclic systems are recognised to be of great importance due to their proven utility within the field of medicinal chemistry. It has been estimated that more than 85 % of all chemical entities which evoke a biological reaction contain at least one heterocycle. The incorporation of heterocycles into drug molecules allows organic chemists to modulate pharmacokinetic and pharmacodynamic properties, by altering such parameters as lipophilicity, polarity and hydrogen bonding ability, as well as toxicological profiles. It is, therefore, unsurprising that organic chemists have become highly familiar with heterocycles featuring various ring sizes. One such five-membered ring heterocycle is a thiazole (1), its non-aromatic analogue being thiazolidine (2). When 2 is decorated further with two carbonyl groups at positions 2 and 4, the ring system is termed 2,4-thiazolidinedione (3) (TZD), which is the focus of this review (Figure ). TZD exists as a white crystalline solid with a melting point of 123–125 °C and is bench stable when kept below 30 °C. In terms of solubility, TZD is only sparingly soluble in a variety of common solvents including water, MeOH, EtOH, DMSO and Et2O. Due to the presence of two carbonyl groups and an α-hydrogen, TZD has the ability to exist as a series of tautomers (see 3a–e, Figure ). Aside from its use within the sphere of organic and medicinal chemistry, TZD acts as an inhibitor for the corrosion of steels in acidic environments and is reported as a 'brightener' in the electroplating industry. 1.1 TZD core synthesis Synthetic methodologies to yield the TZD core were first reported in the 1923 work by Kallenberg. Kallenberg's method reacts carbonyl sulphide (4) with ammonia, in the presence of KOH, to generate in situ the corresponding alkyl thioncarbamate (5), which, in turn, reacts with an α-halogenated carboxylic acid (6). The thiocarbamate produced is then cyclised under acidic conditions to yield the desired TZD (3, Scheme ). Using more recent methodology, TZD is often synthesised by refluxing α-chloroacetic acid (6) with thiourea (7), utilising water as a solvent, for a prolonged period of time (∼12 h). The reaction mechanism for this process was proposed by Liberman et al. and can be seen below (Scheme ). Here, an initial attack on chloroacetic acid by the thiourea sulfur atom occurs. This SN2-type reaction, with generation of HCl, takes place before a subsequent, second, nucleophilic substitution reaction caused by the attack of the amine onto the carboxylic carbon (releasing water). The final steps of this reaction rely on the generation of the 2-imino-4-thiazolidinone intermediate species (11), which is hydrolysed at position 2 by the in situ generated HCl. Such hydrolysis results in the eventual release of ammonia as a gas and yields TZD (3). The above and previously described methodology requires prolonged heating at significantly elevated temperatures (100–110 °C). In order to overcome such issues, Kumar and colleagues in 2006 evaluated the use of microwave-induced synthesis to yield 3 . In a push to develop greener synthetic methodologies, it is not surprising that solid-phase, solvent-free reactions promoted/initiated via microwave irradiation have seen considerable attention in the last 20 years.
The method proposed by Kumar can be completed in a total of two synthetic steps in less than 0.5 h. Both chloroacetic acid (6) and thiourea (7) were suspended in water and stirred under ice-cold conditions for approximately 15 min, to precipitate out the previously discussed 2-imino-4-thiazolidinone (11). Compound 11 was then subjected to microwave initiation at 250 W for a period of 5 min (Scheme ). The desired TZD was isolated following cooling and vacuum filtration in 83 % yield and without the need for further purification. Although the yield was the same as the conventional method, the reduction in reaction time and temperature certainly presented some synthetic advantages. It should be noted that conducting the reaction utilising HCl as the acid and heating for a 7–8 hour period provided the highest yield of 3 at 94 % (Figure ). A third common synthetic protocol involves the reaction of ethyl chloroacetate (12) with thiosemicarbazone (13), which, in the presence of NaOEt, generates 2-hydrazino-4-thiazolidinone (14); this, in turn, can be refluxed in dilute hydrochloric acid to give the desired TZD (3) (Scheme ). A fourth commonly used method to generate the TZD core also utilises compound 12 . Here, acidification of the product obtained from the reaction of 12 with potassium thiocyanate yields 3 (Scheme ). It should be noted that during this process significant care must be taken due to the liberation of toxic HCN gas as a by-product. Though a series of methodologies have been presented to generate 3 as a core framework, the majority of research has been directed towards developing robust, high-yielding and simple substitution reactions. While novel methodologies have been illustrated above, the most widely used synthetic pathway to produce the TZD core remains that illustrated in Scheme and Figure . This is because the reagents utilised are available in large quantities from commercial sources and do not require the handling or liberation of severely toxic by-products (as seen in the case of Scheme ). Despite the efficient synthesis of 3 with the aid of a microwave reactor, the same yield was achieved.
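As a brief computational aside, descriptors of the kind mentioned in the introduction (molecular weight, hydrogen-bonding capacity, lipophilicity) can be estimated for the parent TZD core with a cheminformatics toolkit. The snippet below assumes the open-source RDKit package and is included purely as an illustration; it is not part of the synthetic work discussed in this review.

```python
# Basic descriptors for the 2,4-thiazolidinedione core (3); illustrative only.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, rdMolDescriptors

tzd = Chem.MolFromSmiles("O=C1CSC(=O)N1")        # 2,4-thiazolidinedione
print(rdMolDescriptors.CalcMolFormula(tzd))      # C3H3NO2S
print(round(Descriptors.MolWt(tzd), 2))          # ~117.1 g/mol
print(Descriptors.NumHDonors(tzd),               # ring N-H donor
      Descriptors.NumHAcceptors(tzd))            # carbonyl/ring heteroatom acceptors
print(round(Crippen.MolLogP(tzd), 2))            # estimated lipophilicity (logP)
```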
Substitution Reactions The large number of reported substitutions on the core TZD frame can result in the alteration of molecular properties, the generation of novel molecules and potentially bioactive candidates. Substitutions at the nitrogen position N3, the C2 carbonyl and the methylene group are known. The carbonyl group present at C4 is considered highly unreactive. The only reported reactivity at this position was displayed in 1999 by Kato et al., who treated 15, a derivative of 3, with Lawesson's reagent in THF to afford the corresponding thiocarbonyl compound (16) (Scheme ). 2.1 NH substitution The primary methodology to introduce substituents onto the nitrogen atom involves deprotonation with an appropriate base, followed by substitution with either alkyl or benzyl halides. The first reported generation of such compounds dates back to the mid-1950s in work by Lo and Bradsher. Their protocols used either NaOMe (Bradsher) or KOH (Lo) as the base. When NaOMe was utilised, Bradsher opted to conduct the reaction in hot MeOH, whereas this was exchanged for DMF in the alkylation reactions performed by Lo (Scheme ). Over time, with significant research directed towards conducting N-alkylation of TZD frameworks, a vast range of different bases have been screened. It is now readily accepted that other suitable bases include potassium carbonate, tetrabutylammonium iodide, NEt3 or even sodium hydride. It has also been determined that suitable solvents include acetone. Another less commonly used methodology for the generation of 3-substituted TZDs relates back to one of the original procedures used for the generation of the TZD core (3). By substituting ammonia with a primary amine (Scheme ), the substituent group present is shown to persist through the entire process, resulting in the desired N-substituted derivatives. A final and more novel approach can be seen in a 2005 paper published by Mendoza that discloses access to N-substituted TZD structures derived from oxazolidinethiones. Chiral auxiliaries have been used to construct bioactive compounds. The chiral auxiliary used in Mendoza's work can be seen as a derivative of the renowned Evans auxiliary (18), with the only difference being replacement of the carbonyl function with a thione (19). Preparation of 19 initially involves protection of the S-valine methyl ester (21) amino group using trifluoroacetic anhydride and NEt3 to generate N-trifluoroacetyl S-valine methyl ester (22). Subsequent addition of methyl magnesium iodide followed by ester hydrolysis, in the presence of aqueous potassium hydroxide, gives β-amino alcohol 23 in 78 % yield overall. The final synthetic step, which induces cyclisation to form 19 (80 %), also installs the thione group (Scheme ). With the chiral auxiliary in hand, the Mendoza group managed to induce a shift of the isopropyl group occupying the C4 position to the nitrogen and transform the oxazolidine-2-thione core moiety into the one featured in TZD. The procedure involved treating 19 with NaH at 0 °C in dichloromethane, prior to the dropwise addition of bromoacetyl bromide at −78 °C. The product (24) was generated as an oil in 67 % yield (Scheme ). 2.2 Methylene substitution Further key functionalisation of the TZD framework can be achieved through substitution at the methylene C5 position.
The most widely used methodology involves a Knoevenagel condensation (KC), which relies upon the addition of an aldehyde to an activated methylene unit followed by a dehydration reaction to generate a new olefin. The reaction is often conducted in the presence of a base, commonly a primary or secondary amine. The base first removes the acidic proton before the resulting nucleophile attacks the carbonyl (28, Figure ). Following this, the conjugate base induces an elimination reaction with the production of water. In the pursuit of developing a series of 5-(substituted benzylidene)-TZD derivatives to act as tyrosine kinase inhibitors, Ha and co-workers discovered that a catalytic amount of piperidine is optimal for this type of chemistry. A proposed catalytic cycle involving the use of piperidine is shown below in Figure . The reaction also tolerates bases such as NaOH, MeNH2, morpholine, K2CO3 and aqueous ammonia. Typically, the condensation is conducted in standard organic solvents such as alcohols, DMF, DMSO or DCM. Though robust, the standard KC conditions involve prolonged heating (usually for up to 12 h) and complicated azeotropic removal of H2O with the aid of a Dean-Stark apparatus and, as such, result in a time-consuming and tedious reaction. Fortunately, more convenient methods have been developed over the last twenty years. In 2006, a solvent-free synthetic protocol was reported by Kumar. In this work it was found that combining both precursors with piperidine (acting as base and catalyst), activated silica gel and acetic acid in a 900 W microwave reactor gave reaction times of ∼7 minutes. Not only did the reaction time decrease, but the yields of all 23 substrates tested also increased. Removal of the activated silica and/or the AcOH decreased the yield. In work aimed at the generation of greener methodology, Mahalle et al. avoided the use of hazardous catalysts and solvents by using polyethylene glycol (PEG-300) instead. The group found that refluxing the aldehyde and TZD in PEG for around 3 h resulted in significantly high yields of up to 80 % (Scheme ). In 2012, Thirupathi and co-workers looked to the biochemical world in search of a cheap, non-toxic, and readily available catalyst to be used in a modified Knoevenagel reaction. The catalyst which proved most promising was L-tyrosine. They developed a highly efficient protocol, conducted at ambient temperature, utilising water as the solvent, with a fast work-up and purification procedure and rapid reaction times. The proposed reaction mechanism for the adapted procedure can be found below (Scheme ). In their research to generate a series of 11 analogues (33), they found that aryl aldehydes substituted with electron-withdrawing groups (EWG) reacted faster than those possessing electron-donating groups (EDG). It was also found that L-tyrosine is essential for reactivity, as no product formation occurred in its absence. The initial mechanistic step features L-tyrosine in its zwitterionic form abstracting a proton from the activated methylene of TZD (34). Then, the carbonyl is protonated to the corresponding oxonium cation (35) before it is attacked by the highly nucleophilic deprotonated TZD core (36). The final step once again involves a dehydration reaction in which the desired 5-arylidene compounds are formed (33) (Scheme ).
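As a complement to the mechanistic discussion above, the condensation can be enumerated in silico. The snippet below is a minimal sketch assuming RDKit; the reaction SMARTS, the TZD SMILES and the choice of p-anisaldehyde are our own illustrative assumptions, no attempt is made to model the catalyst or conditions, and the geometry of the exocyclic olefin is not set.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Knoevenagel sketch: TZD methylene (C5) + aryl aldehyde -> 5-arylidene TZD.
# Atom maps keep the ring intact; the aldehyde oxygen is dropped, mirroring the loss of water.
kc = AllChem.ReactionFromSmarts(
    "[CH2:1]1[S:2][C:3](=[O:6])[NH:4][C:5]1=[O:7]."
    "[c:8][CH:9]=O"
    ">>[c:8][CH:9]=[C:1]1[S:2][C:3](=[O:6])[NH:4][C:5]1=[O:7]"
)

tzd = Chem.MolFromSmiles("O=C1CSC(=O)N1")          # thiazolidine-2,4-dione (3)
aldehyde = Chem.MolFromSmiles("COc1ccc(C=O)cc1")   # p-anisaldehyde, an arbitrary example

product = kc.RunReactants((tzd, aldehyde))[0][0]
Chem.SanitizeMol(product)
print(Chem.MolToSmiles(product))   # a 5-(4-methoxybenzylidene)thiazolidine-2,4-dione
```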
In the same year, a second modified Knoevenagel protocol was developed by Zhou and co-workers at the South-Central University of Nationalities, Wuhan. They utilised ethylenediamine diacetate (EDDA), a readily available and cheap combined Brønsted acid-base salt catalyst which has generated considerable interest in recent times. The coupling of benzaldehyde (38) with TZD at room temperature, in a solvent-free environment with the catalyst loading set to 10 mol%, produced the desired 5-benzylidene (39) in 63 % yield after a reaction time of 150 min (Scheme ). The optimal reaction conditions consisted of conducting the reaction at 80 °C with 5 mol% catalyst loading, giving rise to a 91 % yield of 39 (Scheme ). No product was obtained in the absence of EDDA. From a green synthesis perspective, Zhou and colleagues also looked at the possibility of recycling the EDDA catalyst. The filtrate containing the EDDA-contaminated water was concentrated in vacuo and used directly in a subsequent reaction. Results from this study showed that the recovered EDDA could be used in at least four reactions before a considerable reduction in catalyst activity was observed. The group successfully synthesised a series of benzylidene structures (33) in high yields ranging from 70–91 % (Scheme ). Over the past few decades, the use of ultrasound to promote chemical reactions has seen an increase in popularity. Sonochemical techniques can be seen as a milder alternative to conventional reaction heating and have demonstrated the ability to provide higher yields with shorter reaction times, all without the need for inert reaction conditions. With these advantages in mind, Bougrin et al. devised a unique methodology for the synthesis of novel TZD-containing structures substituted at both the N3 and C5 positions via a one-pot synthetic protocol. This involved combining TZD with the desired aldehyde (to substitute at the C5 position) and an alkyl halide (to react at the amine centre). This work is illustrated by the reaction of 3, 40 and 41 in Scheme . The reagents are dissolved in a mixture of EtOH/H2O (v/v, 2 : 1) and NaOH before sonication for a period of 25 min at 25 °C (Scheme ). The desired disubstituted product was isolated following acidic precipitation with 4 M HCl and recrystallisation from hot EtOH. To verify the effect of ultrasound irradiation on reaction success, a series of control experiments were conducted in which sonication was exchanged for conventional magnetic stirring. In these cases, the reaction times were extended up to 12 h without full conversion. Although a variety of methodologies have been illustrated which effectively substitute the two main positions of the TZD core framework (C5 and N3), the most commonly used methodologies are those mentioned first. For methylene (C5) functionalisation, the KC offers mild reaction conditions, high functional group tolerance and generally allows high-yielding reactions over a short reaction time. While the other methodologies presented herein are novel, they often rely upon the use of specialist equipment or extended work-up procedures. The same can be said for substitution at the N3 position, which is usually conducted through deprotonation (using an inorganic base such as NaOH) followed by nucleophilic attack on an alkyl/aryl halide.
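The N3-alkylation summarised in the closing sentence above can be sketched in the same way. The snippet below is a rough illustration assuming RDKit; the imide/benzyl-bromide reaction SMARTS and the choice of benzyl bromide are our own assumptions and say nothing about the base, solvent or conditions reported in the cited protocols.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# N-alkylation sketch: the imide N-H (flanked by the two ring carbonyls) is benzylated;
# the unmapped bromine is dropped from the product, mirroring its displacement.
n_alk = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[NH:3][C:4]=[O:5].[c:6][CH2:7]Br"
    ">>[C:1](=[O:2])[N:3]([CH2:7][c:6])[C:4]=[O:5]"
)

tzd = Chem.MolFromSmiles("O=C1CSC(=O)N1")     # thiazolidine-2,4-dione (3)
bnbr = Chem.MolFromSmiles("BrCc1ccccc1")      # benzyl bromide, an arbitrary electrophile

# The symmetric imide gives two equivalent matches, so collect unique products.
unique = set()
for (prod,) in n_alk.RunReactants((tzd, bnbr)):
    Chem.SanitizeMol(prod)
    unique.add(Chem.MolToSmiles(prod))
print(unique)   # one unique product: 3-benzylthiazolidine-2,4-dione
```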
Role of TZDs in Medicinal Chemistry

Due to the diverse range of reactions that can be conducted on the TZD core framework, it is unsurprising that substituted TZDs exhibit a vast range of pharmacological activities. Since their first reported use in 1954, TZD derivatives have been known to provide antimicrobial, antiviral, antioxidant, anticancer, anti-inflammatory, anti-plasmodial and anti-hyperglycemic effects. Further to this, TZD-containing compounds have demonstrated uses as AI-2 quorum sensing inhibitors, aldose reductase inhibitors, alpha-glucosidase inhibitors, COX inhibitors, 15-hydroxyprostaglandin dehydrogenase inhibitors, peptide deformylase inhibitors and PTP1B inhibitors, as well as FFAR1 agonists, β3 agonists, GPR-40 agonists and peroxisome proliferator-activated receptor (PPAR) modulators. Aside from substituted TZDs conferring pharmacological effects towards a vast range of protein targets, the motif itself is a recognised bioisostere for the carboxylic acid moiety.

3.1 TZD as a bioisostere

A methodology often utilised in the development of marketable drugs is the exchange of functional groups for bioisosteric motifs. Bioisostere is recognised as an umbrella term which can be divided into two categories: classical and non-classical. Classical bioisosteres are described as isoelectronic with the unexchanged moiety and exhibit similar biological properties to the parent structure. Non-classical bioisosteres, however, can vary widely, e.g. featuring different numbers of atoms or different numbers of hydrogen bond acceptors (HBA) or donors (HBD), but the electrostatic map is often highly similar to that of the parent molecule. Despite the ubiquitous presence of the carboxylic acid group within endogenous substances (including amino acids, triglycerides and prostaglandins), and its appearance in over 450 known marketed drugs worldwide (including NSAIDs, antibiotics, anticoagulants and statins), it is often deemed a liability. Ionisation to the carboxylate anion at physiological pH results in a diminished ability to diffuse passively across lipid cell membranes such as those in the intestine or blood-brain barrier (BBB). Furthermore, metabolism of the carboxylic acid moiety via phase II glucuronidation can complicate pharmacotherapy due to the production of idiosyncratic products. In work published by Lassalas investigating structure-property relationships of carboxylic acid isosteres, it was determined that the TZD confers moderate acidity, with an associated pKa value of around 6–7, and is considered to be relatively lipophilic. This makes the TZD group better suited to enhancing drug permeability via diffusion across biological membranes in comparison to its carboxylic acid counterpart. It is for these reasons that the TZD framework is considered a suitable surrogate. It should also be noted that many modulators of PPARγ feature the carboxylic acid moiety.

3.2 TZDs as therapeutic agents

As discussed previously, TZD-containing scaffolds are known to invoke a biological response across a wide variety of biological targets. The following section deals with target-specific interactions. Herein, both the mechanism of action brought about by the clinical agent and a discussion of the synthetic methodology used to generate the TZDs are included.
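Before moving on to target-specific examples, the physicochemical comparison behind the bioisostere argument above can be sketched computationally. The snippet below is illustrative only and assumes RDKit; it compares calculated lipophilicity, polar surface area and hydrogen-bonding counts for the parent TZD ring and a simple carboxylic acid, and it does not predict pKa (the 6–7 value quoted above comes from the cited experimental work).

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Illustrative head-group comparison; SMILES are our own encodings, not from the cited work.
examples = {
    "thiazolidine-2,4-dione": "O=C1CSC(=O)N1",
    "acetic acid":            "CC(=O)O",
}

for name, smi in examples.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name:24s} "
          f"cLogP={Descriptors.MolLogP(mol):6.2f}  "
          f"TPSA={Descriptors.TPSA(mol):5.1f}  "
          f"HBD={Lipinski.NumHDonors(mol)}  "
          f"HBA={Lipinski.NumHAcceptors(mol)}")
```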
3.3 TZDs in the treatment of diabetes mellitus

The most common utilisation of TZD-containing structures has been in the treatment of diabetes mellitus (DM), a metabolic disorder characterised by hyperglycaemia and related ailments caused by deficiencies in insulin secretion or action. There are two recognised forms of DM: type 1 (T1D) and type 2 (T2D). T1D is associated with either a total or partial lack of insulin, brought about by a defect in the immune system resulting in a loss of tolerance and degradation (through inflammation) of the β-cells. T2D is, however, associated with varying degrees of insulin resistance, unregulated insulin secretion and an increase in hepatic glucose production; sufferers are therefore termed non-insulin dependent. The concentration of glucose present in the blood is controlled via a homeostatic hormonal balance of insulin and glucagon. When the glucose concentration is high, there is an increased uptake of glucose by skeletal muscle. When the glucose concentration is low, glucagon promotes the synthesis and release of glucose. The International Diabetes Federation has predicted that the number of people living with diabetes will increase from 425 million in 2017 to 629 million by 2045. Conventional treatments for DM prior to the use of TZD-based structures include drug classes such as the sulphonylureas (43–45), meglitinides (46/47) and biguanides (48) (Figure ).

3.3.1 PPARγ inhibition

The first antihyperglycemic TZD that came to the attention of medicinal chemists was ciglitazone (49). Workers at the Takeda laboratories (Japan) synthesised 49 in early 1975 while generating a series of 71 analogues of the hypolipidemic drug clofibrate, with the aim of discovering a more potent derivative. They found that some of the products generated displayed hypoglycemic effects when tested on diabetic mice. Although 49 entered development, it never reached the market owing to unacceptable liver toxicity. Troglitazone (50) was then discovered and developed by Sankyo in 1988, and approved by the US Food and Drug Administration (FDA) for T2D treatment in 1997. However, within six weeks of its launch it was withdrawn from the UK market as a result of potentially fatal hepatotoxicity. In 1999, Takeda and SmithKline developed two TZD-containing drugs, pioglitazone (51) and rosiglitazone (52), which were also approved by the FDA for the management of T2D (Figure ). After capitalising on the market for insulin sensitisers and becoming one of the top 25 selling brands in the United States, concerns were raised regarding rosiglitazone causing heart failure due to fluid retention and, in 2011, the European Medicines Agency recommended that it should be withdrawn from the market. These 'first generation' glitazones were initially prepared via condensation or nucleophilic substitution of a halogen-substituted nitrobenzene (53) with an alcoholic precursor (54) to generate the intermediates 55 (Scheme ). These compounds were then subjected to hydrogenation conditions to provide the amines 56. Diazotisation of amine 56 was then conducted to yield the diazonium salts 57, which were condensed in the presence of methyl acrylate to afford the α-halogenated esters 58.
The final steps of this synthesis introduced the TZD moiety via a cyclo-condensation with thiourea (7) to afford the respective imines 59, before acidic work-up and hydrolysis yielded 49 or 51 in relatively high yields (Scheme ). Though suitable for small-scale work, the use of pyrophoric NaH, expensive metal catalysts and toxic acids, as well as the generation of potentially explosive intermediates, may render this route unsuitable for significant upscaling. Therefore, a process chemistry methodology was developed which made use of commercially available TZD. The improved methodology delivered 49 in 54 % overall yield over three steps (Scheme ). In the above synthesis, substitution of the commercially available p-fluorobenzaldehyde (60) with 61 in the presence of NaH, in DMF, afforded 62, which was, in turn, coupled with TZD (3) via a KC to generate the penultimate intermediate (63). The final step in the synthesis required a reduction of the benzylidene olefin, which was achieved using NaBH4 in the presence of NaOH. PPARs are ligand-inducible transcription factors that are important biologically for cell differentiation, lipid and glucose homeostasis, insulin sensitivity, inflammatory responses and various other metabolic processes. These receptors exist as three distinct isoforms, namely PPARα, PPARβ/δ and PPARγ. Agonist binding within the ligand binding region (LBR) causes translocation to the cell nucleus and heterodimerisation with the retinoid X receptor (RXR), another nuclear receptor. Following dimerisation, they bind to specific regions of DNA known as peroxisome proliferator hormone response elements (PPREs). PPARα is highly expressed in the heart, liver and skeletal muscle, whereas PPARγ is highly expressed in adipose tissue. Binding of TZDs promotes fatty acid uptake and subsequent storage in adipose tissue. As a result, fat levels are reduced in the liver, pancreas and muscles, leading to protection from toxic products formed by the metabolism of free fatty acids. Furthermore, TZDs interfere with cellular signalling pathways between insulin-sensitive tissues and organs such as the liver, muscle and adipose tissue. They have also been seen to increase the production of adipokines, which are insulin sensitisers. Through extensive structure-activity relationship (SAR) studies, conventional TZDs can be split into five common moieties: an acidic TZD head group (purple) and a lipophilic tail (green), each linked to a central phenyl ring (orange) by an aliphatic chain (blue/red) (Figure ). Agonist activity towards PPARγ is usually brought about through the formation of hydrogen bonds with the P1 hydrophilic binding pocket present in arm I of the active site. Specifically, hydrogen bonding interactions are formed with His323, His449 and Tyr473. The lipophilic tail (green) interacts with binding sites present in both arm II and arm III through a multitude of hydrophobic interactions, including van der Waals and π-π stacking interactions. The aliphatic linking chains (blue/red) act as spacers to orientate the acidic TZD head (purple) and lipophilic tail (green) into their binding pockets. Finally, the central phenyl ring (orange) interacts with further hydrophobic amino acid residues in the cleft in a similar way to the tail group. Modifications of the blue-coloured alkyl linker (Figure ) were tolerated, provided that the alkyl chain did not exceed three carbon atoms.
It is likely that this linker orientates the molecule and provides the required space between the lipophilic tail and the polar acidic head group. The central phenyl ring is highly significant in the binding interactions that bring about PPARγ agonism, though substitution with dihydrobenzopyran and naphthyl groups (englitazone (64) and netoglitazone (65), respectively) is also well tolerated. The second linker (red) has also been subjected to structural modification. The severe adverse side effects brought about by 49 have been attributed to the fact that it features a very small aliphatic linker of just one carbon atom, which fails to present 49 in the desired conformation for efficient binding and often results in off-target interactions. A series of analogues with modifications to this alkyl linker are shown below. Compound 66 features the maximum of six atoms between the central phenyl ring and the lipophilic tail; it is thought that the oxime functional group plays a vital role through the provision of hydrophobic interactions. A more exotic linker was introduced using a sulfonyl group (67/68), which most likely behaves as a weak HBA, resulting in the formation of short syn-orientated hydrogen bonds planar to the axis of the S=O group with residues present at the entrance of the ligand binding domain (LBD) (Figure ). The final structural region within the general pharmacophore model which can receive structural modification is the green lipophilic tail (Figure ). During drug-receptor interactions, the affinity of the drug is determined predominantly by the extent of hydrophobic interaction. It has been evaluated that within the LBD of PPARγ there exist two large hydrophobic pockets, namely P3 and P4. It is for this reason that large hydrophobic units are required at the tail end of suitable drug candidates. Rosiglitazone (52) possesses a large pyridine ring as the tail but is unable to reach both hydrophobic pockets, reaching only P3. On the other hand, balaglitazone (69) features a bulky benzopyrimidinone group as the hydrophobic tail and is shown to be a partial agonist of PPARγ. It is believed that this partial agonist activity is brought about by hydrophobic interactions with both pockets P3 and P4 (Figure ). Exchange of the lipophilic tail with pyridyl (70) and pyrimidyl (71) moieties showed an increase in agonist activity at the micromolar level, with the pyridyl conferring slightly higher potency. In the case of 71, decreases in plasma glucose and triglyceride levels of 73 % and 85 %, respectively, were observed. Such moieties exhibit better agonist activity than both 51 and 52 (first generation glitazones) in terms of both oral absorption and less severe side-effects. PPARγ inhibition was also witnessed when the lipophilic tail was substituted with tetrahydronaphthalene (72), styryl (73) and diphenyloxy (74) groups and by a range of nitrogen-containing heterocycles (75–78) (Figure ).

3.3.2 PTP1B inhibition

A second well-recognised therapeutic strategy for the treatment of DM is the inhibition of protein tyrosine phosphatase 1B (PTP1B). The phosphorylation-dephosphorylation of biological molecules is recognised as a vital mechanism in the control of cellular function, growth, communication and differentiation. Protein tyrosine phosphorylation also plays an active role in transmitting extracellular responses, which can lead to T-cell activation and antigen-receptor signalling pathways.
The process is governed by two opposing enzyme families: protein tyrosine kinases (PTK), which conduct phosphate transfer, and protein tyrosine phosphatases (PTP), which act to catalytically hydrolyse the previously installed phosphate group. PTPs are distinguished structurally by the presence of a single catalytic domain consisting of around 240 residues, with arginine and cysteine acting as essential amino acids in the catalytic process. In vivo, PTPs maintain the level of tyrosine phosphorylation, which has shown an association with T2D. PTP1B is recognised to be a negative regulator of the insulin-receptor and leptin-receptor signalling pathways. The binding of leptin to its receptor results in phosphorylation of Janus kinase 2 (JAK2) and further activates the associated JAK signal transducer and activator of transcription (STAT). This results in translocation of STAT3 to the cell nucleus and subsequently induces a gene-mediated response to reduce the production of acetyl coenzyme A carboxylase (ACC). ACC is a biotin-dependent enzyme which catalyses the conversion of acetyl coenzyme A (Ac-CoA) to malonyl-CoA; this in turn halts or reduces the rate of fatty acid synthesis while increasing the rate of fatty acid oxidation via metabolism. In an effort to develop a series of biologically active compounds, Bhattarai and co-workers initially looked to the previously described PPARγ modulators (glitazones) to assess their inhibition of PTP1B. They found that the clinically used glitazones 50–52 displayed medium to low potency in terms of PTP1B inhibition, with IC50 values between 55–400 μM. Glitazones which feature substitution at the C5 position with a benzylidene linker are termed 5-benzylidene-TZD derivatives. Using these glitazones as a scaffold for their subsequent research, the group investigated altering the location of the benzyloxy group stemming from the central phenyl ring to develop a series of analogues in which substitution occurs at the ortho and para positions (Scheme ). The derivatives illustrated in Scheme were synthesised via the widely reported method of conducting a KC in the presence of a piperidine catalyst. In cases where this methodology did not yield the desired product (81i, 84f and 84g), the Nobel Prize-winning Suzuki Pd cross-coupling reaction was employed to generate 81i and 84g–h in yields of 81 %, 90 % and 86 %, respectively (Scheme ). Following synthesis, the analogues generated were tested for in vivo inhibition of PTP1B in an obese and diabetic mouse model (C57BL/6J Jms Slc, male). Bhattarai found that all of the generated compounds were significantly more potent than the marketed glitazones. Substitution at the ortho position brought about a higher level of inhibition, with IC50 values ranging from 5–23 μM. The most potent analogues synthesised were 81h and 84e (Schemes & ), both possessing an IC50 value of 5.0 μM. Analogue 81h was further evaluated in order to obtain an insight into its inhibitory mechanism of action. In a series of docking simulations, it was illustrated that a pair of hydrogen bonding interactions are made from Gln266 and the backbone amino group of Ser216 to the carbonyl oxygen substituents on the TZD frame. The acidic proton occupying the amidic position was assumed to be deprotonated in vivo due to its pKa value of ∼6.74. In its deprotonated state, 81h is seen in AutoDock simulations to lie close to Cys215 and Arg221, at a distance of 4–5 Å.
Further interaction (and hence stabilisation) within the catalytic site can be seen through hydrophobic interactions between the inhibitor's aromatic rings and surrounding hydrophobic residues, including Tyr46, Val49, Phe182, Ala217 and Ile218. In the same in vivo study, 81h was administered orally with food at 143 mg/kg/day of body weight for a period of 4 weeks, which produced a reduction in body weight and an improvement in glucose tolerance. Significantly lower levels of total cholesterol and triglycerides were present in the serum of mice consuming 81h, though dark brown spots were observed in the liver (suggesting liver damage), indicating that 81h required further optimisation through derivatisation. The side-effects exhibited by compounds built on the general glitazone (PPARγ modulator) scaffold have since been attributed to off-target interactions. Specifically, the acidic nature of the proton attached to the N3 position on the TZD frame is important. This rationale stems from its resemblance to the carboxylate anion of the natural fatty acid ligands, which are known modulators of PPARγ. Because of these findings there has been a push for the development of novel N-substituted TZD derivatives. A reduction in target promiscuity should provide a reduction in side-effects, specifically those relating to hepatotoxicity. It should be noted that Bhat et al. have since utilised these findings to successfully develop a series of N-substituted glitazones with little to no observed toxicity. In 2007, Maccari and colleagues set about developing a series of N-substituted 5-arylidene derivatives as modulators of PTP1B. They began with the installation of a para-methylbenzoic acid group at the N3 position of TZD, primarily because benzoic acid acts as a phospho-tyrosine isostere. In their published work they successfully synthesised a series of 10 analogues (Figure ). Installation of groups at the methylene position proceeded through the already discussed KC of the corresponding aromatic aldehyde with TZD, in the presence of a piperidine catalyst, in refluxing EtOH. Substitution at the amidic position was achieved by refluxing the 5-substituted TZD with 4-(bromomethyl)benzoic acid, in the presence of K2CO3, prior to acidic work-up and recrystallisation from hot MeOH. The synthesised analogues were evaluated for activity in vitro against recombinant human PTP1B as well as the two active isoforms present. Compounds 86–91 were shown to exhibit PTP1B inhibition with IC50 values in the low micromolar range (1.1–6.5 μM). Within this subset, compounds 86 and 87 displayed the most effective inhibition of both isoforms of PTP1B. In 2017, Mahapatra helped to tackle a common problem associated with 5-arylidene TZD-based compounds. Classically, active-site-directed PTP1B modulators have possessed a high charge density, which brings about a host of issues in terms of pharmacokinetics and poor membrane permeability. This, in turn, reduces oral bioavailability and drug-likeness. To overcome these problems, Mahapatra looked at exchanging the arylidene substitution at the methylene position, taking inspiration from work published by Anderson, Moretto and Ye. Their work has shown positive results towards PTP1B inhibition via the inclusion of thiophene-based compounds.
Mahapatra therefore envisaged a series of small-molecule inhibitors containing a TZD core, N-substituted with a lipophilic alkyl or haloalkyl group and bearing a thiophene unit installed at the methylene position, joined to the TZD via a vinyl linker (95). The general structure for this series of compounds can be seen below in Figure . A series of 10 N-alkyl/alkyl halide analogues were synthesised (95a–j) in good to high yield (59–86 %) via KC of TZD (3) with thiophene-2-carboxaldehyde (96), to afford 97, prior to N-alkylation with the appropriate mono- or disubstituted alkyl halide utilising anhydrous K2CO3 as a base (Scheme ). In vitro studies utilising the BML-AK822 assay kit, containing human recombinant PTP1B (residues 1–322) expressed in E. coli, revealed IC50 values ranging from 10–73 μM, with smaller substituents conferring greater potency. The highest potency was witnessed with 95e (IC50=10 μM), while the lowest potency was exhibited by 95c (IC50=73 μM). As seen with previous examples, the carbonyl group present on the TZD core partakes in a hydrogen-bonding interaction with Arg221, as well as previously unwitnessed interactions with Lys120 (a residue present in the catalytic cleft of PTP1B). Further interactions were observed through π-π stacking of the thiophene ring with Tyr46. Alongside their SAR studies, the group conducted full computational predictions of the pharmacokinetic parameters for each analogue generated. None of the compounds presented violations of Lipinski's guidelines.

3.3.3 ALR2 inhibition

DM has also been identified as a leading cause of new cases of partial vision loss or total blindness, as well as of health concerns relating to heart disease, neuropathy and nephropathy. Diabetic retinopathy is characterised by capillary cell loss, thickening of the capillary basement membrane and an increase in leukocyte adhesion to endothelial cells. Such medical conditions are brought about through complications in glucose metabolism involving the aldose reductase 2 enzyme (ALR2) in the polyol pathway. ALR2 is an enzyme belonging to the aldo-keto reductase superfamily which, in the polyol pathway, catalyses the NADPH-dependent reduction of glucose (98) to sorbitol (99). Following this reduction, sorbitol is oxidised by sorbitol dehydrogenase to generate the hexose sugar fructose (100) (Scheme ). In healthy humans, only a small amount of glucose is metabolised via this pathway, as the majority undergoes phosphorylation via hexokinase to generate glucose-6-phosphate, which is, in turn, used as a substrate for glycolysis, a key process in cellular respiration. In cases of chronic hyperglycaemia (such as those suffering from DM), flux through the polyol pathway is significantly increased. As ALR2 is highly prevalent in the cornea, retina and lens, as well as within the kidneys and neuronal myelin sheaths, the aforementioned medical complications are usually witnessed. The two main classes of ALR2 inhibitors are cyclic imides such as sorbinil (101) and epalrestat (103) (usually containing hydantoins), and carboxylic acids such as tolrestat (102) (Figure ). Though carboxylic acids have been shown to exhibit high in vitro potency, they are generally less active in vivo compared to the imides. This could be attributed to the extensive metabolism associated with carboxylic acid-containing compounds via phase II conjugation reactions.
Members of this functional group class also exhibit pronounced acidity and less favourable pharmacokinetic properties. There is therefore a need to develop further series of cyclic imide-based compounds in the hope of removing the toxicity associated with the hydantoin moiety. One such strategy has involved the use of the TZD framework, because it is a recognised bioisostere of hydantoins. With these considerations in mind, and in an effort to develop a series of novel TZD-containing inhibitors of ALR2, Maccari and co-workers synthesised three distinct series of 5-arylidene TZDs. Their work followed on from previously published work which outlined the pharmacophore necessary for successful ALR2 inhibition. This pharmacophore requires the presence of an acidic proton, the ability to act as an HBA, and a substituted aromatic ring. The first series of compounds generated (104a–k) retained the acidic amidic hydrogen on the TZD core, the second series replaced this acidic hydrogen with an acetate ester group (105a–e, g, j), while the third series comprised the corresponding acetic acids (106a–e, g, j) (Figure ). Synthesis of analogues possessing the general structure 104 was achieved via KC of TZD (3) with the corresponding meta/para-substituted benzaldehyde, utilising piperidine as a catalytic base. Deprotonation of the acidic proton was achieved with NaH in DMF before generation of the ester 105 with methyl bromoacetate. Subsequent hydrolysis afforded the appropriate carboxylic acid (106) via an acetic acid-catalysed process. Analogues featuring the general structure 104 showed the greatest variation in yield. Compounds 104f/k were produced in the highest yields of 88 % and 90 %, respectively. Lower yields (52 % and 57 %) were obtained in cases featuring a strongly electron-withdrawing fluorine or trifluoromethyl group at the meta position (104a/e). Likewise, a lower yield of 52 % was achieved when a methoxy unit was introduced at the meta position (104d). For molecules featuring the general structure 105, significantly higher yields were obtained. The highest yield was achieved for the meta-methoxy derivative 105d (98 %), closely followed by the meta-substituted fluoride 105a (96 %). Very high yields were achieved in all cases (106a–e, g, j) when the ester was hydrolysed to the corresponding carboxylic acid. The potency of compounds featuring an acidic group has been deemed highly important for future work in the development of ALR2 inhibitors. This acidic proton is considered to be ionised at physiological pH and forms ionic interactions with the active site of the enzyme. In 2005, Maccari et al. conducted further optimisation of their previous work by running a series of molecular modelling studies in order to generate a comprehensive understanding of the SARs. They found that the presence of a second aromatic ring on the 5-benzylidene group increased potency compared to molecules which possess only one aromatic ring. Furthermore, substitution at the meta position showed an increase in activity which was independent of the nature of the substituent. In terms of N-functionalisation, they found that the presence of an acetate chain caused an increase in affinity, which they attributed to the formation of a polar interaction with Tyr48, His110 and Trp111, as well as with the nicotinamide ring of NAD+.
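The statement above that the acidic TZD proton is largely ionised at physiological pH can be made quantitative with the Henderson-Hasselbalch relationship. The short calculation below is only a sketch; the pKa of 6.74 is the value quoted earlier in this review for a related 5-arylidene TZD (81h) and is used here purely as a representative number.

```python
def fraction_ionised(pka: float, ph: float = 7.4) -> float:
    """Henderson-Hasselbalch: fraction of a monoprotic acid present as its conjugate base."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Representative N-H pKa quoted earlier in this review (for 81h); pH 7.4 taken as physiological.
print(f"{fraction_ionised(6.74):.0%} ionised at pH 7.4")   # approximately 82 %
```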
With an interest in optimising potency even further, Maccari and colleagues investigated the effect of reducing the 5-arylidene olefin present in their substrates to the corresponding benzyl derivatives. The group successfully synthesised a further eighteen analogues to explore the effect of leaving the nitrogen atom unsubstituted or substituting it with either the acetate ester (as seen in 105) or a carboxylic acid (106) (see Scheme ). Reduction of the benzylidene group was achieved through the addition of LiBH4 in pyridine, as per the procedure reported by Giles, which showed selective hydrogenation of the olefin. Among the series of N-unsubstituted derivatives, 109a–d displayed IC50 values ranging from 31–79 μM in an in vitro bovine lens assay against ALR2. Despite this moderate to poor potency, derivatives 109b–d showed a significant increase in potency compared to the previously generated benzylidene analogues which, at a concentration of 50 μM, produced only 41 %, 10 % and 20 % inhibition, respectively. This increase in activity was, however, not apparent when the phenoxy substituent was installed at the meta position, nor when the phenoxy group was replaced with a methoxy unit. Among the methyl ester analogues generated, only 110e displayed ALR2 inhibition (IC50=21 μM). While this is a 2.5-fold increase over the N-unsubstituted derivative, it is significantly less effective than the corresponding acid 111e. Finally, the carboxylic acid-bearing products displayed a 15- to 80-fold increase in potency compared to their analogous benzylidene derivatives. Inspired by the work of Maccari, Bozdağ-Dündar et al. also looked to develop a series of ALR2 inhibitors featuring a TZD core in 2008. They substituted the benzylidene moiety commonly used in clinical agents to treat DM for flavonoid-based systems. Flavonoids are recognised as a ubiquitous motif present in a wide range of edible plants, fruits and plant-derived beverages (including juices and teas). They have also been deemed health-promoting and disease-preventing motifs which have seen use as antibacterial and antiviral agents. The group synthesised a series of ten flavone-substituted TZDs and separated them into three distinct classes depending upon where the flavone unit was coupled to the methylene carbon. In the series of analogues generated, substitution occurred at either the 3', 4' or 6 position (see Figure ). Along with altering the position of substitution on the flavone ring, the group also explored linkers featuring sp3- and sp2-hybridised carbons, and substances featuring both N-substitution and no substituent on the nitrogen atom. Analogues without an olefin (113, 116, 117, 119) were generated through coupling of the appropriate bromomethylflavone with dilithio-TZD, and the remaining structures were generated via KC with the corresponding 3'/4'/6-carboxaldehydes in the presence of AcOH and NaOAc. Substitution at the N3 position was carried out with the aid of an alkyl iodide under basic conditions (Figure ). After conducting a series of in vitro experiments to assess potency, it was found that substitution at the 4' position yielded the highest inhibitory activity. The most active analogue generated was 114, which possessed a potency of 0.43 μM. Substitution at the N3 position with a Me group showed some activity, but still less than the unsubstituted derivatives.
Furthermore, the presence of a double bond at the C5 position did not significantly impact potency. In the same year, Bozdağ-Dündar published an additional paper on the generation of flavone-substituted TZDs. All analogues generated in this study featured the olefin linker between the TZD core and the flavone substituent, and substitution at the N3 position appeared in the form of acetate esters or acetic acids. In order to generate the 3'/4'/6-carboxaldehyde precursors required for the later KC, a methyl-substituted flavone first had to be prepared. This was accomplished using the Baker-Venkataraman method (124). Subsequent bromination utilising NBS and a catalytic quantity of benzoyl peroxide afforded 125. The final aldehyde was generated via the addition of HMTA (127) in an acidic environment by means of a Sommelet reaction, generating 126 (Scheme ). Functionalisation of the nitrogen to yield the acetate ester (127, 129, 131) proceeded by combining TZD (3) with ethyl bromoacetate and NaH in THF. Hydrolysis under acidic conditions then generated the free carboxylic acids (128, 130, 132). Coupling of TZD (3) with the appropriate flavone proceeded through KC, in the presence of NaOAc and glacial acetic acid, to yield the structures illustrated in Figure . In vitro ALR2 inhibition studies showed that the newly synthesised flavonyl compounds bearing the acetic acid chain (128, 130, 132) possessed high potency. In this study, ALR2 was isolated from kidney tissue obtained post mortem from male albino rats, and the flavone-substituted compounds were dosed at a concentration of 100 μM. The highest potency was observed in the case of 128, which exerted an inhibitory action of 86.6 %. Compounds 130 and 132 were shown to inhibit ALR2 by 56.3 % and 44.6 %, respectively, at a concentration of 100 μM. The ester derivatives generated (127, 129, 131), however, proved to be less potent, with percentage inhibition of 12.9 %, 6.7 % and 14.4 %, respectively, at the same concentration. The decrease in potency was attributed to the lack of an acidic proton in these substrates. The presence of an acidic functionality is a highly important requirement for ALR2 inhibitors, because they form interactions in their ionised state.

3.4 TZDs and Side-effects

TZDs have also attracted significant scrutiny over the last three decades due to the risk of side-effects. Such side-effects were initially observed following the development of the first generation glitazones as PPARγ inhibitors. Troglitazone-induced liver damage has been attributed to the production of harmful reactive metabolites during hepatic metabolism. This has been linked to acute liver failure caused by apoptosis of liver tissue cells. Further mechanisms of induced hepatotoxicity include mitochondrial damage, promotion of oxidative stress and the accumulation of bile in the liver due to inhibition of bile excretory proteins. However, it should be noted that, specifically in the case of troglitazone, hepatotoxicity is considered to be idiosyncratic and not dose-dependent. A second commonly witnessed side-effect following prescribed use of TZDs is weight gain. TZDs are known to cause edema and to increase the overall plasma volume in vivo. This leads to a redistribution of fat via differentiation of preadipocytes into small fat cells. A more recently identified side-effect following TZD usage concerns an increased risk of bone fractures.
Aside from weight gain being a contributory factor here, the main cause of this risk has been termed 'TZD-induced bone loss'. This effect leads to an increase in adipogenesis and a subsequent decrease in osteoblastogenesis. Furthermore, insulin levels play a direct role in the modulation of osteoblastogenesis and hence bone formation. As TZDs act to reduce circulating insulin levels, the increased risk of bone fractures develops.
TZD as a bioisostere A known methodology often utilised in the development of marketable drugs is the exchanging of functional groups for bioisosteric motifs. Bioisostere is recognised as an umbrella term which can be divided into two categories; classical and non‐classical. Classical bioisosteres are described as isoelectronic with the unexchanged moiety and exhibit similar biological properties to the parent structure. Non‐classical bioisosteres however can vary widely, e. g. featuring different number of atoms present, different number of hydrogen bond acceptors (HBA) or donors (HBD), but the electrostatic map is often highly similar to the parent molecule. Despite the ubiquitous presence of the carboxylic acid group within endogenous substances (including amino acids, triglycerides, prostaglandins), and its appearance in over 450 known marketed drugs worldwide (including NSAID's, antibiotics, anticoagulants and statins), it is often deemed a liability. Ionisation to the carboxylate anion at physiological pH results in a diminished ability to diffuse passively across lipid cell membranes such as those in the intestine or blood brain barrier (BBB). Furthermore, metabolism of the carboxylic acid moiety via phase II glucuronidation reaction can complicate pharmacotherapy due to the production of idiosyncratic products. In works published by Lassalas investigating structure property relationships of carboxylic acid isosteres, it was determined that TZDs confer moderate acidity with an associated pKa value of around 6‐7 and is considered to be relatively lipophilic. This makes the TZD group prone to augmenting drug permeability via diffusion across biological membranes in comparison to its carboxylic counterpart. It is for these reasons that the TZD framework is considered a suitable surrogate. It should also be noted that many modulators of PPARγ feature the carboxylic acid moiety.
TZDs as therapeutic agents As discussed previously, TZD containing scaffolds are known to invoke a biological response across a wide variety of biological targets. The following section will deal with target specific interactions. Herein, both the mechanism of action brought about by the clinical agent as well as a discussion of the synthetic methodology used to generate the TZDs are included.
TZDs in the treatment of diabetes mellitus The most common utilisation of TZD‐containing structures has been in the treatment of diabetes mellitus (DM), which is recognised as a metabolic disorder often characterised by hyperglycaemia and related ailments caused due to a deficiency in insulin secretion. There are two recognised forms of DM: type 1 (T1D) and type 2 (T2D). T1D is associated with either a total or partial lack of insulin brought about by a defect in the immune system resulting in a loss of tolerance and degradation (through inflammation) to the β‐cells. T2D is, however, associated with erratic degrees of insulin resistance, unregulated insulin secretion and an increase in hepatic glucose production hence sufferers are termed non‐insulin dependent. The concentration of glucose present in the blood is controlled via hormonal balance of both insulin and glucagon through a homeostatic mechanism. When glucose concentration is high there is an increase uptake of glucose by the skeletal muscle. In cases where the glucose concentration is low, glucagon promotes the synthesis and excretion of glucose. The International Diabetes Federation has predicted that the morbidity rate from diabetes will increase from 425 million in 2017 to 629 million by 2045. Conventional treatments for DM prior to the use of TZD based structures include drugs classes such as sulphonylureas ( 43 – 45 ), meglitinides ( 46/47 ) and biguanides ( 48 ) (Figure ). 3.3.1 PPARγ inhibition The first antihyperglycemic TZD that came to the attention of medicinal chemists was ciglitazone ( 49 ). Workers at the Takeda laboratories (Japan) successfully synthesised 49 in early 1975 while generating a series of 71 analogues of the hypolipidemic drug clofibrate with the aim of discovering a more potent derivative. They found that some of the products generated displayed hypoglycemic effects when tested on diabetic mice. Though it was initially approved by the US Food and Drug Administration (FDA), it was withdrawn from the market due to unacceptable liver toxicity. Then, troglitazone ( 50 ) was discovered and developed by Sankyo in 1988, and approved by the FDA for T2D treatments in 1997. However, within 6 weeks of its launch it was withdrawn from the UK market (much like 49 ) as a result of potentially fatal hepatoxicity. In 1999, Takeda and SmithKline developed two TZD containing drugs, pioglitazone ( 51 ) and rosiglitazone ( 52 ) which were also approved by the FDA for the management of T2D (Figure ). After capitalising the market for insulin sensitisers and becoming one of the top 25 selling brands in the United States, concerns were raised regarding rosiglitazone causing heart failure due to fluid retention and, in 2011, the European Medicines Agency recommended that it should be withdrawn from the market. These ‘first generation’ ‘glitazones’ were initially prepared via condensation or nucleophilic substitution of a halogen substituted nitrobenzene ( 53 ) with an alcoholic precursor ( 54 ) to generate the intermediates, 55 (Scheme ). These compounds were then subjected to hydrogenation conditions to provide the amines 56 . Diazotization of amine 56 was then conducted to yield the diazonium salts 57 which were condensed in the presence of methyl acrylate to afford the α‐halogenated esters 58 . The final steps of this synthesis introduced the TZD moiety via a cyclo‐condensation with thiourea ( 7 ) to afford the respective imines 59 before acidic work up and hydrolysis yielded 49 or 51 in relatively high yields (Scheme ). 
Though suitable for small-scale preparation, the use of pyrophoric NaH, expensive metal catalysts and toxic acids, together with the generation of potentially explosive intermediates, makes this route unsuitable for significant upscaling. Therefore, a process chemistry methodology was developed which made use of commercially available TZD. The improved methodology synthesised 49 in a 54 % overall yield over a series of three steps (Scheme ). In the above synthesis, substitution of the commercially available p -fluorobenzaldehyde ( 60 ) with 61 in the presence of NaH, in DMF, afforded 62 , which was, in turn, coupled with TZD ( 3 ) via a KC to generate the penultimate intermediate ( 63 ). The final step in the synthesis required a reduction of the benzylidene olefin, which was achieved using NaBH 4 in the presence of NaOH. PPARs are ligand-inducible transcription factors that are important biologically for cell differentiation, lipid and glucose homeostasis, insulin sensitivity, inflammatory responses and various other metabolic processes. Such receptors are known to exist in three distinct isoforms, namely PPARα, PPARβ/δ and PPARγ. Agonist binding within the ligand binding region (LBR) causes translocation to the cell nucleus and heterodimerisation with the retinoid X receptor (RXR), another nuclear receptor. Following dimerisation, they bind to specific regions of DNA known as peroxisome proliferator hormone response elements (PPREs). PPARα is highly expressed in the heart, liver and skeletal muscle, whereas PPARγ is expressed predominantly in adipose tissue. Binding of TZDs promotes fatty acid uptake and subsequent storage in adipose tissue. As a result, fat levels are reduced in the liver, pancreas and muscles, leading to protection from toxic products formed by metabolism of free fatty acids. Furthermore, TZDs interfere with cellular signalling pathways between insulin-sensitive tissues and organs such as the liver, muscle and adipose tissue. They have also been seen to increase the production of adipokines, which are insulin sensitisers. Through extensive structure–activity relationship (SAR) studies, conventional TZDs can be split into five common moieties: an acidic TZD head group (purple) and a lipophilic tail (green), linked to a central phenyl ring (orange) by two aliphatic chains (blue/red) (Figure ). Agonist activity towards PPARγ is usually brought about through the formation of hydrogen bonds with the P 1 hydrophilic binding pocket present in arm I of the active site. Specifically, hydrogen bonding interactions are formed with His323, His449, and Tyr473. The lipophilic tail (green) interacts with binding sites present in both arm II and arm III through a multitude of hydrophobic interactions, including VDW and π-π stacking interactions. The aliphatic linking chains (blue/red) act as spacers in order to orientate the acidic TZD head (purple) and lipophilic tail (green) into their binding pockets. Finally, the phenyl ring present in the centre (orange) interacts with further hydrophobic amino acid residues in the cleft in a similar way to that of the tail group. Modifications of the blue-coloured alkyl linker (Figure ) were tolerated, providing that the alkyl chain did not exceed three carbon atoms. It is likely that this linker orientates the molecule and provides the required space between the lipophilic tail and polar acidic head group.
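The quoted 54 % overall yield is simply the product of the individual step yields, which is why step count matters so much in process routes. The calculation below is a sanity check only; the three per-step values are hypothetical placeholders chosen to show how a 54 % overall figure can arise, not yields reported in the original work.

# Overall yield of a linear sequence = product of the step yields.
# The step yields below are hypothetical, for illustration only.
step_yields = [0.85, 0.80, 0.79]
overall = 1.0
for y in step_yields:
    overall *= y
print(f"overall yield ~ {overall:.1%}")                    # ~53.7 %

# Conversely, 54 % over three steps implies an average per-step yield of:
print(f"average per-step yield ~ {0.54 ** (1 / 3):.1%}")   # ~81 %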
The central phenyl ring is highly significant in the binding interactions that bring about PPARγ agonism, though substitution with benzodihydropyran and naphthyl groups (englitazone ( 64 ) and netoglitazone ( 65 ) respectively) is also well tolerated. The second linker (red) has also been subjected to structural modifications. The severe adverse side-effects brought about by 49 have been attributed to the fact that it features a very small aliphatic linker of just one carbon atom, which fails to present 49 in the desired conformation for efficient binding and often results in off-target interactions. A series of analogues are shown below with modifications to this alkyl linker. Compound 66 features the maximum of six atoms between the central phenyl ring and the lipophilic tail; it is thought that the oxime functional group plays a vital role through the provision of hydrophobic interactions. A more exotic linker was introduced using a sulfonyl group ( 67 / 68 ), which most likely behaves as a weak HBA, resulting in the formation of a short, syn -orientated hydrogen bond planar to the axis of the S=O group with residues present at the entrance of the ligand binding domain (LBD) (Figure ). The final structural region within the general pharmacophore model which can receive structural modification is the green lipophilic tail (Figure ). During drug–receptor interactions, the affinity of the drug is determined predominantly by the extent of hydrophobic interaction. It has been shown that within the LBD of PPARγ there exist two large hydrophobic pockets, namely P 3 and P 4 . It is for this reason that large hydrophobic units are required in the tail end of suitable drug candidates. Rosiglitazone ( 52 ) possesses a pyridine ring as its tail but is unable to reach both hydrophobic pockets, only reaching P 3 . On the other hand, balaglitazone ( 69 ) features a bulky benzopyrimidinone group as the hydrophobic tail and is shown to be a partial agonist of PPARγ. It is believed that this partial agonist activity is brought about by hydrophobic interactions with both pockets P 3 and P 4 (Figure ). Exchange of the lipophilic tail with pyridyl ( 70 ) and pyrimidyl ( 71 ) moieties showed an increase in agonist activity at the micromolar level, with pyridyl conferring slightly higher potency. In the case of 71 , a decrease in plasma glucose and triglyceride levels of 73 % and 85 % respectively was observed. Such moieties exhibit better agonist activity than both 51 and 52 (first-generation glitazones), with improved oral absorption and less severe side-effects. PPARγ inhibition was also witnessed when the lipophilic tail was substituted with tetrahydronaphthalene ( 72 ), styryl ( 73 ) and diphenyloxy ( 74 ) groups and by a range of nitrogen-containing heterocycles ( 75 – 78 ) (Figure ). 3.3.2 PTP1B inhibition A second well-recognised therapeutic strategy for the treatment of DM is the inhibition of protein tyrosine phosphatase 1B (PTP1B). The phosphorylation–dephosphorylation of biological molecules is recognised as a vital mechanism in the control of cellular function, growth, communication and differentiation. Protein tyrosine phosphorylation also plays an active role in transmitting extracellular responses, which can lead to T-cell activation and antigen-receptor signalling pathways.
The process is governed by two opposing enzyme families: protein tyrosine kinases (PTKs), which conduct phosphate transfer, and protein tyrosine phosphatases (PTPs), which catalytically hydrolyse the previously installed phosphate group. PTPs are distinguished structurally by the presence of only one catalytic domain, consisting of around 240 residues, with arginine and cysteine acting as essential amino acids in the catalytic process. In vivo , PTPs maintain the level of tyrosine phosphorylation, which has been associated with T2D. PTP1B is recognised to be a negative regulator of the insulin-receptor and leptin-receptor signalling pathways. The binding of leptin to its receptor results in phosphorylation of Janus kinase 2 (JAK2) and further activates the associated signal transducer and activator of transcription (STAT) proteins. This results in translocation of STAT3 to the cell nucleus and subsequently induces a gene-mediated response that reduces the production of acetyl coenzyme-A carboxylase (ACC). ACC is a biotin-dependent enzyme which catalyses the conversion of acetyl coenzyme A (Ac-CoA) to malonyl-CoA; reducing ACC production therefore halts or slows fatty acid synthesis while increasing the rate of fatty acid oxidation. In an effort to develop a series of biologically active compounds, Bhattarai and co-workers initially looked to the previously described PPARγ modulators (glitazones) to assess their inhibition towards PTP1B. They found that clinically used glitazones 50 – 52 displayed medium to low potency in terms of PTP1B inhibition, with IC 50 values between 55–400 μM. Glitazones which feature substitution at the C 5 position with a benzylidene linker are termed 5-benzylidene-TZD derivatives. Using these glitazones as a scaffold for their subsequent research, the group investigated altering the location of the benzyloxy group stemming from the central phenyl ring to develop a series of analogues where substitution occurs at the ortho and para positions (Scheme ). The derivatives illustrated in Scheme were synthesised via the heavily reported method of conducting a KC in the presence of a piperidine catalyst. In cases where this methodology did not yield the desired product ( 81 i , 84 f and 84 g ), the Nobel Prize-winning Pd-catalysed Suzuki cross-coupling reaction was employed to generate 81 i and 84 g – h in yields of 81 %, 90 % and 86 % respectively (Scheme ). Following synthesis, the analogues generated were tested for in vivo inhibition of PTP1B in an obese and diabetic mouse model (C57BL/6J Jms Slc, male). Bhattarai found that all of the generated compounds were significantly more potent than the marketed glitazones. Substitution at the ortho position brought about a higher level of inhibition, with IC 50 values ranging from 5–23 μM. The most potent analogues synthesised were 81 h and 84 e (Schemes & ), both possessing an IC 50 value of 5.0 μM. Analogue 81 h was further evaluated in order to obtain an insight into its inhibitory mechanism of action. In a series of docking simulations, it was illustrated that a pair of hydrogen bonding interactions are made from Gln266 and the backbone amino group of Ser216 to the carbonyl oxygen substituents on the TZD frame. The acidic proton occupying the amidic position was assumed to be deprotonated in vivo due to it possessing a p K a value of ∼6.74. In its deprotonated state, 81 h is seen during AutoDock simulations to lie close to Cys215 and Arg221, at a distance of within 4–5 Å.
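The assumption that the amidic proton of 81 h is largely deprotonated in vivo follows directly from the Henderson–Hasselbalch relationship. Taking pH 7.4 as a representative physiological value (an illustrative assumption; the review quotes only the pKa), the ionised fraction of an acid is

f_{\text{ionised}} = \frac{1}{1 + 10^{\,pK_\text{a} - \text{pH}}} = \frac{1}{1 + 10^{\,6.74 - 7.4}} \approx 0.82

so roughly four out of five molecules would be expected to present the anion-like TZD head invoked in the docking model above.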
Further interaction (and hence stabilisation) within the catalytic site can be seen through hydrophobic interactions between the inhibitor's aromatic rings and surrounding hydrophobic residues, including Tyr46, Val49, Phe182, Ala217 and Ile218. In the same in vivo study, 81 h was administered orally with food at 143 mg/kg (body weight) per day for a period of 4 weeks, which produced a reduction in body weight and an improvement in glucose tolerance. Significantly lower levels of total cholesterol and triglycerides were present in the serum of mice consuming 81 h , though dark brown spots were observed in the liver (suggesting liver damage), indicating that 81 h required further optimisation through derivatisation. The side-effects exhibited when this general structure is used for PTP1B inhibitors have since been attributed to off-target interactions with PPARγ. Specifically, the acidic nature of the proton attached to the N 3 position on the TZD frame is important. This rationale comes from its resemblance to the carboxylate anion of the natural fatty acid ligands, which are known modulators of PPARγ. Due to these findings there has been a push for the development of novel N -substituted TZD derivatives. A reduction in target promiscuity should provide a reduction in side-effects, specifically relating to hepatotoxicity. It should be noted that Bhat et al . have since utilised these findings to successfully develop a series of N -substituted glitazones with little to no observed toxicity. In 2007, Maccari and colleagues set about developing a series of N -substituted 5-arylidene derivatives as modulators of PTP1B. They began with the installation of a para -methylbenzoic acid group onto the N -3 position of TZD, primarily because benzoic acid acts as a phospho-tyrosine isostere. In their published work they successfully synthesised a series of 10 analogues (Figure ). Installation of groups at the methylene position proceeded through the already discussed KC with the corresponding aromatic aldehyde and TZD, in the presence of a piperidine catalyst, in refluxing EtOH. Substitution at the amidic position was achieved by refluxing the 5-substituted TZD with 4-bromomethylbenzoic acid, in the presence of K 2 CO 3 , prior to acidic workup and recrystallisation from hot MeOH. The synthesised analogues were evaluated for activity in vitro against recombinant human PTP1B as well as the two active isoforms present. Compounds 86 – 91 were shown to exhibit PTP1B inhibition with IC 50 values in the low micromolar range (1.1–6.5 μM). Within this subset, compounds 86 and 87 displayed the most effective inhibition of both isoforms of PTP1B. In 2017, Mahapatra helped to tackle a common problem associated with 5-arylidene TZD based compounds. Classically, active site-directed PTP1B modulators have possessed a high charge density, which brings about a host of issues in terms of pharmacokinetics and poor membrane permeability. This, in turn, reduces oral bioavailability and drug-likeness. To overcome these problems, Mahapatra looked at exchanging the arylidene substitution at the methylene position, taking inspiration from work published by Anderson, Moretto and Ye, which had shown positive results towards PTP1B inhibition via the inclusion of thiophene-based compounds.
Mahapatra therefore envisaged a series of small-molecule inhibitors containing a TZD core, N -substitution with a lipophilic alkyl or haloalkyl group, and a thiophene derivative installed at the methylene position and joined to the TZD via a vinyl linker ( 95 ). The general structure for this series of compounds can be seen below in Figure . A series of 10 N -alkyl/alkyl halide analogues were synthesised ( 95 a – j ) in good to high yield (59–86 %) via KC of TZD ( 3 ) with thiophene-2-carboxaldehyde ( 96 ), to afford 97 , prior to N -alkylation with the appropriate mono/disubstituted alkyl halide utilising anhydrous K 2 CO 3 as a base (Scheme ). In vitro studies utilising the BML-AK822 assay kit containing human, recombinant PTP1B (residues 1–322) expressed in E. coli revealed inhibitory potencies (IC 50 values) ranging from 10–73 μM, with smaller substituents conferring greater potency. The highest potency was witnessed with 95 e (IC 50 =10 μM) while the lowest potency was exhibited by 95 c (IC 50 =73 μM). As seen with previous examples, the carbonyl group present on the TZD core partakes in a hydrogen-bonding interaction with Arg221, as well as previously unwitnessed interactions with Lys120 (a residue present in the catalytic cleft of PTP1B). Further interactions were observed through π-π stacking of the thiophene ring and Tyr46. Alongside their SAR studies, the group conducted full computational predictions of the pharmacokinetic parameters for each analogue generated. None of the compounds presented any violations of Lipinski's guidelines. 3.3.3 ALR2 inhibition DM is also recognised as a leading cause of new cases of partial vision loss or total blindness, as well as of health concerns relating to heart disease, neuropathy and nephropathy. Diabetic retinopathy is characterised by capillary cell loss, thickening of the capillary basement membrane and an increase in leukocyte adhesion to endothelial cells. Such medical conditions are brought about through complications in glucose metabolism involving the aldose reductase 2 enzyme (ALR2) of the polyol pathway. ALR2 is an enzyme belonging to the aldo-ketoreductase superfamily which, during the polyol pathway, catalyses the NADPH-dependent reduction of glucose ( 98 ) to sorbitol ( 99 ). Following this reduction, sorbitol is oxidised by sorbitol dehydrogenase, generating the hexose sugar fructose ( 100 ) (Scheme ). In healthy humans, only a small amount of glucose is metabolised via this pathway, as the majority undergoes a phosphorylation reaction via hexokinase to generate the hexose sugar glucose-6-phosphate which is, in turn, used as a substrate for glycolysis, a key process in cellular respiration. In cases of chronic hyperglycaemia (such as in those suffering with DM), stimulation of the polyol pathway is significantly increased. As ALR2 is highly prevalent in the cornea, retina and lens, as well as within the kidneys and neuronal myelin sheaths, it is in these tissues that the aforementioned medical complications are usually witnessed. The two main classes of ALR2 inhibitors are cyclic imides such as sorbinil ( 101 ) and epalrestat ( 103 ) (usually containing hydantoins), and carboxylic acids such as tolrestat ( 102 ) (Figure ). Though carboxylic acids have been shown to exhibit high in vitro potency, they are generally less active in vivo compared to imides. This could be attributed to the vast extent of metabolism associated with carboxylic acid-containing compounds via phase II conjugation reactions.
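For clarity, the two enzymatic steps of the polyol pathway outlined above can be written as overall reactions; this is standard biochemistry restated for convenience rather than material drawn from the review:

glucose + NADPH + H+  -->  sorbitol + NADP+       (aldose reductase, ALR2)
sorbitol + NAD+       -->  fructose + NADH + H+   (sorbitol dehydrogenase)

Writing the steps out makes the downstream damage easier to rationalise: sustained flux consumes NADPH in the first step and accumulates osmotically active sorbitol in tissues, such as the lens, where ALR2 is abundant.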
Members of this functional group series are acidic and exhibit less favourable pharmacokinetic properties. There is therefore a need to develop further series of cyclic imide-based compounds in the hope of removing the toxicity associated with the hydantoin moiety. One such strategy has involved the use of the TZD framework, because it is a known and recognised bioisostere for hydantoins. With these considerations in mind, and in an effort to develop a series of novel TZD-containing inhibitors of ALR2, Maccari and co-workers synthesised three distinct series of 5-arylidene TZDs. Their work followed on from previously published work which outlined the necessary pharmacophore for successful ALR2 inhibition. This pharmacophore requires the presence of an acidic proton, the ability to act as an HBA, and a substituted aromatic ring. The first series of compounds generated ( 104 a – k ) possessed the acidic amidic hydrogen on the TZD core, the second series replaced the acidic hydrogen with an acetate ester group ( 105 a – e , g , j ), while the third series saw generation of the corresponding acetic acid ( 106 a – e , g , j ) (Figure ). Synthesis of analogues possessing the general structure 104 was achieved via KC of TZD ( 3 ) with the corresponding meta / para -substituted benzaldehyde utilising piperidine as a catalytic base. Deprotonation of the acidic proton was achieved with NaH in DMF before the generation of the ester 105 with methyl bromoacetate. Subsequent hydrolysis afforded the appropriate carboxylic acid ( 106 ) via an acetic acid-catalysed process. Analogues featuring the general structure 104 showed the greatest variation in yield. Compounds 104 f / k were produced in the highest yields of 88 % and 90 % respectively. Lower yields (52 % and 57 %) were obtained in cases featuring a strong EWG, fluorine or trifluoromethyl, at the meta position ( 104 a / e ). Likewise, a lower yield of 52 % was achieved when a methoxy unit was introduced at the meta position ( 104 d ). In the case of molecules featuring the general structure 105 , significantly higher yields were obtained. The highest yield was generated in the case of the meta methoxy-derivative 105 d (98 %), closely followed by the meta -substituted fluoride 105 a (96 %). Very high yields were achieved in all cases ( 106 a – e , g , j ) when the ester was hydrolysed to the corresponding carboxylic acid. The presence of an acidic group has been deemed highly important for potency and for future work in the development of ALR2 inhibitors. This acidic proton is considered to be ionised at physiological pH and will form ionic interactions with the active site of the enzyme. In 2005, Maccari et al . conducted further optimisation of their previous work by running a series of molecular modelling studies in order to generate a comprehensive understanding of SARs. They found that the presence of a second aromatic ring on the 5-benzylidene group increased potency as compared to molecules which possess only one aromatic ring. Furthermore, substitution at the meta position showed an increase in activity which was independent of the nature of the substituent. In terms of N -functionalisation, they found that the presence of an acetate chain caused an increase in affinity, which they attributed to the formation of a polar interaction with Tyr48, His110 and Trp111, as well as with the nicotinamide ring of NADP + .
With an interest in optimising potency even further, Maccari and colleagues investigated the effects of reducing the 5-arylidene olefin present in their substrates to the corresponding benzyl derivatives. The group successfully synthesised a further eighteen analogues to explore the effect of leaving the nitrogen atom unsubstituted and of substituting it with either the acetate ester (as seen in 105 ) or the carboxylic acid ( 106 ) (see Scheme ). Reduction of the benzylidene group was achieved through the addition of LiBH 4 in pyridine, as per the procedure reported by Giles, which gave selective reduction of the olefin. Among the series of N -unsubstituted derivatives, 109 a – d displayed IC 50 values ranging from 31–79 μM in an in vitro bovine lens assay against ALR2. Despite this moderate-to-poor potency, derivatives 109 b – d showed a significant increase in potency as compared to the previously generated benzylidene analogues which, at a concentration of 50 μM, produced only 41 %, 10 %, and 20 % inhibition respectively. This increase in activity, however, was not apparent when the phenoxy substituent was installed in the meta position, nor was it observed when the phenoxy was replaced with a methoxy unit. Among the analogues generated which featured methyl esters, only 110 e displayed ALR2 inhibition (IC 50 =21 μM). While this represents a 2.5-fold increase in potency compared to the N -unsubstituted derivative, it is significantly less effective than the corresponding acid 111 e . Finally, the carboxylic acid-bearing products displayed a 15- to 80-fold increase in potency as compared to their analogous benzylidene derivatives. Inspired by the work of Maccari, Bozdağ-Dűndar et al . also looked to develop a series of ALR2 inhibitors featuring a TZD core in 2008. They replaced the benzylidene moiety commonly used in clinical agents to treat DM with flavonoid-based systems. Flavonoids are recognised as a ubiquitous motif present in a wide range of edible plants, fruits and plant-derived beverages (including juices and teas). They have also been deemed health-promoting and disease-preventing motifs and have seen use as antibacterial and antiviral agents. The group synthesised a series of ten flavone-substituted TZDs and separated them into three distinct classes depending upon where the flavone unit was coupled to the methylene carbon. In the series of analogues generated, substitution occurred at either the 3’, 4’ or 6 position (see Figure ). Along with altering the position of substitution onto the flavone ring, the group also explored linkers featuring sp 3 - and sp 2 -hybridised carbons, and substances featuring both N -substitution and no substituent on the nitrogen atom. Analogues without an olefin ( 113 , 116 , 117 , 119 ) were generated through the coupling of the appropriate bromomethylflavone with dilithio-TZD, and the rest of the structures were generated via KC with the corresponding 3’/4’/6-carboxaldehydes in the presence of AcOH and NaOAc. Substitution at the N 3 position was carried out with the aid of an alkyl iodide under basic conditions (Figure ). After conducting a series of in vitro experiments to assess potency, it was found that substitution at the 4’ position yielded the highest inhibitory activity. The most active analogue generated was 114 , which possessed a potency of 0.43 μM. Substitution at the N 3 position with a Me group showed some activity, but still less than that of the unsubstituted derivatives.
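The fold-change figures quoted in this subsection are simple ratios of IC 50 values, a lower IC 50 indicating a more potent inhibitor. The short snippet below only illustrates the convention; the roughly 52 μM reference value is back-calculated from the quoted 2.5-fold improvement and is not a number reported explicitly in the original study.

# Fold-change in potency = IC50(reference) / IC50(analogue); lower IC50 = more potent.
ic50_110e_um = 21.0        # quoted in the text for methyl ester 110e
fold_improvement = 2.5     # quoted improvement over the N-unsubstituted parent
implied_parent_ic50 = ic50_110e_um * fold_improvement
print(f"implied parent IC50 ~ {implied_parent_ic50:.0f} uM")   # roughly 52 uM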
Further, the presence of a double bond at the C 5 position did not significantly impact potency. In the same year, Bozdağ-Dűndar published an additional paper on the generation of flavone-substituted TZDs. All analogues generated in this study featured the olefin linker between the TZD core and the flavone substituent, and substitution at the N 3 position took the form of acetate esters or acetic acids. In order to generate the 3’/4’/6-carboxaldehyde precursors required for the later KC, a methyl-substituted flavone first had to be prepared. This was completed using the Baker–Venkataraman method ( 124 ). Subsequent bromination utilising NBS and a catalytic quantity of benzoyl peroxide afforded 125 . The final aldehyde was generated via the addition of HMTA ( 127 ) under acidic conditions by means of a Sommelet reaction, generating 126 (Scheme ). Functionalisation of the nitrogen to yield the acetate esters ( 127 , 129 , 131 ) proceeded by combining TZD ( 3 ) with ethyl bromoacetate and NaH in THF. Hydrolysis under acidic conditions then generated the free carboxylic acids ( 128 , 130 , 132 ). Coupling of TZD ( 3 ) with the appropriate flavone proceeded through KC, in the presence of NaOAc and glacial acetic acid, to yield the structures illustrated in Figure . In vitro ALR2 inhibition studies showed that the newly synthesised flavonyl compounds bearing the acetic acid chain ( 128 , 130 , 132 ) possessed high potency. In this study, ALR2 was isolated post mortem from the kidney tissue of male albino rats, and each flavone-substituted compound was tested at a concentration of 100 μM. The highest potency was observed in the case of 128 , which exerted an inhibitory action of 86.6 %. Compounds 130 and 132 were shown to inhibit ALR2 by 56.3 % and 44.6 % respectively at a concentration of 100 μM. The ester derivatives ( 127 , 129 , 131 ), however, proved to be less potent, with percentage inhibition of 12.9 %, 6.7 % and 14.4 % respectively at the same concentration. The decrease in potency was attributed to the lack of any acidic proton in these substrates. The presence of an acidic functionality is a highly important requirement for ALR2 inhibitors, because such inhibitors form their key interactions in the ionised state.
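Because the flavonyl compounds were screened at a single concentration rather than across a full dose-response curve, only a rough potency can be inferred from the percentage-inhibition figures. The back-of-envelope estimate below assumes a simple one-site model with a Hill slope of 1 (an illustrative assumption; no such analysis is reported in the original study), under which IC50 is approximately [I]·(1 - PI)/PI.

# Rough IC50 estimate from a single-point percent-inhibition (PI) reading,
# assuming one-site binding with Hill slope 1 (illustration only -- a rigorous
# IC50 requires a full dose-response curve).
def estimate_ic50(inhibitor_conc_um, percent_inhibition):
    fraction = percent_inhibition / 100.0
    return inhibitor_conc_um * (1.0 - fraction) / fraction

for compound, pi in [("128", 86.6), ("130", 56.3), ("132", 44.6)]:
    print(f"compound {compound}: estimated IC50 ~ {estimate_ic50(100.0, pi):.0f} uM")

On this crude basis 128 would sit in the mid-teens of micromolar, with 130 and 132 at roughly 78 and 124 μM respectively, consistent with the ranking reported by the authors.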
TZDs and Side-effects TZDs have attracted significant scrutiny over the last three decades due to the risk of side-effects. Such side-effects were initially observed following the development of the first-generation glitazones as PPARγ inhibitors. Troglitazone-induced liver damage has been attributed to the production of harmful reactive metabolites during hepatic metabolism. This has been linked to acute liver failure caused by apoptosis of liver tissue cells. Further mechanisms of induced hepatotoxicity include mitochondrial damage, promotion of oxidative stress and the accumulation of bile in the liver due to inhibition of bile excretory proteins. However, it should be noted that, specifically in the case of troglitazone, hepatotoxicity is considered to be idiosyncratic and not dose-dependent. A second commonly witnessed side-effect following prescribed use of TZDs is weight gain. TZDs are known to cause edema and to increase overall plasma volume in vivo . This leads to a re-distribution of fat via differentiation of preadipocytes into small fat cells. A more recently identified side-effect following TZD usage concerns an increased risk of bone fractures. Aside from weight gain being a contributory factor here, the main cause of this risk has been named ‘TZD-induced bone loss’. Such action leads to an increase in adipogenesis and a subsequent decrease in osteoblastogenesis. Furthermore, insulin levels play a direct role in the modulation of osteoblastogenesis and hence bone formation. As TZDs act to reduce insulin levels, an increased risk of bone fractures develops.
Conclusion The TZD core is a widely used structural motif within the sphere of medicinal chemistry. Its structure can be seen in a vast range of biologically active compounds used in the treatment of many medical conditions. Its most common application is in the treatment of DM. DM, and specifically T2D, is considered to be one of the major risk factors associated with cardiovascular disease and mortality. TZD-containing structures have been seen to inhibit a diverse range of biological targets, including but not limited to PPARγ, PTP1B and ALR2. Unfortunately, TZDs have classically been associated with serious side-effects, commonly including severe hepatotoxicity, fluid retention and significant weight gain. As a result, previously marketed glitazones including troglitazone and rosiglitazone were withdrawn from clinical use. The TZD structure has been the focus of many efforts to functionalise at two main positions, namely the activated methylene carbon (C 5 ) and the amidic nitrogen (N 3 ). Methodologies to substitute at these positions are well reported and have stood up to scrutiny for several decades. These ‘simple to replicate’ methodologies offer synthetic organic and medicinal chemists the opportunity to develop a vast range of novel derivatives very quickly. Following our recent publication in late 2020, our group is currently developing a series of bioisosteric motifs containing TZD as a valuable tool for medicinal chemists.
The authors declare no conflict of interest.
Nathan Long obtained his integrated master's degree at Queen Mary University of London (Pharmaceutical Chemistry), completing his final‐year research project under the supervision of Dr Stellios Arseniyadis on developing a photoinduced difluoromethylation methodology. Nathan has also worked in conjunction with the McCormack group (William Harvey Research Institute) and the Howell group (Queen Mary University of London), developing a series of positive allosteric modulators for the treatment of type 2 diabetes and arthritis. Nathan is currently a doctoral research student within the Wren group at Kingston University London, developing a toolbox for the bioisosteric replacement of the carboxylic acid moiety.
Prof. Adam Le Gresley undertook his PhD at the University of Surrey under Prof Nikolai Kuhnert before completing his NIH postdoctoral research at Drexel College of Medicine, Philadelphia, USA. Since joining Kingston University as a lecturer in 2009, Adam has established a research group in organic and analytical chemistry, funded by industrial sponsors such as GlaxoSmithKline, LGC Ltd. and Innovate UK, working on the design and synthesis of fluorogenic compounds for problem pathogen detection and method development for NMR metabolomics/2D qNMR for complex mixture analysis and metrology. Adam was appointed full professor of organic chemistry in 2020 .
Dr. Stephen Wren was educated at the Universities of Cambridge (PhD in Organic Chemistry, Corpus Christi College), Manchester and Texas (research in the synthesis of anti‐cancer compounds with Professor Phil Magnus at the University of Texas at Austin). Stephen is highly experienced in medicinal chemistry and has worked on a diverse set of biological targets over many disease areas in several organisations (Xenova, Argenta Discovery and Summit plc). He has an extensive track record in project and team management, intellectual property, and drug discovery across many therapeutic areas. After three years at the Oxford Drug Discovery Institute and a Fellowship at St Hilda's College, Oxford, Stephen joined Kingston University London as a lecturer in organic and pharmaceutical chemistry in September 2018 and was promoted to senior lecturer in 2020.
Evaluation of Neuromuscular Morphometry of the Vaginal Wall Using Protein Gene Product 9.5 (Pgp 9.5) and Smooth Muscle α-Actin (Sma) in Patients with Posterior Vaginal Wall Prolapse | c7884d3c-4b42-44d5-9c9f-cb37c78991dc | 11123034 | Anatomy[mh] | Pelvic organ prolapse (POP) is a common condition in which the pelvic organs herniate into or out of the vaginal walls. Many women with herniation have symptoms affecting their daily activities and sexual life. The presence of POP has negative effects on body perception and sexuality . As with pelvic organ prolapse, rectocele is the result of people standing on two legs . As the symptoms and conditions of pelvic floor dysfunction span a broad spectrum of disciplines, determining the overall incidence of the amalgam of disorders is difficult. By the age of 80, about 11% of women will have one or more surgical interventions for urinary incontinence or pelvic organ prolapse . The progressive weakening of the pelvic floor, a natural consequence of aging, and the joint effect of birth-related trauma lead to the chronic deterioration of the rectovaginal septum, resulting in rectocele . In most patients, multiparity and chronic causes of increased intra-abdominal pressure are prominent in the etiology . Rectocele is a relatively common disease. Its prevalence increases with age, history of constipation, multiple vaginal deliveries, and episiotomies. However, rectocele and clinical problems due to rectocele can also be encountered in female patients who have never been pregnant . The age range of clinical manifestations of rectocele is usually between the fourth and fifth decades. The clinical symptoms of the patients are not compatible with the size of the detected rectocele . Nevertheless, rectoceles over 2 cm in size are symptomatic . The first symptoms encountered may be mild complaints such as constipation and vaginal fullness . The endopelvic fascia, extending into the inner layers of the posterior vaginal wall, is the most crucial fascia within the rectovaginal septum. The rectovaginal fascia and the side walls of the paracolpium support the posterior wall of the vagina. According to DeLancey, the rectovaginal fascia is a fibromuscular structure starting from the peritoneum and extending to the perineal body, and it is more prominent in the lateral parts than in the medial parts . This ensures that the movements of the vagina and rectum are independent of each other. The rectovaginal fascia provides passive support to the visceral organs and pelvic floor. The rectovaginal fascia is formed by collagen, fibroblasts, smooth muscle fibers, elastin, neurovascular, and fibrovascular fibers. The rectovaginal fascia secures the endopelvic fascia, cervix, and vagina to the pelvis on either side of the pelvis. The endopelvic fascia is the structure that surrounds the pelvic organs and provides loose attachment to the pelvic bones and pelvic diaphragm. The endopelvic fascia contains smooth muscle fibers, vascular nerve bundle, adipose tissue, collagen, and elastin, and it is the most essential support structure used to stabilize the uterus . Prolapse of the posterior wall of the vagina is caused by weakness in the endopelvic fascia . Rectocele occurs due to stretching and rupture of the rectovaginal fascia from the attachment points due to the expansion of the vaginal wall during labor. Furthermore, systemic diseases and hereditary connective tissue diseases may also cause rectocele . Studies on posterior vaginal wall prolapse are usually retrospective. 
There are very few case–control studies on this subject. This study aims to clinically evaluate underlying factors in the etiology of posterior vaginal wall prolapse, which leads to rectal and sexual dysfunction, affecting quality of life. The aim of this study is to evaluate and compare this neuromuscular structure in women with posterior vaginal wall prolapse with the neuromuscular structure of women in the general population. It was intended to determine epithelial thickness, collagen tissue properties and the amount and characteristic of staining, and the extent of change with age in the evaluation of the formed tissue in the age range in which this condition is observed. The effect of the thickness and morphometry of the rectovaginal fascia on posterior vaginal wall prolapse treated with various surgical interventions was evaluated. In this study, we evaluated the role of the rectovaginal fascia, and its structure, thickness, and smooth muscle density in the etiology of rectocele. Studies evaluating the neuromuscular morphometry of the anterior pelvic fascia are available in the literature . Biopsies of the anterior vaginal wall during surgery in women with prolapse showed the altered expression of smooth muscle proteins and decreased smooth muscle fraction . It has also been observed in some skeletal muscle tumors, myofibroblasts, and myoepithelial cells in pathological tissues that the smooth muscle actin (SMA) contingent can be controlled during translation and transcription . It is considered that myofibroblasts involved in tissue damage and wound healing originate from pericytes, vascular smooth muscles, and perivascular fibroblasts and are transported to the wound . Unlike the others, α SMA, which is one of the SMA types, is generally detected in cells of smooth-muscle origin. It has been reported that the expression of smooth muscle proteins is differentiated at various levels, and that there is a decrease in smooth muscle fraction in women with prolapse biopsied from the anterior vaginal wall . Protein Gene Product 9.5 (PGP 9.5), also known as ubiquitin carboxyl-terminal hydrolase-1, is a 27 kDa protein first isolated from whole-brain extracts . American guidelines consider anti-Protein Gene Product 9.5 immunohistochemistry to be the gold standard in the evaluation of distal symmetric polyneuropathy and the determination of intraepidermal nerve fiber density by skin biopsy . Similarly, European guidelines conclude that distal leg skin biopsy measuring the amount of intraepidermal nerve fiber density is a reliable and effective technique to assess the diagnosis of fine fiber neuropathy . Changes in vaginal mucosal innervation have previously been described in patients with vulvar–vestibulitis pain syndrome and stress urinary incontinence using the PGP-9.5 neuronal marker. Further studies are needed to neurochemically characterize the nerve fibers of the rectovaginal wall in patients with rectocele. The aim of this study, “Evaluation of neuromuscular morphometry of the vaginal wall using PGP-9.5 and SMA in women with posterior vaginal wall prolapse”, is to compare this neuromuscular structure in women with posterior vaginal wall prolapse with the neuromuscular structure of women in the general population and perform an evaluation. A total of 62 patients admitted to the Gynecology and Obstetrics Clinic of Hitit University Training and Research Hospital between December 2019 and June 2020 were included in the study. Patients aged 40–75 years were included in both groups. 
In the study, the subjects were divided into two groups. Patients operated on for prolapse were included in the study group. Patients who did not have prolapse and who were undergoing vaginal intervention for other gynecological reasons constituted the control group. Both groups were planned to contain 31 patients each. The informed consent form was read to all patients who participated in the study, and the patients who agreed to sign the form were included in the study. The first group included women aged between 40 and 75 years, who had not undergone any vaginal surgery, had not undergone any abdominal uterine suspension surgery, had posterior wall prolapse, and for whom surgery was planned. The second group included women between the ages of 40 and 75 who had not undergone any vaginal surgery, had not undergone any abdominal uterine suspension surgery, did not have posterior vaginal wall prolapse, and who were scheduled to undergo vaginal intervention for other reasons. Patients whose age was not suitable for the specified group, who had previously undergone vaginal surgery, who had undergone abdominal surgery and a uterine suspension surgery, who were not suitable for the specified examination conditions, and who had undergone rectocele surgery, experienced urinary incontinence, or undergone bladder surgery were not included in the study. Of the patients who agreed to participate in the study, reproductive information such as age, height, weight, smoking, previous operations, concomitant chronic diseases, gravida, parity, abortus, number of living children, number of vaginal deliveries, number of C-sections (CSs), date of last menstrual period, and duration of menopause presented in the patient/healthy case form was recorded both to identify the patient and to examine them in the study. If prolapse was present in all patients, its grade was staged according to POP-Q classification and recorded. In both groups, samples approximately 2–6 mm thick and 5–9 mm wide were taken from 3 cm proximal to the vagina at the Ap point according to POPQ classification during the interventional procedure. The samples taken in the pathology clinic were fixed in 10% formaldehyde for 6–8 h, and a macroscopic examination was performed. All samples sent for macroscopic examination were sliced into 5 mm thick sections and cassette-taped from surface to depth. The obtained cassettes were then subjected to 16 h alcohol monitoring in a fully automatic tissue tracking device. Three slides and 4-micron-thick sections were obtained from the paraffin-embedded control and case groups for routine Hematoxylin-Eosin staining. The sections obtained were evaluated under a light microscope. Blocks containing the most muscle tissue and peripheral nerve sections were selected, and immunohistochemical studies were performed. The findings were evaluated by an expert pathologist in a completely blinded manner. Smooth muscle actin (SMA) (clone monoclonal Mouse anti-human Smooth Muscle Actin clone 1A4) was studied in a DAKO (Omnis, EnVisionTM FLEX, High pH Code GV800) fully automatic immunohistochemistry staining device for the evaluation of muscle thickness in the samples. The stained preparations obtained were measured for each case using a software program (NIS element) by determining the thickest part of the muscle under a Nikon Eclipse Ni light microscope . To evaluate the number of peripheral nerves in the samples, a PGP 9.5 (clone Polyclonal Rabbit Anti-PGP 9.5 Code No./Code/Code-Nr. 
Z 5116) DAKO (Omnis, EnVisionTM FLEX, High pH Code GV800) fully automatic immunohistochemistry staining device was used. The stained preparations were examined under a Nikon Eclipse Ni light microscope, and the number of peripheral nerves per mm 2 was determined using a software program (NIS element) . 2.1. Statistical Analysis The SPSS version 23 package program was used for the statistics of this study. The following were analyzed: the distribution of age, height, weight, BMI, smoking status, operation, and chronic diseases of the control and study groups and the results of the difference analysis; the mean values of some parameters related to delivery in the study and control groups and the results of the difference analysis between the groups; distribution of labor parameters that were significant between the study and control groups; the mean values of prolapse, POPQ Ap, POPQ Bp, muscle thickness, and number of nerves per mm 2 of fascia in the study and control groups, and the results of difference analysis; the means of prolapse, POPQ Ap, POPQ Bp, muscle thickness, and number of nerves per mm 2 in fascia in the study and control groups; Pearson’s correlation analysis results for the relationship between prolapse, POPQ Ap, POPQ Bp, and muscle thickness, with number of nerves per mm 2 in fascia in the study group; the results of the correlation analysis for the relationship between the degree of prolapse with some demographic and obstetric characteristics of the patients in the study group; Spearman’s rho correlation analysis results for the relationship between POPQ Ap and POPQ Bp parameters with some demographic and birth characteristics of the patients in the study group; Spearman’s rho correlation analysis results for the relationship between the parameters of muscle thickness and number of nerves per mm 2 in fascia with some demographic and birth characteristics of the patients in the study group. p values < 0.05 were considered significant. 2.2. Ethical Statement Before the study, approval was obtained from the Hitit University Faculty of Medicine Clinical Research Ethics Committee with the number 116, dated 12 November 2019. The clinical trial registration number of the study is NCT06363838. The SPSS version 23 package program was used for the statistics of this study. 
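For readers who want to reproduce this style of analysis outside SPSS, the sketch below illustrates the same sequence of steps (normality check, group difference analysis, a chi-square test on a categorical variable, and Spearman's rho correlation) with open-source tools. The values, group sizes, and the choice of the Mann–Whitney U test as the difference test are hypothetical assumptions for illustration only, not the study dataset or the authors' exact procedure.

```python
# Illustrative sketch only: not the authors' SPSS workflow. All values are
# randomly generated placeholders standing in for the study variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous measurement (e.g. muscle thickness) for 31 controls
# and 31 study patients.
control = rng.normal(1.8, 0.4, 31)
study = rng.normal(1.4, 0.4, 31)

# 1) Normality check (Shapiro-Wilk) before choosing the difference test.
w_stat, p_norm = stats.shapiro(control)
print("Shapiro-Wilk p:", round(p_norm, 3))

# 2) Difference analysis between the two independent groups
#    (Mann-Whitney U chosen here only as an example of a non-parametric test).
u_stat, p_diff = stats.mannwhitneyu(control, study)
print("group difference p:", round(p_diff, 4))

# 3) Chi-square test for a categorical variable, e.g. smoking status per group
#    (hypothetical counts of smokers / non-smokers).
table = np.array([[1, 30],
                  [3, 28]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print("chi-square p:", round(p_chi, 3))

# 4) Spearman's rho between, e.g., age and prolapse grade (hypothetical values).
age = rng.integers(40, 76, 31)
grade = rng.integers(1, 5, 31)
rho, p_rho = stats.spearmanr(age, grade)
print("Spearman rho:", round(rho, 2), "p:", round(p_rho, 3))
```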
This study aims to demonstrate the structural changes in women with rectoceles compared to the general population using SMA and PGP 9.5 stains. In the control group of 31 patients, 25 patients underwent hysterectomy + bilateral salpingo-oophorectomy and constituted the majority. The rest of the control group consisted of three patients with probe curettage, one with hysterectomy, one with hysterectomy + right unilateral salpingo-oophorectomy, and one with hysterectomy + bilateral salpingo-oophorectomy + left ovarian cyst excision. Of the 31 patients in the study group, 11 patients underwent vaginal hysterectomy + anterior–posterior colporrhaphy and 9 patients underwent anterior–posterior colporrhaphy. Two patients each underwent anterior colporrhaphy, total abdominal hysterectomy + bilateral salpingo-oophorectomy, and anterior–posterior colporrhaphy + uterosacral sacrocolpopexy surgeries; one patient each underwent vaginal hysterectomy, total abdominal hysterectomy + posterior colporrhaphy, posterior colporrhaphy, perineoplasty, and bilateral tubal ligation + transobturator tape + posterior colporrhaphy. The mean age of the control group (49.87 ± 5.35) was statistically significantly lower than the mean age of the study group (61.13 ± 8.74) ( p < 0.05). Mean height was higher in the control group and mean weight and BMI were higher in the study group; however, the differences between the groups were not statistically significant ( p > 0.05). Overall, 3.2% of the control group and 10.0% of the study group reported smoking. Smoking did not show a statistically significant difference between the groups ( p > 0.05). Overall, 45.2% of the control group and 61.3% of the study group had a history of surgery.
The differences between the distribution of the operation history of both groups were not statistically significant ( p > 0.05). Overall, 48.4% of the control group and 71.0% of the study group had a history of chronic disease, and again, the difference in the distribution of chronic disease history between the groups was not statistically significant ( p > 0.05). The mean gravida, parity, abortus, number of living children, number of vaginal deliveries, and duration of menopause were higher in the study group compared to the control group. The number of CSs was higher in the control group than in the study group. Based on the results of the difference analysis, the differences between the groups in terms of gravida, parity, number of living children, number of normal vaginal births, and duration of menopause were statistically significant ( p < 0.05). The differences in the mean number of abortus and CSs between the groups were not statistically significant ( p > 0.05). Means and ranges of change in gravida, parity, abortus, live births, number of normal vaginal births (NSDs), and duration of menopause were higher in the study group than in the control group. The mean POPQ Ap and POPQ Bp were significantly higher in the control group, while muscle thickness and number of nerves per mm 2 of fascia were significantly higher in the study group ( p < 0.05). These distributions are shown in . As shown, POPQ Ap and POPQ Bp averages and ranges of change were higher in the study group. There was a more stable distribution with both lower averages and a smaller range of variation in the control group. The opposite was true for muscle thickness and the number of nerves per mm 2 of fascia. The results of Pearson’s correlation analysis for the relationship between prolapse, POPQ Ap, POPQ Bp, and muscle thickness with number of nerves per mm 2 in fascia in the study group are given in . Based on the correlation analysis results, there was a statistically significant and positive correlation between the degree of prolapse and POPQ Ap (r = 0.915; p < 0.01) and POPQ Bp (r = 0.912; p < 0.01). The correlation coefficient showed that the parameter that most affected the degree of prolapse was POPQ Ap. There was also a statistically significant and positive correlation between muscle thickness and the number of nerves per mm 2 in the fascia (r = 0.618; p < 0.01). There was no statistically significant correlation between the degree of prolapse, POPQ Ap and POPQ Bp, and muscle thickness with the number of nerves per mm 2 of fascia ( p > 0.05). The results of the correlation analysis for the relationship between the degree of prolapse with some demographic and obstetric characteristics of the patients in the study group are given in . Based on the results of correlation analysis, there were statistically significant and positive correlations between the degree of prolapse and age (r = 0.464; p < 0.01), parity (r = 0.392; p < 0.05), number of live births (r = 0.373; p < 0.05), and the number of NSDs (r = 0.356; p < 0.05). The correlation coefficients showed that age was the parameter that most affected the degree of prolapse, followed by parity, number of live births, and number of NSDs. The results of the Spearman’s rho correlation analysis for the relationship between POPQ Ap and POPQ Bp parameters with some demographic and obstetric characteristics of the patients in the study group are given in . 
The correlation analysis revealed statistically significant and positive correlations between the POPQ Ap and POPQ Bp parameters and age, parity, and number of live births. The results obtained for both POPQ Ap and POPQ Bp and the distribution of correlation coefficients were similar to those obtained for the degree of prolapse. According to the correlation coefficients, age was the parameter that affected POPQ Ap and POPQ Bp parameters the most, followed by parity, number of live births, and number of NSDs. The results of the Spearman’s rho correlation analysis for the relationship between the parameters of muscle thickness and number of nerves per mm 2 in fascia with some demographic and obstetric characteristics of the patients in the study group are given in . The correlation analysis revealed that age had a negative effect (r = −0.437; p < 0.05) and the CS number had a positive effect (r = 0.378; p < 0.05) on muscle thickness. There was no statistically significant correlation between demographic and birth characteristics with the parameters of muscle thickness and number of nerves per mm 2 in fascia ( p > 0.05). Means and ranges of change in gravida, parity, abortus, number of living children, number of NSDs, and duration of menopause were higher in the study group compared to the control group. POPQ Ap and POPQ Bp averages were statistically significantly higher in the control group, while muscle thickness and number of nerves per mm 2 in fascia were significantly higher in the study group ( p < 0.000). According to the correlation analysis results, there was a statistically significant and positive correlation between the degree of prolapse and POPQ Ap (r = 0.915; p < 0.01) and POPQ Bp (r = 0.912; p < 0.01). The correlation coefficient showed that the parameter that most affected the degree of prolapse was POPQ Ap. There was also a statistically significant and positive correlation between muscle thickness and the number of nerves per mm 2 in the fascia (r = 0.618; p < 0.01). There was no statistically significant correlation between the degree of prolapse and muscle thickness (r = −0.026; p > 0.05) and the number of nerves per mm 2 of fascia (r = −0.155; p > 0.05). Vaginal wall prolapse is a condition classified as anterior, posterior, and apical compartments, characterized by muscle and fascia defects and causing numerous urinary, sexual, or bowel dysfunctions, especially incontinence. In an imaging study of women with posterior vaginal prolapse, Luo et al. reported the mean age of posterior vaginal prolapse in Caucasian women as 54.9 ± 8.7 years. In the same study, the mean age of the control group with similar symptoms was reported as 54.2 ± 8.9 years. In our study, the mean age of the patients was 49.87 ± 5.35 years in the control group and 61.13 ± 8.74 years in the study group. The age of the women with posterior vaginal wall prolapse was 60 years and above, consistent with the literature . It is observed that the risk of POP development increases with age. In this case, age can be considered as an independent risk factor for rectocele. Inal reported that POP-Q Ap and POP-Q Bp scores were higher in the study group compared to the control group and the difference between the groups was statistically significant. Luo et al. found the mean POP-Q Ap and POP-Q Bp to be 1.7 ± 0.8 in women with posterior vaginal prolapse. 
In the same study, the mean POP-Q Ap and POP-Q Bp of the control group were −1.7 ± 0.7, and the differences between the control and experimental groups were found to be statistically significant. In our study, both POP-Q Ap and POP-Q Bp scores were statistically significantly higher in the study group compared to the control group ( p < 0.05). In this respect, the results obtained in this study are compatible with both different prolapse results and posterior prolapse results. In other words, POP-Q Ap and POP-Q Bp averages are higher in women with prolapse compared to the control group. Inal reported that the number of nerves was statistically significantly lower in the prolapse group compared to the control group. In the same study, nerve diameter was statistically significantly higher in the control group than in the prolapse group. In our study, the mean of both muscle thickness and number of nerves per mm 2 in fascia were statistically significantly higher in the control group than in the study group ( p < 0.05). There was a statistically positive and significant correlation between muscle thickness and number of nerves in the fascia in the study group. Again, there was a statistically significant relationship between the degree of prolapse with POP-Q Ap and POP-Q Bp. Nonetheless, the relationships between the degree of prolapse with muscle thickness and the number of nerves per mm 2 in the fascia were not statistically significant ( p > 0.05). In studies on the degree of prolapse, the risk factors for prolapse are reported to be the most important factor. Factors such as increasing age, number of NSDs, and number of live births increase the degree of prolapse . In our study, the degree of prolapse was positively and significantly associated with age, parity, number of live births, and number of NSDs. POP-Q Ap and POP-Q Bp had similar associations with the degree of prolapse. Age had a negative and significant effect on muscle thickness, while the number of CS births was positively correlated with muscle thickness. That is, women who experienced more CSs had higher muscle thickness. It can be stated that CS birth prevents the reduction in muscle thickness. İnal reported a significant correlation between the number of nerves and age, NSD delivery, and postmenopausal period. All three variables negatively affect the number of nerves. In our study, the relationships between the number of nerves and age, NSD delivery, and postmenopausal period were not statistically significant ( p > 0.05). Studies on the innervation of the vaginal wall in patients with POP are inconclusive. The integrity of the vagina and supporting connective tissue is essential for normal pelvic floor function and the anatomy of the pelvic organs. Branches of the hypogastric plexus innervate the musculus levator ani and posterior vagina . Denervation injury of the pelvic floor during labor may cause loss of vaginal support, leading to POP . Several studies have evaluated anterior vaginal wall innervation in women with or without POP. Zhu et al. analyzed Protein Gene Product 9.5 (PGP 9.5) staining as a neuronal marker in peripheral nerves and ganglia in tissue . They showed that the nerve fiber profile in the vaginal epithelium and subepithelium was significantly lower in women with stress urinary incontinence and POP compared to the control group. Inal et al. 
measured the number and diameter of subepithelial nerve fibers in the anterior vaginal wall and observed that these nerve fibers decreased in women with anterior prolapse compared to women with normal vaginal support . Kaplan et al. confirmed this by describing reduced neuronization in the vaginal wall in the POP group . To date, only two studies have evaluated the innervation of the posterior vaginal wall in women with or without POP. Boreham et al. analyzed glial cells and astrocytes using antibodies against S100 and found that nerve fibers were fewer and smaller in the vaginal muscular layers of women in the POP group. Altman et al. reached the opposite conclusion by detecting increased nerve fiber density in the subepithelium of the rectovaginal wall in patients with posterior vaginal wall prolapse using PGP 9.5 antibodies. They suggested that neuronal regeneration following nerve trauma may be involved in the pathogenesis of pelvic floor disorders . The discrepancy between these results can also be explained by the difference in the method of tissue sample collection, localization, and sensitivity of these two neuronal markers. Our study has some limitations. The first of these is that the average age of the groups we included in our study could not be matched despite all our efforts. This age difference may affect the results of the study. Conducting the study in age-matched groups may be the subject of other studies. In addition, our study was conducted with a limited number of women. Although the number of participants was statistically sufficient and significant, our study needs to be confirmed in larger groups. Our study has the advantage of being a planned prospective study evaluating the innervation of the posterior vaginal wall in the literature, and the small number of cases is a limitation of our study. Considering the results obtained in this study and the results reported in the literature together, it is observed that prolapse statistically significantly decreases the number of nerves in women. Similarly, it is reasonable to make the same interpretation for the degree of prolapse. Decreased innervation may lead to decreased smooth muscle thickness, although it is not very similar in structure to skeletal muscle. Further experiments with in vivo and in vitro models are necessary to clarify the cause-and-effect relationship between denervation and vaginal smooth muscle morphology. |
Comparison of Different Test Systems for the Detection of Antiphospholipid Antibodies in a Chinese Cohort | 12212471-9386-4a83-b906-e1431b43a142 | 8283786 | Pathology[mh] | The antiphospholipid syndrome (APS) is defined by the development of venous/arterial thromboses or by the occurrence of obstetrical events including recurrent fetal losses or increased perinatal morbidity, with the persistent presence of antiphospholipid antibodies (aPLs). According to the 2006 APS classification criteria, APS diagnosis is based on the positivity of at least one of the clinical criteria as well as one of laboratory criteria including lupus anticoagulant (LA), high level of anti-cardiolipin (aCL), anti- β 2 glycoprotein-I (a β 2GPI) immunoglobulin isotype G (IgG) or M (IgM) . More recently, non-criteria aPLs including anti-aCL or anti-b2GPI IgA, anti-phosphatidylserine–prothrombin (aPS/PT) complex, anti-annexin A5 antibodies (aAnxV), etc. are receiving increasing attention . APS could be associated with several severe clinical outcomes such as pulmonary embolism, acute myocardial infarction, and stroke, which demand immediate appropriate intervention. On the other hand, anticoagulant treatment commonly utilized for APS could increase bleeding risk for susceptible patients. Since aPL detection comprise a large part of APS diagnosis, a detection system with high sensitivity and specificity is required in order to timely identify APS patients as well as provide accurate clinical intervention . Besides, evaluation of aPLs could also contribute to prognosis and risk assessment for associated clinical manifestations . Numerous guidelines and studies concerning aCL and a β 2GPI tests have been published . However, test results for aPLs remain contradictory among different detection methods as well as commercial manufacturers, probably due to the lack of standardization for cut-off values, method of calibration and quantitation, choice of solid phase and coating, type and source of antigen, and other analytic problems . Traditionally, enzyme-linked immunosorbent assay (ELISA) was applied due to its relative time and cost-efficiency. In recent years, novel automating detection systems, such as chemiluminescent immunoassay (CLIA), addressable laser bead immunoassay (ALBIA), line immunoassay (LIA), etc. have been introduced for aPL detection, and promising results have been yielded . Automatization can improve the reproducibility and reduce interlaboratory variation, yet may show distinct performance characteristics compared to ELISA . More specifically, in China, home-conducted ELISA is still most widely applied at laboratories for APS diagnosis. However, an increasing number of automated analyzers have been equipped by large general hospitals with high application potentials. Regarding commercially available systems, most studies focused on measuring and comparing only one assay to laboratory-conducted ELISA . However, little attention has been paid to simultaneously evaluate different test systems that are commonly chosen. The aim of this study was to assess and compare the diagnostic and analytic performances of four commercial assays prevalently used in China, including two ELISA and two CLIA systems, in a Chinese prospective APS cohort. Detection of IgG, IgM, and IgA for aCL and a β 2GPI antibodies was evaluated, and a test system with the best diagnostic value was explored of its correlation with key clinical features. 
Patients Recruitment This was a single-center, prospective cohort study conducted at Peking Union Medical College Hospital (PUMCH) and the National Clinical Research Center for Dermatologic and Immunologic Diseases (NCRC-DID) from May 2017 to January 2020. A total of 313 consecutive patients were included in this study, of which 100 patients had been diagnosed with primary APS (PAPS group), 52 with APS secondary to SLE (SAPS group), 71 with SLE (SLE group), and 90 healthy controls (HC group). Diagnosis of APS was defined by clinicians according to the 2006 Sydney revised classification criteria . According to the criteria, IgG and IgM aCL and a2GPI were analyzed with standardized ELISA (INOVA Diagnostics) at the Key Laboratory. Lupus anticoagulant was detected and evaluated according to the ISTH recommendations. Dilute Russell viper venom time (dRVVT) testing and activated partial thromboplastin time were measured, where LAC was considered positive if the ratio of screen/confirm time ratio was >1.20. Diagnosis of SLE was based on the 1997 ACR criteria and confirmed by the 2019 EULAR/ACR criteria. Clinical manifestations were recorded for PAPS, SAPS, and SLE groups, including vascular thrombosis (arterial or venous), pregnancy morbidity, and extra-criteria manifestations, including thrombocytopenia, heart valve disease, autoimmune hemolytic anemia, and neurological disorders, etc. For the HC group, only aPL serology information was present. For each subject, 4 ml of blood was collected with the help of a BD vacutainer without anticoagulants. Blood samples were allowed to clot at room temperature for 1 h and then centrifuged at 4°C for 5 min at 3,000 rpm. Serum was collected and stored at −80°C. No sample was exposed to more than one freeze–thaw cycle before analysis. The study was approved by the ethics committee at PUMCH and fulfilled the ethical guidelines of the declaration of Helsinki. All subjects gave written informed consent. Laboratory Tests For each study subject, IgG, IgM, and IgA isotypes of aCL and a β 2GPI were analyzed with four systems listed below: a. iFlash CLIA kits provided by YHLO Biotech Co., Shenzhen, China (Y-CLIA); b. QUANTA Flash ® CLIA kits provided by INOVA Diagnostics, Inc., San Diego, CA, US, Werfen Group as sales agent (W-CLIA); c. QUANTA Lite™ ELISA kits provided by INOVA Diagnostics, Inc., San Diego, CA, US, Werfen Group as sales agent (W-ELISA); d. AESKULISA ® ELISA test kits provided by Aesku.Diagnostics GmbH & Co. KG, Wendelsheim, Germany (A-ELISA). Detailed characteristics of test systems from different manufacturers were summarized in . Cut-off values were defined for each system as recommended by the manufacturer. Statistical Analysis Statistical analysis was performed using SPSS 26.0 or R (version 3.6.2). The χ 2 test or Fisher’s exact test was used for comparison of categorical variables, and Wilcoxon test was used for continuous variables after normality was explored with the Shapiro–Wilk test. Sensitivities, specificities, and accuracies in APS diagnosis were compared in the McNemar test. Youden Index, positive and negative predictive values (PPV and NPV), and odds ratio (OR) with 95% confidence interval (95% CI) were also shown. Correlation of different aPL isotype levels with clinical manifestations was calculated, and clinical events with 95% CI were displayed. Two-tailed values of p less than 0.05 were considered statistically significant. 
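As a rough, self-contained illustration of the diagnostic statistics named above (sensitivity, specificity, accuracy, Youden Index, PPV, NPV, odds ratio with 95% CI, and an exact McNemar comparison of two assays), the sketch below uses a hypothetical 2x2 table. The counts, the Woolf logit method for the confidence interval, and the binomial formulation of McNemar's test are assumptions made for illustration; this is not the authors' SPSS/R code.

```python
# Illustrative sketch only: hypothetical counts, not the study data.
import numpy as np
from scipy.stats import binomtest

# Hypothetical 2x2 table: assay result (rows) vs. APS / healthy control (columns).
tp, fp = 89, 3    # assay positive: APS patients / healthy controls
fn, tn = 63, 87   # assay negative: APS patients / healthy controls

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fp + fn + tn)
youden = sensitivity + specificity - 1
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

# Diagnostic odds ratio with a 95% CI (Woolf / logit method).
odds_ratio = (tp * tn) / (fp * fn)
se_log_or = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} Acc={accuracy:.2f} "
      f"Youden={youden:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
print(f"OR={odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")

# Exact McNemar test comparing two assays on the same patients: only the
# discordant pairs matter (assay A+/B- vs. A-/B+), tested against p = 0.5.
discordant_ab, discordant_ba = 15, 4   # hypothetical paired counts
p_mcnemar = binomtest(discordant_ab, discordant_ab + discordant_ba, 0.5).pvalue
print("McNemar (exact) p:", round(p_mcnemar, 4))
```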
Patient Characteristics Among 152 APS patients, there were 63 (63.0%) females for PAPS, 46 (88.5%) for SAPS, and the mean age for each was 36.3 and 32.9 years.
Mean age was 30.1+/−8.2 years in the SLE group, of which 61 (85.9%) were female, while the HC group had 41 (45.6%) female and a mean age of 43.4+/−12.2. Detailed clinical manifestations were recorded for both APS and SLE patients and were shown. Thrombosis was most commonly present, with 80 (80.0%) for PAPS and 39 (75%) for SAPS, but not in the SLE group. Patients with history of arterial or venous thrombosis were recorded for APS patients. Pregnancy morbidity, history of adverse pregnancy, microangiopathy, and LA were also observed in both PAPS and SAPS group. Of all the clinical manifestations, the prevalence of thrombocytopenia was significantly different between PAPS and SAPS group (χ 2 = 4.382, p = 0.036). Assay Characteristics As summarized in , the coating, conjugation, calibration, and cut-off values with their calculation were listed for four commercial test systems. More specifically, Y-CLIA conducted paramagnetic particle chemiluminescent immunoassay using a fully automated iFlash 3000 Chemiluminescence Immunoassay Analyzer. Recommended values with best sensitivity, specificity, and false positive results of healthy donors against APS, SLE, and other autoimmune disease patients were chosen for all antibody isotypes. For W-CLIA, antigen-specific paramagnetic bead chemiluminescent immunoassay was conducted employing the fully automated BIO-FLASH CLIA instrument. Cut-off values for all antibodies were calculated using the 99th percentile in healthy groups. W-ELISA was a semi-quantitative enzyme linked immunosorbent assay manually conducted according to the manufacturer’s instruction. Cut-off values were set based on the evaluation of normal and positive antibody samples. For A-ELISA, assay was also manually conducted following manufacturer’s protocols, yet no information was provided for cut-off value calculation. Predictive Power of aPLs for Different Test Systems Antibody results obtained from four test systems were evaluated for diagnostic power with sensitivity, specificity, accuracy, Youden Index, PPV, and NPV in APS diagnosis from the HC group in . For each antibody type, sensitivity, specificity, and accuracy were compared first between the same test methods ( i.e. , Y-CLIA against W-CLIA, W-ELISA against A-ELISA). The better system from each method, if identified, was then compared to determine the best system, which was further evaluated for clinical manifestation prediction. As shown in , the accuracy of aCL IgG was significantly higher for Y-CLIA than W-CLIA (p < 0.001), and A-ELISA than W-ELISA (p = 0.035). The sensitivity (p < 0.001) and accuracy (p < 0.001) were both significantly higher for Y-CLIA method. For aCL IgM, sensitivity and accuracy were significantly higher for W-ELISA than A-ELISA (p < 0.001). As for aCL IgA, Y-CLIA and A-ELISA were selected respectively for comparison, and the specificity of the former was significantly higher (p = 0.031). Sensitivity and accuracy of positivity of aCL IgG, IgM, or IgA were also significantly higher for Y-CLIA than for W-CLIA (p < 0.001). Y-CLIA and W-ELISA were selected as better systems for positivity of aCL IgG or IgM, and significant difference was observed for accuracy (p = 0.022). Concerning a β 2GPI, Y-CLIA and W-CLIA were selected for comparison of IgM, whose specificity (p = 0.049) was higher that the former. Sensitivity and accuracy of positivity of a β 2GPI IgG, IgM, or IgA, as well as those of aCL IgG or IgM, were all significantly higher for Y-CLIA. 
All in all, Y-CLIA was considered the system with the best predictive power. Similarly, the sensitivity, specificity, and accuracy were also compared among the four systems in identifying thrombosis and pregnancy morbidity . For thrombosis events, significant results for sensitivity and accuracy of aCL and a β 2GPI positivity were all higher for Y-CLIA than for W-CLIA and for W-ELISA than for A-ELISA. Y-CLIA still showed higher accuracy (p = 0.022 for aCL IgG or IgM and p = 0.001 for a β 2GPI IgM). As for pregnancy morbidity, significant results for specificity and accuracy of aCL and a β 2GPI positivity were significantly higher for W-CLIA than for Y-CLIA and for A-ELISA than for W-ELISA. Distribution of aPL Test Results As different cut-off values were used by the four test systems, the distribution of aPL test results from different manufacturers among patient groups was calculated with lg[(test result/cutoff value) +1] so that they could be visualized together as positive numbers in . Patients positive for antibodies fell above the dotted line, and the range of distribution varied due to both the test methods used and the limits of the test range for different antibodies. In general, W-CLIA had the widest range of test distribution, while W-ELISA had the narrowest. For Y-CLIA, test range limitation influenced the distribution for three autoantibodies. The results of primary or secondary APS patients were compared to other groups and illustrated. Overall, most test systems could distinguish between APS patients and HC, while little significant difference was observed between the PAPS and SAPS groups. For different antibodies, the four test systems showed different strengths in differential diagnosis. For instance, W-CLIA was best at discrimination for aCL IgG, while A-ELISA was best for aCL IgM. Additionally, the distribution of aPLs among clinical groups with the largest number of patients ( i.e. , thrombosis, pregnancy morbidity, and thrombocytopenia) was also illustrated in . Cross-Positivity Analysis for Four aPL in APS Patients Among 152 patients, cross-positivity for IgG or IgM of aCL and a β 2GpI for each of the four test systems was demonstrated with Venn diagrams . For aCL, 50 (32.9%) patients tested positive for IgG or IgM by all systems. There were 12 (7.9%) patients who tested positive only by Y-CLIA, and 13 (8.6%) who tested positive only by W-ELISA. Similarly, for a β 2GpI, 19 (12.5%) patients tested positive only by Y-CLIA, and seven (4.6%) tested positive only by W-CLIA. When combining the positivity of aCL and a β 2GpI, Y-CLIA identified the largest number of positive patients (102 in total, 67.8%), as well as the largest number of patients identified only by that system (16, 10.5%). Clinical Manifestations Prediction for the Test Systems The correlation of different aPL levels by all four test systems with non-criteria clinical manifestations was further explored, with significant results presented in . Thrombocytopenia was associated with the greatest number of positive antibodies (aCL IgG by Y-CLIA, aCL IgM/a β 2GpI IgG/a β 2GpI IgM by W-CLIA, and a β 2GpI IgM by W-ELISA). Significant associations were also observed between APSN, PVT, PE, and DVT and the positivity of some autoantibodies by certain test systems. Little association was observed between IgA and any clinical feature.
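The cross-platform scaling mentioned above, lg[(test result/cut-off value) + 1], maps every assay onto a common positive scale in which a value above lg 2 (about 0.30) corresponds to a result above the kit cut-off. A minimal sketch is shown below; the assay names, titres, and cut-offs are invented solely for illustration.

```python
# Minimal sketch of the normalization lg[(result / cut-off) + 1] used to plot
# results from assays with different units and cut-offs on one scale.
# Values above log10(2) ~ 0.301 correspond to results above the kit cut-off.
# All numbers below are invented examples, not study data.
import math

def normalized(result: float, cutoff: float) -> float:
    return math.log10(result / cutoff + 1.0)

examples = {
    "hypothetical CLIA aCL IgG": (56.0, 20.0),    # (result, kit cut-off)
    "hypothetical ELISA aCL IgG": (18.0, 20.0),
}
for assay, (value, cutoff) in examples.items():
    flag = "positive" if value > cutoff else "negative"
    print(f"{assay}: {normalized(value, cutoff):.2f} ({flag})")
```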
In brief, CLIA by YHLO Biotech Co. was considered as the system with the best predictive power, where 58.55 and 57.89% of APS patients were positive for aCL or a β 2GPI for at least one antibodies (IgG or IgM or IgA). Y-CLIA also identified the greatest number of patients (67.8%) positive for aCL or a β 2GpI IgG or IgM, with the highest level of patients distinguished only by the system (16, 10.5%). Nevertheless, for Y-CLIA, little correlation of antibodies’ positivity result with thrombosis or pregnancy complication was observed. In addition, the greatest number of double/triple patients was detected by Y-CLIA. Concerning clinical manifestations, a significant association was observed between W-CLIA and TP/PE, Y-CLIA and TP, as well as combined results with TP/PE/thrombosis. Overall, CLIA showed better performance characteristics than traditional ELISA test systems. Many previous studies have found poor agreement among different aPL assay platforms , which may result from various factors. As shown in , depending on the coating method for solid phase, antibodies detected would either bind to cardiolipin or bind directly to β 2GPI. In addition, different conjugates were applied for signal detection. A lack of universal internal standards for calibration further increased the chance of discrepancy. In addition, different cut-off values were chosen, as they stem from heterogenous reference sample groups in the original calculation. Thus, it might be better to choose the same appropriate reference population among all platforms and utilize an in-house 99th percentile cut-off value, which had been recommended by all manufacturers and confirmed by previous studies . Nevertheless, due to the restriction of subjects, this study still chose the cut-off values provided by platform instructions respectively, which might not reflect the distribution characteristics of the disease population. Compared to ELISA, automated CLIA has the advantage of increasing reproducibility, reducing hands-on time as well as avoiding manual error, which had been proved by some previous studies. With regard to the predictive value of aPLs detected by the four systems, Y-CLIA stood out as the best. indicated that the sensitivity, specificity, accuracy, and Youden index were higher for Y-CLIA among each comparison whenever a significant difference was found. As for ELISAs, W-ELISA had higher predictive power for most aPLs compared to A-ELISA. However, no single detection system had stably shown better performance for all aPLs. Distribution of aPL test results in further reflected this inconsistency. Y-CLIA did not show better ability at distinguishing PAPS or SPAS from SLE or HC groups compared to other systems. Indeed, it had been estimated in previous studies that around 40% of patients with SLE have aPL, and APS may develop in up to 50–70% of patients with both SLE and aPLs . Thus, although Y-CLIA could be recommended for APS diagnosis, other systems may provide additive value for each individual aPL in differentiation, especially when SLE was involved. The predictive power of criterial manifestations indicated that besides serology diagnosis, different systems had respective strengths in predicting associate events. W-CLIA was more sensitive and accurate for thrombosis, while results from A-ELISA were more specific and accurate for pregnancy-related outcomes. Since APS diagnosis relied both on clinical and experimental criteria, inclusion of more test systems was still of great importance. 
As aCL and aβ2GPI IgG or IgM are part of the standard diagnostic criteria, a cross-positivity analysis was conducted, which revealed that Y-CLIA identified the largest number of patients testing positive overall. However, the other systems were still of great value for individual aPLs, as 8.6% of patients were aCL-positive only by W-ELISA and 4.6% were aβ2GPI-positive only by W-CLIA, which suggested that a combination of more test systems could increase the sensitivity of APS diagnosis. In the clinic, patients may remain persistently negative for criteria aPLs yet show typical APS clinical manifestations (defined as seronegative APS, SNAPS). Alternate testing platforms could assist in the final diagnosis for SNAPS patients. According to the European League Against Rheumatism (EULAR) guidelines for APS, a high-risk profile for APS is defined as a positive LA test, the presence of double (any combination of LA, aCL, or aβ2GPI antibodies) or triple (all three subtypes) aPL positivity, or the presence of persistently high aPL titers. It is crucial to recognize these high-risk patients to enable early prevention of thrombotic and obstetric events. Thus, a cross-positivity analysis was conducted to evaluate the ability of the four test systems to identify high-risk patients with respect to aCL/aβ2GPI detection (result not shown). For double-positive patients, among 94 patients (61.84%) positive for LA and aCL, eleven and nine were detected as positive only by Y-CLIA and W-ELISA, respectively. Among 92 patients (60.53%) positive for LA and aβ2GPI, seven and six were detected as positive only by Y-CLIA and W-CLIA, respectively. For the 77 triple-positive patients (50.66%), nine were detected as positive only by Y-CLIA and two only by W-CLIA. These results suggested that a combination of more test systems could increase the sensitivity of high-risk identification for APS. Finally, the results of the different aPL isotypes tested by the four systems were explored for their association with non-criteria clinical manifestations. Thrombocytopenia was associated with the greatest number of antibody positivities, and significant results were also observed for APSN, PVT, PE, and DVT. However, no other significant association was observed for other clinical features or for the IgA isotype. Similar results were observed in a study we recently conducted in a large cohort of more than 7,000 patients. It has been reported that the prevalence of thrombocytopenia as a manifestation of primary APS is 20 to 46%, probably because aCL may bind activated platelet membranes and cause platelet destruction. Although the correlation between aPLs and thrombosis or pregnancy events has been confirmed by a number of studies, conflicting results have also been observed in other reports. In our study, venous thrombosis events (PVT, PE, and DVT) showed stronger correlation with aPL positivity, while little significant relationship was found with poor pregnancy outcomes. It should be noted that the number of patients with most of the recorded clinical manifestations was small. Consequently, the results might be strongly influenced by patient heterogeneity, including age, gender, or other factors. All in all, this study confirmed the advantage of using CLIA testing systems for aPL detection, with higher predictive power and better ability to identify both low-titer suspected patients and multi-positive high-risk patients. In the future, as the cost of test apparatus decreases, fully automated CLIA could replace ELISA in the laboratory testing of aPLs for APS diagnosis and monitoring.
For the local population in China, Y-CLIA would be a more suitable choice among commercially available testing systems. Our study has some limitations. Recommended cut-off values were used rather than values calculated from the local population, which might decrease precision in subsequent analyses. The correlation between autoantibodies and clinical manifestations, especially obstetric-related events, still needs examination. A larger sample size and the inclusion of patients with a wider range of associated diseases or clinical features, as well as more high-risk (double/triple-positive) patients, could further complement the study. The predictive performance of the selected test system (Y-CLIA) also needs further confirmation. In conclusion, CLIA was considered a better platform for IgG/IgM/IgA aCL and aβ2GPI detection in APS diagnosis. Additionally, a combination of other detection platforms could assist in clinical diagnosis and differential diagnosis, increase the ability to exclude SNAPS, and help identify high-risk patients. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Peking Union Medical College Hospital. The patients/participants provided their written informed consent to participate in this study. All authors were involved in the design of this study. CH, SL, ZX, HY, HJ, and JZ contributed to the collection of blood samples and other experimental procedures. YS and WQ were involved in data collection and pre-processing. CH and SL analyzed the data and wrote the manuscript. JZ, QW, XT, ML, and YZ contributed to the recruitment of patients and evaluation of clinical data. All authors contributed to the article and approved the submitted version. This study was supported by the National Key Research and Development Program of China (2019YFC0840603, 2017YFC0907601, and 2017YFC0907602), the National Natural Science Foundation of China (81771780), and the CAMS Initiative for Innovative Medicine (2017-I2M-3-001 and 2019-I2M-2-008). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Progress in Neuropharmacology of Anesthetics and Analgesics for the Improvement of Medical Treatment | 79a01638-48dc-49e2-859c-f55cd5457c93 | 9199551 | Pharmacology[mh] | |
The Michigan Genetic Hereditary Testing (MiGHT) study’s innovative approaches to promote uptake of clinical genetic testing among cancer patients: a study protocol for a 3-arm randomized controlled trial | 30457002-7f11-4521-b33d-0f3d611af77b | 9911941 | Internal Medicine[mh] | Note: the numbers in curly brackets in this protocol refer to SPIRIT checklist item numbers. The order of the items has been modified to group similar items (see http://www.equator-network.org/reporting-guidelines/spirit-2013-statement-defining-standard-protocol-items-for-clinical-trials/
Background and rationale {6a} There are more than 16.9 million cancer survivors in the USA, with nearly 1.9 million new cancers diagnosed each year. Although most cancers are sporadic, germline genetic variants are implicated in 5–10% of cancer cases. Clinical genetic testing identifies pathogenic germline genetic variants associated with hereditary cancer syndromes. An estimated 20% of cancer patients have a family history of cancer, and a subset of these developed their cancers as a result of inherited pathogenic variants in genes associated with cancer susceptibility. Several of these genes are associated with well-known hereditary cancer syndromes, such as BRCA1/2 for hereditary breast and ovarian cancer (HBOC), TP53 for Li-Fraumeni syndrome, and MLH1, MSH2, MSH6, PMS2, and EPCAM for Lynch syndrome. Importantly, most individuals with genetic susceptibility remain undiagnosed. Epidemiological studies have estimated the prevalence of HBOC in the general population to be 1 in 400, but more recent exome research has suggested an even higher prevalence of 1 in 139. For Lynch syndrome, which is the most common inherited form of colorectal cancer, the general population prevalence is approximately 1 in 279. Germline genetic testing identifies individuals with cancer predisposition syndromes and supports the use of personalized strategies for cancer prevention, early detection, and/or targeted therapy. Germline genetic testing results carry implications not only for the cancer patient’s own treatment but also for the medical management of their family members. There is a growing demand for cancer genetic services, yet genetic counseling and genetic testing remain underutilized (Bednar et al., 2020). As a result of the increasing number and decreasing cost of genetic tests and the expansion of genetics and genomics into mainstream medicine, the demand for genetic counseling services has outpaced the workforce. Lack of access directly impacts treatment options, outcomes, screening for other malignancies, and assessment of at-risk family members. Barriers to accessing genetic testing are multi-tiered. Substantial patient-level barriers to genetic counseling and testing persist, including limited knowledge, financial concerns, competing demands on patients at the time of diagnosis, fear of insurance discrimination, emotional distress, uncertain benefit, time commitment, lack of knowledge about genetic counseling or testing, discouragement by family members, and personal fear. Provider-level barriers may relate to limited knowledge of genomic medicine, insufficient information to assess cancer risk and refer to genetic counseling and testing, and challenges communicating the complexity that genomic medicine adds to cancer care. Population-level barriers to the knowledge of and access to genetic testing have been found among racial/ethnic minorities, people for whom English is a second language, patients with public insurance, and rural communities. These communities have been persistently underserved. Leaders in the cancer genetics community emphasize the importance of developing new models for providing genetics education and counseling to patients who are considering clinical genetic testing for cancer susceptibility, and multilevel approaches to overcoming barriers to the uptake of genetic testing are an area of focus.
Given the rapidly expanding indications for genetic testing to guide oncologic treatment decision-making, alternative ways to deliver cancer genetics services (including telehealth, point-of-care, and direct-to-consumer clinical genetic testing) are being employed to expand access and may include digital interventions and counselors without formal education in genetics. Objectives {7} The Michigan Genetic Hereditary Testing (MiGHT) study is a pragmatic randomized controlled trial designed to increase the utilization of genetic testing among eligible cancer patients by addressing health education and behavior barriers (NIH/NCI U01CA232827). While digital health tools and telephone-based coaching have been successful in motivating behavior change across a wide range of health issues, these strategies have not yet been integrated into interventions for facilitating care delivery for patients at risk for hereditary cancer syndromes. The MiGHT study represents a patient-centered approach to increasing the uptake of genetic testing with interactive, web-based technology. Led by our team at the University of Michigan Rogel Cancer Center, the study is conducted in collaboration with the Michigan Oncology Quality Consortium (MOQC), a state-wide network of nearly 90% of medical and gynecologic oncology practices, predominantly community practices throughout the state, and the Michigan Department of Health and Human Services (MDHHS). The MDHHS Cancer Genomics Program was funded through a grant from the Centers for Disease Control and Prevention to increase awareness of genetic testing and counseling and to provide information about genetic resources for patients and health care providers in the state of Michigan (Cooperative Agreement #5U38GD000054). The primary objective of this three-arm randomized clinical trial is to test the efficacy of two patient-level behavioral interventions on the uptake of cancer genetic testing. The two interventions are (1) a virtual genetics navigator with tailored content and (2) motivational interviewing by genetic health coaches. We have two primary hypotheses concerning the independent comparisons of the active intervention arms 2 and 3 to usual care (UC, Arm 1). Hypothesis 1: Arm 2, a virtual genetics navigator (VGN), will increase the proportion of patients completing genetic testing compared to UC. Hypothesis 2: Arm 3, motivational interviewing-based telephone counseling with a genetic health coach (GHC), will increase the proportion of patients completing genetic testing compared to UC. Secondary objectives are to assess the barriers to and motivators of testing uptake and to understand for whom the interventions work (moderators). Trial design {8} A three-arm, randomized controlled trial will be conducted with participants randomly assigned to either of the two intervention arms (Virtual Genetics Navigator and Genetics Health Coach) or to the control arm (Usual Care). We will prospectively evaluate the noninferiority of health education delivered with the support of a virtual genetics navigator or a motivational interviewing-based telephone coach, in comparison to usual care, for increasing the uptake of genetic testing for hereditary cancers.
Study setting {9} Study participants will be identified through oncology practices participating in the MOQC, a physician-led state-wide collaborative quality initiative that includes 68 academic and community oncology practices whose members represent over 90% of the medical and gynecologic oncologists in Michigan. Eligibility criteria {10} Oncology patients are eligible to participate in the MiGHT study if they (1) are 18 years of age or older, (2) can speak and read in English, (3) have access to a telephone and the internet, and (4) self-report a diagnosis of breast, ovarian, prostate, endometrial, pancreatic, or colorectal cancer that meets the National Comprehensive Cancer Network (NCCN) criteria for genetic testing. Personal and family history of cancer will be self-reported through the Family Health History Tool (FHHT). The FHHT is a web-based survey, delivered to potential participants by email or SMS/text, which elicits detailed information about family history of cancer (cancer type and age at diagnosis) in first- and second-degree relatives and calculates a score predicting the probability of Lynch syndrome (PREMM5). Individuals with breast, ovarian, prostate, endometrial, pancreatic, or colorectal cancers must meet the MiGHT study eligibility criteria, which have been adapted from the NCCN criteria for genetic testing (Table ). Potential participants will be contacted by the study team by email or telephone to provide information about the clinical trial. Individuals who report having previously undergone clinical germline genetic testing or who have already scheduled an appointment for genetic testing are ineligible. This criterion ensures focus on the main outcome of uptake of clinical genetic testing and on the barriers and motivations affecting successful completion among those at increased risk for pathogenic variants. Who will take informed consent? {26a} Research staff members review the eligibility criteria for potential participants. Potential participants are then contacted by study staff via email or telephone to confirm that the eligibility criteria have been met. Once eligibility has been confirmed, the staff member will add the individual as a user of the MiGHT study platform, and the system will then send an email to the potential participant. The invitation email includes a personalized link to log in to the MiGHT study platform, where the individual confirms whether or not they have taken a genetic test or have an appointment scheduled to take a genetic test. If the potential participant has neither, they will be able to indicate their consent to participate in the study using the consent form displayed within the MiGHT study platform. Additional consent provisions for collection and use of participant data and biological specimens {26b} N/A, we have no additional consent provisions.
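For illustration only, the screening rules described above can be expressed as a simple eligibility check; the function below is a sketch with hypothetical field names, and the cancer-specific NCCN-based criteria are assumed to have been evaluated upstream from the FHHT responses.

```python
ELIGIBLE_CANCERS = {"breast", "ovarian", "prostate", "endometrial", "pancreatic", "colorectal"}

def is_potentially_eligible(age, cancer_type, meets_nccn_criteria, prior_genetic_test,
                            test_scheduled, reads_english, has_phone_and_internet):
    """Sketch of the MiGHT screening rules; all field names are illustrative assumptions."""
    return (age >= 18
            and reads_english
            and has_phone_and_internet
            and cancer_type in ELIGIBLE_CANCERS
            and meets_nccn_criteria          # cancer-specific criteria applied to FHHT responses
            and not prior_genetic_test       # already-tested patients are ineligible
            and not test_scheduled)          # as are those with testing already scheduled

print(is_potentially_eligible(62, "ovarian", True, False, False, True, True))
```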
Explanation for choice of comparators {6b} Our goal is to increase the uptake of genetic testing in patients who meet clinical criteria for referral but who have not yet been tested or scheduled for testing. For this clinical trial, participants will be randomized to one of three parallel arms: one control arm, usual care (UC, Arm 1), and two intervention arms, the virtual genetics navigator (VGN, Arm 2) and the genetics health coach (GHC, Arm 3). The rationale for comparing each intervention to usual care is to determine whether delivering genetics education virtually or with a health coach is superior to usual care. We consider both interventions less costly than using licensed genetic counselors (GCs), for whom there is a workforce shortage. Intervention description {11a} All participants will have access to the MiGHT study web platform, which contains contact information for the study team and links to resources for genetic testing that are publicly available through the Michigan Department of Health and Human Services (MDHHS) Cancer Genomics Best Practices website. The MDHHS publishes lists of genetics service providers in the state of Michigan as well as the phone number for the MDHHS genetics hotline, where patients and providers can request more information about obtaining clinical genetics services. Participants randomized to Arm 1 will not have access to any intervention-specific content or functionality. Virtual genetics navigator (VGN) intervention (Arm 2) Participants randomized to Arm 2 will be directed to the virtual genetics navigator (VGN) module of the MiGHT study platform. The VGN is designed to allow participants to navigate through foundational genetics education materials and tailored motivational media encouraging genetic testing. Over the course of the study, participants complete online assessments (at baseline – T0, post-test, 6 months – T1, and 12-month follow-up – T2) to help us tailor content and measure the effect of the interventions. We designed the tailored content to reduce patient-level barriers and broaden the reach, impact, and equity of genetic testing. The content areas include the following topics: (1) benefits of genetic testing, (2) countering myths about testing, (3) overcoming barriers/fears, (4) education about genetic testing, and (5) how to get genetic testing (e.g., through local genetics specialty clinics, a primary care provider or oncologist, or direct-to-consumer options). Our message database includes expert-written motivational messages that were iteratively developed through group review, with content guided by current clinical genetic testing guidelines and motivational interviewing practices. Examples of tailored messaging are provided in Table . Presentation of the content displayed on the participant’s homepage is tailored and prioritized using the participant’s responses to the baseline survey. For example, if the individual’s baseline survey responses indicate specific barriers to genetic testing (e.g., concerns about cost and privacy), then content about overcoming these barriers will appear toward the top of the page. If the individual endorses low readiness for genetic testing, content designed to increase readiness is presented. Participants will have on-demand access to the VGN and will be allowed to click on content areas that they are interested in learning about. A minimal sketch of this prioritization logic is given below.
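The sketch below illustrates one way the homepage prioritization described above could be driven by baseline responses; the topic keys, weights, and readiness scale are illustrative assumptions, not the platform's actual logic.

```python
# Illustrative only: order VGN content areas from a participant's baseline responses.
def order_content(barriers_endorsed, readiness):
    """Return content topics, most relevant first (hypothetical scoring rule)."""
    weights = {
        "benefits_of_testing": 1,
        "countering_myths": 1,
        "overcoming_barriers": 1 + len(barriers_endorsed),  # endorsed barriers push this topic up
        "education_about_testing": 1,
        "how_to_get_tested": 1,
    }
    if readiness <= 2:                  # low readiness: lead with motivational content
        weights["benefits_of_testing"] += 2
    return sorted(weights, key=weights.get, reverse=True)

print(order_content(barriers_endorsed={"cost", "privacy"}, readiness=2))
```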
For participants who have not undergone genetic testing, the VGN will ask them to (1) rate their readiness for genetic testing and (2) identify remaining barriers (e.g., “What is holding you back?”). For participants who indicate that they have completed genetic testing, the VGN will provide information about communicating test results to first- and second-degree blood relatives. Genetics health coach (GHC) intervention (Arm 3) Participants randomized to the GHC arm will access the MiGHT study platform to schedule two coaching telephone calls with a genetic health coach (GHC). The GHC will work to overcome resistance and knowledge gaps by providing foundational genetics “key facts” and will offer resources to help participants access genetic testing services. GHCs are professionals in a health-related field or first-year genetic counseling students who have undergone training in Motivational Interviewing (MI). MI is a patient-centered communication style, which has been used extensively to support autonomous decision-making and positive health behavior changes. The MI training of GHCs employed a combination of didactic information and experiential exercises and was delivered by senior author KR and a certified genetic counselor. We deliberately chose to use GHCs rather than certified genetic counselors given the national shortage of clinical genetics professionals and the need to expand the reach of clinical cancer genetics services. Our ability to train and hire GHCs showcases a multi-level approach to strengthening the workforce in two ways: (1) leveraging the growing number of students enrolled in genetic counseling programs nationwide by augmenting their existing education with MI training, and (2) tapping into other health-related guilds that already have the health communication skills and experience with motivating health behaviors, supplementing their current practice with genetics education. GHCs were trained to answer basic questions about genetics and testing but not to give medical advice. Participants will schedule up to two telephone calls (approximately 2 weeks and 3 months after randomization) with a GHC. During each coaching call, GHCs discuss barriers and motivators for genetic testing, including a readiness assessment. GHCs and participants work collaboratively to overcome resistance and build motivation to undergo genetic testing. The GHCs will help participants process their own reasons for or against testing, including how their current testing status aligns with their goals and values. After each coaching call, the GHC provides a brief written summary of the discussion (topics covered, things to work on/consider, resources, and any other necessary follow-ups). This summary is made available on the participant’s MiGHT study portal. Clinical information such as specific risk assessments or potential changes to screening recommendations will not be discussed by the GHCs. All coaching calls will be recorded, and the audio files will be stored securely for transcription and further research analysis.
Using feedback from our advisory board members (described under the “Composition of the coordinating center and trial steering committee {5d}” section) and ethics policy, we identified requirements for reminders and notifications. Reminders to log in to the MiGHT study platform to report the status of genetic testing or to attend GHC sessions will be emailed or sent via SMS/text message to improve adherence. Also, participants are offered gift cards following the completion of each survey. Relevant concomitant care permitted or prohibited during the trial {11d} The MiGHT study web platform provides clinically vetted, publicly available resources and links to MDHHS and MOQC for all participants to use at their convenience. All participants are encouraged to discuss genetic testing with their healthcare providers and are referred to their treating clinicians for any medical follow-up. If participants choose to undergo clinical genetic testing, this will be coordinated/ordered by the participant’s clinical medical providers or by the participants themselves. No genetic tests will be ordered as part of the MiGHT study, nor will the study team be privy to results from tests completed by study participants. Provisions for post-trial care {30} All participants are encouraged to continue with medical care as prescribed by their healthcare providers and are directed to contact their treating clinicians about medical follow-up. If participants have questions about genetic test results, they will be encouraged to contact their medical team. There is no anticipated harm from participating in this study, and we plan to address any unforeseen care needs, reflected through participants’ comments and feedback, should they arise during the trial. Outcomes {12} The primary outcome measure is participants’ self-reported completion of genetic testing assessed at 6 months after randomization (yes/no). This will be assessed through the follow-up surveys administered via the MiGHT study web platform. Barriers to uptake of genetic testing Barriers are assessed using 23 items covering multiple domains, which broadly fit under emotional and self-efficacy domains. The emotional items were adapted from Thompson et al. to assess potential benefits for self/family (informing health behaviors) and potential harms (negative emotional reaction, confidentiality, family worry, guilt, stigma). The self-efficacy items assess participant confidence in pursuing genetic testing, acting on the information, and communicating results to relatives and were adapted from Katapodi et al. Each item is scored on a scale from 1 to 5 (1=not at all; 5=extremely). The mean score for each question is calculated to rank the barriers in order of importance. A higher mean score indicates greater importance of that specific barrier. Motivators to uptake of genetic testing Motivation for getting tested is measured with an adapted version of the Treatment Self-Regulation Questionnaire (TSRQ) by Levesque et al. Each item is scored on a scale from 1 to 5 (1=not at all; 5=extremely). The mean score for each question, across all participants who completed genetic testing, will be calculated in order to rank the motivators in importance for purposes of tailoring. A higher mean score indicates greater importance of that specific motivator. Questions related to motivators are only asked of participants who have not yet completed genetic testing. A minimal sketch of this per-item ranking is shown below.
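As a minimal sketch of the per-item summary described above (made-up 1–5 responses, not study data), the barrier ranking could be computed as follows; the same approach applies to the motivator items.

```python
import numpy as np

# Hypothetical 1-5 responses: rows are participants, columns are barrier items.
item_names = ["cost concerns", "privacy concerns", "family worry"]
responses = np.array([[4, 2, 3],
                      [5, 1, 4],
                      [3, 2, 2]])

# Mean score per item; a higher mean marks a more important barrier for tailoring.
means = responses.mean(axis=0)
ranked = sorted(zip(item_names, means.round(2)), key=lambda pair: pair[1], reverse=True)
print(ranked)
```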
Participant timeline {13} The participant timeline is shown in . Cancer patients receiving care at MOQC oncology practices will be sent a link to the FHHT by email or SMS/text 2 weeks prior to their upcoming clinic appointment. One month after completion of the FHHT, eligible individuals with diagnoses of breast, ovarian, prostate, endometrial, pancreatic, or colorectal cancers who meet NCCN criteria for genetic testing will be contacted by the study team by email or telephone to invite them to enroll in the clinical trial. After informed consent and completion of the baseline survey (T0), enrolled participants are randomly assigned via the MiGHT study web platform to Arm 1 (UC), Arm 2 (VGN), or Arm 3 (GHC). The intervention period lasts 6 months, and participants complete surveys to assess the effect of the interventions at 6 months post-intervention (T1) and at 12 months post-intervention (T2). Sample size {14} Based on data compiled by the state of Michigan’s MDHHS Cancer Genomics Best Practices branch, we expect that uptake of clinical genetic testing at 6 months post-baseline among participants in the UC group will be 20% or less. With 202 participants per intervention arm, we will have 82% power to detect a 14% difference between the VGN mobile-optimized website arm and the UC arm, and 99% power to detect a 20% difference between the GHC and UC arms. Accounting for attrition, we plan to enroll a total of 759 participants. An illustrative power calculation is sketched below. Recruitment {15} Patients meeting the criteria for genetic evaluation will be contacted by email or by telephone and invited to participate in the clinical trial. Study staff will make up to 20 contact attempts and will direct potential subjects to the MiGHT study web platform. Patients interested in participating in the study will have the opportunity to provide informed consent during a conversation with a study team member or by reviewing the informed consent document through the MiGHT study web platform.
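The power statement above can be checked against a generic normal-approximation calculation for comparing two independent proportions. The sketch below assumes 20% uptake under usual care, absolute increases of 14 and 20 percentage points, a two-sided alpha of 0.05, and no attrition; because the study team's exact method and assumptions are not specified here, it will not necessarily reproduce the reported figures.

```python
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample test of proportions (normal approximation)."""
    p_bar = (p1 + p2) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5                    # SE under H0
    se_alt = (p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm) ** 0.5   # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p2 - p1) - z_crit * se_null) / se_alt)

# Assumed scenarios only: usual care 20%, VGN 34%, GHC 40%, with 202 participants per arm.
print(round(power_two_proportions(0.20, 0.34, 202), 2))
print(round(power_two_proportions(0.20, 0.40, 202), 2))
```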
Sequence generation {16a} After consenting to the study, participants create login accounts on the MiGHT study web platform and complete the baseline survey. Participants are then block randomized to one of the three arms, stratified by cancer type (breast, ovarian/endometrial, colorectal, pancreatic, prostate), with blocks of size 3 or 6 selected at random within each stratum. Assignments are made from the randomized list during the enrollment process, after a participant completes the baseline survey. Concealment mechanism {16b} Our study biostatistician prepared the computer-generated random numbers. The MiGHT study team information technologists then integrated the randomization requirements into the automated functions of the MiGHT study web platform. Implementation {16c} Participants will be informed of the arm to which they have been randomized when they log in to the MiGHT study web platform using their assigned username and password. The content displayed will vary by arm as described in the Interventions section.
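As a minimal sketch of the stratified permuted-block scheme described above (the MiGHT platform's actual implementation and seeding are not specified in the protocol), assignment lists could be generated per stratum as follows.

```python
import random

ARMS = ["UC", "VGN", "GHC"]
STRATA = ["breast", "ovarian/endometrial", "colorectal", "pancreatic", "prostate"]

def stratum_schedule(n_blocks, seed=None):
    """Permuted-block assignment list for one cancer-type stratum,
    with block sizes of 3 or 6 chosen at random."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block_size = rng.choice([3, 6])
        block = ARMS * (block_size // 3)   # equal numbers of each arm within a block
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

# One pre-generated list per stratum; participants are assigned in order of enrollment.
schedules = {stratum: stratum_schedule(n_blocks=40, seed=i) for i, stratum in enumerate(STRATA)}
print(schedules["breast"][:6])
```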
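A minimal base R sketch of the stratified permuted-block scheme described above is shown below; it is illustrative only and does not reproduce the MiGHT platform's actual implementation.

```r
# Stratified block randomization: three arms, five cancer-type strata,
# block sizes of 3 or 6 chosen at random within each stratum.
set.seed(2022)
arms   <- c("UC", "VGN", "GHC")
strata <- c("breast", "ovarian/endometrial", "colorectal", "pancreatic", "prostate")

make_stratum_list <- function(n_target) {
  assignments <- character(0)
  while (length(assignments) < n_target) {
    block_size <- sample(c(3, 6), 1)              # pick a block length at random
    block <- sample(rep(arms, block_size / 3))    # permuted block, balanced across arms
    assignments <- c(assignments, block)
  }
  assignments[seq_len(n_target)]
}

# e.g., 60 assignment slots per stratum (the per-stratum target is illustrative)
rand_list <- setNames(lapply(strata, function(s) make_stratum_list(60)), strata)
head(rand_list[["breast"]])
```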
Who will be blinded {17a}
Neither study team members nor participants will be blinded to the randomization arm.
Procedure for unblinding if needed {17b}
N/A; this study is unblinded.
Plans for assessment and collection of outcomes {18a}
Data are collected through participant-completed surveys, notations from the Genetics Health Coaches, and keystrokes/clicks from the virtual genetics navigator platform. The MiGHT study web platform is used to administer and collect surveys at baseline (upon enrollment, T0) and at 6 and 12 months post-T0.
Plans to promote participant retention and complete follow-up {18b}
The MiGHT study web platform sends reminders (by email/SMS text) to participants encouraging them to complete the surveys. Participants receive incremental electronic gift card incentives to enroll and to remain in the study: $10 for completing the baseline survey (T0), $15 for the 6-month survey (T1), and $25 for the 12-month survey (T2). The research team also meets regularly to address issues that may impact participant retention.
Data management {19}
Data for this trial are collected through two sources: (1) extraction from the family health history tool (FHHT), which is used by MOQC practices to securely collect a comprehensive personal and family cancer history (HUM00180616); the extracted data are used to screen potential participants for eligibility; and (2) the MiGHT study platform, a secure web application with seamless integration with Qualtrics surveys. The platform collects data related to logins, page views, and paradata (clicks/keystrokes). All participants use the platform to complete surveys (baseline, 6- and 12-month follow-ups); participants randomized to the VGN enter data to indicate their progression toward (scheduled appointment) or uptake of genetic testing, whereas participants randomized to GHC have data collected via semi-structured reports submitted by the coach after each session. The GHC report includes close-ended questions and an unstructured field for a written summary of the discussion (topics covered, things to work on/consider, and any other necessary follow-ups).
Confidentiality {27}
All participant data will be housed in the MiGHT study platform and stored in HIPAA-compliant study databases hosted on secure, encrypted servers. No identifiable information about participants will be shared beyond the study team.
Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use {33}
N/A; no biological specimens are collected within this study.
Statistical methods for primary and secondary outcomes {20a}
Logistic regression will be used to compare the proportion of participants who complete genetic testing at 6 months in each of the two active intervention arms with UC. The model will include variables for intervention, cancer type, age, and time since diagnosis (over or under 1 year). As a secondary analysis, we will control for a potential "dosage" effect of the VGN and GHC treatments by including a covariate for dosage, defined as 0 for the UC group, the number of times the website is accessed (log-ins) for the mobile-optimized website group, or the number of health coach encounters completed (0, 1, 2) for the GHC group. Secondary analyses will investigate survey data; outcomes will be assessed using linear mixed models. Linear mixed models use all available measurements, allowing participants to have an unequal number of observations, and produce unbiased parameter estimates as long as the missing observations are missing at random (MAR). The model will include fixed effects for time, indicators for treatment (VGN, GHC, with UC as the reference category), treatment-by-time interactions, cancer type, age, and time since diagnosis (over or under 1 year). Random effects for the intercept and time, with an unstructured within-person correlation structure for the residual errors, will be specified. Model diagnostics will be used to determine the suitability of more parsimonious (e.g., autoregressive) correlation structures and nonlinear effects for time. Potential effect modifiers of interest will be entered as interaction terms with the intervention arm. Where interaction terms are significant, stratified analyses of outcomes will be performed. For example, if the impact of either the mobile-optimized website or MI counseling differs significantly by gender, we will stratify results for men and women.
Interim analyses {21b}
N/A; no interim analyses have been identified at this time.
Methods for additional analyses (e.g., subgroup analyses) {20b}
N/A; no additional analyses have been identified at this time.
Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c}
We closely follow the regulatory documentation and reporting processes that are strictly implemented by the NIH and U-M IRB for any non-adherence and deviations. All randomized individuals will be analyzed via an intent-to-treat approach. We will work to prevent missing data through the recruitment and retention strategies described above. The amount and patterns of missing data, and their associations with other variables (in particular with the intervention category), will be explored so that an appropriate statistical method for analysis can be used. If the data are missing at random (missing outcomes can be predicted from other observed variables), we will use multiple imputation to handle sporadic missing-at-random outcomes. Multiple imputation by chained equations will be used, with the number of imputations set to 100 times the fraction of incomplete cases. Results will be combined using Rubin's rules. In case of non-ignorable missingness (missing not at random), sensitivity analyses will be performed using pattern-mixture or selection models to evaluate the robustness of our conclusions to a range of sensible conditions.
Plans to give access to the full protocol, participant-level data, and statistical code {31c}
Study team members at the University of Michigan will have access to the deidentified final trial dataset. Long-term storage of de-identified data will be hosted on secured servers, per the data management plan approved by the IRB. Third parties interested in using the final dataset to study related topics may request access and permission from the multiple PIs. Permission is also required for any publication or dissemination effort. Permission will be granted on a case-by-case basis and with full consideration of the NIH and U-M IRB guidelines.
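To make the primary analysis concrete, the sketch below fits the logistic regression described in {20a} on simulated data; all variable names and simulated effect sizes are hypothetical. In the actual analysis, missing outcomes judged to be missing at random would first be handled with multiple imputation by chained equations (e.g., the mice package) and estimates pooled by Rubin's rules.

```r
# Simulated illustration of the primary analysis: uptake of genetic testing at
# 6 months modeled as a function of arm, cancer type, age, and time since diagnosis.
set.seed(42)
n <- 606
dat <- data.frame(
  arm      = factor(rep(c("UC", "VGN", "GHC"), each = n / 3),
                    levels = c("UC", "VGN", "GHC")),   # UC as reference
  cancer   = factor(sample(c("breast", "ovarian/endometrial", "colorectal",
                             "pancreatic", "prostate"), n, replace = TRUE)),
  age      = round(rnorm(n, mean = 60, sd = 10)),
  dx_lt1yr = rbinom(n, 1, 0.5)                         # diagnosed within the past year
)
p_test <- c(UC = 0.20, VGN = 0.34, GHC = 0.40)[as.character(dat$arm)]
dat$tested <- rbinom(n, 1, p_test)                     # simulated testing uptake

fit <- glm(tested ~ arm + cancer + age + dx_lt1yr, family = binomial, data = dat)
summary(fit)
exp(cbind(OR = coef(fit), confint.default(fit)))       # Wald odds ratios vs. UC
```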
Composition of the coordinating center and trial steering committee {5d}
The MiGHT study team meets weekly and includes the 3 principal investigators, co-investigators with expertise in motivational interviewing, clinical genetics, and behavioral interventions, and members of the Center for Health Communications Research overseeing the web platform and data management. At the study's inception, a Community and Patient Advisory Board was convened to maintain continuous stakeholder involvement throughout the study. The Advisory Board consists of eight patients/caregivers, two oncologists, two nurses, one administrator, three genetic counselors, and one representative from the MDHHS. The MiGHT study team and Advisory Board meet quarterly to discuss progress on the study and to obtain feedback on interventions and educational materials as part of our iterative design process.
Composition of the data monitoring committee, its role and reporting structure {21a}
The Data Safety and Monitoring Board (DSMB) for the MiGHT study includes 2 internal members (the University of Michigan Director of Population Sciences and a second biostatistician) and 3 external members (2 oncologists and one genetic counselor from 3 different academic medical centers outside Michigan). The study team meets with the DSMB every 6 months, or more frequently depending on the activity of the protocol. Topics for discussion include matters related to the safety of study participants (SAE/UaP reporting), validity and integrity of the data, enrollment rate relative to expectations, characteristics of participants, retention of participants, adherence to the protocol (potential or real protocol deviations), and data completeness. At these regular meetings, the protocol-specific Data and Safety Monitoring Report form will be completed and signed by the Principal Investigator or by one of the co-investigators. Data and Safety Monitoring Reports will be submitted to the University of Michigan Rogel Cancer Center Data and Safety Monitoring Committee (DSMC) every 6 months for independent review.
Adverse event reporting and harms {22}
The potential risks of this project are anticipated to be minimal, with important safeguards in place to protect the welfare of study participants. It is possible that participants may experience some emotional discomfort when thinking about family cancer diagnoses and potential implications for their relatives. However, at the beginning of the survey, we will stress that a participant can stop the surveys, VGN, or GHC sessions at any time if they feel uncomfortable. Our MiGHT study team includes certified genetic counselors, clinical and research psychologists, and practicing physicians who can provide advice and/or facilitate clinical interventions should the need arise. Any adverse events resulting from research procedures will be reported to the IRB and DSMB per institutional guidelines.
Frequency and plans for auditing trial conduct {23}
The principal investigators convene weekly meetings with the research team to review the progress of the study, recruitment and enrollment status, and identify any adverse events, which may be anticipated or unanticipated. Subject accruals, as well as data and safety monitoring summary reports, are submitted to the IRB as part of the annual renewal approval process and to the NIH with the annual progress report.
Plans for communicating important protocol amendments to relevant parties (e.g. trial participants, ethical committees) {25}
If amendments to the protocol are required, these will be reviewed by the principal investigators and submitted to the IRB for approval prior to implementation. A copy of the revised protocol will be shared with the research team. Any deviations from the protocol are fully documented, reported to the IRB, and updated in the clinical trial registry.
Dissemination plans {31a}
Trial results will be presented at local, national, and international meetings and disseminated through peer-reviewed publications. We have ongoing meetings with participating MOQC practices. We will present study results at national and international meetings to aid the dissemination of both positive and negative findings. If the MiGHT study interventions are effective, results could inform future MDHHS policy, resources, and tools regarding genetic testing and counseling for hereditary cancer syndromes. Study progress and findings will be recorded periodically on ClinicalTrials.gov .
The MiGHT study addresses important gaps in our ability to increase the uptake of genetic testing by testing two scalable interventions. The MiGHT interventions deliver an innovative approach for engaging with a state-wide network of oncology practices and their patients in personalized ways. Our MI-based tailored messages and coaching demonstrate how population-level interventions can still be patient-centered. The virtual genetics navigator explores how technology may be used to extend the reach of clinical genetics services. Patients are individuals with different values, health histories, and experiences. The interventions developed for the MiGHT study address key barriers and motivators. Over the next three years of the study, we have the opportunity to investigate two methods of delivering personalized genetics education that amplify individual motivators. By supporting patients most at risk for hereditary cancer with virtual tools and trained genetics health coaches, we hope to address critical workforce shortages, patient education needs, and disparities in the uptake of genetic testing .
Recruitment for this 3-arm RCT began February 2, 2022, and will continue until the cohort has accrued, which is anticipated by February 2, 2025 (36 months). We plan to complete the follow-up surveys by March 1, 2026 (48 months). The COVID-19 pandemic has placed a considerable strain on healthcare services, communities, and patients and has presented challenges to study roll-out and delayed participant recruitment. Many smaller oncology practices have limited resources and staff to ensure that their patients are aware of the study and to follow up. This has significantly delayed our research activities and publications. Study protocol dates: initial approval January 31, 2022; current version October 14, 2022.
Additional file 1.
Probiotics in the non-surgical treatment of periodontitis: a systematic review and network meta-analysis | 0b3c9a6e-468e-4b81-954d-a053c8da61b5 | 11481756 | Dentistry[mh] | Periodontitis, a chronic inflammatory oral condition, is a major global health concern, ranking as the second leading cause of tooth loss worldwide after dental caries . Approximately 50% of the global population experiences periodontitis, making it the seventh most prevalent disease globally . While periodontitis is multifactorial, the presence of dysbiotic biofilm is crucial for its progression . The primary treatment goal is to reduce harmful microorganisms and restore a healthy flora around teeth, and also to create a biologically compatible root surface for reattachment . Professional mechanical plaque removal (PMPR) are widely accepted methods for achieving this, but their effectiveness can vary due to factors like deep probing depths and difficult-to-reach areas . Probiotics and other adjuvants to subgingival instrumentation have been proposed to address these limitations. While the use of adjunctive antibiotics and other antimicrobials is established, the indications are specific. The primary concern associated with the use of antimicrobials is bacterial resistance . Therefore, there is growing interest in understanding the mechanism of action of probiotics in modifying the microflora of periodontal patients. The mechanisms underlying the potential efficacy of probiotics in periodontal disease are related to biological mechanisms . Probiotics compete with periodontal pathogens, modulating dysbiotic conditions . They can reduce the immunogenicity of the microflora and modulate immunological and inflammatory pathways, resulting in the reduction of the destructive inflammation characteristic of periodontitis . The ultimate outcome is immunological homeostasis, which could persist in the individual for an extended period. Indeed, probiotics can reduce periodontal disease pathogens by producing hydrogen peroxide . Additionally, while plaque is a necessary but not sufficient condition for the development of periodontal disease, probiotics also demonstrate the ability to prevent plaque formation by reducing saliva pH through the production of antioxidants, thereby inhibiting the growth of bacteria . Early systematic reviews (SRs) highlighted the short-term benefits of probiotics as an adjunct to subgingival instrumentation, but no specific regimen was deemed superior . Given the complexity of clinical decisions and the necessity for evidence-based practices, a clear understanding of the relative risks and benefits of probiotic therapy is crucial. To provide a comprehensive understanding of probiotic therapy, a network meta-analysis (NMA) model was implemented. This model, unlike classical meta-analysis, accommodates all available probiotic regimens, allowing for indirect comparisons between interventions not directly assessed in individual trials. This approach enhances accuracy, offers a coherent overview, and enables the ranking of interventions based on their relative risks and benefits. Therefore, this systematic review and network meta-analysis aimed to answer the following focused question: In adult patients with periodontitis and good general health , what is the effect of the combination of PMPR and different existing probiotics in comparison with PMPR alone on probing pocket depth (PPD) reduction and clinical attachment level (CAL) gain?
Protocol registration and reporting format
This SR and NMA adhered to PRISMA guidelines, including the updated version for network meta-analysis (Appendix ). It is registered in PROSPERO under trial No. CRD42021250678.
Eligibility criteria
Table shows the main inclusion criteria for the PICO question, including primary and secondary outcomes. The following were excluded: studies lacking essential data required for a meta-analysis; nonrandomized clinical studies, cohort studies, and case series; studies involving patients with systemic diseases (HIV/AIDS or diabetes) or intellectual disabilities; studies focused on forms of periodontal disease other than chronic periodontitis, patients in periodontal supportive therapy, or healthy volunteers; studies examining therapies other than probiotics; studies targeting children, adolescents, or the elderly population; and studies failing to meet the transitivity assumption.
Information source and searches
Three electronic databases and three grey literature platforms were searched up to November 2023: MEDLINE (via PubMed), LILACS, and the Cochrane Central Register of Controlled Trials (CENTRAL); and Google Scholar (with the first 300 references retrieved), ClinicalTrials.gov, and a database listing unpublished studies (DANS EASY Archive, available at 10.17026/dans-xtf-47w5), respectively. Detailed search strategies (Appendix ) were adopted, supplemented by screening of reference lists (using Research Rabbit, https://www.researchrabbit.ai/ ) and outreach to corresponding authors via email to inquire about additional research in the field or awareness of any ongoing projects.
Study selection and data extraction
Two reviewers rigorously and independently followed predetermined criteria for screening titles and abstracts for eligibility. Exclusion decisions were meticulously recorded (Table , Appendix ). Full-text reports were obtained for included studies and those lacking sufficient information. Data extraction covered study features, participant details, and outcome measures. Corresponding authors were contacted to address any needed clarifications. Discrepancies were resolved through discussion, with a third reviewer consulted if necessary.
Data items
Table shows the main variables sought in the included studies. Table presents data by group and outcomes (Appendix ).
Risk of bias within individual studies
The methodological quality of the included studies was assessed using the Cochrane Collaboration's risk of bias tool 5.1.0 . Two independent reviewers assigned 'low risk,' 'unclear risk,' or 'high risk' of bias to each question. Discrepancies were resolved through discussion or consultation with a third reviewer if needed. Cohen's kappa coefficient evaluated interrater agreement, with interpretations ranging from poor to almost perfect. Final scores were determined based on the percentage of 'low risk of bias' responses. Study bias was categorized as high (≤ 49%), moderate (50–69%), or low (≥ 70%).
Data synthesis
Summary treatment effect measures
Clinical parameters for continuous primary and secondary outcomes were derived from included studies. Mean differences (MD) and standard errors were presented for all studies. Effect sizes within and between groups at baseline and last follow-up were calculated using MedCalc ® Software Ltd (available at https://www.medcalc.org/calc/comparison_of_means.php ) (Appendix , Tables , , , and ).
Planned methods of analysis
Network meta-analyses, incorporating direct and indirect comparisons, were conducted using a frequentist weighted least-squares approach with the "netmeta" package in RStudio (RStudio, PBC, Boston, MA). Random-effects models were applied, categorizing results by follow-up period: ≤ 3 months (short-term) and > 3 months (long-term). A single common approach was used to assess heterogeneity within studies.
Assessment of inconsistency
Both local (SIDE method) and global (incoherence models) approaches were employed to assess inconsistency. The netsplit function separated indirect from direct evidence, while incoherence models assessed inconsistency across the entire network. No inconsistency was assumed for p > 0.05.
Confidence in the results of the network meta-analysis
The CINeMA (Confidence in Network Meta-Analysis) framework ( https://cinema.ispm.unibe.ch/ ) assessed confidence in results and certainty of evidence, covering within-study bias, reporting bias, indirectness, imprecision, heterogeneity, and incoherence. Confidence was graded as high, moderate, low, or very low (Appendix , Tables , and ).
Additional analyses
A sensitivity analysis was conducted, considering the duration of probiotic therapy, categorized as either ≤ 1 month or > 1 month.
Assessment of transitivity across comparisons
To evaluate transitivity, systematic information on patient and study characteristics was provided. This allowed the empirical assessment of the distribution of potential effect modifiers across trials, including periodontal disease severity, diagnostic criteria, smoking habits, and follow-up period.
Network geometry
Illustrated as spider web-like plots, network geometry portrays connections between studies employing diverse periodontal therapies. Plots, categorized by outcome, interpret geometry based on parameters such as patient count, study numbers, nodes, edges, percentage of strong edges, percentage of common comparators, density, and median thickness.
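As a concrete illustration of the interrater-agreement step described above, the short base R function below computes an unweighted Cohen's kappa for two reviewers' risk-of-bias judgments; the example ratings are invented for demonstration and are not the review's actual data.

```r
# Unweighted Cohen's kappa for two raters (illustrative data only).
cohen_kappa <- function(rater1, rater2) {
  lev <- sort(union(rater1, rater2))
  tab <- table(factor(rater1, levels = lev), factor(rater2, levels = lev))
  po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
  (po - pe) / (1 - pe)
}
r1 <- c("low", "low", "high", "unclear", "low", "high")
r2 <- c("low", "low", "high", "low",     "low", "high")
cohen_kappa(r1, r2)
```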
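The following is a minimal sketch (not the authors' code) of the frequentist random-effects network meta-analysis, inconsistency check, and P-score ranking described above, using the netmeta package; the contrast-level data set is purely illustrative, and the argument names assume netmeta version 2.0 or later.

```r
# Illustrative contrast-level data: one row per two-arm comparison, with the
# mean difference (TE, mm of PPD reduction; positive favors treat1), its
# standard error (seTE), treatment labels, and a study identifier.
library(netmeta)

ppd <- data.frame(
  studlab = c("Study 1", "Study 2", "Study 3", "Study 4"),
  TE      = c(1.48, 0.18, 0.60, 0.70),
  seTE    = c(0.12, 0.05, 0.10, 0.15),
  treat1  = c("SLreut", "SBlactDN", "SLreutDA", "SLreut"),
  treat2  = c("Splac",  "Splac",    "Splac",    "SLreutDA")
)

net <- netmeta(TE, seTE, treat1, treat2, studlab,
               data = ppd, sm = "MD",
               common = FALSE, random = TRUE,
               reference.group = "Splac")
summary(net)
netsplit(net)                        # separates direct from indirect evidence (SIDE-type check)
netrank(net, small.values = "bad")   # P-scores; larger reductions are better with this coding
forest(net)                          # forest plot against the reference group
```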
Study selection
A flow diagram in Appendix outlines the article screening process. From 2,599 articles, 104 underwent full-text review, with 33 meeting the inclusion criteria for qualitative assessment. Quantitative analysis included 28 studies for PPD, 26 for CAL, and 18 for BOP; other outcomes (PI and CFU) lacked sufficient data for network meta-analysis.
Study characteristics
The table in Appendix summarizes details from the 33 RCTs, featuring patients diagnosed with moderate to severe chronic periodontitis . All participants were untreated patients, and smoking habits varied, with one RCT exclusive to smokers and 22 involving non-smokers. Follow-up periods ranged from 1 month to 1 year, and probiotic therapy durations spanned from a single application to 6 months. The probiotics included Bifidobacterium , Bacillus , Lactobacillus , Streptococci , Saccharomyces , alone or in combination, administered through various routes such as gel, lozenges, paste, gum, powder, tablets, capsules, drops, mouthwash, sachets, and yogurt.
Summary of network geometry
In analyzing periodontal outcomes, PPD reduction data were extracted from 28 studies (85%), encompassing 1,056 participants , revealing a network diagram with 25 nodes and 27 edges, 11.11% strong edges, 32% common comparators, and a median network connection with a density of 0.09 and a mean thickness of 1.04 (Fig. A). For CAL, 26 RCTs [17–26,28−33,35–40,42−45] (79%) with 906 patients showcased a network diagram (Fig. B) containing 26 nodes and 29 edges, 10.34% strong edges, 38.46% common comparators, a median network connection with a density of 0.09, and a mean thickness of 0.90. Additionally, BOP data from 18 RCTs (55%) with 682 patients exhibited a network diagram (Fig. C) featuring 16 nodes and 16 edges, 18.75% strong edges, 31.25% common comparators, a median network connection with a density of 0.13, and a mean thickness of 1.13.
Risk of bias within included studies
Cohen's kappa for the 33 studies assessed using the Cochrane Collaboration's tool was 0.96 ( p = 0.018), indicating almost perfect agreement. No study was excluded after overall appraisal, but 30% had a high risk of bias, primarily in the selection, performance, and detection domains (Fig. ) (Appendix ).
Synthesis of results
PPD (short-term studies)
When combined with SRP, eight probiotic interventions resulted in significantly greater PPD reduction compared to Splac, with MDs ranging from 0.18 mm (95% confidence interval [CI]: 0.08–0.28, p = 0.0004, 95% prediction interval [PdI]: -0.4680; 0.8280) with SRP + Bifidobacterium lactis DN (SBlactDN) to 1.48 mm (95% CI: 1.24–1.72, p = 0.0001, 95% PdI: -0.0829; 3.0429) with SRP + Lactobacillus reuteri (SLreut) (Fig. A).
PPD (long-term studies)
When combined with SRP, Lactobacillus reuteri DA (SLreutDA) significantly reduced PPD, with an MD of 0.80 (95% CI: 0.30–1.29, p = 0.0016, 95% PdI: -5.4763; 7.0680) compared with Splac (Fig. B).
CAL (short-term studies)
Sixteen probiotic interventions combined with SRP caused significantly more CAL gain than Splac, with MDs ranging from 0.16 mm (95% CI: 0.05–0.27, p = 0.0050, 95% PdI: -0.5643; 0.8843) with SRP + Bifidobacterium lactis DN (SBlactDN) to 1.05 mm (95% CI: 1.03–1.07, p = 0.0001, 95% PdI: 0.9102; 1.1898) with SRP + Lactobacillus acidophilus , Lactobacillus rhamnosus , Bifidobacterium longum and Saccharomyces boulardii (SLacidLrhamBlongSboul) (Fig. A).
CAL (long-term studies)
When combined with SRP, two probiotic interventions caused significantly more CAL gain than Splac, with MDs ranging from 0.32 mm (95% CI: 0.13–0.51, p = 0.0011) with SRP + Lactobacillus reuteri single ( SLreutsingle ) to 0.43 mm (95% CI: 0.24–0.62, p = 0.0001) with SRP + Lactobacillus reuteri incremental ( SLreutincrem ) (Fig. B).
BOP (short-term studies)
Four probiotic combinations with SRP showed a significant reduction in BOP compared to Splac, with MDs ranging from 13.26% (95% CI: 5.45–21.07, p = 0.0009, 95% PdI: -48.8436; 75.3636) with SRP + Lactobacillus reuteri D (SLreutD) to 33.00% (95% CI: 23.62–42.38, p = 0.0001, 95% PdI: -37.6700; 103.6700) with SRP + Lactobacillus reuteri AA (SLreutAA) (Fig. A).
BOP (long-term studies)
When combined with SRP, four probiotic interventions caused significantly more BOP reduction than Splac, with MDs ranging from 5.02% (95% CI: 3.64–6.40, p = 0.0001) with SRP + Streptococcus oralis , uberis and rattus (SSoraluberrat) to 23.31% (95% CI: 18.50–28.12, p = 0.0001) with SLreutincrem (Fig. B).
Exploration for inconsistency
SIDE analysis revealed no inconsistency (0%) for any of the studied outcomes. Global inconsistency was not identified for any outcome either: PPD (Q = 0, p = NA, for both short- and long-term studies), CAL (Q = 0, p = NA, for both short- and long-term studies), and BOP (Q = 0, p = NA, for both short- and long-term studies) (Appendix ).
Results of additional analyses
To reduce heterogeneity (for PPD long-term studies and BOP short-term studies), we implemented the following strategies: removing studies with a high risk of bias and conducting a subgroup analysis based on the duration of probiotic therapy (with 1 month as the reference). Since there were no differences in the final results when excluding studies with a high risk of bias, we decided to retain them to increase the sample size and enhance the robustness of the results. For the other outcomes, heterogeneity was not significant (I² = 0%).
PPD (long-term studies)
Therapy ≤ 1 m
Considering the studies with a duration of therapy ≤ 1 month, the heterogeneity of the analysis decreased from considerable (I² = 96.1%, 95% CI: 91.7 to 98.2) to not important (0%). SRP + SLreutDA maintained clinical relevance (MD > 0.5 mm) (MD = 1.16, 95% CI: 1.06 to 1.25, p = 0.0001).
Therapy > 1 m
The heterogeneity of this outcome remained considerable (> 70%), even with the sensitivity analysis, and no clinically relevant alterations were observed compared to the initial network estimations.
BOP (short-term studies)
The heterogeneity of this outcome remained considerable (> 70%), even with the sensitivity analysis, and no clinically relevant alterations were observed compared to the initial network estimations.
Ranking of the interventions
For all the outcomes measured, the best probiotic regimen in terms of PPD and BOP reduction and CAL gain is Lactobacillus , specifically the species reuteri (PPD ≤ 3 m: P-score = 1; PPD > 3 m: P-score = 0.9363; CAL > 3 m: P-score = 0.9650; BOP ≤ 3 m: P-score = 0.9671; 0.9417; BOP > 3 m: P-score = 0.9863). This probiotic, when used as an adjuvant to SRP, appears to be the most effective over both short- and long-term follow-up periods, whether the therapy lasts less or more than one month. The combination of Lactobacillus with Bifidobacterium and Saccharomyces seems to have a better impact on CAL gain in studies with a follow-up ≤ 3 m (P-score = 0.9922).
This systematic review with network meta-analysis examined 33 RCTs to assess the efficacy of probiotics in enhancing clinical parameters (PPD, CAL, BOP). It represents the first comprehensive analysis of the diverse probiotic regimens proposed to date. Although most probiotics, in conjunction with SRP, showed improvements in PPD and CAL over Splac, certainty levels were very low at 92% and 71%, respectively (Appendix ). Our sensitivity analysis substantially reduced the heterogeneity of the outcomes measured, specifically for PPD, and indicates that the duration of the probiotic regimen can directly impact the success of the supplementary intervention. Our findings did not show sustained benefits beyond one month of probiotic therapy, suggesting no additional benefit from providing probiotic therapy for more than one month. Quantitative analysis for the secondary outcomes, PI and CFU, faced constraints due to network disconnection and high clinical data heterogeneity, respectively. However, the qualitative evaluation of the included studies measuring the effect of probiotics as an adjuvant to subgingival instrumentation on CFU counts follows the biological plausibility of the mechanism of action of probiotics on the oral microflora. Almost all the included studies show that groups receiving subgingival instrumentation supplemented with probiotic therapy experienced greater CFU reduction compared to control/placebo. This holds for the total bacterial load as well as for specific periodontal bacteria ( Aggregatibacter actinomycetemcomitans , Porphyromonas gingivalis , Prevotella intermedia , Fusobacterium nucleatum , Tannerella forsythia , and Treponema denticola ), demonstrating the action of probiotics in replacing dysbiotic microflora with symbiotic microflora. Despite theoretically distinct systemic and local routes of probiotic administration, we refrained from conducting a subgroup analysis based on the type of administration. This decision was influenced by the nature of local application of probiotics, such as dissolving tablets under the tongue, applying gels, sucking lozenges, and dissolving capsules in the mouth, where it is challenging to ensure that the patient does not inadvertently swallow the content, making it difficult to guarantee a completely local route of application. In our statistical analysis, we differentiated between two control arms (SRP; SRP + plac) based on the well-established placebo response, even in cases where outcomes are objectively measurable. While PPD and CAL measurements are objective, we acknowledge that improvement and response to periodontal therapy result from a collaborative effort between the dentist and the patient's commitment to oral healthcare. In periodontal diseases, the placebo effect is explained as a psychological response to the therapeutic context or treatment received, possibly associated with the patient's motivation to improve . However, in our results, we only observed significant and clinically relevant differences between SRP and SRP + plac for the BOP outcome (-21.51, 95% CI: -30.35 to -12.67, p = 0.001, 95% PdI: -89.1755; 46.1555), leading us to conclude that SRP alone is an inferior therapy compared to SRP + plac. Nevertheless, the measurement of this outcome is subjective compared to the objective measurements of PPD and CAL. For PPD, there were no differences between SRP and SRP + plac (-0.02, 95% CI: -0.21 to 0.17, p = 0.8402, 95% PdI: -1.2799; 1.2399).
For the CAL outcome, the differences were statistically significant between SRP and SRP + plac but not clinically relevant (0.27, 95% CI: 0.01 to 0.053, p = 0.0430, 95% PdI: -1.4251; 1.9651). These differences were only proven for the short-term studies, as the network loses connection for the long-term studies (to apply the network algorithm, it was mandatory to remove the arms of SRP-alone treatment). In 2020, Nikolaos Donos et al. published a systematic review evaluating the efficacy of host modulators combined with subgingival instrumentation in reducing probing pocket depth in patients with periodontitis. Based on five RCTs, that review concluded that treatment with probiotics resulted in a non-statistically significant benefit in PPD reduction of 0.38 mm. On the other hand, a study published by J. Li and colleagues supports the use of probiotics as an adjuvant to non-surgical periodontal treatment for the PPD (MD = -0.60, 95% CI: -0.9 to -0.3, p < 0.001) and CAL (MD = -0.52, 95% CI: -0.75 to -0.28, p < 0.001) outcomes. That study suggests that the administration of probiotics together with scaling and root planing can somewhat improve clinical outcomes in chronic periodontitis patients and reduce levels of periodontal pathogens. Our results, in addition to incorporating a network analysis, included a larger number of randomized controlled trials (33) and increased the overall study population from 193 and 647 patients in those reviews, respectively, to 1290 patients. This expansion provides new evidence on this topic and enhances statistical robustness. With SLreutDA, the reduction in PPD was statistically significant (MD = 1.16, 95% CI: 1.06 to 1.25, p = 0.001) and clinically relevant (difference > 0.5 mm), with results better than those reported by Donos et al. and consistent with those of Li et al. The confidence interval and the I² statistic (0%) lend confidence to this finding. The purpose of conducting a network meta-analysis in this field is to compare all available probiotic therapy regimens head-to-head and understand which one is the most effective as an adjunct to periodontal therapy. Although our study supports the clinical benefits of probiotics, there are still studies that do not demonstrate these benefits, with some authors advocating against the use of probiotics in the treatment of periodontal diseases. To ensure a homogeneous sample and meet the transitivity assumption, we focused on untreated patients diagnosed with periodontitis. Additionally, we adhered to the definition provided by Armitage (1999) to avoid excluding studies published before 2018. Given that chronic and aggressive periodontitis follow different disease courses, among studies published before 2018 we included only those on chronic periodontitis. Individual diagnostic criteria for chronic periodontitis were analyzed for each included study. This led to the exclusion of three studies due to missing patient selection information or different criteria for diagnosing the disease. This situation highlights the importance of adhering to FAIR principles in biomedical research to ensure data are findable, accessible, interoperable (using standardized vocabularies), and reusable. Smoking is a well-known risk factor for periodontitis, and it appears to have a greater impact on the CAL outcome, increasing the risk of periodontal attachment loss compared to non-smokers. Analyzing the subgroup of smokers with a network model created two subnetworks, thus preventing comparison.
The findings of this study align with the existing literature, since the interventions associated with significantly greater CAL gain are primarily from studies that excluded smokers. The included studies' follow-up periods ranged from 1 month to 1 year. For the presentation of results, we chose to divide the data by follow-up period: ≤ 3 months (short-term) and > 3 months (long-term), since periodontal patients, contrary to the general population, require a more frequent recall system. In practice, this follow-up interval is not fixed for all periodontal patients, as it varies greatly depending on the case; however, the 3-month follow-up period seems to be the most accepted recall interval for periodontal maintenance. The most favorable results for the measured outcomes were observed at 1, 3, and 6 months, indicating probiotic therapies' short- and long-term success. These results are novel, as evidence of probiotics' clinical efficacy at 6 months of follow-up had yet to be demonstrated in previously published papers. Considerable heterogeneity was observed in the clinical data, particularly for the PPD outcome. This variability can be attributed to differences in probiotic therapy duration (ranging from a single application to 1 year), various methods of administration, the use of a single probiotic versus combinations, and variations in clinical data collection methods. The primary limitation in probiotic research stems from the fact that these agents were initially developed for treating gastrointestinal disorders. As a result, there are currently no approved probiotics for use in dental practice, necessitating extensive clinical research to understand the specificities of these agents in the oral environment. It is precisely the duration and route of administration of the probiotics that pose the greatest challenge, as it is necessary to understand for how long probiotics need to be taken to prevent the pathogenic microflora from becoming dominant again. In our statistical analysis, we combined data from different administration protocols and a wide range of microorganisms. While we acknowledge this approach as a significant limitation of the study, it reflects the available evidence on probiotics that we were able to work with. For this reason, the results of this study should be interpreted conscientiously and with caution. Nevertheless, it is apparent that incorporating probiotics as an adjunctive therapy in periodontal treatment is safe, as evidenced by the absence of reported adverse effects in patients. Our results suggest that Lactobacillus, particularly the species reuteri, appears to be an effective adjuvant to subgingival instrumentation in improving clinical parameters, as it performed significantly and clinically better across all considered outcomes. This finding is consistent with the most recently published literature in this field. Although the results do not align with the recommendations outlined in the clinical guideline published by the EFP, the authors believe that this could be due to the additional evidence that has become available since the guideline's publication in 2020. Since then, twenty-nine additional RCTs have been published, with a substantial increase in the overall study population from 193 patients to 1290 patients.
Nevertheless, it is important to mention that the perceived effectiveness of Lactobacillus reuteri as the most effective probiotic may be influenced by its strong representation in RCTs, which is likely due to funding from pharmaceutical companies. This represents a limitation and underscores, once again, the need for caution when interpreting and extrapolating the results. The network diagrams for the three outcomes were categorized as 'star networks' due to the numerous protocols proposed in the literature, resulting in a low percentage of direct evidence. This reliance on indirect evidence is a limitation, cautioning against definitive conclusions. While Lactobacillus emerged as the most effective protocol for all outcomes in both short- and long-term studies, the findings are based on low-quality indirect evidence. Thus, further clinical validation in oral healthcare settings is crucial. Additionally, a decline in probiotic effectiveness between 3 and 6 months underlines the need for extended-duration research to evaluate sustained efficacy and inform more robust clinical practices. The analyzed evidence suggests that combining SRP with probiotic regimens as adjuvants to subgingival instrumentation is effective in improving clinical parameters (PPD and CAL). Lactobacillus reuteri seems to be the most comprehensive and effective of the studied probiotics. Although SRP + Lactobacillus reuteri ranked higher than regimens based on most other genera, these results must be cautiously interpreted due to the network's weak connection for the considered outcomes. This systematic review underscores the need for future research, advocating for long-term RCTs (minimum one year) to evaluate sustained probiotic effects. Standardizing administration routes, comparing single versus combined probiotic regimens, and enhancing clinical data collection methods in RCTs are crucial for improved comparability and reliability of results. Additionally, it would be interesting to include all possible therapies used as adjuncts to non-surgical periodontal treatment, such as antibiotics, ozonized gels, and hyaluronic acid, in a single analysis to determine which interventions work best.
This systematic review with network meta-analysis highlights the potential role of probiotics, particularly Lactobacillus reuteri , as an effective adjuvant to professional mechanical plaque removal in improving clinical parameters in periodontal therapy. The findings underscore the possibility of integrating probiotics into periodontal treatment protocols, especially in light of the growing issue of antimicrobial resistance, as probiotics do not seem to cause adverse effects. Furthermore, this review calls for further long-term RCTs to validate these results. Standardizing probiotic administration and addressing clinical data heterogeneity are essential for advancing the use of probiotics.
|
Gynecologist Supply Deserts Across the VA and in the Community | cd43eab6-21c0-48d8-b9eb-c495845af961 | 9481821 | Gynaecology[mh] | The Veterans Health Administration (VA) mission of providing comprehensive healthcare for women veterans includes gynecology care, such as advanced procedures not typically available in primary care (colposcopy, endometrial biopsy, hysterectomy, etc.). VA is obligated to provide access to specialty gynecology care for all enrolled women veterans across the country, even those residing in areas with scant healthcare resources and in rural areas. , VA benefits cover gynecology care at VA facilities, or as VA-purchased care via a non-VA gynecologist who is part of VA’s approved community-based provider network. Geographic access to gynecologists thus relies upon availability of a gynecologist at a woman veteran’s VA facility (or a proximate VA) and/or availability of a gynecologist in the VA’s community network. Historically, there has been geographic variation in VA gynecologist supply. , Although the VA has worked to hire more gynecologists, in 2015, 27% of VA healthcare systems lacked an onsite gynecologist. Thus, use of VA’s community network is fairly common for such services: among women veterans who received care through VA for a gender-specific condition, 24% received gynecology care in the community. However, not all regions of the country have adequate gynecologist supply , raising the possibility that in some areas, VA may not have a sufficient community-based gynecologist pool to draw upon for their community network. “Gynecologist supply deserts” would arise in areas of overlapping gaps, i.e., in geographic regions lacking both VA-based and community-based gynecology services. Such deserts have been identified for other types of VA services (e.g., primary care and mental health) , but have not been examined for gynecology care. Some subgroups of women (e.g., rural residents, ethnic/racial minorities, veterans getting primary care at VA satellite clinics) may be at particular risk for gaps in access to gynecologic care. , In gynecologist supply deserts, gynecologic healthcare needs may go unmet, potentially contributing to preventable morbidity related to missed diagnoses and delayed treatments, and potentially exacerbating disparities. Our objective was to first characterize the extent to which women veterans live in gynecologist supply deserts (i.e., have both inadequate community gynecologist supply and lack of local VA gynecologists), and second, examine residence in gynecologist supply deserts by individual and VA site characteristics. We also make a novel contribution to the literature that can inform policy and planning, by presenting gynecologist supply deserts geographically. Overview This cross-sectional descriptive study uses VA administrative data and information on county-level clinician supply to characterize veterans with reduced access to gynecology care, either because it is not available locally in the VA, in their community, or both. This work was approved by the VA Central IRB. Data This analysis uses fiscal year 2017 (FY17) data from both VA and publicly available sources. Information about women veterans is drawn from a VA database of patient-level sociodemographic characteristics (age, race/ethnicity, service-connected status, whether the patient is “new” to VA, urban/rural residence). 
It also indicates the VA site (a VA Medical Center or one of its satellite facilities) where the veteran received care most frequently, or, in the case of a tie, most recently (referred to hereafter as “homesite”). This source database, created by the VA Women’s Health Evaluation Initiative (WHEI) with the support of VA Office of Women’s Health (OWH; VA’s national program office overseeing women’s healthcare delivery nationwide), draws from multiple VA enrollment and utilization files. Other VA data come from the VA Women’s Assessment Tool for Comprehensive Health (WATCH) survey, which asks site representatives for information about services available at their site. This survey is administered by OWH to each VA site of primary care. In FY17, the WATCH response rate was 100% ( n = 1197 sites of primary care). This study uses responses about where the site refers women for specialty gynecology care (as opposed to reproductive health services that can be provided in primary care by a non-specialist, which are not explored in this study). Publicly available data include county-level clinician and population information available from the Health Resources and Services Administration’s Area Health Resource File (AHRF; 2018-2019 release, with data on calendar year 2017). To create a person-level analysis file, we linked veteran county of residence with county identifiers (5-digit Federal Information Processing System codes used to uniquely identify counties) in the AHRF, and, separately, veteran homesite to site identifiers in WATCH. Cohort The study cohort includes all women veterans nationally with at least one FY17 VA primary care visit (in a general primary care clinic and/or a women’s clinic) ( n = 417,287). We excluded women with missing data across any of the data sources ( n = 9805, 2.3%), the majority of whom were missing county codes or individual sociodemographic characteristics. This resulted in an analysis cohort of 407,482 veterans. Variables Community Gynecologist Supply To measure community gynecologist supply, we calculated the number of practicing obstetrician-gynecologists in a county per 10,000 women (veterans and non-veterans) in the county. Using recommended standards for adequate obstetrician-gynecologist availability, , we created two categories of county-level supply for main analyses: inadequate (≤ 2 per 10,000 women) and adequate (> 2 per 10,000 women). Women were assigned a level of community gynecologist supply corresponding to their county of residence (per VA Enrollment file data). VA Gynecologist Supply To measure VA gynecologist supply, we used responses from a WATCH survey question: “Where did women receiving care at this … clinic get specialty gynecology services most often (e.g., for abnormal Pap, abnormal bleeding, gynecology surgery)?” Using the survey response categories determined by the national VA Office of Women’s Health (Web Appendix Table ), responses for each woman’s homesite were first grouped into categories describing where gynecology services were most often received: (1) at this site, (2) at another VA site within 50 miles, (3) at another VA site beyond 50 miles, and (4) through VA-purchased care at any distance. Based on these categories, we coded VA gynecology services as “Local” if available at the site or within 50 miles, “Distant” if available in VA but beyond 50 miles, and “No VA gynecologist” if only available through VA-purchased care, regardless of the distance. 
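As a minimal illustration of the county-level classification just described (this is not the authors' actual code, and the field names are hypothetical stand-ins for the AHRF variables), the community supply measure reduces to a simple rate calculation and threshold:

```r
# Hypothetical AHRF-style extract: OB/GYN counts and female population by county (FIPS).
county <- data.frame(fips        = c("06037", "46071", "30055"),
                     obgyn_count = c(1500, 0, 1),
                     women_pop   = c(5100000, 1200, 9000))

# Rate per 10,000 women, dichotomised at the recommended threshold of 2 per 10,000.
county$obgyn_per_10k    <- county$obgyn_count / county$women_pop * 10000
county$community_supply <- ifelse(county$obgyn_per_10k > 2, "adequate", "inadequate")
county
```

Each woman veteran is then assigned the supply category of her county of residence, which is later combined with the VA gynecologist measure to flag gynecologist supply deserts.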
Gynecologist Supply Desert Women residing in a county characterized as inadequate supply and who lacked a local VA gynecologist are considered to live in a gynecologist supply desert. Patient-Level Characteristics Sociodemographic variables allowed for comparison of community gynecology care supply with VA gynecology care access within sub-populations of interest: age group, any service-connected disability rating status, rural residence, and race/ethnicity were measured using definitions developed for a series of reports on women veteran VA patients. We created an additional indicator, "new to VA" status, identifying whether the veteran had a primary care visit in FY17 but no evidence of VA use within the previous 8 years. Homesite-Level Characteristics We also examined sub-populations based on homesite-level characteristics: whether the homesite was a VA medical center (VAMC) versus another type of VA site (i.e., satellite clinics) and whether it had a separate women's clinic (i.e., a multidisciplinary clinic offering primary care, mental healthcare, and often other services like gynecology). Analyses We assessed frequencies of sociodemographic and site-level characteristics for the cohort overall, and stratified by residence in a gynecologist supply desert. We also examined the proportion of the cohort residing in a gynecologist supply desert overall, and within each sociodemographic and homesite-level characteristic.
We tested whether the proportion of veterans without local VA gynecologists were higher for those in counties with inadequate gynecologist supply compared to those in counties with adequate supply, using a chi-squared test. Conversely, we tested whether the proportion of veterans in inadequate-supply counties was higher for those without local VA gynecologists compared to those with local VA gynecologists. Finally, we depicted community and VA gynecologist supply data geographically, using four US county maps. The maps dichotomize the VA gynecologist measure to show counties where less than 50% versus 50% or more of the study cohort had a local VA gynecologist, and simultaneously show which counties had inadequate- versus adequate-supply of community gynecologists. Analyses were conducted in SAS® 9.2 (Cary, NC) and maps were created using ArcGIS by Esri (Redlands, CA). Description of Study Cohort As seen in Table , over a quarter of the cohort (26.7%) lived in rural areas, and nearly half were women of color (41.0%). Most were under 65 years old (87.2%), had a service-connected disability rating (67.9%), and/or were returning VA patients (96.5%). Just over half received their primary care at a non-VAMC satellite clinic (53.7%); similarly, just over half received primary care at a site without a women’s clinic (53.6%). Gynecologist Supply Deserts, Overall and by Sub-population Overall, 9% ( n = 36,936 women) of the study cohort lived in gynecologist supply deserts, both lacking a local VA gynecologist and living in an inadequate-supply county (Table ). Among them, 56% ( n = 20,622 women) did not have a distant VA gynecologist (i.e., they would need to rely on a community provider for gynecology care) (data not shown). The proportion of the women in each sub-population who lived in a gynecologist supply desert varied (4–24%), as shown in Figure . The sub-populations with the highest proportions were rural residents (24%), those who got their primary care at non-VAMC satellite clinics (13%), those who got their care at a site without a women’s clinic (13%), and those with American Indian or Alaska Native (12%), or white (12%) race. Comparison of VA Gynecologist to Community Gynecologist Supply Overall, most veterans (70.8%) had a local VA gynecologist, but a substantial group had either a distant (11.7%) or no VA gynecologist (17.5%) (data not shown). Veterans without a local VA gynecologist were more likely to live in inadequate-supply counties and vice versa. The percent of veterans without a local VA gynecologist was higher among veterans living in an inadequate-supply (versus adequate-supply) county (40.1% versus 26.0%) ( p < 0.001). Conversely, the percent of veterans who lived in an inadequate-supply county was higher among veterans with distant or no VA gynecologist (versus local VA gynecologist) (34.3%, 28.9% versus 19.1%) (data not shown). Geographic Distribution of Gynecologist Supply Deserts The map in Figure shows counties that could be characterized as gynecologist supply deserts, as they had inadequate supply of gynecologists and the majority of women veterans living in the county lacked a local VA gynecologist. There were 1130 counties (37% of all counties) meeting these criteria. They were located primarily in the Midwest and mountain west regions. 
Web Appendix includes maps of counties that are not gynecologist supply deserts, either due to their VA gynecologist supply only ( n = 816 counties), their community gynecologist supply only ( n = 534 counties), or both VA and community gynecologist supply ( n = 579 counties). We found that nearly 1 in 10 women veteran VA primary care patients lived in a gynecologist supply desert in 2017, with no local VA gynecologist and with inadequate county-level gynecologist supply. Subgroups at particular risk of residing in a gynecologist supply desert included those living in rural areas, women veterans with American Indian/Alaska Native or white race, as well as those receiving primary care at satellite clinics and those receiving primary care at sites without a women's clinic. VA policy entitles veterans lacking timely or nearby VA care to obtain care from community clinicians. However, this policy alone may not suffice for ensuring access in areas that also have scarcity of community gynecologists.
We identify nearly 37,000 women veterans who may face barriers to accessing gynecology services because they live in counties with inadequate gynecologist supply and also lack a local VA gynecologist. For them, greater use of VA-purchased care in the community may not help improve access, since their local communities also have insufficient gynecologist supply. These women may therefore lack timely access to reproductive healthcare when the need arises, which, in turn, may have detrimental health effects. Notably, high proportions of American Indian/Alaska Native women live in such gynecologist supply deserts. While some American Indian/Alaska Native women may have access to care through the Indian Health Service, limitations to gynecology care access for this group is a concern, and may exacerbate known health inequities and well-documented disparities in all-cause mortality. That women in inadequate-supply counties more commonly lack a local VA gynecologist is concerning but not surprising. From the institutional administrator perspective, some VA facilities in low-supply counties may see a low volume of women veterans, making it fiscally challenging to recruit and maintain an onsite gynecologist. From the physician perspective, the same factors that drive health workforce shortages in rural and other under-served areas , may also make physicians less inclined to be recruited to a VA facility in those areas. These may include limited job opportunities for spouses, lack of familiarity with rural lifestyles, and less enticing financial incentives. Workforce recruitment challenges in VA merit further inquiry. Given well-documented clinician shortages in rural areas, it may not be surprising that rural veterans, regardless of community supply, frequently lacked local VA gynecologists. More surprisingly however, over a quarter (28%) of urban veterans in low-supply areas also lacked a local VA gynecologist. While small urban areas (which may face issues similar to those faced in rural areas) count as urban, these findings suggest that future research on access to care in the VA should consider area clinician supply in addition to rural/urban status. While many women lived in gynecology deserts, it is notable that over half of the veterans residing in inadequate-supply counties did have local VA gynecology care: without VA, these women would likely have few alternative sources for gynecology care. This highlights the important role VA plays as a safety net provider in medically under-resourced areas. , Public funding for the VA allows it to maintain access points in areas less able to attract or sustain private healthcare providers, thereby creating vital healthcare infrastructure. In these areas, women’s ability to receive needed gynecology specialty services may benefit from VA policy to maximize outreach to women veterans who do not use VA and non-VA policy to address access to gynecologists for non-Veteran women who do not have access to VA infrastructure. This study echoes related analyses of medical deserts across VA and community providers. For example, nearly a quarter of veterans enrolled in the VA live in a county that was both a healthcare shortage area (as defined by the Health Resources and Services Administration) and did not have a VA site of care. Similarly, Ohl and colleagues point to high proportions of veterans who are eligible for VA-purchased care (by virtue of their proximity to the nearest VA site), who also live in healthcare shortage areas. 
The present study expands this inquiry through a focus on gynecology care in a national cohort of women veterans. Limitations to the community gynecologist supply measure include (1) this measure does not account for the fact that non-gynecologists (e.g., family physicians, nurse practitioners) sometimes provide at least limited gynecology services; (2) not all obstetrician-gynecologists counted in the community gynecologist measure offer the full spectrum of gynecology services, suggesting that service gaps could exist even where a gynecologist is available; (3) not all community gynecologists are part of the VA-purchased care networks, and those that are may not have appointment availability. Limitations to the local VA gynecologist measure include (1) its reliance on self-reported information from VA sites of care (introducing potential measurement error); (2) lack of VA gynecologist supply adjustment per-capita (i.e., to account for variation in number of women veterans served per VA site); (3) the threshold distance (50 miles) used to define “local” VA gynecologist may exceed that distance to her residence. Study findings have two parallel policy implications for VA. First, in gynecologist supply deserts, relying solely on VA-purchased care may not suffice to alleviate access issues. In these areas, attention to hiring VA gynecologists, extending service capacity via non-gynecologist clinicians (e.g., family medicine physicians, nurse practitioners) who have specialty gynecology skills, expansion of veteran transportation options to specialty gynecology locations, and innovation around staffing models (e.g., VA-based traveling clinicians, or tele-gynecology hubs) may offer solutions for local VAs with low volumes of women patients. A VA demonstration project of a provider-to-provider women’s health educational and virtual consultation program found the virtual format to be a promising modality for positively influencing patient care. That demonstration project subsequently demonstrated the feasibility of providing tele-gynecology consultations with that format. National organizations that provide widely accessible women’s healthcare via models in which physicians lead teams of advance practice providers (e.g., nurse practitioners and nurse midwives) could also serve as a model for VA in ensuring access to cost-effective gynecological services in rural areas. Second, in areas where lack of VA gynecologists correlates with greater gynecologist supply in the surrounding community, it is important that community-based gynecologists are included in the contracted networks used for VA-purchased care referrals, that VA monitors the quality of these community-based providers’ care, and that robust systems for care coordination between VA and non-VA clinicians are in place. Ideally such community-based gynecologists would be versed in distinct characteristics of women veterans, such as the high rates of military sexual trauma and PTSD in this population, which may necessitate coordination with VA-based mental health providers and attention to trauma-informed care. It is also important for women veterans to know they can identify in-network gynecologists by selecting “community providers (in VA’s network)” on the VA facility locator, with the caveat that the locator does not indicate whether a specific clinician is accepting new patients. 
Additionally, when a gynecologist is not available at a local VA, veterans may have the option of choosing between a purchased-care clinician and a gynecologist at a distant VA site, though more research is needed to understand how veterans experience this choice. As the number of women veterans in VA has grown, access to gynecology care has become even more salient. This study identified a large cohort of veterans in gynecologist supply deserts, who likely had scarce access to gynecology care both within and outside of VA. Remaining true to VA's mission to care for all veterans, regardless of gender and no matter how remote, will require continued attention to approaches that overcome gaps in gynecologist supply. |
Efficacy and mechanism of action of harmine derivative H-2-104 against | 27fd3947-8970-4416-ab24-da39628d7ea3 | 11912776 | Biochemistry[mh] | Cystic echinococcosis (CE) is a severe zoonotic parasitic disease caused by the larval stage of the tapeworm Echinococcus granulosus ( E. granulosus ) parasites in humans or animals , and it is classified as one of the most serious parasitic diseases in humans by the World Health Organization (WHO) and the Food and Agriculture Organization of the United Nations (FAO) . This disease is predominantly distributed in regions with developed animal husbandry . In China, CE is predominantly prevalent in pastoral and semi-pastoral areas of seven provinces and autonomous regions, namely Nei Mongol, Sichuan, Tibet, Gansu, Qinghai, Ningxia, and Xinjiang . According to statistics, the direct economic loss in western China due to echinococcosis amounts to RMB 3 billion yuan annually . CE has been listed by the World Health Organization as one of 17 neglected diseases that need to be controlled or eliminated by 2050 .The infection caused by E. granulosus primarily affects organs with rich blood supply, with the liver being the most common site . The presence of occupational lesions at the site of E. granulosus infection leads to discomfort, and such infections impact the structure and function of the liver, primarily disrupting its metabolic, detoxifying, and excretory capabilities. Additionally, there is a risk of secondary infections, which collectively impairs the patient’s quality of life . Currently, the treatment options for CE primarily encompass surgical intervention and pharmacological therapy . Surgical procedures are primarily indicated for patients with definitive surgical indications, whereas pharmacological therapy represents the sole approach for those ineligible for surgery . Albendazole (ABZ), belonging to the benzimidazole class of drugs, is currently the first-line clinical treatment for CE and is also one of the drugs recommended by the WHO for the management of CE . Despite its widespread use in the treatment of CE, ABZ exhibits poor solubility, leading to inadequate absorption after oral administration, and low drug concentrations in plasma and liver, and thus, only about one-third of patients achieve remission or cure . 20~40% of patients exhibit suboptimal treatment outcomes. Furthermore, ABZ acts to inhibit rather than expel the parasite, necessitating long-term administration , which in turn predisposes patients to adverse reactions such as nausea, vomiting, alopecia, renal impairment, mucosal damage, and even death . Consequently, it is urgent to develop novel and effective chemotherapeutic agents. Recent years have witnessed remarkable advancements in pharmacological research targeting Echinococcus granulosus infections, with particular emphasis on the identification of bioactive plant-derived compounds exhibiting anthelmintic properties from traditional medicinal botanicals. Notably, systematic phytochemical investigations have validated the therapeutic potential of several species including Zataria multiflora , Nigella sativa , Berberis vulgaris , Allium sativum and crocin . These findings not only corroborate the empirical knowledge of traditional healing systems but also lay the foundation for substantial progress in developing novel anti-echinococcosis therapeutics through modern pharmaceutical approaches. 
Peganum harmala L., a perennial herb belonging to the Zygophyllaceae family, has traditionally been used as a medicinal herb by ethnic groups such as the Uyghurs, Kazaks, and Mongolians. Peganum harmala L. contains a variety of chemical constituents, such as alkaloids, flavonoids, anthraquinones, triterpenoids, steroids, phenolic glycosides and volatile oils, of which the alkaloids are the most abundant, accounting for up to 2-6%. The alkaloids mainly include β-carbolines and quinolines, and the most extensively studied of these is the β-carboline alkaloid harmine. The medicinal parts include the seeds and the whole plant, with the major chemical constituent being harmine (HM) (Fig. A). Numerous studies have reported that HM exhibits a wide range of pharmacological activities, including antibacterial, antiparasitic, antitumor, antidepressant, and antidiabetic effects. Our research team has found that HM possesses significant anti-CE activity. However, studies have also shown that HM has strong neurotoxicity, capable of stimulating the central nervous system and causing adverse reactions or even life-threatening conditions in humans and animals, thereby limiting its clinical application. To reduce the neurotoxicity of HM, our research group previously carried out structural modification of HM to obtain the derivative H-2-104 (Fig. B). Preliminary studies have indicated that H-2-104 exhibits favorable absorption properties and high bioavailability, suggesting that H-2-104 may be a potential therapeutic agent for CE and warrants further in-depth investigation. Metabolomics, an emerging discipline developed in recent years, involves the analysis of metabolites within organisms to observe changes in metabolites under different physiological or pathological states, thereby elucidating the relationships between metabolites and the corresponding physiological or pathological conditions. Currently, metabolomics is widely applied in fields such as drug discovery and development, disease diagnosis, therapeutic efficacy assessment, toxicological evaluation, and biomarker discovery. In this study, we first investigated the inhibitory activity of H-2-104 against E. granulosus through in vitro and in vivo pharmacodynamic experiments. Subsequently, based on LC-MS/MS technology, we detected changes in serum and liver metabolites in mice infected with E. granulosus after the intervention, identified potential biomarkers and the metabolic pathways involved, and explored the mechanism of action of H-2-104 against CE. Our findings provide a potentially valuable reference for the development of anti-CE drugs.
Chemicals and reagents ABZ (purity > 98%) was purchased from Sigma-Aldrich (St. Louis, USA). HM and H-2-104 (purity > 98%) were synthesized by Xinjiang Huashidan Pharmaceutical Co., Ltd. Unless stated otherwise, all culture reagents were purchased from Gibco (Wisent, Canada).
Eighty female Kunming(KM) mice, aged 6–8 weeks, of Specific Pathogen Free grade with a body weight of 20 ± 2 g, were purchased from the Experimental Animal Center of Xinjiang Medical University. The experimental animal production license number is SYXK (Xin) 2018-0003 The animals were housed in a barrier environment at the Animal Experimentation Center of Xinjiang Medical University. China. This experiment was approved by the Experimental Animal Ethics Committee of Xinjiang Medical University with the approval number: IACUC-20170420-04. Parasites collection and culture Protoscoleces (PSCs) were isolated from hepatic cysts of naturally infected sheep slaughtered at the Hualing Slaughterhouse in Urumqi, Xinjiang. The method for collecting and culturing PSCs refers to previous studies . Briefly, cyst fluid was aspirated from the hepatic cysts, and after natural sedimentation, the supernatant was discarded to collect the PSCs. The PSCs were washed five times with sterile saline and then digested with 1% pepsin (pH = 2.0) for 30 min. After filtration through a sieve, they were washed with saline-containing antibiotics until the viability of the PSCs reached over 98% . Subsequently, the PSCs were transferred to a 25 cm² cell culture flask containing RPMI 1640 medium supplemented with 2% penicillin (100 U/mL) and streptomycin (100 µg/mL), and 10% fetal bovine serum. The viability of the PSCs was assessed using 1% eosin staining, with a requirement of greater than 95% viability. The eligible PSCs were then cultured in an incubator at 37 °C with 5% CO 2 . Effect of H-2-104 on PSCs in vitro PSCs were subjected to adaptive culturing for 48 h, and their viability was assessed. PSCs with viability greater than 95% were added to a 96-well plate, with approximately 200 PSCs per well. 0.1% dimethyl sulfoxide (DMSO, Amresco, USA) group was used as the negative control; The HM and H-2-104 groups were dissolved in DMSO and 2 µL was added to wells to make the final concentrations of 6.25, 12.5, 25, 50, 100 and 200 µM, respectively. PSCs were collected at 1, 2, 3, 4, and 5 days, respectively.The survival rate of PSCs in each group was detected by eosin staining method. The experiment was performed in triplicate. Additionally, changes in the ultrastructure of PSCs were also observed using a scanning electron microscope (SEM) (JSM-6390LV, JEOL Ltd., Tokyo, Japan). Subacute toxicity study of H-2-104 in mice The subacute toxicity study in mice was performed according to OECD Guideline No. 407 . Fifty KM mice were randomly divided into five groups of ten animals each (five female and five male): (1) control group, given 0.5% CMC-Na; (2)HM group, given 100 mg/kg/day HM suspension; (3) H-2-104 groups (low, medium and high), given 50, 100 or 200 mg/kg/day H-2-104 suspension. After 30 days of intragastric administration, blood was collected from anesthetized mice. The biochemical parameters were measured, and liver, kidney and brain tissues were collected for pathological examination. Effect of H-2-104 on E. granulosus -infected mice in vivo A 0.2 mL suspension of normal saline containing 3000 PSCs was injected into mice via intraperitoneal injection . 8 months after infection, mice successfully infected were randomly divided into five groups (6 mice/group): (1)control group and model group, 0.5% carboxymethyl cellulose (CMC-Na) solution; (2) positive drug group, given 50 mg/kg/day ABZ in 0.5% CMC-Na solution; (3) H-2-104 groups (low, medium and high), given 25, 50 or 100 mg/kg/day H-2-104 in distilled water. 
Oral administration was carried out for 30 days . At the end of treatment, all animals were anesthetized with isoflurane to collect blood, and euthanized by cervical dislocation to prevent pain. Then the mice were dissected to isolate the livers and cysts, and the cyst inhibition rate was calculated as follows: [(mean cysts weight of the model group) - (mean cysts weight of the intervention group)] / (mean cysts weight of the model group) × 100%. Additionally, cysts were observed by transmission electron microscopy (TEM) (JEM1230, JEOL company, Japan) as described previously; and the liver tissues were collected for histopathological observation. Furthermore, metabolomic analysis was conducted on the liver tissues. Metabolomics analysis 100 µL of serum was placed in a clean EP tube, and 400 µL of extraction solution containing isotope-labeled internal standards (methanol: acetonitrile, 1:1 (v/v)) were added. The mixture was vortexed for 30 s and then sonicated in an ice-water bath for 10 min. After allowing the sample to stand at -40 °C for an hour, it was centrifuged at 13,800 × g for 15 min at 4 °C. The supernatant was transferred to a clean sample vial for analysis. Additionally, 25 mg of liver tissue sample were weighed into a clean EP tube, and homogenization beads were added. 500 µL of extraction solution containing isotope-labeled internal standards (methanol: acetonitrile: water; 2:2:1(v/v/v)) were then added, and the mixture was vortexed for 30 s. The sample was homogenized in a homogenizer (JXFSTPRP-24, shanghaijingye, Shanghai, China) at 35 Hz for four minutes and then transferred to an ice-water bath for five minutes of sonication. This homogenization step was repeated three times. Following this, the sample was incubated at -40 °C for an hour and then centrifuged at 13,800 × g for 15 min at 4 °C. The supernatant was transferred to a clean sample vial for analysis. LC-MS/MS analyses were performed using a high-performance liquid chromatography (HPLC) system (Vanquish, Thermo Fisher Scientific, Waltham, USA, and Bruker BioSpin, Karlsruhe, Germany). The injection volume of the plasma and liver was 2 µL. Data were acquired using an Orbitrap Exploris 120 mass spectrometer (Thermo Fisher Scientific, Waltham, USA, and Bruker BioSpin, Karlsruhe, Germany). Equipped with an electrospray ionization (ESI) source, operating in both positive and negative ion modes. The spray voltage was set to 3.8 kV for positive ions and − 3.4 kV for negative ions. The sheath gas flow rate was 50 arb, and the auxiliary gas flow rate was 15 arb. The capillary temperature was maintained at 320℃. The first-stage resolution was set to 60,000, and the second-stage resolution was set to 15,000. The raw data were converted to the mzXML format using ProteoWizard and processed with an in-house program, which was developed using R and based on XCMS, for peak detection, extraction, alignment, and integration. The metabolites were identified by accuracy mass and MS/MS data which were matched with HMDB ( http://www.hmdb.ca ) , massbank ( http://www.massbank.jp/ ) , KEGG ( https://www.genome.jp/kegg/ ) , LipidMaps ( http://www.lipidmaps.org ) , mzcloud ( https://www.mzcloud.org ) and the metabolite database bulid by Panomix Biomedical Tech Co., Ltd. (Suzhou, China). Two different multivariate statistical analysis models, unsupervised and supervised, were applied to discriminate the groups (PCA; PLS-DA; OPLS-DA) by R ropls (v1.22.0) package . 
The statistical significance (P value) was obtained by statistical testing between groups. Potential biomarker metabolites were then screened by combining the P value, the VIP (variable importance in projection from the OPLS-DA model) and the FC (fold change between groups). By default, metabolites with P < 0.05 and VIP > 1 were considered significantly differentially expressed. Differential metabolites were subjected to pathway analysis with MetaboAnalyst , which combines pathway enrichment analysis with pathway topology analysis. The identified metabolites were then mapped to KEGG pathways for biological interpretation of higher-level systemic functions, and the metabolites and corresponding pathways were visualized using the KEGG Mapper tool. The data were analyzed on the BioDeep Platform ( https://www.biodeep.cn ). Statistical data analysis SPSS 26.0 software (IBM Corporation, Armonk, USA) was used to analyze the data. Data are expressed as mean ± standard deviation (SD). In all cases, P < 0.05 was considered statistically significant. Prism 8 software (GraphPad, USA) was used to create graphs.
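As a minimal, hypothetical sketch of the screening rule described above (this is not the study's in-house R pipeline, and the metabolite names and values are invented for illustration), differential metabolites can be flagged by jointly filtering on the OPLS-DA VIP and the between-group P value:

```r
# Hypothetical metabolite table; names and values are illustrative only.
metab <- data.frame(metabolite = c("glycine", "taurine", "LPC 18:0", "succinate"),
                    vip        = c(1.8, 0.7, 2.3, 1.1),
                    p_value    = c(0.003, 0.210, 0.012, 0.090),
                    log2_fc    = c(1.2, -0.1, -1.6, 0.4))

# Screening rule stated in the text: VIP > 1 and P < 0.05.
differential <- subset(metab, vip > 1 & p_value < 0.05)
differential   # candidates carried forward to KEGG / MetaboAnalyst pathway analysis
```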
Protoscoleces (PSCs) were isolated from hepatic cysts of naturally infected sheep slaughtered at the Hualing Slaughterhouse in Urumqi, Xinjiang. The method for collecting and culturing PSCs refers to previous studies . Briefly, cyst fluid was aspirated from the hepatic cysts, and after natural sedimentation, the supernatant was discarded to collect the PSCs. The PSCs were washed five times with sterile saline and then digested with 1% pepsin (pH = 2.0) for 30 min. After filtration through a sieve, they were washed with saline-containing antibiotics until the viability of the PSCs reached over 98% . Subsequently, the PSCs were transferred to a 25 cm² cell culture flask containing RPMI 1640 medium supplemented with 2% penicillin (100 U/mL) and streptomycin (100 µg/mL), and 10% fetal bovine serum. The viability of the PSCs was assessed using 1% eosin staining, with a requirement of greater than 95% viability. The eligible PSCs were then cultured in an incubator at 37 °C with 5% CO 2 .
PSCs were subjected to adaptive culturing for 48 h, and their viability was assessed. PSCs with viability greater than 95% were added to a 96-well plate, with approximately 200 PSCs per well. A 0.1% dimethyl sulfoxide (DMSO, Amresco, USA) group served as the negative control. HM and H-2-104 were dissolved in DMSO, and 2 µL was added to the wells to give final concentrations of 6.25, 12.5, 25, 50, 100 and 200 µM, respectively. PSCs were collected at 1, 2, 3, 4, and 5 days, and the survival rate of PSCs in each group was determined by eosin staining. The experiment was performed in triplicate. Additionally, changes in the ultrastructure of the PSCs were observed using a scanning electron microscope (SEM) (JSM-6390LV, JEOL Ltd., Tokyo, Japan).
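To make the dosing and viability scoring concrete, the short sketch below works out the DMSO stock concentrations implied by adding 2 µL per well to reach the stated final concentrations, together with an eosin-exclusion survival rate. The 200 µL working volume per well and the example live/dead counts are assumptions for illustration only; they are not stated in the text.

```python
# Helper calculations for the in vitro dosing and the eosin-exclusion survival rate.
# Assumed (not stated above): a 200 µL working volume per well and the example
# live/dead counts.  The 2 µL spike and the target concentrations come from the
# protocol; the small volume added by the spike itself is ignored.

WELL_VOLUME_UL = 200.0   # assumed working volume per well (µL)
SPIKE_VOLUME_UL = 2.0    # DMSO stock volume added per well (µL)

for c_final_uM in (6.25, 12.5, 25, 50, 100, 200):
    # c_stock * spike volume ≈ c_final * well volume
    c_stock_mM = c_final_uM * WELL_VOLUME_UL / SPIKE_VOLUME_UL / 1000.0
    print(f"target {c_final_uM:6.2f} µM -> DMSO stock ≈ {c_stock_mM:.2f} mM")

def survival_rate(unstained: int, stained: int) -> float:
    """Eosin exclusion: unstained PSCs are scored as alive."""
    total = unstained + stained
    return 100.0 * unstained / total if total else 0.0

# Example well: 150 unstained (live) and 50 eosin-stained (dead) PSCs.
print(f"survival = {survival_rate(150, 50):.1f}%")
```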
The subacute toxicity study in mice was performed according to OECD Guideline No. 407. Fifty KM mice were randomly divided into five groups of ten animals each (five female and five male): (1) control group, given 0.5% CMC-Na; (2) HM group, given 100 mg/kg/day HM suspension; (3) H-2-104 groups (low, medium and high), given 50, 100 or 200 mg/kg/day H-2-104 suspension. After 30 days of intragastric administration, blood was collected from anesthetized mice. Biochemical parameters were measured, and liver, kidney and brain tissues were collected for pathological examination.
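For orientation, converting a mg/kg/day dose into a daily gavage volume depends on body weight and suspension concentration, neither of which is specified above. The sketch below is therefore only illustrative; the 25 g body weight and the 10 mg/mL suspension concentration are assumptions.

```python
# Converting a mg/kg/day dose into a daily gavage volume.  The 25 g body weight
# and the 10 mg/mL suspension concentration are assumptions for illustration;
# only the dose levels come from the protocol above.

def gavage_volume_ml(dose_mg_per_kg: float, body_weight_g: float,
                     suspension_mg_per_ml: float) -> float:
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0
    return dose_mg / suspension_mg_per_ml

for dose in (50, 100, 200):          # H-2-104 dose groups (mg/kg/day)
    vol_ml = gavage_volume_ml(dose, body_weight_g=25.0, suspension_mg_per_ml=10.0)
    print(f"{dose:>3} mg/kg/day -> {vol_ml * 1000:.0f} µL per 25 g mouse")
```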
In vivo experiments in E. granulosus-infected mice: a 0.2 mL suspension of normal saline containing 3000 PSCs was injected into mice via intraperitoneal injection. Eight months after infection, successfully infected mice were randomly divided into five groups (6 mice/group): (1) control group and model group, given 0.5% carboxymethyl cellulose (CMC-Na) solution; (2) positive drug group, given 50 mg/kg/day ABZ in 0.5% CMC-Na solution; (3) H-2-104 groups (low, medium and high), given 25, 50 or 100 mg/kg/day H-2-104 in distilled water. Oral administration was carried out for 30 days. At the end of treatment, all animals were anesthetized with isoflurane to collect blood and then euthanized by cervical dislocation to prevent pain. The mice were dissected to isolate the livers and cysts, and the cyst inhibition rate was calculated as follows: [(mean cyst weight of the model group) - (mean cyst weight of the intervention group)] / (mean cyst weight of the model group) × 100%. Additionally, cysts were observed by transmission electron microscopy (TEM) (JEM1230, JEOL, Japan) as described previously, and the liver tissues were collected for histopathological observation. Furthermore, metabolomic analysis was conducted on the liver tissues.
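The cyst inhibition rate defined above is a simple comparison of group means. The following sketch illustrates the calculation with hypothetical cyst weights; only the formula itself comes from the protocol.

```python
# Worked example of the cyst inhibition rate defined above:
# (mean cyst weight, model group - mean cyst weight, intervention group)
#   / (mean cyst weight, model group) * 100%
# The cyst weights below are hypothetical and serve only to illustrate the formula.

from statistics import mean

model_cyst_weights_g = [4.8, 5.6, 5.1, 4.9, 5.4, 5.2]       # model group (n = 6)
treated_cyst_weights_g = [1.9, 2.4, 2.1, 1.7, 2.6, 2.0]     # one intervention group (n = 6)

def inhibition_rate(model_weights, treated_weights):
    m_model, m_treated = mean(model_weights), mean(treated_weights)
    return (m_model - m_treated) / m_model * 100.0

rate = inhibition_rate(model_cyst_weights_g, treated_cyst_weights_g)
print(f"cyst inhibition rate = {rate:.1f}%")
```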
For serum, 100 µL was placed in a clean EP tube, and 400 µL of extraction solution containing isotope-labeled internal standards (methanol:acetonitrile, 1:1 (v/v)) was added. The mixture was vortexed for 30 s and then sonicated in an ice-water bath for 10 min. After standing at -40 °C for an hour, the sample was centrifuged at 13,800 × g for 15 min at 4 °C, and the supernatant was transferred to a clean sample vial for analysis. For liver, 25 mg of tissue was weighed into a clean EP tube, and homogenization beads were added. 500 µL of extraction solution containing isotope-labeled internal standards (methanol:acetonitrile:water, 2:2:1 (v/v/v)) was then added, and the mixture was vortexed for 30 s. The sample was homogenized (JXFSTPRP-24, Shanghaijingye, Shanghai, China) at 35 Hz for four minutes and then sonicated in an ice-water bath for five minutes; this homogenization step was repeated three times. The sample was then incubated at -40 °C for an hour and centrifuged at 13,800 × g for 15 min at 4 °C, and the supernatant was transferred to a clean sample vial for analysis. LC-MS/MS analyses were performed using a high-performance liquid chromatography (HPLC) system (Vanquish, Thermo Fisher Scientific, Waltham, USA, and Bruker BioSpin, Karlsruhe, Germany). The injection volume for the serum and liver extracts was 2 µL. Data were acquired on an Orbitrap Exploris 120 mass spectrometer (Thermo Fisher Scientific, Waltham, USA, and Bruker BioSpin, Karlsruhe, Germany) equipped with an electrospray ionization (ESI) source operating in both positive and negative ion modes. The spray voltage was set to 3.8 kV for positive ions and -3.4 kV for negative ions, the sheath gas flow rate was 50 arb, the auxiliary gas flow rate was 15 arb, and the capillary temperature was maintained at 320 °C. The MS1 resolution was set to 60,000 and the MS2 resolution to 15,000. The raw data were converted to mzXML format using ProteoWizard and processed with an in-house program, developed in R and based on XCMS, for peak detection, extraction, alignment, and integration. Metabolites were identified from accurate mass and MS/MS data matched against HMDB ( http://www.hmdb.ca ), MassBank ( http://www.massbank.jp/ ), KEGG ( https://www.genome.jp/kegg/ ), LipidMaps ( http://www.lipidmaps.org ), mzCloud ( https://www.mzcloud.org ) and the metabolite database built by Panomix Biomedical Tech Co., Ltd. (Suzhou, China). Unsupervised and supervised multivariate models (PCA, PLS-DA, OPLS-DA) were applied to discriminate the groups using the R ropls (v1.22.0) package. Statistical significance (P values) was assessed with between-group statistical tests. P values, VIP (variable importance in the projection from OPLS-DA) and FC (fold change between groups) were then combined to screen biomarker metabolites; by default, metabolites with P < 0.05 and VIP > 1 were considered significantly differentially expressed. Differential metabolites were subjected to pathway analysis with MetaboAnalyst, which combines pathway enrichment analysis with pathway topology analysis. The identified metabolites were then mapped to KEGG pathways for biological interpretation of higher-level systemic functions, and the metabolites and corresponding pathways were visualized with the KEGG Mapper tool.
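The screening step combines a univariate P value, a fold change, and the OPLS-DA VIP. The sketch below re-implements that idea on simulated data using scikit-learn's PLS regression as a stand-in for the ropls OPLS-DA model actually used; the VIP formula shown is the standard one and is included only to illustrate the screening logic.

```python
# Illustrative re-implementation of the biomarker screening step: per-metabolite
# P value (Student's t-test), fold change, and VIP from a supervised latent-variable
# model.  The study used OPLS-DA in the R ropls package; scikit-learn's PLSRegression
# is only a stand-in here, and the data are simulated.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per_group, n_metabolites = 6, 50
control = rng.lognormal(mean=1.0, sigma=0.3, size=(n_per_group, n_metabolites))
model = rng.lognormal(mean=1.0, sigma=0.3, size=(n_per_group, n_metabolites))
model[:, :5] *= 2.5                       # spike a few "differential" metabolites

X = np.vstack([control, model])
y = np.array([0.0] * n_per_group + [1.0] * n_per_group)
pls = PLSRegression(n_components=2).fit(X, y)

def vip(pls_model):
    """Standard VIP (variable importance in the projection) for a fitted PLS model."""
    t = pls_model.x_scores_               # (n_samples, n_components)
    w = pls_model.x_weights_              # (n_features, n_components)
    q = pls_model.y_loadings_             # (n_targets, n_components)
    p = w.shape[0]
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)   # y-variance per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2) @ ss / ss.sum())

p_values = ttest_ind(model, control, axis=0).pvalue
fold_change = model.mean(axis=0) / control.mean(axis=0)   # an FC cut-off can also be applied
vip_scores = vip(pls)

hits = np.where((p_values < 0.05) & (vip_scores > 1))[0]
print(f"{hits.size} metabolites pass P < 0.05 and VIP > 1")
```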
The data were analyzed on the BioDeep Platform ( https://www.biodeep.cn ).
SPSS 26.0 software (IBM Corporation, Armonk, USA) was used to analyze the data. Data are expressed as mean ± standard deviation (SD). In all cases, P < 0.05 was considered statistically significant. Prism 8 software (GraphPad, USA) was used to create graphs.
H-2-104 significantly inhibited PSC activity in vitro. The activity of PSCs after intervention with H-2-104 is shown in Fig. A. PSC activity was inhibited to varying degrees after drug administration, with inhibition in the H-2-104 group being significantly superior to that in the parent-compound HM group. On the third day, the survival rate of PSCs in the 200 µM H-2-104 group was 0%, significantly lower than that in the HM group. The 50% lethal concentration (LC50) of H-2-104 was 79.82 µM, significantly lower than that of HM at 109.1 µM (Fig. B). To investigate the effects of H-2-104 on PSC ultrastructure, SEM was used to observe the changes (Fig. C). After 48 h of intervention, the morphological structure of PSCs in the DMSO group was intact, with a full body and neatly arranged microvilli. In the HM group, the surface of the PSCs was concave and the microvilli were disordered; in the H-2-104 group, the surface of the PSCs was wrinkled, the hooks and microtriches were lost, and the body was seriously damaged.
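The reported LC50 values would typically be derived by fitting a dose-response curve to the survival data; the fitting software is not specified above. One plausible approach is sketched below with a four-parameter logistic fit; the survival percentages are hypothetical.

```python
# One plausible way to estimate an LC50: fit a four-parameter logistic curve to
# survival (%) versus log10(concentration) and read off the inflection point.
# The survival values below are hypothetical; the fitting software used for the
# reported LC50 values (79.82 µM and 109.1 µM) is not stated in the text.

import numpy as np
from scipy.optimize import curve_fit

conc_uM = np.array([6.25, 12.5, 25, 50, 100, 200])
survival_pct = np.array([95, 88, 74, 58, 41, 12])     # hypothetical day-3 survival

def four_pl(logc, bottom, top, log_lc50, hill):
    return bottom + (top - bottom) / (1 + 10 ** ((logc - log_lc50) * hill))

popt, _ = curve_fit(four_pl, np.log10(conc_uM), survival_pct,
                    p0=[0.0, 100.0, np.log10(80.0), 1.0])
print(f"estimated LC50 ≈ {10 ** popt[2]:.1f} µM (Hill slope {popt[3]:.2f})")
```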
In the subchronic toxicity study evaluating the safety of H-2-104, observations of fur coloration, behavioural patterns, dietary intake, and defecation status throughout the treatment period revealed no abnormalities in any experimental group. Hematological parameters were quantitatively assessed, with the corresponding results documented in Table . WBC, Neu and Lym levels in the HM group were significantly higher than those in the control group (P < 0.01), whereas there were no significant differences in routine blood parameters between any of the H-2-104 groups and the control group (P > 0.05). Pathological examination showed chronic inflammatory cell infiltration in the portal area of the liver and loose arrangement, swelling and deformation of brain tissue cells in the HM group. However, no significant pathological changes were observed in the organs of mice in any of the H-2-104 dose groups (Fig. A).
H-2-104 showed therapeutic activity in vivo in mice infected with E. granulosus. The in vivo efficacy of H-2-104 against E. granulosus cysts was investigated in KM mice infected with E. granulosus. Mice in the H-2-104 intervention groups had significantly smaller cysts than mice in the model group and the ABZ group (Fig. B). Therapeutic evaluation was performed through systematic analysis of cyst characteristics, including wet weight determination combined with morphological parameters (number and diameter of cysts), followed by computation of the inhibition rate to quantify treatment effects (Fig. C). Following the 30-day treatment protocol, all therapeutic regimens demonstrated statistically significant improvements relative to the model group: cyst weight reduction (ANOVA, F(5, 30) = 63.601, P < 0.001), decreased cyst count (F(5, 30) = 27.031, P < 0.001), reduced cyst diameter (F(5, 30) = 27.532, P < 0.001), and elevated inhibition rates (F(5, 30) = 112.820, P < 0.001). H-2-104 displayed dose-responsive therapeutic profiles across its dosage range (25, 50 and 100 mg/kg). Particularly noteworthy was the 50 mg/kg dose, which achieved equivalent or better efficacy than both ABZ and HM at the same dose level (50 mg/kg), whereas the maximal therapeutic response was attained with the 100 mg/kg regimen.
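The F statistics with (5, 30) degrees of freedom are consistent with a one-way ANOVA over six experimental groups of six mice each (df_between = 6 - 1 = 5; df_within = 36 - 6 = 30). The sketch below shows how such a statistic is computed; the cyst weights are hypothetical.

```python
# The reported F(5, 30) statistics are consistent with a one-way ANOVA over six
# groups of six mice (df_between = 6 - 1 = 5, df_within = 36 - 6 = 30).
# The cyst weights generated below are hypothetical; only the structure of the
# calculation is illustrated.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# order: control, model, ABZ 50, H-2-104 25, H-2-104 50, H-2-104 100 (mg/kg)
hypothetical_means_g = (0.3, 5.0, 2.6, 3.1, 2.3, 1.5)
groups = [rng.normal(m, 0.5, size=6) for m in hypothetical_means_g]

f_stat, p_value = f_oneway(*groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, P = {p_value:.2g}")
```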
H-2-104 attenuated liver damage in mice infected with E. granulosus. HE staining revealed that the liver tissue of mice in the model group exhibited obvious pathological changes, with vesicular structures visible in the centre and hepatocytes arranged in disarray. In the ABZ group, cysts were also observed in the liver tissue, but they had a clear structure and distinct boundaries, suggesting absorption of the vesicular contents. No significant pathological changes were noted in the liver tissues of mice in the various H-2-104 groups (Fig. A). Masson staining indicated significant collagen deposition in the model group, whereas the ABZ group showed marked improvement in this regard. All H-2-104 groups demonstrated significant improvement in collagen deposition, with the degree of improvement increasing significantly as the drug dosage increased (ANOVA, F(5, 30) = 231.339, P < 0.05). Specifically, the improvement observed in the 50 mg/kg H-2-104 group was comparable to that in the ABZ group, with no statistical difference, whereas the effect of the 100 mg/kg H-2-104 group was significantly superior to that of the ABZ group (Fig. B).
H-2-104 disrupted the normal structure of E. granulosus cysts in mice in vivo. TEM of E. granulosus cysts revealed the following (Fig. ): in the model group, the cyst wall structure was clear and intact, with distinct boundaries between the cortical and germinal layers; the nuclei, nuclear membranes, and nucleoli were clearly visible, and the microvilli on the cyst wall were arranged neatly and uniformly in length. In the ABZ group, the cyst wall thickness was markedly uneven, with a disorganized germinal layer that was difficult to distinguish; nuclear rupture was observed, the microvilli varied in length, and vacuolar structures were significantly increased. In the 25 mg/kg H-2-104 group, the cyst wall was thickened, with a clear separation between the germinal and cortical layers and sparse microvilli. In the 50 mg/kg H-2-104 group, the cyst wall structure was unclear, with the cortical and germinal layers intertwined; pyknosis and fragmented tissue were visible in the germinal layer, and the microvilli were disorganized. In the 100 mg/kg H-2-104 group, the cyst wall was significantly thickened, the germinal layer appeared layered, elongated vacuolar structures were increased, and the microvilli varied in length and thickness. These findings suggest that both ABZ and the various doses of H-2-104 damage the cysts of E. granulosus.
Serum metabolomics analysis. In the PCA analysis, the R2X values were 0.514 and 0.539, respectively (Fig. A, B), indicating some separation among the sample groups, although further screening of inter-group differences was required. OPLS-DA was therefore employed for further analysis (Fig. C, D, E, F), and 200 permutation tests were conducted to validate the models (Fig. G, H, I, J). In positive ion mode, the R2X, R2Y, and Q2 for the control and model groups were 0.467, 0.999, and 0.834, respectively, while in negative ion mode these values were 0.396, 0.990, and 0.825. For the model and H-2-104 groups, the R2X, R2Y, and Q2 were 0.354, 0.981, and 0.726 in positive ion mode and 0.484, 0.996, and 0.752 in negative ion mode. These results indicate significant differences between the control and model groups and between the model and H-2-104 groups; because the Q2 of each model exceeded 0.5, the predictive performance of the models was considered good. Based on the OPLS-DA models, differential metabolites between the control and model groups, and between the model and H-2-104 groups, were screened using the criteria of variable importance in the projection (VIP) > 1 and P < 0.05. Compared with the control group, 1401 differential metabolites were significantly altered (P < 0.05) in the model group, with 611 upregulated and 790 downregulated (Fig. K). These metabolites can serve as potential biomarkers characterizing the metabolic disturbance following E. granulosus infection. Compared with the model group, 914 differential metabolites were significantly altered (P < 0.05) in the H-2-104 group, with 399 upregulated and 515 downregulated (Fig. L). By intersecting the differential metabolites from the two comparisons and annotating them against the KEGG database, 64 differential metabolites were found to be significantly altered (P < 0.05) in both comparisons (Table ). To visually compare the changes in these shared differential metabolites among the groups, their levels in each sample were displayed as a clustered heatmap (Fig. A). Pathway enrichment analysis of the 64 differential metabolites was conducted with the KEGG database (Fig. B). The main pathways showing significant differences in serum before and after drug administration included necroptosis, choline metabolism in cancer, retrograde endocannabinoid signalling, linoleic acid metabolism, and phenylalanine metabolism, among others.
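The 200 permutation tests mentioned above assess whether the supervised separation could arise by chance: class labels are shuffled, the model is refit, and the permuted Q2 values are compared with the Q2 obtained from the true labels. The sketch below illustrates this idea on simulated data with a PLS model and leave-one-out Q2; it is not the ropls OPLS-DA implementation used in the study.

```python
# Sketch of a label-permutation test for a supervised model: refit on permuted
# class labels and check that the permuted Q2 values fall below the Q2 obtained
# with the true labels.  PLSRegression with leave-one-out cross-validation stands
# in for the ropls OPLS-DA model, and the data are simulated.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n_per_group, n_features = 8, 40
X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_group, n_features)),
               rng.normal(0.8, 1.0, size=(n_per_group, n_features))])
y = np.array([0.0] * n_per_group + [1.0] * n_per_group)

def q2(X, y, n_components=2):
    """Cross-validated Q2 = 1 - PRESS / total sum of squares."""
    pred = cross_val_predict(PLSRegression(n_components=n_components), X, y,
                             cv=LeaveOneOut()).ravel()
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

q2_real = q2(X, y)
q2_perm = np.array([q2(X, rng.permutation(y)) for _ in range(200)])
p_perm = (np.sum(q2_perm >= q2_real) + 1) / (q2_perm.size + 1)
print(f"Q2(true labels) = {q2_real:.2f}; permutation P = {p_perm:.3f}")
```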
Liver metabolomics analysis. In the PCA analysis, the R2X values were 0.562 and 0.541, respectively (Fig. A, B), indicating some separation among the sample groups, although further screening for inter-group differences was necessary. OPLS-DA was therefore employed (Fig. C, D, E, F), and 200 permutation tests were conducted for each model to validate its effectiveness (Fig. G, H, I, J). For the control and model groups, the R2X, R2Y, and Q2 values were 0.47, 0.994, and 0.856 in positive ion mode and 0.503, 0.996, and 0.816 in negative ion mode. For the model and H-2-104 groups, the corresponding values were 0.281, 0.991, and 0.664 in positive ion mode and 0.302, 0.98, and 0.584 in negative ion mode. These results indicate significant differences between the control and model groups, as well as between the model and H-2-104 groups, and the Q2 values of all models exceeded 0.5, indicating good predictive performance. Based on the OPLS-DA models, differential metabolites between the control and model groups, and between the model and H-2-104 groups, were screened using the criteria of VIP > 1 and P < 0.05, and volcano plots were generated for each comparison. Compared with the control group, 2,039 differential metabolites were significantly altered (P < 0.05) in the model group, with 1,209 upregulated and 830 downregulated (Fig. K). These metabolites can serve as potential biomarkers of the metabolic disturbance following E. granulosus infection. Compared with the model group, 900 differential metabolites were significantly changed (P < 0.05) in the H-2-104 group, including 417 upregulated and 483 downregulated (Fig. L). By intersecting the differential metabolites from the two comparisons and annotating them against the KEGG database, 81 differential metabolites were found to be significantly altered (P < 0.05) in both comparisons (Table ). To visually compare the changes in the abundance of these shared differential metabolites across the groups, the abundances of the 81 metabolites in each sample were displayed as a clustered heatmap (Fig. A). Pathway enrichment analysis of the 81 differential metabolites was conducted with the KEGG database (Fig. B). The major pathways showing significant differences in the liver before and after drug administration primarily included fructose and mannose metabolism, the phosphotransferase system (PTS), and retrograde endocannabinoid signalling.
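KEGG pathway enrichment of a differential-metabolite list is, at its core, an over-representation test. The sketch below shows the underlying hypergeometric calculation; apart from the 81 shared differential metabolites, all counts are hypothetical, and tools such as MetaboAnalyst additionally incorporate pathway topology.

```python
# Minimal over-representation test of the kind underlying KEGG pathway enrichment:
# with N annotated metabolites, K of which belong to a given pathway, and n
# differential metabolites, k of which map to that pathway, the enrichment P value
# is the hypergeometric upper-tail probability.  Except for n = 81 (the shared
# differential metabolites), all counts are hypothetical.

from scipy.stats import hypergeom

N = 1500   # annotated metabolites in the reference set (hypothetical)
K = 30     # metabolites assigned to the pathway of interest (hypothetical)
n = 81     # differential metabolites shared by the two comparisons
k = 6      # of those, mapped to the pathway (hypothetical)

p_enrich = hypergeom.sf(k - 1, N, K, n)          # P(X >= k)
fold_enrichment = (k / n) / (K / N)
print(f"fold enrichment = {fold_enrichment:.2f}, P = {p_enrich:.3g}")
```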
Numerous studies have reported that HM has a good inhibitory effect on E. granulosus, but its pronounced neurotoxicity limits its clinical application. Therefore, to reduce toxicity and increase efficacy, our team synthesized 1,076 new compounds spanning 32 structural types by modifying positions 1, 2, 3, 7, and 9 of the HM parent nucleus. The in vitro and in vivo pharmacodynamic and subchronic toxicity results showed that H-2-104 may be a promising compound for the treatment of echinococcosis caused by E. granulosus infection, and its mechanism may be related to the regulation of necroptosis, linoleic acid metabolism, phenylalanine metabolism, glucose metabolism and lipid metabolism. Our previous work showed that the derivatives DH-330, H-2-98 and H-2-168 possess potent anti-CE activity. In this study, the anti-hydatid effect of the derivative H-2-104 was investigated on the basis of those previous studies. The in vitro results showed that all E. granulosus PSCs treated with 200 µM H-2-104 died by the third day of intervention, a markedly better outcome than with HM at the same concentration. Although drugs for CE have been widely developed and have shown significant parasiticidal effects in vitro, their in vivo effects remain unsatisfactory. Therefore, we further evaluated the therapeutic efficacy of H-2-104 in E. granulosus-infected mice. The subacute toxicity results showed that the safety of H-2-104 was significantly better than that of HM and equivalent to that of ABZ. At the same dosage of 50 mg/kg/day, the treatment effect of H-2-104 was significantly better than that of ABZ. The TEM results further demonstrated that, after treatment with H-2-104, the ultrastructure of the cysts was disrupted to varying degrees. These results suggest that H-2-104 may be a promising new drug against echinococcosis. To clarify the anti-echinococcosis mechanism of H-2-104, the changes in serum and liver metabolites of mice before and after H-2-104 intervention were further investigated by non-targeted metabolomics. In the serum metabolomics analysis, the concentrations of 64 metabolites changed significantly after model establishment and were altered following H-2-104 intervention. The enriched metabolic pathways primarily included necroptosis, linoleic acid metabolism, and phenylalanine metabolism. Necroptosis, a form of programmed cell death, is activated by extracellular or intracellular signals when apoptosis is blocked and occurs widely in various liver diseases such as hepatitis and liver cancer. Studies have shown that necroptosis can promote liver pathology, hepatocyte injury, and death. When necroptosis occurs in multiple types of liver cells, including hepatic stellate cells and Kupffer cells, inflammatory mediators are released, leading to inflammatory lesions and fibrosis in the liver; inducing the death of these cells or inhibiting their functions can slow or even reverse the progression of liver fibrosis. Recent studies have confirmed that harmine has an ameliorative effect on CCl4-induced acute liver injury, with mechanisms related to necroptosis. In the present study, Masson staining showed that H-2-104 significantly reduced the area of collagen deposition in the liver, suggesting that H-2-104 may alleviate liver fibrosis caused by E. granulosus and that its mechanism may be related to necroptosis.
Arachidonic acid (AA) and its metabolites, as well as sphingosine, can activate or inhibit certain signalling pathways, thereby regulating cell survival or death. In this study, the differential metabolites arachidonic acid and sphingosine jointly participated in the necroptosis pathway, suggesting that H-2-104 may improve liver damage caused by E. granulosus by altering metabolite concentrations and regulating the necroptosis pathway. Linoleic acid, an unsaturated fatty acid, possesses antioxidant properties that can neutralize free radicals in the body and reduce oxidative stress-induced damage to hepatocytes, and it participates in linoleic acid metabolism within organisms. Multiple studies have indicated a link between the linoleic acid metabolic pathway and liver function, suggesting that drugs can improve normal liver physiological function by regulating this pathway. The improvement seen with H-2-104 in E. granulosus infection may be related to this pathway. Additionally, differential metabolites such as N-acetylphenylalanine and phenylacetylglycine participate in the phenylalanine metabolic pathway, which occurs primarily in liver tissue and involves the enzymatic conversion of phenylalanine to tyrosine. Normal liver function directly affects phenylalanine metabolism; in particular, inflammation or infection can lead to increased phenylalanine levels in the body, consistent with the results of this study. Some studies have also found that abnormalities in certain enzymes within the phenylalanine metabolic pathway can produce metabolites that activate specific signalling pathways, thereby exacerbating liver disease. Furthermore, the phenylalanine metabolic pathway has been shown to be disturbed in the livers of mice infected with E. granulosus. Combined with the regulation of phenylalanine metabolism observed after drug intervention in this study, it can be inferred that the efficacy of H-2-104 may be related to its regulation of phenylalanine metabolism and of key substances within this pathway; future exploration of this pathway and its key substances as potential drug targets holds significant research value. In the liver metabolomics analysis, the concentrations of 81 metabolites changed significantly after model establishment and were altered following H-2-104 intervention. The enriched metabolic pathways primarily included fructose and mannose metabolism and glycerophospholipid metabolism, among others. In this study, 6-phosphoglucomannose, 1-phosphoglucomannose, and fructose-6-phosphate collectively participated in the fructose and mannose metabolic pathway, further influencing glucose metabolism in the organism. 6-Phosphoglucomannose is an intermediate in this pathway: its primary metabolism involves conversion to fructose-6-phosphate by an isomerase for glycolysis, and its secondary metabolism involves conversion to 1-phosphoglucomannose by a mutase for protein glycosylation. E. granulosus maintains its life activities mainly through glycolysis for energy production, and acquiring energy from the host is extremely important for the growth and development of the parasite. Glucose is the primary energy substrate with which E. granulosus maintains its life activities, and blocking the parasite's energy acquisition is an effective means of inhibiting its growth.
The metabolomics results also suggest that H-2-104 may interfere with the organism’s glucose metabolism process to exert an inhibitory effect on E. granulosus . Additionally, enrichment results revealed changes in glycerophospholipid metabolism within the organism. Glycerophospholipid metabolism is part of lipid metabolism, and lipids play crucial roles in transmitting intercellular signals, maintaining cell survival and apoptosis, and sustaining normal organismal functions . Diseases or drugs can disrupt this metabolic pathway, further causing liver damage . Based on the experimental results, it is speculated that H-2-104 may also normalize the disordered glycerophospholipid metabolism in the organism, thereby restoring proper function and overcoming the damage caused by E. granulosus to the liver. The limitations of this study are as follows. First, although the sub-chronic toxicity results indicated that H-2-104 was safe, its long-term toxicity remains unclear and requires further observation. Secondly, additional samples are necessary to validate the current results and enhance the reliability and accuracy of the procedure. Finally, various omics technologies, such as transcriptomics and proteomics, could be cross-validated to better support the experimental findings. In summary, this study systematically investigated the anti-echinococcosis effect of H-2-104 both in vitro and in vivo, and explored the potential mechanism of H-2-104 in treating E. granulosus infection in mice using non-targeted metabolomics. This research offers a novel strategy for anti-CE drug treatment.
In conclusion, this study systematically investigated the anti-echinococcosis effects of the harmine derivative H-2-104, both in vitro and in vivo. The results demonstrated that H-2-104 exhibited significant inhibitory activity against E. granulosus , suggesting that H-2-104 may represent a promising new drug for the treatment of CE. Furthermore, its anti-CE effects may be associated with the regulation of multiple pathways, including apoptosis, amino acid metabolism, and glucose metabolism. Future studies should further explore H-2-104 and its related pathways as key areas of interest.
Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Ultrasound-guided vs. fluoroscopy-guided percutaneous leverage reduction for severely displaced radial neck fractures in children: a comparative analysis of clinical and radiological outcomes
This retrospective study was approved by our institution's internal review board, and informed consent for participation was obtained from all participants. Between 2016 and 2021, the medical records and radiographs of 73 eligible children treated at our hospital were analysed. The inclusion criterion was a closed, displaced radial neck fracture with angulation greater than 30° in children with open growth plates. Concomitant injuries were not excluded from the study population. The exclusion criteria were previous elbow injuries, associated elbow fractures requiring open reduction, open fractures, and incomplete medical or radiographic records. Radial neck fractures were classified according to the O'Brien classification: there were 48 (65.8%) type II fractures and 25 (34.2%) type III fractures. The patients' ages ranged from 2.5 to 14.25 years, with an average age of 7 years and 9 months. The left side was affected in 46 patients and the right side in 27 patients. Additionally, concomitant injuries were observed in twenty-nine patients: two had a compound fracture involving both the olecranon and distal radius, whereas the remaining twenty-seven had an olecranon fracture alone. Among these cases, sixteen were managed conservatively, whereas eleven cases with displaced olecranon fractures, along with the two cases presenting a compound olecranon and distal radius fracture, underwent surgical treatment via closed reduction and percutaneous pinning.
All cases were treated with the percutaneous pin leverage technique. According to the guidance technique used, patients were divided into two groups: FL-guided (34 cases) or US-guided (39 cases). If closed percutaneous leverage reduction was not achievable, accessory open reduction could be performed to obtain a satisfactory reduction.
For the US-guided procedure, patients were positioned supine with the elbow maintained in semiflexion and the forearm pronated under general anaesthesia. A GE LOGIQ e ultrasound system (GE Healthcare, Milwaukee, WI, USA) equipped with a 7.0–12.5 MHz linear array transducer (GE Healthcare, Tokyo) was used. All US-guided procedures were performed by surgeons specifically trained in musculoskeletal US. During the operation, the ultrasound probe was enclosed in a single sterile laparoscopic sheath. Ultrasonographic imaging of the radial neck in three standardized sectional planes (anterior, lateral, and dorsal) facilitated monitoring and documentation of angulation and reduction progress. First, the site providing the maximum angulation view of the interspace structures was identified under US guidance. Once the acoustic window displaying the maximal angulation view of the fracture line was centred on the screen, the skin insertion site was marked at the radial neck. Subsequently, a 1.5 mm Kirschner wire (K-wire) (2.0 mm for older children) was inserted via an in-plane technique from the posterolateral aspect of the elbow (Fig. a). The K-wire was advanced strictly obliquely, parallel to the long axis of the transducer, and US enabled real-time visualization of its entire hyperechoic path. The K-wire was advanced until its tip accurately reached the fracture gap, and the displaced radial head was then reduced effectively by leveraging at this precisely established location (Fig. b; Supplementary video ). Intraoperatively, reduction of the displaced fracture could be combined with pressure applied to the lateral side of the radial neck and varus stress with the elbow extended. Once reduction was confirmed by US, the K-wire was advanced to penetrate the contralateral distal cortex to maintain the reduction. US examination also verified whether the K-wire had penetrated the contralateral cortex and assisted in determining how far to withdraw the wire to the bone cortex as well as the depth of K-wire insertion (Fig. ). If a 2.0 mm pin was used as a lever, a 1.5 mm K-wire was percutaneously inserted for in situ fixation after removal of the first pin; if necessary, a second K-wire was placed. In accordance with previous literature reports, the final assessment of the K-wire position and of whether it penetrated the contralateral cortex was conducted with FL.
Patients were prepared as specified above for the US procedure. Percutaneous pinning procedures were performed under FL guidance. After an acceptable reduction was achieved through percutaneous leverage manipulation, percutaneous pinning was conducted.
The postoperative protocol was similar in both groups. The K-wire was left protruding out of the skin and bent over to prevent migration. A long arm cast with the forearm in a neutral position was applied. The K-wire was removed 4 ~ 6 weeks after surgery.
The dose area product (DAP; mGy/cm2) for each examination was measured with the DAP meter built into the image intensifier (BV Endura; Philips, Veenpluis, The Netherlands). During image acquisition, the dose, brightness, and contrast were optimized automatically. During follow-up, the patients' radiological and clinical parameters were evaluated. Postoperative radiological assessments were conducted according to the Métaizeau reduction classification: Excellent, anatomic reduction; Good, residual angulation < 20°; Fair, residual angulation of 20–40°; Poor, residual angulation > 40°. Clinical evaluations at the final follow-up used the Métaizeau functional classification: (1) Excellent, no loss of motion; (2) Good, ≤ 20° loss of motion in any direction; (3) Fair, 20–40° loss of motion in any direction; (4) Poor, > 40° loss of motion in any direction.
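Because both Métaizeau scales are threshold based, the grading logic can be written down directly; the helper below encodes the cut-offs exactly as listed above (boundary handling at exactly 20° and 40° follows that wording).

```python
# Threshold logic of the Métaizeau grades as listed above.  Boundary handling at
# exactly 20 and 40 degrees follows the wording of the criteria.

def metaizeau_reduction_grade(residual_angulation_deg: float) -> str:
    if residual_angulation_deg == 0:
        return "Excellent"        # anatomic reduction
    if residual_angulation_deg < 20:
        return "Good"
    if residual_angulation_deg <= 40:
        return "Fair"
    return "Poor"

def metaizeau_functional_grade(loss_of_motion_deg: float) -> str:
    if loss_of_motion_deg == 0:
        return "Excellent"        # no loss of motion
    if loss_of_motion_deg <= 20:
        return "Good"
    if loss_of_motion_deg <= 40:
        return "Fair"
    return "Poor"

for angle in (0, 10, 25, 45):
    print(f"residual angulation {angle:>2} deg -> {metaizeau_reduction_grade(angle)}")
```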
The SPSS statistical package (SPSS 20.0 version; IBM Corp, Armonk, NY) was used for statistical analysis. The categorical data were analysed via the chi-square (χ2) test, and the continuous data were analysed via t tests. The data are displayed as the means ± SDs or n . P < 0.05 was considered significant.
The demographic and clinical characteristics of the patients in the US and FL groups are shown in Table . This study included 39 patients (7.4 ± 2.5 years old; 21 males, 18 females) in the US group and 34 patients (8.2 ± 2.3 years old; 20 males, 14 females) in the FL group (Figs. and ). The mean follow-up period was 2 years and 1 month (range, 1 y 6 mo to 3 y 4 mo). There were no significant differences between the two groups in baseline characteristics or fracture parameters. As shown in Table , the operative time was 24.8 ± 8.0 min (95% confidence interval [CI]: 22.2 to 27.4) in the ultrasound (US) group and 42.2 ± 15.2 min (95% CI: 37.5 to 48.1) in the fluoroscopy (FL) group (P < 0.01). The number of FL images was 3.6 ± 1.6 (95% CI: 3.1 to 4.1) in the US group and 22.3 ± 8.1 (95% CI: 19.5 to 25.1) in the FL group (P < 0.001). The radiation dose was 9.6 ± 6.1 mGy (95% CI: 7.6 to 11.5) in the US group and 69.7 ± 34.7 mGy (95% CI: 57.6 to 81.8) in the FL group (P < 0.001). Consequently, US guidance significantly reduced both operative time and radiation exposure. The success rates of reduction were 100% and 91.2% in the US group and FL group, respectively. US detected all instances in which the fixation pin penetrated the contralateral cortex. The results of the radiographic and functional outcome evaluations are shown in Table . According to the Métaizeau reduction classification, the rates of excellent results in the US group and FL group were 89.7% (35/39) and 76.5% (26/34), respectively, with no significant difference between the two groups (p = 0.130). No further redisplacement occurred up to the final radiographic examination in either group. According to the Métaizeau clinical classification at the last follow-up, the excellent-and-good rates in the US and FL groups were 97.4% (38/39) and 88.2% (30/34), respectively, with no significant difference between the two groups (p = 0.197). One patient in the FL group who underwent open reduction experienced premature physeal closure, whereas no such occurrence was observed in the US group. Radial head overgrowth was observed in eight patients in the FL group and seven patients in the US group. No secondary displacement or nerve injury was noted during the follow-up period. In the FL group, one patient exhibited both joint stiffness and heterotopic ossification. No radial head necrosis was detected.
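The reported 95% confidence intervals can be reproduced from each group's mean, SD, and sample size with the t-distribution, and the groups can be compared from summary statistics alone. The sketch below does this for operative time; small discrepancies from the published intervals may reflect rounding, and the use of Welch's test here is an assumption, since the test variant is not specified.

```python
# Reproducing a 95% confidence interval from mean, SD and group size with the
# t-distribution, and comparing the groups from summary statistics.  Welch's test
# is an assumption; the paper does not state which t-test variant was used.

import math
from scipy.stats import t, ttest_ind_from_stats

def ci95(mean, sd, n):
    half_width = t.ppf(0.975, df=n - 1) * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# Operative time (min) as reported: US 24.8 ± 8.0 (n = 39), FL 42.2 ± 15.2 (n = 34).
for label, m, sd, n in [("US", 24.8, 8.0, 39), ("FL", 42.2, 15.2, 34)]:
    lo, hi = ci95(m, sd, n)
    print(f"{label}: mean {m} min, 95% CI {lo:.1f} to {hi:.1f}")

stat, p = ttest_ind_from_stats(24.8, 8.0, 39, 42.2, 15.2, 34, equal_var=False)
print(f"Welch t = {stat:.2f}, P = {p:.2g}")
```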
Our study supports the hypothesis that US-guided percutaneous pinning is an effective method for treating displaced radial neck fractures, with clinical outcomes comparable to those of FL-guided methods. US guidance, however, offers several advantages, including real-time visualization of the reduction process, shorter surgical time, reduced radiation exposure, and greater convenience. While FL remains the standard method for percutaneous pinning, our findings suggest that US may be a viable alternative in children. Additionally, US can provide valuable supplementary information regarding treatment and enables assessment of the articular cartilage of the radial neck. The standard method of percutaneous pinning is FL guided. Accurate assessment of anatomical detail on radiographic imaging is particularly challenging in the non-ossified radial head. The rate of open reduction for these fractures can reach up to 33%; nonetheless, open reduction remains a valuable resource for achieving anatomic reduction. Our study found a 91.2% success rate with FL and a 100% success rate with US, which is comparable to outcomes reported in previous studies. Utilizing US to determine the appropriate angle can reduce the need for multiple fluoroscopic images to ascertain the trajectory view; consequently, ultrasound is more helpful in assisting fracture reduction. No statistically significant difference in functional outcomes was detected between the two techniques during follow-up. Gutierrez-de described an excellent-and-good rate of 92.6% for percutaneous surgery with FL guidance. Our findings align with recent publications, with ratings ranging from 81.5% to 100%. Similarly, both US and FL achieved equivalent functional outcomes. Based on our results, it is reasonable to consider US-guided methods as an alternative to FL-guided percutaneous pinning in this patient population, given that US showed a higher excellent-and-good rate (97.4%) than FL-guided methods (88.2%). Our study observed a 23.1% incidence of complications associated with fluoroscopy and 35.3% with ultrasound. Radial head overgrowth was the predominant complication, a finding consistent with reports in the literature that estimate its occurrence at 18–37% of patients. No significant difference in complication rates was noted between the two groups in our study, in line with previous literature. However, patients who developed complications in either group did not exhibit a significant deterioration in functional outcomes. In this study, the US group demonstrated a significantly shorter operating time than the FL group (24.8 min vs. 42.2 min, p < 0.001). Our findings on operating time are consistent with those of previous studies, which report durations ranging from 25 to 65 min. It has been suggested that US guidance in fracture reduction may not offer additional advantages over other methods and may even increase surgical time; the current findings contradict this. Similarly, in Su's study, the surgery time was significantly lower with US guidance than with FL. The shorter time required for US-guided reduction is not surprising, as the sonographic in-plane pin approach allows real-time monitoring and safe advancement of the pin to the target structure within seconds. This represents a substantial distinction from FL-guided reduction. Moreover, repeated manipulation and leverage may cause additional injury to the radial neck as well as increased operative time.
Furthermore, previous studies did not provide visibility of the depth of the pin position during US guidance . However, contrary to previous findings, we also found that US could confirm bicortical purchase of fixation when the long axis of the transducer was advanced strictly parallel to the pin, eliminating the need for fluoroscopic control and saving time. Physicians with specialized training in US can use it as a primary diagnostic tool for accurately diagnosing pediatric elbow fractures . Therefore, US-guided techniques should be considered as the standard of care in surgery. In the present study, the mean angulation postreduction in the US group was 1.1°; in the FL group, it was 3.6°. Our results were consistent with previously published studies reporting mean angulation after surgery ranging from 3.6° to 7.5° . Additionally, according to the Metaizeau reduction classification, a higher incidence of excellent outcomes was observed in the US group. Compared with FL guidance, US-guided assessment allows for the evaluation of reduction quality in multiple dimensions and results in a minor residual tilt and improved reduction quality; however, there are no significant differences between the US and FL groups regarding reduction quality or its correlation with clinical or radiographic outcomes . Nevertheless, we believe that US-guided techniques offer a useful resource for achieving anatomical reduction of radial head fractures, which is a relevant factor in functional outcomes . Our study revealed that fewer FL images and lower radiation exposures were required for the US group than for the FL group. In children undergoing FL-guided percutaneous pinning, Martus et al. reported low radiation doses in radiation-sensitive regions, which were below the thresholds associated with radiation-induced infertility, skin abnormalities, and cataracts. However, owing to potential cumulative effects, even if the possible thresholds are not reached, each radiation exposure may subsequently increase the risk of future cancer development . The acquisition of multiple images leads to an increased amount of radiation exposure for both patients and operators. Therefore, establishing limits on radiation exposure when selecting surgical treatment methods is recommended . Consequently, minimizing radiation exposure remains desirable for patients with high tissue radiosensitivity, such as children . Nevertheless, poor visualization of the radial head and neck may result in increased use of radiation during procedures involving young children. Thus, the principle of a dose being “as low as reasonably practicable” must be maintained . The mean DAP exposure in our study was comparable to that reported in other studies that involved the use of US for the percutaneous pin leverage technique . It is controversial whether repeating percutaneous leverage reduction attempts may increase iatrogenic trauma by damaging the blood supply of the radial head . The incorporation of C-arm fluoroscopic imaging to visualize the trajectory of the pin has been operator dependent. However, US guidance provides accurate visual anatomical structures during pin insertion. In the present study, the manual reduction process was similar in both groups, but US guidance reduced injuries to adjacent structures. In addition, the US-guided insertion site could be carefully selected to avoid damage to the posterior interosseous nerve. 
We did not observe posterior interosseous nerve injury, and there was no radial head necrosis during follow-up. We believe this favorable outcome can be attributed to a set of technical factors.

We acknowledge that this study has several limitations. First, its retrospective nature inherently limits the conclusions that can be drawn. Our findings suggest a potential clinical benefit of the US approach, but the low statistical power warrants further investigation with larger, multicenter, prospective studies to validate these results and improve generalizability. Nevertheless, it is worth noting that the present study represents the largest investigation on this topic published to date; consequently, we believe that the results presented here can be generalized to cases treated by other clinicians employing focused US guidance. Second, all percutaneous pinning procedures in our trial were performed by experienced experts proficient in both US- and FL-guided techniques. Although US-guided percutaneous pinning is considered a skill-dependent intervention, the potential surgeon effect should be considered when these results are applied in broader practice settings, particularly by novice operators. Furthermore, although FL was ultimately used to confirm pin placement in this study, which is consistent with previous reports in the literature, our findings revealed that US assessment of pin penetration through the contralateral cortex agreed with the intraoperative fluoroscopic data. The ultrasound-guided procedure therefore appears sufficiently effective and reliable that additional FL confirmation of K-wire positioning may be unnecessary; indeed, in recent years such fluoroscopic confirmation has not been deemed necessary.
In conclusion, US offers substantial benefits in the pediatric orthopedic management of displaced radial neck fractures, including more efficient reduction, shorter operative times, and decreased radiation exposure. Given its comparable efficacy in guiding percutaneous leverage reduction, US can be viewed as a viable alternative to FL. However, further randomized controlled trials are warranted to confirm long-term efficacy.
Below is the link to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Feasibility and Implementation of an Oncology Rehabilitation Triage Clinic: Assessing Rehabilitation, Exercise Need, and Triage Pathways within the Alberta Cancer Exercise–Neuro-Oncology Study | ce7f5ef5-e93c-4da5-b7c8-19c41547b08c | 10377964 | Internal Medicine[mh] | Due to improved screening and treatment, death rates for all cancer types combined have decreased by 33% since 1991 . With increased survival, those living with and beyond cancer face an increased burden of physical and functional morbidity as well as diminished psychosocial well-being, resulting in lower quality of life into survivorship . To address physical and functional impairment following a cancer diagnosis, multidisciplinary rehabilitation and exercise programs have been developed. Specifically, cancer physiatry (physical medicine and rehabilitation physicians with a specialty in oncology), physiotherapy, occupational therapy, speech–language therapy, lymphedema management, as well as exercise prescription and counselling have strong evidence to support their important role in the care of cancer patients throughout the cancer journey . Individually, these cancer rehabilitation and exercise interventions have been shown to improve function, psychosocial well-being, and survival . Unfortunately, widespread access to cancer rehabilitation and exercise resources for individuals living with and beyond cancer lags behind those organized for patients with other chronic conditions, such as heart disease, for which rehabilitation and exercise are part of standard care . There is thus an “evidence to practice” gap, with system-wide access to rehabilitation and exercise programs clinically lacking in many high-quality oncology care systems . Previous reports comment on the essential component of rehabilitation and exercise in comprehensive cancer care . Despite this, the development of cancer rehabilitation and exercise programs within clinical oncology care settings has been delayed, in part due to the lack of a specific implementation plan with effective patient screening, triage, and referral pathways . To improve patient access to rehabilitation and exercise resources, clinical implementation to optimize patients receiving the right rehabilitation and exercise care at the right time must include: (1) screening patients for impairments and inactivity, (2) the development of triage resources to help with decision making for appropriate exercise and rehabilitation services, (3) sustainable system-embedded referral pathways , and (4) additional evidence-based rehabilitation and exercise programs to serve patients. Currently in Canada, there is a lack of system-embedded screening and triage tools, as well as referral pathways for cancer rehabilitation and exercise. Many programs rely on oncologists or nursing staff to identify patients in need of services, which previous research has shown falls short for patients. For example, Cheville and colleagues surveyed patients on 27 cancer-related symptoms, signs, and functional problems, and also reviewed electronic medical records (EMR) for oncology documentation . They found a total of 65% of patients reported a functional impairment amenable to rehabilitation, yet only 6% of these problems were reported in the EMR by oncologists. Non-functional symptoms, including pain, weight loss, and nausea, however, were reported 49% of the time. 
This may be due to a lack of time, a lack of specific training to screen for functional impairment, or a lack of knowledge of rehabilitation and exercise resources. This disparity reinforces the need for standardized screening of all patients to effectively identify those with functional impairment, and for implemented clinical pathways that can facilitate triage and referral to appropriate resources. The screening, triage, and referral approach is supported by extensive work in the area of psychosocial oncology, where effective screening for distress can improve the identification of affected patients, allowing for referral to appropriate services and leading to significantly decreased levels of distress when compared to not screening. Applying this same principle to functional impairment and inactivity has the potential to significantly improve patient care and survivorship. Multiple call-to-action statements agree with the need for improved and integrated screening, triage, and referral pathways, and note that more research is needed in this important area.

Following the identification of patients with functional deficits or concerns through screening, it becomes essential to establish triage and referral pathways. In most cancer care systems in Canada, these are not well established for either rehabilitation or exercise. Santa Mina and colleagues proposed a physical activity referral pathway, which was recently expanded upon by Wagoner et al. as an example of triage pathways to rehabilitation and exercise. These models provide a clinical framework and are currently being studied and implemented. Additionally, Covington and colleagues have proposed the Exercise in Cancer Evaluation and Decision Support (EXCEEDS) algorithm, which is currently being studied, and have encouraged researchers to evaluate their evidence-based clinical decision-making referral tool in a variety of tumour groups.

Therefore, the objective of this research was to identify rehabilitation and exercise needs in an underserved oncology population, and to study triage and referral processes to enhance patient rehabilitation and care. This manuscript presents data on the feasibility of the rehabilitation and exercise triage clinic conducted as part of the Alberta Cancer Exercise–Neuro-Oncology study (ACE-Neuro). Specifically, the implementation of the triage clinic is reported, including (a) the assessment of rehabilitation and exercise needs of patients with brain tumours (i.e., neuro-oncology patients) and (b) the triage and referral of participants to physiatry, physiotherapy, occupational therapy, and/or exercise (i.e., ACE-Neuro) based on pre-determined cut-offs. Neuro-oncology patients were selected as a population of interest because they face unique functional challenges related to tumour location and treatment side effects. They frequently experience cognitive, physical, and psychological impairments, and often report that their needs are not adequately addressed. Unfortunately, methods to effectively screen and refer neuro-oncology patients to appropriate rehabilitation interventions are lacking. Fortunately, there is great potential to continue to expand the rehabilitation and exercise evidence for patients with brain tumours, including effective methods to identify patients in need and refer them to tailored rehabilitation programs.
Ultimately, the purpose of this work is to improve the identification of functional impairment and inactivity among patients with brain tumours, and identify effective strategies for triage and referral to appropriate rehabilitation and exercise resources. Our hope is that this will help to establish efficient pathways in rehabilitation oncology, so all cancer patients can be screened and receive appropriate rehabilitation and exercise care at the right time.
2.1. Study Design and Ethical Approval

This study was approved by the University of Calgary Health Research Ethics Board of Alberta (HREBA)–Cancer Committee (CC)–HREBA.CC-20-0322, and is a component of a larger study, Alberta Cancer Exercise–Neuro (ACE-Neuro). The triage clinic was conducted in Calgary, Alberta, and does not include ACE-Neuro patients from the Edmonton, Alberta site. This was a mixed-methods descriptive study reporting on feasibility outcomes.

2.2. Study Outcomes

Feasibility was the primary outcome, with both quantitative and qualitative components. Quantitatively, feasibility was defined a priori as a referral rate of at least 50%, an enrollment rate of at least 50%, and a triage clinic attendance rate of at least 60%. These feasibility thresholds were based on other feasibility work in exercise oncology as well as on feedback from the clinical team; specifically, given the poor survival prognoses and high symptom burden of neuro-oncology patients, lower thresholds were expected. The referral rate was defined as the number of patients referred by the clinical team to the ACE-Neuro study out of the total number of patients seen in the clinic over the recruitment period (i.e., from 16 April 2021 to 2 December 2022). The enrollment rate was defined as the number of patients who enrolled after hearing the full study description out of the total number of patients referred. Finally, the triage clinic attendance rate was defined as the number of people who attended the triage clinic assessment out of the total number enrolled. Feasibility was also assessed by examining the safety of the triage clinic and documenting any adverse events. Adverse events were tracked using a standardized adverse event reporting system that classifies adverse events as level 1 (minor incident with no lost time beyond the day of injury; temporary, immediate care), level 2 (medical aid with no lost time beyond the day of injury; medical care beyond first aid), or level 3 (serious injury or death). Feasibility and acceptability were also assessed qualitatively via semi-structured interviews with participants.
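To make the feasibility definitions above concrete, the short sketch below (a minimal Python illustration, not part of the study protocol or analysis code) computes the three rates and checks them against the a priori thresholds. The function and variable names are ours, and the example counts are those reported later in the Results for the Calgary site.

```python
# Minimal sketch (illustrative only): computing the a priori feasibility rates.
# Thresholds: referral >= 50%, enrollment >= 50%, triage clinic attendance >= 60%.

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage, guarding against division by zero."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Counts reported in the Results (recruitment period 16 April 2021 to 2 December 2022).
patients_seen_in_clinic = 280    # newly diagnosed neuro-oncology patients seen in clinic
clinician_referrals = 86         # referred by the clinical team to the triage clinic
eligible_referred = 93           # referred patients (incl. self-referrals) meeting eligibility
enrolled = 57                    # consented after hearing the full study description
attended_triage_clinic = 54      # completed the triage clinic assessment

feasibility = {
    "referral rate":   (rate(clinician_referrals, patients_seen_in_clinic), 50.0),
    "enrollment rate": (rate(enrolled, eligible_referred), 50.0),  # Results use eligible referrals as denominator
    "attendance rate": (rate(attended_triage_clinic, enrolled), 60.0),
}

for name, (observed, threshold) in feasibility.items():
    status = "met" if observed >= threshold else "not met"
    print(f"{name}: {observed:.1f}% (threshold {threshold:.0f}%, {status})")
```

Run on these counts, the sketch reproduces the rates reported in the Results (approximately 31%, 61%, and 94.7%).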
2.3. Participants

All neuro-oncology patients with a primary brain tumour (benign or malignant), over 18 years of age, and able to consent in English were eligible to participate in the study. Patients with secondary brain metastases were excluded. Participants could be at any stage in the treatment pathway (pre-, on-, or post-treatment). Participants were recruited at the Tom Baker Cancer Centre in Calgary, Alberta, Canada. As the primary outcome was feasibility, no a priori sample size was calculated.

Eligible neuro-oncology patients were approached by the study team after obtaining consent to contact. If a patient was interested, a clinical team member (nurse or oncologist) sent a referral to the Rehabilitation Oncology department via the electronic medical system. Patients were also able to self-refer to the study via a study brochure or poster located within relevant clinic areas. Once referred, the study coordinator contacted the patient to review study eligibility and details and obtain consent to participate. Patients who agreed to participate provided consent via REDCap (Research Electronic Data Capture), a secure web application. After providing informed consent, participants completed the health and medical history screening, including a Health History Questionnaire (i.e., to collect medical history) and an Identifying Information Questionnaire (i.e., to collect demographics), as well as the Physical Activity Readiness Questionnaire (PAR-Q+). All screening was completed via REDCap. Once consent and the initial questionnaires were completed, the ACE-Neuro study coordinator (JTD; clinical exercise physiologist) reviewed participant health histories via chart review and phone call, and participants were booked into the triage clinic.

2.4. Triage Clinic

The triage clinic was led by a physical medicine and rehabilitation resident physician (LCC) and the ACE-Neuro study coordinator (JTD; clinical exercise physiologist). Participants were booked for a 45-min appointment, during which their medical and functional histories were reviewed, and a full central and peripheral neurological examination and the Short Physical Performance Battery (SPPB) were performed. From this, the Karnofsky Performance Status (KPS) and Eastern Cooperative Oncology Group (ECOG) scores were determined. Criteria for triage included the SPPB, ECOG, and KPS, as well as previously published referral recommendations from Covington and colleagues and pre-determined cut-offs from our clinical team. These pre-determined cut-offs were developed following consultation and deliberation with a multidisciplinary team, including rehabilitation clinical team leaders, physiotherapists, occupational therapists, physical medicine and rehabilitation physicians, behavioural medicine researchers, and clinical exercise physiologists. Please see for the triage clinic criteria. After the assessment, participants were triaged and referred to the ACE-Neuro exercise study, Cancer Physiatry, Rehabilitation Oncology (i.e., Physiotherapy/Occupational Therapy), or a combination of these services. As part of the ACE-Neuro study, if triage to the ACE-Neuro exercise study was not appropriate after the triage clinic assessment, patients could be re-assessed in the triage clinic once deemed appropriate by their clinical team (i.e., oncologist, physiotherapist, or occupational therapist).
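As a plain illustration of the referral logic just described, the sketch below encodes the pre-determined cut-offs detailed under Study Measures below (SPPB ≥ 5, ECOG ≤ 3, and KPS ≥ 50) that permitted direct referral to the ACE-Neuro exercise study. It is a simplified, hypothetical rendering, not the clinic's actual decision tool: in practice, the physiatry resident and exercise physiologist also weighed the neurological examination and clinical judgment, and exercise and rehabilitation referrals were not mutually exclusive.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TriageAssessment:
    sppb: int   # Short Physical Performance Battery, 0 (worst) to 12 (best)
    ecog: int   # ECOG performance status, 0 (fully active) to 5 (death)
    kps: int    # Karnofsky Performance Status, 0 to 100

def triage(a: TriageAssessment) -> List[str]:
    """Simplified triage sketch based on the study's pre-determined cut-offs.

    Meeting all cut-offs allows direct referral to the ACE-Neuro exercise study;
    otherwise the participant is directed to specialized rehabilitation services
    (physiatry, physiotherapy, and/or occupational therapy), chosen clinically.
    """
    meets_exercise_criteria = a.sppb >= 5 and a.ecog <= 3 and a.kps >= 50
    if meets_exercise_criteria:
        # Additional rehabilitation referrals could still be added on clinical grounds.
        return ["ACE-Neuro exercise study"]
    return ["specialized rehabilitation (physiatry / physiotherapy / occupational therapy)"]

# Hypothetical examples (not study data):
print(triage(TriageAssessment(sppb=9, ecog=1, kps=80)))   # -> ['ACE-Neuro exercise study']
print(triage(TriageAssessment(sppb=3, ecog=3, kps=50)))   # -> specialized rehabilitation
```

Participants who did not initially qualify could, as noted above, be re-assessed and triaged to exercise once deemed appropriate by their clinical team.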
2.5. Study Measures

2.5.1. Identifying Information Questionnaire and Health History Questionnaire

Both demographic and medical histories were collected via patient report and chart review. Demographic history included participants' self-reported age, sex, self-identified gender, self-identified ethnicity, education, annual family income, and marital and employment status. Medical history included type of primary brain tumour, stage, treatment status, treatment types received, smoking status, alcohol intake, medical co-morbidities, and cancer-related co-morbidities. Participants also completed the Physical Activity Readiness Questionnaire (i.e., PAR-Q).

2.5.2. Health-Related Fitness Measures

Health-related fitness measures included height and weight, resting heart rate, and blood pressure. Body mass index was calculated from height and weight.

2.5.3. Short Physical Performance Battery (SPPB)

The SPPB consists of a group of three tests examining gait speed, chair stand speed, and balance. It is a validated tool used to predict risk for mortality, nursing home admission, and disability. It is scored from 0 (worst performance) to 12 (best performance). A score of 5 or higher was necessary for direct referral to the ACE-Neuro exercise study. See for a summary of the SPPB.

2.5.4. Neurological Examination

A neurological examination was performed by the resident physician, consisting of a cognitive screening assessment as well as a physical examination. Cognitive screening consisted of examination of orientation, registration, recall, and language (speaking, reading, and writing). A cranial nerve screening examination was conducted, followed by a motor examination for tone, reflexes, bulk, and power. Finally, a sensory examination for light touch and pinprick sensation was conducted, and coordination was tested.

2.5.5. Karnofsky Performance Status (KPS)

The KPS is a validated assessment tool for functional impairment, ranging from 100 (normal, no complaints, no sign of disease) to 0 (death). Each increment has well-defined criteria, which were used to classify study participants following a review of their health history and physical examination. A score of 50 or higher was necessary for direct referral to ACE-Neuro.

2.5.6. Eastern Cooperative Oncology Group Score (ECOG)

The ECOG is a validated assessment tool of functional status, scored from 0 (fully active, able to carry on all pre-disease performance without restriction) to 5 (death). As with the KPS, each increment has well-defined criteria used to classify study participants following their health history review and physical examination. A score of 3 or lower qualified participants for the ACE-Neuro exercise study.

2.6. Qualitative Interviews

To obtain participant perspectives on triage clinic safety, acceptability, and satisfaction, semi-structured interviews were conducted with participants and members of the clinical team (i.e., oncologists, nurses, and administrators). We sampled and invited participants to a 15- to 30-min interview with the ACE-Neuro study coordinator (JTD) at the location of their choosing (i.e., via Zoom or in person) at various times across the study duration. Specifically, participants were interviewed during or after the ACE-Neuro 12-week exercise intervention, and members of the clinical team were interviewed at various time points during the study recruitment period, with the aim of gathering varied perspectives to inform the clinical integration of processes specifically. Interviews were recorded via end-to-end encrypted Zoom (online) or with an audio recording device (in person). Examples of questions asked during the interviews are presented in .

2.7. Statistical Analysis

2.7.1. Quantitative Data

Descriptive characteristics of participants are presented using mean ± standard deviation or percentages. Feasibility was reported using percentages relative to the pre-determined thresholds mentioned above. Descriptive results, using mean ± standard deviation or percentages, are also reported for the SPPB, KPS, ECOG, and neurological examination.

2.7.2. Qualitative Data

Interviews were transcribed verbatim via ExpressScribe, managed in NVivo 12, and analyzed by one author (JTD) using conventional content analysis. This iterative process included reading the transcripts, coding the data, and generating category descriptions. To ensure a rigorous process, a reflexivity journal was kept by JTD, and critical review and discussion with two other authors (LCC and SNC-R) occurred across the study process. To enhance readability of participant quotes, repetitive words, identifiable information, and mumbled speech were replaced with brackets: […].
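Referring back to the quantitative analysis described above under Quantitative Data, the brief sketch below illustrates the type of descriptive summary used for the triage measures: mean ± standard deviation for continuous scores and percentage breakdowns for categorical scores. The input values are hypothetical placeholders, not ACE-Neuro data.

```python
from statistics import mean, stdev
from collections import Counter

def summarize_continuous(values):
    """Mean ± sample standard deviation, as reported for age, BMI, and SPPB."""
    return f"{mean(values):.1f} ± {stdev(values):.1f}"

def summarize_categorical(values):
    """Percentage of assessments in each category, as reported for ECOG and KPS."""
    counts = Counter(values)
    n = len(values)
    return {category: round(100 * count / n, 1) for category, count in sorted(counts.items())}

# Hypothetical example scores (illustrative only).
sppb_scores = [12, 10, 9, 7, 11, 8, 5, 6, 12, 9]
ecog_scores = [1, 1, 2, 1, 0, 2, 1, 3, 1, 2]

print("SPPB:", summarize_continuous(sppb_scores))            # "8.9 ± 2.4" for this example
print("ECOG distribution (%):", summarize_categorical(ecog_scores))
```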
3.1. Demographics and Feasibility

See for participant demographics and for participant clinical characteristics and health history. The average age of participants was 51 ± 13.5 years, and the average time since diagnosis was 78.2 ± 101.7 months. The most commonly diagnosed brain tumour was glioblastoma (n = 19). Please see for participant co-morbidities and cancer-related side effects.

presents the study flow chart. Recruitment was open for 20 months, between April 2021 and December 2022. On average, 14 newly diagnosed neuro-oncology patients were seen at the Tom Baker Cancer Centre neuro-oncology clinic per month (a total of 280 patients were seen during the recruitment period). Of those, 86 were referred by a clinician to the triage clinic (referral rate of 31%). Approximately 194 patients were not referred, due to (1) the clinical team forgetting to refer, (2) patient lack of interest, or (3) clinical judgment (e.g., a patient requiring palliative care, a patient unable to understand/speak English, or the clinical team being unsure of the patient's rehabilitation needs). In addition to patients referred from the neuro-oncology clinic, 10 self-referred to the study, for a total of 96 patients referred. Of the 96 referred patients, 93 met the eligibility criteria; three patients were excluded because they did not have a primary brain tumour (n = 1), were unable to consent in English (n = 1), or were under the age of 18 at diagnosis (n = 1). Of the 93 eligible patients, 57 enrolled in the study and completed informed consent (enrollment rate of 61%). Of the 36 patients who did not enroll, 15 were not interested, 12 were unable to be contacted, 8 had disease progression, and 1 moved to another country. Of the 57 enrolled participants, 54 attended the triage clinic (attendance rate of 94.7%). Reasons for non-attendance included time constraints (n = 2) and not being interested at this time (n = 1). One patient was seen in the triage clinic twice: on this patient's first visit, they did not meet the ACE-Neuro exercise inclusion criteria and were referred to physiotherapy to improve physical function; they were later re-referred to the triage clinic, re-assessed, and triaged to exercise. The total number of participant assessments is thus n = 55. No adverse events occurred during the triage clinic. The average time from referral to initial contact was 10.3 ± 8.9 business days, and the average time to the triage clinic visit was 22.2 ± 20.0 business days.

3.2. Triage Clinic Outcomes

See and for the triage clinic assessment results. presents participants' vitals, body composition, and triage outcomes (i.e., SPPB, ECOG, and KPS scores). includes the neurological examination results. reviews referral rates to the available rehabilitation and exercise resources. Of the 55 participant assessments, 49 met the inclusion criteria for exercise (i.e., SPPB ≥ 5, ECOG ≤ 3, and KPS ≥ 50) and were thus referred to the ACE-Neuro exercise intervention. Six participants did not meet the initial criteria and were referred to either a single specialized rehabilitation service (n = 3) or a combination of services (n = 3), comprising two referrals to physiatry, four to physiotherapy, and four to occupational therapy. Of the 49 participants referred to ACE-Neuro exercise, 22 were also referred to one (n = 19) or multiple (n = 3) additional resources, including 5 referrals to physiatry, 5 to physiotherapy, and 15 to occupational therapy.
The average BMI of triage clinic participants was 30.0 ± 6.5 kg/m². Resting heart rate and blood pressure were 80 ± 16 bpm and 122.8/83.2 mmHg, respectively. The full SPPB was completed in 53 of the 55 participant assessments; reasons for not completing all or part of the SPPB were related to safety (i.e., the triage clinic team or patient not feeling safe to complete it) or an inability to perform (e.g., unable to ambulate). The mean SPPB score was 8.9 ± 3.1. The majority of participants (57.1%) had an ECOG score of 1, with the next most common score being 2 (33.9%). A total of 91.1% of participants scored between 60 and 90 on the KPS, with 30.4% scoring 90, 17.9% scoring 80, 23.2% scoring 70, and 19.6% scoring 60. A total of 51 participants (92.7%) had deficits on the neurological examination (i.e., four participants had completely normal exams). See for full neurological examination results. Forty participants (72.7%) had cognitive deficits, 30 (54.5%) had deficits on cranial nerve examination, 11 (20.0%) had motor deficits, 25 (45.5%) had abnormal reflexes, 17 (30.9%) had peripheral sensory deficits, and 25 (45.5%) had coordination deficits. Eight participants had deficits only with cognition, with otherwise normal cranial nerve, motor, reflex, sensory, and coordination examinations.

3.3. Qualitative Results

Of the 55 triage clinic participants, 20 completed a semi-structured interview; four of these 20 participants had caregivers present. In addition, five members of the clinical team completed an interview. Overall, all interviewees (i.e., participants and members of the clinical team) (1) felt satisfied with the triage clinic and (2) valued the triage clinic as part of neuro-oncology care. includes additional representative quotes for these two categories.

3.3.1. Category One: Satisfaction with the Rehabilitation Triage Clinic

Participants spoke of feeling satisfied with the triage clinic appointment safety, length, examination components (e.g., SPPB and neurological exam), personnel (i.e., resident physiatrist and exercise physiologist), and location. Participants also felt that attending the appointment in person was feasible and helpful in advance of the subsequent ACE-Neuro exercise intervention (for those triaged to exercise).

"This appointment was really very organized. I mean—when they informed me that I will be […] that I need to do the assessment, it's very coordinated uh it's fast and then they're very warm and very supportive […] I know that I'm in good hands because I know that they're gonna be supporting me. And […] from the time that they contact you, the communication, the physical check-up, those are all, timed professionally and very organized. I love that they do that because it's more like knowing you a bit more based on what your situation […] and seeing you before you do the activity is important so that they can assess your limitation as well." (Participant 04)

Some participants spoke of feeling uncertain and nervous in advance of the triage clinic, yet were ultimately satisfied with how the appointment was conducted.
I know there was a bit of a wait time there before you […] decided whether you’re in or out I thought, oh you know that might take longer I might have to go home and find out about it […] in a week whatever, but you came right back and told us, so there was really no wait time and we left with the equipment we needed, […] so I uh I think it went yeah really quite smoothly. Participant 07 Some participants shared feedback on ways to improve the triage clinic, including providing additional information on the rationale for the types of assessments chosen. It would have been nice to see […] why you decided on those tests, and like the rationale so like we would know how it would be beneficial to us, because so far it kind of seemed like it was just a test to see if she was fit for the program. Participant 37′s Caregiver From the clinical team’s perspective, referring to the triage clinic did not disrupt their clinical workflow and was thus perceived as a feasible addition to their neuro-oncology clinic. It is very easy just put in the order [for the referral to the triage clinic] the order is 2 seconds, so no, it seems like it’s working. Clinical Team Member 01 (Oncologist) I found it easy to refer. That was simple, even with [the new electronic medical record], it was easy to refer patients […] I think patients, uhm, were seen a little bit faster than they were with just rehab, and I think their needs might have been more individualized and met. Clinical Team Member 08 (Nurse) Members of the clinical team also spoke to their satisfaction with the triage clinic personnel for triaging participants to an appropriate and tailored resource. I do like the triage system, because I know the patient would benefit from exercise and I know the patient would benefit from [occupational therapy] or [physiotherapy]. But it was nice having somebody who specializes in that area to make that decision. Clinical Team Member 06 (Nurse Practitioner) 3.3.2. Category Two: Value of a Triage Clinic Participants felt that the triage clinic was beneficial for providing them with a sense of hope in their cancer journey as well as for supporting access to additional resources. Well that there was maybe some hope [laughs] for getting some of these muscles working again […] there’s hope out there […] it’s not a dead-end. Participant 43 It was good it was great ‘cause I finally got someone to—I finally got recognized. Well, not recognized, but you know, someone to actually help me out with [my brain cancer] so that’s great. Participant 17 I thought that that was good, and out of that I ended up in occupational therapy as well as [ACE-Neuro], both of which were excellent programs and helped me. Participant 51 It opened up my eyes to some of the [resources and programs] that were available to me that I didn’t even know about. Participant 59 It was probably the best day I’ve had in a really long time. Having [the triage clinic], be truly kindness, and an opening to just whatever I needed. You guys were there, period. You were there, and you never talked to each other like I wasn’t part of it. So, everything that was brought up was brought up for all of us to be part of which I thought was kindness, and just an openness that made it UN scary, which was lovely […] For me that was one of the best [appointments] that I’ve been- Not one of, that was the best I’ve been to of an appointment. Yeah, that was I above and beyond…that was perfect for me. 
Participant 52 Members of the clinical team felt that referring to the triage clinic was beneficial for participants for supporting safety in advance of exercise participation (for those triaged to exercise), as well as for patient experience by needing only one referral per patient. Further, members of the clinical team felt that referring to just one source also simplified their referral process and workflow in the clinic. I think that simplifies things for us a lot right? So one, it is a one-point of referral. And then you guys do the bulk work, really? And sometimes we refer, and I’ve heard that we refer to physiatry, but then the team feels that the patient should be really seen by [occupational therapy]. […] Sometimes we are not sure who to refer the patient to, and what would be the best fit, so I think that was quite nice to be just able to, you know, refer to rehab, and then see what’s the best for the patient. Clinical Team Member 07 (Oncologist) You need to do the triage, I think, That’s what [makes] it safe […] you need that triage to know what the patient is appropriate for. Clinical Team Member 01 (Oncologist) Finally, members of the clinical team spoke about the possibilities of a triage clinic that extends beyond the neuro-oncology patient population. I would like to see it grow beyond brain tumours, I know [the research team] is looking at head and neck as well but is there a role and vision for a triage clinic to assess rehab readiness for everyone with a cancer diagnosis? There could be many more layers to this clinic. . Clinical Team 4.
See for participant demographics and for participant clinical characteristics and health history. The average age of participants was 51 ± 13.5, and the average time since diagnosis was 78.2 ± 101.7 months. The most commonly diagnosed brain tumour was glioblastoma ( n = 19). Please see for participant co-morbidities and cancer-related side effects. presents the study flow chart. Recruitment was open for 20 months between April 2021 and December 2022. On average, 14 newly diagnosed neuro-oncology patients were seen at the Tom Baker Cancer Centre neuro-oncology clinic per month (a total of 280 patients were seen during the recruitment period). Of those, 86 were referred by a clinician to the triage clinic (referral rate of 31%). Approximately 194 patients were not referred due to (1) the clinical team forgetting to refer, (2) patient lack of interest, and (3) clinical judgment (e.g., a patient requiring palliative care, a patient unable to understand/speak English, or the clinical team being unsure of patient’s rehabilitation needs). In addition to patients referred from the neuro-oncology clinic, 10 self-referred to the study, for a total of 96 patients being referred to the study. Of the 96 referred patients, 93 met the eligibility criteria. Three patients were excluded due to not being diagnosed with a primary brain tumour ( n = 1), unable to consent in English ( n = 1), or being diagnosed under the age of 18 ( n = 1). Of the 93 eligible, 57 enrolled in the study and completed informed consent (enrollment rate of 61%). Of the 36 patients who did not enroll, 15 were not interested, 12 were unable to be contacted, 8 had disease progression, and 1 moved to another country. Of the 57 enrolled participants, 54 attended the triage clinic (attendance rate of 94.7%). Reasons for non-attendance included time constraints ( n = 2) and not being interested at this time ( n = 1). One patient was seen in the triage clinic twice. On this patient’s first visit to the triage clinic, they did not meet the ACE-Neuro exercise inclusion criteria and were referred to physiotherapy to improve physical function. They were later re-referred to the triage clinic, re-assessed, and triaged to exercise. The total number of participant assessments is thus n = 55. No adverse events occurred during the triage clinic. The average time from referral to initial contact was 10.3 ± 8.9 business days, and the average time to triage clinic visit was 22.2 ± 20.0 business days.
See and for the triage clinic assessment results. presents participants’ vitals, body composition, and triage outcomes (i.e., SPPB, ECOG, and KPS scores). includes the neurological examination results. reviews referral rates to the available rehabilitation and exercise resources. Of the 55 participant assessments, 49 met the inclusion criteria for exercise (i.e., SPPB ≥ 5, ECOG < 3, and KPS > 50) and were thus referred to the ACE-Neuro exercise intervention . Six participants did not meet the initial criteria and were referred to either an individual ( n = 3) or a combination ( n = 3) of specialized rehabilitation services, including two referrals to physiatry, four to physiotherapy, and four to occupational therapy. Of the 49 referred to ACE-Neuro exercise, 22 of these were also referred to either one ( n = 19) or multiple ( n = 3) additional resources, including 5 referrals to physiatry, 5 to physiotherapy, and 15 to occupational therapy. The average BMI of triage clinic participants was 30.0 ± 6.5 kg/m 2 . Resting heart rate and blood pressure were 80 ± 16 bpm and 122.8/83.2 mmHg, respectively. A total of 53 of the 55 participant assessments completed the full SPPB. Reasons for not completing the full or parts of the SPPB were related to safety (i.e., the triage clinic team or patient not feeling safe to complete) or an inability to perform (e.g., unable to ambulate). The mean SPPB score of patients was 8.9 ± 3.1. The majority of participants (57.1%) had an ECOG score of 1, with the next highest score being 2 (33.9%). A total of 91.1% of participants scored between 60 and 90 on the KPS, with 30.4% scoring 90, 17.9% scoring 80, 23.2% scoring 70, and 19.6% scoring 60. A total of 51 (92.7%) participants had deficits in the neurologic examination (i.e., four participants had completely normal exams). See for full neurological examination results. Forty participants (72.7%) had cognitive deficits, 30 (54.5%) had deficits with cranial nerve examination, 11 (20.0%) had motor deficits, 25 (45.5%) had abnormal reflexes, 17 (30.9%) had peripheral sensory deficits, and 25 (45.5%) had coordination deficits. Eight participants had deficits only with cognition, but otherwise normal cranial nerve, motor, reflex, sensory, and coordination examinations.
Of the 55 triage clinic participants, 20 completed a semi-structured interview. Of these 20 participants, four had caregivers present. In addition, five members of the clinical team completed an interview. Overall, all participants (i.e., participants and members of the clinical team) (1) felt satisfied with the triage clinic and (2) valued the triage clinic as part of neuro-oncology care. includes additional representative quotes for these two categories. 3.3.1. Category One: Satisfaction with the Rehabilitation Triage Clinic Participants spoke of feeling satisfied with the triage clinic appointment safety, length, examination components (e.g., SPPB and neurological exam), personnel (i.e., resident physiatrist and exercise physiologist), and location. Participants also felt that attending the appointment in-person was feasible and helpful in advance of the subsequent ACE-Neuro exercise intervention (for those triaged to exercise). This appointment was really very organized. I mean—when they informed me that I will be […] that I need to do the assessment, it’s very coordinated uh it’s fast and then they’re very warm and very supportive […] I know that I’m in good hands because I know that they’re gonna be supporting me. And […] from the time that they contact you, the communication, the physical check-up, those are all, timed professionally and very organized. I love that they do that because it’s more like knowing you a bit more based on what your situation […] and seeing you before you do the activity is important so that they can assess your limitation as well. Participant 04 Some participants spoke of feeling uncertain and nervous in advance of the triage clinic, but yet were ultimately satisfied with how the appointment was conducted. Well, you know before you’re kinda wondering what this is all about and you know you’re more curious and once you get there, I think all of our questions were answered you were really good [at] taking us through that pre-assessment. I know there was a bit of a wait time there before you […] decided whether you’re in or out I thought, oh you know that might take longer I might have to go home and find out about it […] in a week whatever, but you came right back and told us, so there was really no wait time and we left with the equipment we needed, […] so I uh I think it went yeah really quite smoothly. Participant 07 Some participants shared feedback on ways to improve the triage clinic, including providing additional information on the rationale for the types of assessments chosen. It would have been nice to see […] why you decided on those tests, and like the rationale so like we would know how it would be beneficial to us, because so far it kind of seemed like it was just a test to see if she was fit for the program. Participant 37′s Caregiver From the clinical team’s perspective, referring to the triage clinic did not disrupt their clinical workflow and was thus perceived as a feasible addition to their neuro-oncology clinic. It is very easy just put in the order [for the referral to the triage clinic] the order is 2 seconds, so no, it seems like it’s working. Clinical Team Member 01 (Oncologist) I found it easy to refer. That was simple, even with [the new electronic medical record], it was easy to refer patients […] I think patients, uhm, were seen a little bit faster than they were with just rehab, and I think their needs might have been more individualized and met. 
Clinical Team Member 08 (Nurse) Members of the clinical team also spoke to their satisfaction with the triage clinic personnel for triaging participants to an appropriate and tailored resource. I do like the triage system, because I know the patient would benefit from exercise and I know the patient would benefit from [occupational therapy] or [physiotherapy]. But it was nice having somebody who specializes in that area to make that decision. Clinical Team Member 06 (Nurse Practitioner) 3.3.2. Category Two: Value of a Triage Clinic Participants felt that the triage clinic was beneficial for providing them with a sense of hope in their cancer journey as well as for supporting access to additional resources. Well that there was maybe some hope [laughs] for getting some of these muscles working again […] there’s hope out there […] it’s not a dead-end. Participant 43 It was good it was great ‘cause I finally got someone to—I finally got recognized. Well, not recognized, but you know, someone to actually help me out with [my brain cancer] so that’s great. Participant 17 I thought that that was good, and out of that I ended up in occupational therapy as well as [ACE-Neuro], both of which were excellent programs and helped me. Participant 51 It opened up my eyes to some of the [resources and programs] that were available to me that I didn’t even know about. Participant 59 It was probably the best day I’ve had in a really long time. Having [the triage clinic], be truly kindness, and an opening to just whatever I needed. You guys were there, period. You were there, and you never talked to each other like I wasn’t part of it. So, everything that was brought up was brought up for all of us to be part of which I thought was kindness, and just an openness that made it UN scary, which was lovely […] For me that was one of the best [appointments] that I’ve been- Not one of, that was the best I’ve been to of an appointment. Yeah, that was I above and beyond…that was perfect for me. Participant 52 Members of the clinical team felt that referring to the triage clinic was beneficial for participants for supporting safety in advance of exercise participation (for those triaged to exercise), as well as for patient experience by needing only one referral per patient. Further, members of the clinical team felt that referring to just one source also simplified their referral process and workflow in the clinic. I think that simplifies things for us a lot right? So one, it is a one-point of referral. And then you guys do the bulk work, really? And sometimes we refer, and I’ve heard that we refer to physiatry, but then the team feels that the patient should be really seen by [occupational therapy]. […] Sometimes we are not sure who to refer the patient to, and what would be the best fit, so I think that was quite nice to be just able to, you know, refer to rehab, and then see what’s the best for the patient. Clinical Team Member 07 (Oncologist) You need to do the triage, I think, That’s what [makes] it safe […] you need that triage to know what the patient is appropriate for. Clinical Team Member 01 (Oncologist) Finally, members of the clinical team spoke about the possibilities of a triage clinic that extends beyond the neuro-oncology patient population. I would like to see it grow beyond brain tumours, I know [the research team] is looking at head and neck as well but is there a role and vision for a triage clinic to assess rehab readiness for everyone with a cancer diagnosis? 
There could be many more layers to this clinic. Clinical Team
4. Discussion The concept of cancer rehabilitation and exercise was first introduced over 40 years ago, with barriers at that time including difficulty identifying patients in need and limited awareness among oncologists of the role of rehabilitation and activity. Unfortunately, these same barriers exist today. With improving survival rates among cancer patients, the role of functional rehabilitation and exercise is more important than ever. Cancer survivors report long-term concerns with function, quality of life, and inactivity following their diagnosis. To date, consistent screening for inactivity and impairment, as well as triage and referral pathways (i.e., through the EMR) to appropriate rehabilitation and exercise resources (i.e., physiatry, physiotherapy, occupational therapy, and exercise), do not exist in most cancer care systems. Over the last several years, multiple researchers and clinicians have identified the critical need for improved impairment-driven cancer rehabilitation. Screening for distress programs, including the revised Edmonton Symptom Assessment System (ESAS) and the Canadian Problem Checklist, have been implemented in most Canadian Cancer Centers. The purpose of these pre-existing tools is to help healthcare providers identify, assess, and manage distressing symptoms and concerns experienced by patients, and to enhance the person-centeredness of care delivered by providing appropriate and tailored referrals. The purpose is also to have automated thresholds that trigger referrals to appropriate resources, avoiding missed opportunities for patient care. These tools screen for symptoms like nausea, fatigue, and shortness of breath, but do not include critical screening questions related to activities of daily living, physical function, or activity levels. The Screening for Distress initiative was based on research showing the profound benefit of routine screening for distress among patients and the value of referring to appropriate resources within the cancer care setting as needed. Recent studies indicate that more cancer survivors report decreased health-related quality of life related to physical impairment versus psychological impairment, underscoring the need for improved research and implementation of screening, triage, and referral for physical impairment in addition to psychological impairment. Early research to develop rehabilitation care pathways is underway in the United States, with more work necessary to develop and test screening and clinical referral pathways that will better serve cancer patients worldwide. Neuro-oncology patients have unique needs, with impairments often affecting function, including cognition, mobility, and coordination. The purpose of this study was thus to assess the feasibility of a triage clinic to define common impairments or deficits among neuro-oncology patients and to assess the feasibility of triage decision making and referral to both rehabilitation and exercise resources. Overall, we found that the triage clinic was feasible from an enrollment and attendance perspective, based on achieving pre-determined cut-offs and on participant qualitative reports on the enrollment pathway. Contributing to overall feasibility, we importantly found that the triage clinic was safe, with no adverse events during the triage clinic appointment. Participants commented that the assessments were organized and thorough.
Finally, the triage clinic was found to be feasible based on the appropriate triage of participants to rehabilitation and exercise services using the pre-determined triage tools. The enrollment rate of 61% exceeded our a priori feasibility rate of 50%, and the triage clinic attendance rate of 94.7% also exceeded our a priori feasibility, set at 60%. On average, individuals were seen in the triage clinic 22.2 business days after their referral, which from the qualitative data, was deemed acceptable by both participants and clinicians. Further, participants spoke about the value of the triage clinic as a part of their neuro-oncology care, commenting on the in-depth assessment that informed their access to appropriate rehabilitation resources in a timely manner. Participants felt the clinic offered a tailored approach to their rehabilitation care. Clinical team members commented on how the triage clinic simplified their referral processes, feeling that they could refer to one place and their patients would be further assessed to determine specific rehabilitation needs. One clinical team member commented on how they would like the triage clinic to grow beyond brain tumours and into other tumour groups. Overall, these quantitative and qualitative results support the feasibility of enrollment and triage clinic attendance for the neuro-oncology population, as well as the acceptability of the triage clinic appointment. The pre-determined tools used for the triage decision included a health history screening interview, a neurological examination, the SPPB, ECOG, and KPS. Importantly, 93% of participants assessed in the clinic had a neurological deficit (i.e., 51 out of 55 participants). The most common deficits were with cognition, cranial nerves, reflexes, and coordination. These triage clinic results clearly show the prevalence of neurological deficits often contributing to patient functional impairment, and point to the need for triage to resources that are appropriate and tailored to each patient’s needs. Appropriate triage can support streamlined access to rehabilitation and exercise resources in a timely fashion, without participants having to be re-referred to separate providers across multiple visits. Functionally, participants, on average, scored 8.9 ± 3.1 on the SPPB out of 12. The previous literature on frailty suggests a score of lower than 10 indicates one or more mobility limitations and is predictive for all-cause mortality . Therefore, the pre-determined cut-off to be eligible for the ACE-Neuro exercise study was initially 10/12; however, this was changed to 5/12 after the first five participants were assessed. It was clear that due to balance issues, gait speed, and decreased leg endurance, the majority of scores were less than 10/12. Despite this scoring and one or more mobility limitations, participants were still able to perform basic chair exercises, making them eligible for the ACE-Neuro exercise study. For this reason, the criteria were changed to ensure participants who were frail or had more than one mobility limitation were not excluded from the ACE-Neuro exercise study. Those scoring below 5/12 often required mobility aids and therefore did not meet the eligibility criteria for the ACE-Neuro exercise study. Of those who did not meet eligibility criteria on the SPPB for the ACE-Neuro exercise study ( n = 5), the barriers were mainly not being able to complete one or more of the three tests (i.e., balance, gait speed, and chair to stand). 
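To make the SPPB-based eligibility rule described above concrete, the following minimal sketch encodes only the revised 5/12 cut-off and the mobility-aid consideration. The function name and structure are hypothetical, and the actual triage decision also drew on the health history screening interview, the neurological examination, and clinical judgment.

```python
def triage_by_sppb(sppb_score: int, uses_mobility_aid: bool) -> str:
    """Illustrative sketch of the SPPB component of the triage decision.

    Assumption: scores of 5/12 or higher without a mobility aid are treated as
    eligible for the exercise intervention; everything else is referred to
    rehabilitation services. The real decision also used the health history
    and neurological examination.
    """
    if not 0 <= sppb_score <= 12:
        raise ValueError("SPPB scores range from 0 to 12")
    if sppb_score >= 5 and not uses_mobility_aid:
        return "eligible for the exercise intervention"
    return "refer to rehabilitation services"


# Example: a participant scoring 8/12 who does not use a mobility aid
print(triage_by_sppb(8, uses_mobility_aid=False))
```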
From a clinical feasibility perspective, the SPPB was an easy assessment to administer and was tolerated well by participants. The KPS and ECOG scores were determined by the physiatry resident and clinical exercise physiologist based on health history, neurological examination, and the SPPB. The majority of patients scored 1 on the ECOG (57.1%), i.e., “restricted in physically strenuous activity but ambulatory and able to carry out work for a light or sedentary nature” . On the KPS, the majority of scores were distributed between 60 and 90/100, with the largest group scoring 90 (30.4%, i.e., “able to carry on normal activity with minor signs or symptoms of disease”) and the next largest group scoring 70 (23.3%, i.e., “cares for self but unable to carry on normal activity or do active work”) . Moving forward, selecting one of these functional status scores would be reasonable as they provide similar data. The KPS, which has more data intervals compared to the ECOG, allows for a more specific categorization of function, which may help to facilitate referral decisions more easily. Using the triage clinic criteria, a total of 49 participants were referred to the ACE-Neuro exercise study, and of these, 22 participants required additional rehabilitation services referrals to address specific impairments (See ). Overall, participants found attending the triage clinic feasible and beneficial. Interestingly, the referral rate into the study was 31%, which was less than the a priori feasibility level of 50%. One reason patients were not referred was due to “clinical judgement” by the oncologist or nurse in the neuro-oncology clinic. Potential barriers to referral amongst the 69% not referred may have included the perception that rehabilitation and exercise were not necessary or not medically appropriate for the majority of patients. However, previous research in other tumour groups has shown that physical impairment impacts over 90% of patients , and our results show motor or sensory impairment amongst 92.7% of participants assessed. Cheville and colleagues found that while 91% of patients reported needing rehabilitation services post-diagnosis, only 30% reported receiving this care . Other reports suggest physical rehabilitation needs rank highest in unmet needs, over financial, emotional, communication, body image, and multiple other categories of needs, and that physical impairment is a key contributor to psychosocial distress . In addition, a lack of screening and identification is a significant cause of high physical impairment rates among patients . To address this in the future, improved patient screening and ease of referral to rehabilitation resources (i.e., through an EMR), as well as education for healthcare providers, may be a means to increase referral rates within standard clinical care. Overall, this study highlights the lack of standardized identifications of patients with functional impairment or who are currently sedentary. Once patients are identified, however, our triage clinic results indicate that effective and efficient assessment, triage, and referral of these patients to appropriate rehabilitation resources is feasible and well accepted both by patients and clinical team members. To improve the identification of functional impairment among patients, we thus propose a tool for screening called the Cancer Rehabilitation and Exercise Screening Tool (CREST, see ). 
This simple assessment takes less than 5 min to complete and can assist with identifying the most common functional impairments seen in individuals living with and beyond cancer. CREST was developed by cancer physiatrists, cancer and exercise researchers, physicians, and exercise physiologists, and can be implemented within the Cancer Exercise and Rehabilitation Pathways Model (see ), adapted from our prior work with colleagues . The proposed CREST tool screens for physical inactivity and allows participants to report pre-identified functional concerns and difficulties with activities of daily living using a 1–10 Likert scale. Similar to the ESAS, which has now been widely implemented at most cancer appointments , CREST may improve the efficient and effective identification of those with functional impairment. To the best of our knowledge, no other functional screening tool designed for implementation in a clinical setting has been successfully integrated into cancer care. This is despite reports that a screening tool would help to better identify patients with impairment, potentially improving patient care and recovery . Research tools like the Functional Assessment of Cancer Therapy scale (FACT) and the SF-36 exist, but are not designed for screening purposes (i.e., the FACT and SF-36) and/or are not specific to cancer (i.e., SF-36) . The Functional Independence Measure (FIM) is a well-validated measure for disability, but it is not designed as a screening tool and is not validated in the cancer population . Recently the Patient-Report Outcomes Measurement Information System (PROMIS) Cancer Function Brief 3D profile has been proposed as a composite of three short forms that evaluate gross and upper extremity function, fatigue, social participation, cognition, and fine motor skills, but it is not designed to identify specific impairments that can aid in triage and referral to specific rehabilitation specialists . In addition, it was originally designed as a research tool, although more recent reports have investigated its role as a clinical tool . The CREST, specifically designed as an in-clinic screening tool, may be used at each oncology appointment to identify new or existing functional impairments among patients. The tool can be completed in the waiting room by patients and reviewed with the clinical team members or healthcare providers, who can then facilitate appropriate referrals to either a triage clinic for further assessment, or directly to specific resources (i.e., physiatry, physiotherapy, or occupational therapy) for those with functional impairments. For those without any current impairment but who are inactive, a referral to exercise resources can be made. For individuals meeting activity guidelines without any impairment, they may only need to receive electronic or printed resources to support the maintenance of their active lifestyles. The hope is that with improved screening, we can close the gap between those with functional rehabilitation or inactivity concerns and those referred to rehabilitation and exercise resources. Future studies are necessary to validate and assess the benefit and implementation of the CREST. As Smith and colleagues stated, “it is challenging, if not impossible, to imagine a high-quality oncology care system that does not include rehabilitation services” . Evidence supports the role of cancer rehabilitation, which includes screening for functional impairment and inactivity, as a way to improve function and quality of life among patients . 
Therefore, work is needed to both improve the identification of patients with functional impairment, and the triage and referral of these patients to appropriate services. The triage clinic results indicate that the recruitment of patients is difficult, likely due to a lack of consistent screening and identification of those in need . Our hope is that CREST will be implemented within the Cancer Rehabilitation and Exercise Pathways Model as a screening resource, and the triage clinic will provide assessment for complex patients, allowing for referral to the right rehabilitation and exercise resources at the right time. With improved screening, triage and referral into rehabilitation resources, those living with and beyond cancer have the potential to more easily access the support they need, improving their recovery and quality of life into survivorship.
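As an illustration of the screening-to-referral pathway proposed above for CREST, the sketch below encodes the three outcomes described in the text (functional impairment, inactivity without impairment, and active without impairment). The function, the Likert-scale threshold, and the input format are hypothetical, since the text does not specify CREST's scoring rules.

```python
def crest_referral(functional_scores, meets_activity_guidelines, impairment_threshold=4):
    """Hypothetical sketch of the CREST-style referral pathway described above.

    functional_scores: dict of self-reported concerns on a 1-10 Likert scale.
    impairment_threshold: illustrative cut-off (the text does not specify one).
    """
    has_impairment = any(score >= impairment_threshold for score in functional_scores.values())
    if has_impairment:
        return "refer to triage clinic or rehabilitation specialist (physiatry, PT, OT)"
    if not meets_activity_guidelines:
        return "refer to exercise resources"
    return "provide electronic or printed resources to support an active lifestyle"


# Example: only mild concerns, but not meeting activity guidelines
print(crest_referral({"walking": 2, "self-care": 1}, meets_activity_guidelines=False))
```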
Enzyme-linked immunosorbent assay and immunohistochemical analysis of mast cell related biochemicals in oral submucous fibrosis | d0f48581-e238-43bb-b089-05c748600eec | 11140300 | Anatomy[mh] | Oral submucous fibrosis (OSMF) is a potentially malignant disorder affecting the oral cavity and oropharynx causing extensive fibrous tissue deposition in submucosa. The presence of the mast cells in histological section of OSMF has been found to be related to various stages of OSMF. Although, studies have suggested that enzymes released by degranulation of mast cells in OSMF have some role in its initiation – although the exact mechanism of action is not known. Therefore, it would be imperative to identify the presence of various biochemicals of mast cells in affected oral mucosa and perhaps the serum of the affected individuals. Hence, it can be stated that if there is histamine release during the initiation and progression of OSMF, possible increased levels of serum histamine levels can be observed in those patients. Additionally, in response to increased histamine levels, its metabolism will be initiated which is mediated through enzyme diamine oxidase (DAO). Therefore, variation in serum DAO levels may also be expected in those individuals. Mast cells (MCs) are multifaceted immune cells subclassified based on their protease content as mast cell with tryptase only (MCT) and mast cell with tryptase and chymase (MCTC) and may have varied role in various pathogeneses including cancer. Mast cells exhibit increased accumulation within tumor microenvironments and attribute to prognosis, metastasis and reduced survival in several types of human cancer. mast cells can influence the tumor microenvironment and induce pro-tumor effects by conditioning the fate of tumor cells, initiation of angiogenesis and tissue remodeling though its bioactive molecules. including drug resistance. – Also, it has been observed that mast cells and fibroblasts show physical interactions and induce fibroblast contraction of collagen lattices. Further, OSMF shows high potential for malignant transformation and is correlated to a series of biochemical alterations seen during the progress of the disease. The incidence of mast cells in the affected mucosa may be implicated in the malignant transformation. Studies show that mast cell serine proteases like tryptase and chymase significantly impact angiogenesis thus affecting tumor development and progression. There is evidence that supports an association between mast cell chymase and tumor angiogenesis. While tryptase, a neutral serine protease, is the abundant mediator stored in the mast cell granules that mediate degranulation of mast cells in allergic diseases. Thus, among these, evaluation of serum chymase may provide insights into the function of mast cells in the initiation and progression of OSMF. The present study proposes that variations in the serum levels of diamine oxidase, histamine, and chymase may be noticed in OSMF. With this view, the present study will attempt to estimate the serum levels of chymase, histamine, and diamine oxidase in various stages of OSMF and compare them with the levels in healthy individuals and persons with areca habit without OSMF. Additionally, an immunohistochemical analysis will be performed for mast cell-related markers- chymase, diamine oxidase, and histamine to extrapolate their presence in serum. 
The present research study has been approved by the Institutional Ethical Committee and will be conducted in the Department of Oral Medicine and Radiology, Sharad Pawar Dental College and Hospital, Wardha, India. Aim: Assessment of variations in serum diamine oxidase, histamine, and chymase levels in the various stages of OSMF, areca chewers without OSMF, and healthy individuals by an enzyme-linked immunosorbent assay. 1. To estimate serum histamine and diamine oxidase levels in the various stages of OSMF and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 2. To estimate serum chymase levels in the various stages of OSMF and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 3. To compare serum histamine and diamine oxidase levels in various stages of OSMF and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 4. To compare serum chymase levels in the various stages of OSMF, and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 5. To correlate serum diamine oxidase levels with serum histamine levels in the various stages of OSMF and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 6. To correlate serum histamine, chymase, and diamine oxidase levels in various stages of OSMF and between overall OSMF patients, areca chewers without OSMF, and healthy individuals. 7. To analyze the immunohistochemical expression of mast cell chymase, histamine, and diamine oxidase in OSMF, patients with areca habit without OSMF, and healthy individuals. 8. The serum values will be correlated to the immunohistochemistry expressions of the chymase, diamine oxidase and histamine to identify the possible relationship. Experimental design The participants will be divided into three groups as OSMF group, individuals with areca habit without OSMF group, and Healthy individuals’ group. All individuals included will be above 18 years without any systemic or metabolic disorder. OSMF patients with a history of treatment will be excluded from the study. Written consent will be obtained after an explanation of the study procedure and protocol for the collection of serum samples and biopsy specimens. Selection of OSMF patients Individuals with OSMF will be selected following functional staging classification of proposed by More et al. The classification of functional stages is given as follows: Functional staging: ‐ M1: Interincisal mouth opening - up to or greater than 35 mm. ‐ M2: Interincisal mouth opening between 25 and 35 mm. ‐ M3: Interincisal mouth opening between 15 and 25 mm. ‐ M4: Interincisal mouth opening less than 15 mm. Selection of individuals with areca habit without OSMF Individuals with a history of areca consumption of more than one year duration, without any evidence of OSMF and other oral mucosal conditions like leukoplakia and lichen planus; and without history of any medical disorders Selection of Healthy individuals Individuals without OSMF and without habits and any systemic disorders/conditions. The details of selected participants will be recorded in the case history proforma for recording clinical findings and investigations. The informed consent will be obtained before enrolling the participant for proposed investigations. Exclusion criteria 1. The individuals with a history of the consumption of tobacco in any other form such as cigarette, bidi, and tobacco with lime. 2. The individuals with a history of any systemic disease. 
3. Patients with a history of receiving treatment for OSMF. 4. Patients with a history of antihistaminic medication use. The proposed study, a case-control study, will include OSMF patients, areca habituals without OSMF, and healthy individuals. The population of the Indian subcontinent is effectively infinite for sampling purposes, and the number of individuals affected by OSMF varies from state to state, so no definite data are available. Yet, the latest published literature shows that the prevalence of OSMF in India is 7.21%, so the sample size can be calculated using the formula for a known population proportion (p) with the help of SPSS software (IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). The sample size will be calculated at a 95 per cent confidence interval with a margin of error ( d ) of ±10%. Thus, the following formula can be used to calculate the sample size: $$n = \frac{z^{2}\,p(1-p)}{d^{2}}$$ In the formula, n = sample size, z = z score (1.96), p = population proportion (0.0721), and d = margin of error (0.1). Substituting these values gives $$n = \frac{1.96^{2} \times 0.0721 \times (1-0.0721)}{0.1^{2}} \approx 25.7$$ Thus, 26 individuals can be enrolled in each group for the study, giving a total sample size of 78 for a balanced study with each group consisting of 26 subjects. Five histological sections each for OSMF patients, areca chewers without OSMF, and healthy individuals will be subjected to immunohistochemical analysis. Thus, 15 histopathological slides will be subjected to immunohistochemical analysis for each marker, viz. histamine, diamine oxidase, and chymase. Materials Commercially available ELISA kits will be procured for the estimation of serum histamine, chymase, and diamine oxidase. The optical density will be measured by spectrophotometry at a wavelength of 450 nm ± 2 nm using an ELISA reader. The immunohistochemical analysis will be performed on formalin-fixed paraffin-embedded biopsy tissue sections of OSMF patients. Procedure Collection of serum samples: A blood sample will be drawn from the antecubital vein using a 5 ml syringe and taken into a vial containing a clot activator and serum gel separator. It will then be transferred into a centrifuge tube. The blood will be centrifuged for ten minutes, and serum will be obtained. Centrifugation uses centrifugal force to separate immiscible components of a mixture: heterogeneous mixtures sediment so that denser constituents move away from the axis of the centrifuge while lighter constituents move towards it. The effective gravitational force on the test tube is increased, which causes the precipitate to collect at the bottom of the tube; the remaining supernatant will be withdrawn with a pipette. Estimation of serum diamine oxidase levels: This ELISA kit will use the sandwich-ELISA principle. The ELISA plate provided is pre-coated with an antibody specific to Mouse DAO. Standards or samples will be added to the ELISA plate wells and combined with the specific antibody. Next, a biotinylated detection antibody specific for Mouse DAO and an Avidin-Horseradish Peroxidase (HRP) conjugate will be added sequentially to each microplate well and incubated. Free components will be washed away. The substrate solution will be added to each well. Only those wells that contain Mouse DAO, biotinylated detection antibody, and Avidin-HRP conjugate will appear blue in color. The enzyme-substrate reaction will be terminated by the addition of stop solution, and the color turns yellow.
The optical density (OD) will be measured using spectrophotometry at a wavelength of 450 nm ± 2 nm. The OD value will be proportional to the concentration of Mouse DAO. The concentration of Mouse DAO in the samples will be calculated by comparing the OD of the samples to the standard curve. Estimation of serum histamine levels: After histamine is quantitatively acylated, the subsequent competitive ELISA will use the microtiter plate format with the antigen bound to the solid phase. The acylated standards, controls, and samples and the solid-phase-bound analyte will compete for a fixed number of antiserum binding sites. After the system reaches equilibrium, free antigen and free antigen-antiserum complexes will be removed by washing. The antibody bound to the solid phase will be detected by an anti-rabbit IgG peroxidase conjugate using 3,3′,5,5′-tetramethylbenzidine (TMB) as a substrate. The reaction will be monitored at 450 nm. Quantification of unknown samples will be achieved by comparing their absorbance with a reference curve prepared with known standard concentrations. Estimation of serum chymase levels: This ELISA kit will use the sandwich ELISA principle. The ELISA plate provided in this kit is precoated with an antibody specific to Human CMA1. Standards or samples will be added to the ELISA plate wells and combined with the specific antibody. Then, a biotinylated detection antibody specific for Human CMA1 and an Avidin-Horseradish Peroxidase (HRP) conjugate will be added successively to each microplate well and incubated. Free components will be washed away. The substrate solution will be added to each well. Only those wells that contain Human CMA1, biotinylated detection antibody, and Avidin-HRP conjugate will appear blue in color. The enzyme-substrate reaction will be terminated by the addition of stop solution, and the color turns yellow. The optical density (OD) will be measured spectrophotometrically at a wavelength of 450 nm ± 2 nm. The OD value will be proportional to the concentration of Human CMA1. The concentration of Human CMA1 in the samples will be calculated by comparing the OD of the samples to the standard curve. Immunohistochemistry: The tissue sections will be deparaffinized in xylene solution and rehydrated through a decreasing graded ethanol series. Endogenous peroxidase activity will be inhibited by incubation for 10 minutes with 3% hydrogen peroxide. The primary monoclonal antibodies used for mast cells will be anti-MC chymase, anti-MC diamine oxidase, and anti-histamine antibodies. Staining will be performed at room temperature on an automatic staining workstation. Commercially available immunohistochemistry kits will be procured for these markers. Statistical analysis The statistical analysis will be carried out using SPSS version 27.0 (IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). The data obtained will be analysed using the following tests: 1. Student's t-test 2. Analysis of variance (ANOVA) 3. Pearson's correlation A p-value of 0.05 or lower with a 95 percent confidence interval will be considered statistically significant for the analysis.
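All three ELISA protocols above convert a sample's optical density into a concentration by comparison with a standard curve. The sketch below shows this step assuming a simple linear standard curve fitted to hypothetical standards; commercial kits typically recommend their own curve-fitting model (e.g., a four-parameter logistic fit), so the values and the fit used here are illustrative only.

```python
import numpy as np

# Hypothetical standard concentrations (ng/mL) and their measured OD at 450 nm
standard_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
standard_od = np.array([0.05, 0.12, 0.20, 0.38, 0.71, 1.35])

# Fit a straight line OD = slope * conc + intercept
# (commercial kits often recommend a four-parameter logistic fit instead)
slope, intercept = np.polyfit(standard_conc, standard_od, 1)


def od_to_concentration(od):
    """Back-calculate a sample concentration from its OD via the standard curve."""
    return (od - intercept) / slope


# Example: an unknown serum sample with OD 0.55
print(round(od_to_concentration(0.55), 2))
```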
As numerous mast cells are found to be present histologically in OSMF, a possible increase in serum levels of diamine oxidase, histamine, and chymase is expected. Further, the presence of these enzymes will be confirmed by immunohistochemical analysis of histological sections. Positive results with both methods will give an insight into the role of these enzymes in disease progression and their possible role in the malignant transformation of OSMF. If the outcomes reveal statistically nonsignificant variation in serum levels of these biomolecules, a role of mast cells in systemically influencing the initiation and progression of OSMF will be ruled out. Nonetheless, increased serum levels might be indicative of the active secretory action of mast cells and may provide an understanding of their role in the progression of OSMF in the oropharynx and esophagus. Tissue expression of histamine and chymase will establish a possible role of histamine receptors and chymase in the induction of fibrosis and inform future work on the malignant transformation of OSMF. The prevalence of OSMF in India is 7.21%, and the condition shows high rates of malignant transformation. The presence of mast cells in histological sections reveals their strong association with the disease, suggesting that some biochemicals and enzymes might play a role in its progression and malignant transformation. Thus, detection of mast cell-related chemicals in tissue and serum will delineate further mechanisms in pathogenesis. With special emphasis on diamine oxidase and its relation to histamine, the possible allergic response of mast cells to etiological factors like betel nut will be discerned. Further, possible detection of chymase in serum will define its role in local as well as systemic mechanisms of angiogenesis in OSMF. Infiltration of mast cells in OSMF and other premalignant disorders like lichen planus and leukoplakia has been studied, and mast cells are implicated in the initiation and progression of these entities. Studies on the expression of MCT and MCTC in OSMF using immunohistochemistry have observed a significant increase in the expression of MCTC in OSMF. Further, serum histamine and tryptase levels have been assessed in oral squamous cell carcinoma (OSCC) along with their correlation with various histological zones/stages. While histamine was positively correlated with the depth of invasion, serum tryptase had no correlation with the various grades of OSCC.
Moreover, as mast cells are associated with collagen metabolic disorders, it is suggested that they play a similar role in the pathogenesis of OSMF, which has itself been labelled a collagen metabolic disorder. Nonetheless, the literature gap shows the need to understand the role of chymase and histamine in the pathogenesis of OSMF and its malignant transformation. Further, mast cell-related research on their degranulation and release of bioactive molecules may provide an insight into the pathophysiology of OSMF as a collagen metabolic disorder. Ethical considerations The present study is approved by the Institutional Ethics Committee of Datta Meghe Institute of Higher Education and Research, Sawangi Wardha, India with the ethical approval reference no. DMIMS (DU)/Ph.D. Regn. /2021/1226. Study status The present study is in the process of data collection wherein serum samples are being collected from the OSMF group, areca habitual group, and control group as per the design.
Changes in Soil Microbial Communities Induced by Biodegradable and Polyethylene Mulch Residues Under Three Different Temperatures | 3ab37c7d-1aad-49da-ada2-286030a97981 | 11291583 | Microbiology[mh] | Depending on the durability, flexibility, and cost-effectiveness of plastics and their wide range of uses, 391 million tonnes (Mt) of it was produced globally only in 2021 . The increase in plastic manufacture and use has two primary drawbacks. (i) Plastics are expected to account for 15% of the world’s carbon emissions by 2050; (ii) post-consumer plastics are a significant source of environmental plastic pollution and marine litter . Plastic most probably accessed the soil environments due to increase of petroleum-based consumer products like synthetic fibers . Nowadays, the ways by which plastics enter agricultural, horticultural, orchard, grassland, and forest soils comprise different pathways: the spreading of sewage sludge, composted and fermented organic waste, the plowing of mulching film, and irrigation with contaminated water . In addition to these pathways, agriculture’s reliance on the plastic market is growing: the estimated rate of plastic use in agriculture is expected to rise by about 64% by 2030 to meet the growing population demand for food . Soil plastic pollution can alter its structure, adversely affect microbial communities, and transport chemical contaminants . In addition, microplastics can be uptake by plants and enter the food chain, raising concerns about food safety . Mulching has been identified as one of the pathways for plastic input in soil. It is a common practice for cultivation of horticultural crops that significantly boost their production and quality . These films are widely used as they help to maintain soil moisture, control weed growth, and to modulate effect on soil temperature, satisfying the need of more sustainable agriculture systems, as it is considered a practice for saving soil and water . Polyethylene (PE) or low-density polyethylene (LDPE) mulch plastic films are typically used in agriculture, for example, for soil solarization. Nevertheless, since they require over 100 years to decompose, they have a significant negative influence on the environment . Their disposal methods include burning, burying, and recycling . However, because of soil contamination, recycling can be challenging and costly . The use of biodegradable plastic mulches (BDMs) is an attractive eco-sustainable alternative approach to overcome the environmental pollution problems caused by the use of plastic films , because they can be degraded progressively in the soil without releasing toxic residues . After the growing season, PE or LDPE films must be removed from the soil surface. In contrast, BDMs can be tilled into the soil where they are biodegraded by micro-organisms. BDM fragments contribute physically and biogeochemically to the soil. This is characteristic to BDMs, so research on non-biodegradable polymers cannot provide insights into the effects of BDMs on soil ecology and function . BDMs, tilled into soil, provide a carbon input; several studies showed that soil microorganisms respond to these inputs even if tiny, especially in agricultural soil where microbial development is carbon limited. Previous research demonstrated that BDMs increased microbial biomass, and enzyme activity , and altered soil microbial community structure . 
The addition of residual plastic film in soil seemed to harm the bacterial community's richness and diversity on plastic surfaces, but it did not influence the surrounding soil community. Other studies found increased fungal abundances in soil as a result of BDM incorporation. Nevertheless, the microbial response to BDMs is influenced by environment, soil type, and/or management practices. Confirming this, Li et al. found enrichment of fungi in one location and of Gram-positive bacteria in another. On the other hand, the action of microorganisms on biodegradable polymers in soil is well known and can be simplified into three steps: (i) microbial colonization of the polymer surface, (ii) enzymatic depolymerization of polymers, and (iii) microbial assimilation and utilization of the monomers and oligomers released from polymers by enzymatic hydrolysis. Other enzymes typically related to plastic breakdown include those from the esterase family, both bacterial and fungal, such as cutinases and lipases. Temperature is a key factor in this process. Effective biodegradation of plastics in soil by microorganisms occurs within the mesophilic temperature range (10 to 45 °C), which promotes optimal microbial activity. Additionally, the EN 17033 standard for biodegradable mulch films suggests a temperature around 25 °C to effectively represent conditions favorable for mesophilic soil microorganisms. In the context of climate change, rising temperatures, projected to increase by up to 2 °C by the end of the century, along with changes in rainfall patterns leading to more frequent and severe droughts, can indirectly impact microbial communities by increasing evapotranspiration rates and reducing soil moisture, thereby affecting their functions and, consequently, plastic biodegradation. In this scenario, microbial populations emerge as pivotal players in the degradation of plastic fragments within the soil environment. Critical gaps remain in our understanding of the effects of BDMs on soil ecosystems, partly due to the absence of studies that directly compare PE with BDMs to determine whether they affect soils differently. The present study aimed to assess the degradation potential and microbial dynamics associated with two biodegradable Mater-Bi mulches and traditional low-density polyethylene mulch residues when mixed into fallow agricultural soil under three temperature conditions. Plastic degradation was evaluated by separating and weighing large plastic fragments at the end of the experiment. Furthermore, high-throughput sequencing was used to analyze changes in bacterial and fungal communities, and their predicted functions, throughout the trials, shedding light on the intricate interactions between plastic mulches and soil microbiota.
Soil Samples and Plastic Mulch Residual Preparation Fallow agricultural soil samples (0–5-cm depth; pH-H2O 7.21; electrical conductivity 263.5 μS cm−1; CaCO3 4%; organic matter 9.30%; organic carbon 5.39%; total nitrogen 0.47%; C/N ratio 11.42%; available phosphorus 475.16 ppm; exchangeable potassium 716.87 ppm; Supplementary Material) were collected during the spring season (March 2022) in Castellammare di Stabia (Naples; 40°41′15″N; 14°29′35″E), an area under a Mediterranean climate (average annual precipitation of approx. 52.20 mm, temperature from 7 to 32 °C). A total of 4.5 kg of soil underwent drying at room temperature and sieving through a 2-mm mesh to segregate sand, silt, and clay from larger particles, according to Al Hosni et al. Assembly of Soil-Plastic Ecosystem The study employed three mulch sheets, including two BDMs, a white compostable Mater-Bi (MB, 15 µm) mulch and a black compostable Mater-Bi (TMB, 20 µm) mulch, and a traditional black low-density polyethylene mulch (LDPE, 30 µm). The two biodegradable Mater-Bi® films by the Novamont Company are starch-based, treated with biodegradable polyesters, and certified compostable (OK Biodegradable Soil by TÜV), according to European standards (UNI EN 13432:2002, UNI EN 14995). In contrast, the traditional mulching plastic film is made from low-density polyethylene resin pellets, offering easy processability, chemical resistance, durability, and flexibility. All sheets were cut into small fragments less than or equal to 2 cm² according to the procedure described by Al Hosni et al. A mixture of plastic and soil containing 1% w/w plastic per 100 g of soil was placed in transparent magenta boxes (76 × 76 × 102 mm, Magenta GA-7-3 Plant Culture Box). Magenta boxes containing soil and plastic were incubated at three different temperatures: room temperature (RT; 20–25 °C), 30 °C, and 45 °C, selected on the basis of the guidelines of standard test methods for biodegradable mulch films. The experiment was conducted with five replicates per treatment. Soil samples without plastic were used as controls. The magenta boxes were placed in humid chambers (RH 70%) for 6 months. They were weighed every 2 days to assess water evaporation, and the water loss was compensated by spraying tap water. Sampling of the soil-plastic mixture was performed at the beginning (t0), after 3 (t3), and after 6 (t6) months of incubation. The collected samples were stored at −20 °C until molecular analysis. Quantification of Degraded Plastics At the end of the 6-month trial, plastic residuals were quantified for each soil-plastic mixture. After the final sampling, each mixture was dried at 50 °C for 72 h and sieved through a 2-mm mesh sieve to collect plastic fragments larger than 2 mm. This fraction was first rinsed with sterile distilled water to remove adhering soil particles; after drying, it was weighed to calculate the residual percentage of plastic, using the following equation:
$$D = \frac{M_{i} - M_{f}}{M_{i}} \times 100$$ where D is the percentage of plastic degraded at the end of the test, Mi is the starting dry mass of plastic fragments, and Mf is the dry mass of recovered plastic fragments > 2 mm. Plastic particles smaller than 2 mm represent the proportion of degraded plastic. DNA Extraction, Amplicon Sequencing, and Data Processing For molecular analysis, total genomic DNA was extracted from the soil-plastic ecosystem using a FastDNA SPIN Kit for Soil (MP Biomedicals, Illkirch Cedex, France) according to the manufacturer's instructions. High-Throughput Sequencing Synthetic oligonucleotide primers S-D-Bact-0341F50 (5′-CCTACGGGNGGCWGCAG-3′) and S-D-Bact-0785R50 (5′-GACTACHVGGGTATCTAATCC-3′), and the primers EMP.ITS1 (5′-CTTGGTCATTTAGAGGAAGTAA-3′) and EMP.ITS2 (5′-GCTGCGTTCTTCATCGATGC-3′), were used to evaluate bacterial and fungal diversity, respectively, by amplicon-based metagenomic sequencing. PCR conditions for the V3–V4 region consisted of 25 cycles (95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s) plus one additional cycle at 72 °C for 10 min as a final chain elongation. PCR conditions for the ITS1–ITS2 region consisted of 35 cycles (94 °C for 30 s, 52 °C for 30 s, and 68 °C for 30 s) plus one additional cycle at 68 °C for 7 min as a final chain elongation. Agencourt AMPure beads (Beckman Coulter, Milan, IT) were used to purify PCR products, whereas quantification was performed with an AF2200 Plate Reader (Eppendorf, Milan, IT). Equimolar pools were obtained, and sequencing was carried out on an Illumina MiSeq platform, yielding 2 × 250 bp paired-end reads. The raw Illumina sequencing data are available in the Sequence Read Archive Database of the National Center of Biotechnology Information (PRJNA1127654). Bioinformatics and Data Analysis After sequencing, QIIME 2 software was used to analyze the fastq files. Sequence adapters and primers were trimmed using Cutadapt, whereas the DADA2 algorithm was used to trim low-quality reads, remove chimeric sequences, and remove joined sequences shorter than 250 bases. DADA2 produced amplicon sequence variants (ASVs), which were rarefied at the lowest number of sequences per sample and used for taxonomic assignment with the QIIME feature-classifier plugin against the Greengenes and UNITE databases for the bacterial and fungal microbiota, respectively. The taxa abundances were recalculated after the exclusion of chloroplast and mitochondria contaminants and singleton ASVs. Statistical Analysis Data on degraded plastic amounts were analyzed by univariate ANOVA, followed by Tukey's HSD post hoc test for comparison of means ( p < 0.05) using the IBM SPSS 19.0 statistical software package (SPSS Inc., Cary, NC, USA). The R statistical environment (R version 4.1.2) was used for sequencing data analysis and data visualization using RStudio. Microbial community data were organized and analyzed with the R packages phyloseq and vegan 2.5–6. The quality of sequencing was controlled with rarefaction analysis using the rarecurve function (vegan package). The Shannon–Weaver index ( H ), used to assess alpha diversity, was calculated as H = −Σ pi ln(pi), where pi is the proportional abundance of species i . The diversity was then calculated as D = exp( H ). Beta diversity was examined by permutational multivariate analysis of variance (PERMANOVA) using the adonis function from vegan. Principal coordinate analysis (PCoA) on Bray–Curtis dissimilarities was used to visualize the differences between samples.
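As a worked illustration of the two calculations defined in this section, the sketch below applies the degradation-percentage equation and the Shannon/exponential-diversity formulas to hypothetical masses and ASV counts; it is not part of the original analysis pipeline, which was run in SPSS and R.

```python
import numpy as np


def percent_degraded(initial_mass, recovered_mass):
    """D = ((Mi - Mf) / Mi) * 100, following the equation above."""
    return (initial_mass - recovered_mass) / initial_mass * 100


def shannon_and_effective_diversity(counts):
    """Shannon index H = -sum(pi * ln(pi)) and its exponential exp(H)."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]  # ignore zero-abundance taxa
    h = -np.sum(p * np.log(p))
    return h, np.exp(h)


# Hypothetical example: 1.00 g of film fragments added, 0.62 g recovered (> 2 mm)
print(percent_degraded(1.00, 0.62))  # 38.0 (% degraded)

# Hypothetical ASV counts for one sample
print(shannon_and_effective_diversity([120, 80, 40, 10]))
```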
Statistical Analysis
Data on the amounts of degraded plastic were analyzed by univariate ANOVA, followed by Tukey's HSD post hoc test for comparison of means (p < 0.05), using the IBM SPSS 19.0 statistical software package (SPSS Inc., Cary, NC, USA). The R statistical environment (R version 4.1.2) was used in RStudio for sequencing data analysis and visualization. Microbial community data were organized and analyzed with the R packages phyloseq and vegan 2.5-6. Sequencing quality was checked by rarefaction analysis using the rarecurve function (vegan package). The Shannon–Weaver index (H), used to assess alpha diversity, was calculated as H = −Σ pi ln pi, where pi is the proportional abundance of species i; diversity was also expressed as the effective number of species, D = exp(H). Beta diversity was examined by permutational multivariate analysis of variance (PERMANOVA) using the adonis function from vegan, and principal coordinate analysis (PCoA) on Bray–Curtis dissimilarities was used to visualize differences between samples. To visualize microbial community structure, unconstrained ordination by principal component analysis (PCA) on clr-transformed ASV tables was used, followed by distance-based redundancy analysis (db-RDA) constrained for the statistically significant factors identified by PERMANOVA and conditioned on block. Metabolic functions were predicted by Tax4Fun analysis through the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, focusing on differences in the predicted abundances of enzyme-encoding genes linked to plastic degradation; ANOVA (p ≤ 0.05) was used to assess differences in predicted abundance across the experimental factors. Heatmaps were generated in R with the package pheatmap 1.0.12. Venn diagrams were created to assess the unique and shared core microbiota, identifying the overlap between soils treated with the different plastic mulch residues (MB, TMB, LDPE) at RT and 30 °C (detection > 0.01% and prevalence = 70% for bacteria; detection > 0.01% and prevalence = 99% for fungi, based on prevalence plots). Finally, ASV sequences of the core were compared with the GenBank nucleotide library using BLAST on the National Center for Biotechnology Information website (ASVs belonging to the same species were collapsed).
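A minimal R sketch of the alpha- and beta-diversity workflow described above is given below; the phyloseq object ps and the metadata columns (plastic_type, temperature, time) are assumed names for illustration, and adonis2 is used as the current vegan interface to the adonis test.

library(vegan)
library(phyloseq)

# Alpha diversity: Shannon index H and its exponential (effective number of species)
counts <- as(otu_table(ps), "matrix")
if (taxa_are_rows(ps)) counts <- t(counts)   # vegan expects samples in rows
H <- diversity(counts, index = "shannon")
D_eff <- exp(H)

# Beta diversity: PERMANOVA on Bray-Curtis dissimilarities
bray <- vegdist(counts, method = "bray")
meta <- data.frame(sample_data(ps))
adonis2(bray ~ plastic_type * temperature * time, data = meta, permutations = 999)

# PCoA ordination of the Bray-Curtis distances
pcoa <- ordinate(ps, method = "PCoA", distance = "bray")
plot_ordination(ps, pcoa, color = "plastic_type", shape = "temperature")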
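The core-microbiota and Venn-overlap step can likewise be sketched in R with the microbiome package; ps_MB, ps_TMB, and ps_LDPE stand for hypothetical phyloseq objects subset to each mulch treatment at RT and 30 °C, and the conversion to relative abundances is an assumption of this sketch.

library(microbiome)

# Work on relative abundances so the detection threshold is a proportion
ps_MB <- microbiome::transform(ps_MB, "compositional")   # repeat for ps_TMB, ps_LDPE

# Core members per mulch type, using the thresholds reported above
# (detection > 0.01% relative abundance; 70% prevalence for bacteria)
core_MB   <- core_members(ps_MB,   detection = 0.0001, prevalence = 0.70)
core_TMB  <- core_members(ps_TMB,  detection = 0.0001, prevalence = 0.70)
core_LDPE <- core_members(ps_LDPE, detection = 0.0001, prevalence = 0.70)

# Counts behind a three-set Venn diagram
shared_all <- Reduce(intersect, list(core_MB, core_TMB, core_LDPE))
unique_MB  <- setdiff(core_MB, union(core_TMB, core_LDPE))
length(shared_all); length(unique_MB)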
Effect of Temperature on Plastic Degradation
After 6 months of soil incubation, the plastic mulch residues larger than 2 mm were weighed and compared with the initial weight (Fig. , Supplementary Table 1). The incubation temperature influenced the degree of degradation of the plastic mulching films. At RT, the biodegradable mulch residues MB and TMB showed a high percentage of degradation (69.15% and 51.36%, respectively). Raising the temperature to 30 °C increased the degradation of the MB mulch film residues (88.90%) but decreased TMB degradation to 38.86%, while at 45 °C no biodegradation activity was observed. After 6 months, no degradation of LDPE was observed at any of the temperatures assayed (Fig. , Supplementary Table 1).
Dynamic of Microbial Communities in Soil with Mulching Films at Different Temperatures
Dynamic of Bacterial Communities
Amplicon sequencing yielded a total of 38,723 bacterial ASVs from 61 samples. Cyanobacteria, Acidobacteria, Actinobacteria, Bacteroidetes, Chloroflexi, Firmicutes, and Proteobacteria were the dominant groups at the beginning of the process (Fig. A). Cyanobacteria decreased significantly over time and with the different mulches (p < 0.01 and p < 0.05, respectively; Supplementary Table 4). After 3 months at RT, Cyanobacteria decreased in MB-soil (from ~68.84 to 0.01%) and in LDPE-treated soil (from 64.84 to 23.85%). At 30 °C, their abundance decreased to 22.23% in LDPE-treated soil. Acidobacteria were significantly affected by temperature and mulch type (p < 0.001 and p < 0.05, respectively; Supplementary Table 4). In MB-soil at RT, Acidobacteria increased from 2.42 to 10.28% after 3 months, while they decreased to 0.77% in LDPE-treated soil. At 30 °C, their abundances were 5.29% in MB and 7.57% in TMB soils after 3 months of incubation. Actinobacteria abundances were significantly influenced by all variables (Supplementary Table 4). They increased in MB-soil at RT from 14.20 to 32.81% and in LDPE-treated soil up to 29.43%. At 30 °C, Actinobacteria levels were around 30.00% in MB, TMB, and LDPE soils. Chloroflexi were one of the dominant groups throughout the experiment, but their abundance was not significantly influenced by any of the experimental variables (Supplementary Table 4). Firmicutes were significantly affected by time and temperature (p < 0.01; Supplementary Table 4). Their abundance in MB-soil at RT increased from ~1 to 18.39%, and slightly in LDPE-treated soil. At 30 °C, Firmicutes levels were around 2–3% in MB, TMB, and LDPE soils. Proteobacteria, influenced by both time and plastic type (p < 0.01 and p < 0.05, respectively; Supplementary Table 4), increased in MB-soil at RT from 8.47 to ~29.29% and in LDPE-treated soil to 23.93%. At 30 °C, their levels were around 30–37% in MB, TMB, and LDPE soils. Planctomycetes, affected by time and temperature (Supplementary Table 4), decreased in MB-soil at RT from 7.15 to 0.08%, while remaining quite stable in LDPE-treated soil. At 30 °C, their levels were 6–8% in MB, TMB, and LDPE soils. Gemmatimonadetes were significantly affected by temperature (p < 0.001; Supplementary Table 4). At 30 °C, they were about 3.5% in MB and TMB soils and increased in LDPE-treated soil from 1.14 to 3.19%. At 6 months, LDPE-treated soil at RT had a stable Gemmatimonadetes population of around 3%. At 45 °C, the bacterial composition of all analyzed soil samples remained quite stable for up to 6 months.
Slight changes were observed after 3 months of incubation, due to a decrease of Cyanobacteria and an increase of Actinobacteria, Proteobacteria, Chloroflexi, Gemmatimonadetes, Firmicutes, and Planctomycetes. After 6 months, the bacterial composition was similar to that of the control after 3 months of incubation (Fig. A).
Dynamic of Fungal Communities
Amplicon sequencing yielded 11,537 fungal ITS ASVs. At RT and 30 °C, the phylum Ascomycota was dominant, with relative abundances ranging from 62.96 to 82.41%, followed by the phylum Basidiomycota (from ~3.21 to 22.26%). In addition, the phylum Mortierellomycota (from 1.03 to 9.64%) and unclassified fungal populations (from 4 to 27%) were present in all samples (Fig. B). These four phyla were also significantly influenced by the addition of plastic to the soil (p < 0.05; Supplementary Table 4). At 45 °C, there was a shift in the fungal community, with an increase of unclassified fungi (from 13.99 to 77.86%) accompanied by a decrease in the phyla Basidiomycota (ranging from 0.83 to 5.31%) and Ascomycota (ranging from 18.20 to 84.42%).
Microbial Diversity
Analysis of variance of Shannon's index for mulch plastic type, sampling time, and temperature showed that bacterial diversity was affected by mulch type and temperature, whereas fungal diversity was affected only by temperature (Supplementary Tables 2A and 2B). After 6 months, significant differences were found between soil incubated at RT and at 30 °C, and also between 30 and 45 °C (Fig. A). The Shannon index of the fungal community showed significant differences between soils incubated at RT and at 30 °C after 3 months (Fig. B), whereas after 6 months both soils incubated at RT and 30 °C differed from those at 45 °C (p < 0.05; Fig. B). Furthermore, a higher Shannon index was found in MB- and TMB-treated soil compared with soil amended with LDPE mulch residues as well as with the control (p < 0.05; Supplementary Fig. 1). PERMANOVA analysis of beta diversity performed on all the experimental factors showed that each variable affected the bacterial and fungal community structures (Fig. A and C; Supplementary Table 3). Based on this statistical analysis, a constrained ordination (db-RDA) was performed for both plastic type and sampling time (p < 0.05) for the bacterial and fungal communities (Fig. B and D). No distinct separation pattern could be observed for the bacterial community (Fig. A and B). Fungal communities showed temperature-specific clustering (Fig. C), and the constrained ordination revealed a distinct trend in soil amended with MB mulch residues (Fig. D).
Functional Prediction Analysis
Functional profiles were predicted from the 16S rRNA gene sequencing data to compare soils treated with the different mulch plastic residues. While over 6000 functional genes were predicted, the focus was on enzyme-encoding genes such as hydrolases, lipases, cutinase, cellobiosidase, and catalases. ANOVA analyses pinpointed 57 predicted genes significantly affected by the experimental factors (p < 0.05; Supplementary Table 5). After identifying the encoding genes whose predicted abundances were significantly affected by the experimental variables, the pool of encoding genes was further narrowed down: only genes associated with the degradation of complex substrates, including cellulose, starch, and other organic compounds, were investigated (Fig. ), along with catalase as a biological indicator of soil health and productivity (Supplementary Fig. 2). The identified enzymes linked with plastic degradation included cellulase, lipases, esterases, and hydrolases.
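As a sketch of how the clustered heatmap of predicted enzyme-encoding genes can be produced (see Statistical Analysis), the following R fragment assumes a matrix kegg_abund of Tax4Fun-predicted abundances (rows = selected KEGG orthologs, columns = samples) and a metadata data frame meta; both object names, and the clustering settings, are placeholders rather than the exact parameters used by the authors.

library(pheatmap)

# Z-score each predicted gene across samples before clustering
kegg_scaled <- t(scale(t(kegg_abund)))

pheatmap(kegg_scaled,
         clustering_method = "ward.D2",
         clustering_distance_cols = "euclidean",
         annotation_col = meta[, c("plastic_type", "temperature", "time")],
         show_colnames = FALSE)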
Two main clusters resulted from the functional profiles of the bacterial communities, mostly associated with the sampling time (Fig. ). The first cluster (cluster I) includes soil samples with MB and TMB mulch plastic residues collected after 3 months of incubation at 30 °C and RT. This cluster also contains soil samples enriched in TMB and LDPE residues after 3 months at 45 °C and 30 °C, along with one control sample (C) after 6 months at 30 °C (Fig. ). The most abundant enzyme-encoding genes in these samples were 1,4-beta-cellobiosidase (K01225), acylglycerol lipase (K01054), esterase/lipase (K01066), maltooligosyltrehalose trehalohydrolase (K01236), cutinase (K08095), 3D-3,5/4-trihydroxycyclohexane-1,2-dione hydrolase (K03336), putative hydrolase (K04477), 2-hydroxy-6-ketonona-2,4-dienedioic acid hydrolase (K05714), and carbon–nitrogen hydrolase family protein (K08590). The second cluster (cluster II) includes MB samples incubated at 45 °C, soil supplemented with LDPE residues incubated at RT and 45 °C, and samples incubated at 45 °C after 6 months. The most abundant enzyme-encoding genes in cluster II were phospholipase D (K01115), lysophospholipase (K01048), phospholipase/carboxylesterase (K06999), 6-aminohexanoate cyclic dimer hydrolase (K01471), ADP-ribosylglycohydrolase (K05521), peptidyl-tRNA hydrolase (pth2 family) (K04794), and phosphonoacetate hydrolase (K06193). Lastly, ureidoacrylate peracid hydrolase increased after 6 months in two soil samples supplemented with MB and TMB and incubated at RT. The predicted catalase gene EC 1.11.1.6 was affected by the addition of plastic to the soil (p < 0.05), whereas catalase-peroxidase EC 1.11.1.21 was affected by time (p < 0.01; Supplementary Table 4). The predicted abundance of the catalase gene increased after 3 months in the plastic-enriched samples at both RT and 30 °C (cluster II; Supplementary Fig. 2). Notably, catalase-peroxidase also increased after 3 months in all the biodegradable mulch-enriched samples (Supplementary Fig. 2).
Core Microbiota
The microbial core was calculated in the soil samples associated with the degradation of mulch residues at RT and 30 °C (Table and Supplementary Fig. 3). The core consists of 84 ASVs for bacteria and 45 for fungi. Among the three plastic types, four ASVs were shared among bacteria and 30 among fungi (Supplementary Fig. 2). Collapsing ASVs within their respective taxa led to a reduction in taxonomic diversity. Thermoanaerobaculum aquaticum MP-01, Arthrobacter nitrophenolicus SJCon, Pseudarthrobacter phenanthrenivorans Sphe3, Sphaerobacter thermophilus DSM 20745, and Neobacillus endophyticus BRMEA1 were detected only in the microbiota of soil with MB degradation (Table , Supplementary Fig. 2A). Other species present only in soils with LDPE belonged to the phyla Actinobacteria, Firmicutes, Gemmatimonadetes, and Proteobacteria (Table ). Soil with TMB was characterized by three species, Dehalogenimonas alkenigignens IP3-3, Thermoflavimicrobium daqui FBKL4.011, and Hydrogenispora ethanolica LX-B, shared with soils containing MB and LDPE (Table , Supplementary Fig. 3A). All shared fungal core members of the TMB-treated soil were in common with the other two mulch types (Table , Supplementary Fig. 3B). Among them, Lophiotrema rubi, unclassified Nectriaceae, and unclassified Agaricaceae were shared with both MB and LDPE soils. Cheilymenia sp. and Solicoccozyma aeria were common between TMB- and MB-treated soils, whereas TMB and LDPE shared Gibberella sp. and Preussia flanaganii.
Finally, Metacordyceps chlamydospore, Niesslia mucida, unclassified Fusarium, and unclassified Mortierellaceae were detected only in the core microbiota of LDPE-treated soils (Table ).
Plastic Degradation
The findings of this study provide important insights into the biodegradation dynamics of different mulch films across different temperatures. The significant degradation of the biodegradable plastic mulch films (MB and TMB) during 6 months at RT demonstrates their promise as long-term alternatives to non-biodegradable options (LDPE). The biodegradation of TMB by 51.36% and of MB by 69.15% indicates their ability to decompose over time, whereas LDPE mulch remained recalcitrant to biological decomposition during the 6-month trial, in agreement with previous reports. The impact of temperature on degradation rates is a pivotal consideration. The rapid breakdown seen at 30 °C, particularly of MB mulch (88.90%), together with the absence of degradation at 45 °C, indicates that the impact of temperature on biodegradation efficiency is linked to microbial catabolic activity. Sintim et al. observed temperature-dependent variations in mulch film biodegradability under field conditions, with higher temperatures significantly enhancing polymer breakdown. Nevertheless, our study demonstrated that raising the temperature to 45 °C markedly affects the metabolic activity of mesophilic bacterial populations, corresponding to the lack of degradation of all mulch residues. This result highlights temperature's pivotal role in shaping microbial communities and in determining their capacity to degrade mulch residues. In the context of climate change, rising temperatures could potentially diminish the efficacy of biodegradable mulches in promoting sustainable agriculture. This evidence concerning the impact of temperature emphasizes the need to consider this environmental variable when optimizing degradation rates for the local climate. Nevertheless, it must be considered that, in a real scenario, temperature together with other biotic factors (the variety of microorganisms such as bacteria, fungi, and archaea) and abiotic factors (e.g., light, oxygen concentration, humidity, and acidity) affects polymer biodegradability.
Microbial Community Composition and Diversity
This study provides insights into the biodegradation dynamics of different mulch films at different temperatures, comparing the effects of biodegradable (MB and TMB) and polyethylene (LDPE) mulch residues on soil microbial communities. Soil samples treated with MB and TMB showed consistent trends, including reductions in Cyanobacteria and Planctomycetes after 3 months, while Actinobacteria, Proteobacteria, Chloroflexi, Firmicutes, Bacteroidetes, and Acidobacteria dominated the bacterial composition, in line with existing research on plastics in diverse environments. Previous studies highlighted how biodegradable plastics can enrich Proteobacteria, which play crucial roles in soil biogeochemical cycles due to their association with total nitrogen and organic carbon levels. Bacteroidetes, widely distributed in ecosystems, contribute significantly to the degradation of complex organic materials, aided by genera like Flavobacterium that break down polysaccharides and influence denitrification processes. Evidence suggests that biodegradable plastics in agricultural soils contribute minimally to carbon levels, affecting soil bacterial responses due to limited carbon availability. Furthermore, the main variations occurred at RT and 30 °C, supporting the plastic breakdown results and highlighting that burial of biodegradable plastics can lead to changes in soil microbial community structure.
LDPE-treated soil displayed minor changes in bacterial composition, resembling the control groups after 6 months, while presenting enriched microplastic-degrading taxa such as Bacteroidetes and Proteobacteria. All samples incubated at 45 °C showed a stable bacterial composition for up to 6 months, probably due to a decrease in bacterial activity at higher temperatures, in line with the absence of plastic degradation. No fluctuations in the abundance of the predominant fungal phyla Ascomycota, Basidiomycota, and Mortierellomycota were observed at RT and 30 °C. Previous research showed that the phyla Ascomycota and Basidiomycota were responsible for the breakdown of oil-based polymers. These findings therefore suggest that, at least throughout the 6-month trial, no mesophilic fungal communities linked to plastic degradation emerged. In contrast, the shifts observed at 45 °C in fungal community composition highlight the influence of both plastic and temperature on fungal dynamics, in the absence of plastic degradation. In all samples, Ascomycota and Aphelidiomycota decreased and unclassified fungi increased. It can be assumed that thermophilic phyla selected by the plastic and the temperature are responsible for the shift in fungal profiles at 45 °C. Thermophilic fungi are characterized by their ability to thrive at temperatures ranging from 20 to 62 °C; however, there are an estimated 3 million thermophilic fungal species on Earth, of which only about 100,000 have been identified, so the increase in unclassified ASVs at 45 °C may be due to the presence of species yet to be classified. Analyses of alpha diversity across the entire data set confirmed that mulch plastic type was the most discriminating factor shaping bacterial communities, followed by temperature, which also impacted fungal communities. It is interesting to note that the bacterial community responded differently to each source of plastic tilled into the soil, supporting the plastic degradation results. The bacterial Shannon diversity index was higher in soils treated with TMB and MB mulch residues and decreased in LDPE-amended soils as well as in the controls. The introduction of carbon sources available to bacteria through the biodegradable mulch residues affected the Shannon index, confirming the response of soil microbes to these inputs. In agreement with previous studies, the effect of the different temperatures on the microbial community was particularly evident at the end of the experiment, e.g., by reducing the activity of mesophilic bacteria or selecting thermophilic fungi (45 °C). In addition, the beta-diversity analysis of the microbiota associated with mulch residues, investigated with PERMANOVA, showed a significant influence of sampling time, reflecting the temporal dynamics of the plastic degradation process. The distinct sampling stages could help to determine the interaction between microorganisms and the plastic tilled into the soil. Over time, enzymes break down the plastic and microbes use the resulting fragments for their nutrition, through a process involving the turnover of different populations with distinct enzyme kits. Moreover, using biodegradable mulch can benefit the soil's biogeochemical cycles by promoting a dynamic microbial community, owing to the increased carbon input.
Predicted Enzymatic Activity
In this study, particular focus is placed on genes encoding hydrolases, lipases, cutinase, and cellobiosidase, due to their significant role in natural or synthetic polymer biodegradation.
The clustering of the predicted functions demonstrated that potential gene abundances were strongly affected by sampling time, since two major clusters were observed: (I) samples collected after 3 months of incubation at 30 °C and RT tilled with biodegradable mulch residues (MB and TMB), with three exceptions (t3-TMB_T 45 °C, t6-C_T 30 °C, and t3-LDPE_T 30 °C); and (II) all remaining samples. The increase of several encoding genes associated with plastic degradation in cluster I, such as cutinase (K08095) and esterase/lipase (K01066), suggests an active bacterial response in soils containing plastic residues that underwent degradation. These enzymes are known to cleave ester bonds and degrade polyurethane substrates, and can therefore also act on non-biodegradable plastics. Cutinase can hydrolyze a broad variety of synthetic polymer esters, both soluble and insoluble, that are structurally related to cutin. The increase of cellulose 1,4-beta-cellobiosidase (K01225, cluster I), which hydrolyzes cellulose to release cellobiose, was probably induced by the presence of biodegradable mulch residues in the soil. Cluster II comprises all samples from the later sampling time, the controls, and the samples incubated at 45 °C. In particular, the presence of MB and TMB from later sampling times, when plastic degradation had occurred, suggests that there may be significant differences in some metabolic activities compared with the same samples after 3 months of incubation. This confirms that changes in the activities of bacterial populations over time reflect the dynamics of plastic degradation in soil. Over time, polymers degrade enzymatically, and microorganisms assimilate and utilize the degradation products, potentially causing shifts in the predicted coding genes. Finally, catalase performs an important role in the soil ecosystem, and it can be used as a biological activity index to evaluate the quality of a particular soil. As previously indicated, variations in catalase-specific activities are also indicative of phylogenetic changes in community structure. The tilling of biodegradable mulches modifies the soil ecosystem, promoting an environment in which aerobic bacteria thrive, which may have positive implications for soil health and quality.
Core Microbiota Associated with Mulch Plastic Residues
The microbial core investigation aimed to provide hints of potential microorganisms involved in plastic breakdown for future investigations. The analysis was performed on all plastic-type soil systems at the temperatures where breakdown occurred (RT and 30 °C). There is still a lack of literature on microbial species known for their ability to degrade plastics; thus, the core members found in this work could be evaluated on the basis of their enzymatic kit to assess their potential action on natural and non-natural polymers. Among the identified bacterial species, Hydrogenispora ethanolica LX-B, isolated from a mesophilic (35 °C) anaerobic sludge fermentation process, can ferment substrates with different carbon sources, including starch, glucose, maltose, and fructose, suggesting its potential involvement in the degradation of starch-based plastics. The species Thermoflavimicrobium daqui FBKL4.01, a thermophilic bacterium isolated from the Daqu used to produce the Chinese liquor Moutai, can ferment pure wheat, supporting its involvement in the degradation of starch and cellulose. The analysis of the fungal core revealed 12 different species.
Among them, Solicoccozyma aeria (Cryptococcus aerius) is known for its ability to produce amylases at 30 °C that digest raw starch. These three microbial species were found in soils amended with MB and TMB and could therefore be of interest for the biodegradation of starch-based plastics.
Sustainable agricultural systems necessitate minimizing plastic waste accumulation in soils by transitioning to effective biodegradable alternatives. This study examines the impact of integrating both biodegradable and polyethylene mulch types into soil on the microbial community over a 6-month trial period, providing a direct comparison among mulch sheets applied under identical conditions and addressing a significant gap in the literature. The research observed a substantial decrease in the weight of Mater-Bi biodegradable plastics after 6 months of incubation, related to shifts in microbial community dynamics and bacterial functions under mesophilic conditions. Monitoring microbial responses offers valuable insights for optimizing agricultural management practices, thereby promoting sustainable and resilient agricultural systems. Moreover, the absence of plastic degradation observed when samples were incubated at 45 °C raises important considerations regarding potential applications under varying climate scenarios. Finally, through analysis of the core microbiota in plastic-influenced soil ecosystems, the study identifies microbial groups potentially pivotal in plastic biodegradation process.
Recent Developments in the Photochemical Synthesis of Functionalized Imidazopyridines
Imidazopyridines have long attracted interest in organic and medicinal chemistry. These heterocyclic scaffolds are broadly found in pharmaceuticals with various biological activities. For example, saripidem and alpidem are hypnotic drugs that are chemically distinct from benzodiazepines but bind at the same site on the receptor. Zolpidem, more commonly known as Ambien®, is employed as a medication in the therapy of insomnia. Zolimidine, a gastroprotective agent, also exhibits an imidazopyridine motif in its structure. More recently, researchers have considered further biological applications of these molecules, namely their antibacterial, analgesic, anti-inflammatory, antiviral, and antitumor properties. In addition, imidazopyridines can show significant fluorescent properties, which can be utilized to construct polymer films. Given their importance in applied sciences, tremendous efforts have been devoted since the last century to preparing these N -heterocycles. In 1925, Tschitschibabin and Kirsanow were the first to synthesize imidazopyridines by heating 2,3-aminopyridine derivatives with acetic anhydride. This skeleton was then ignored for a long time in organic chemistry, owing to the lack of efficient methodologies for accessing highly functionalized molecules. In recent decades, the rise of organometallic chemistry has stimulated numerous synthetic strategies, particularly C–H functionalizations, condensations, and multicomponent reactions. With growing awareness of current environmental issues and the depletion of non-renewable resources, the development of straightforward and mild procedures under eco-friendly conditions is highly desirable. Thus, in the 2010s, photochemical methods applied to imidazopyridine platforms expanded intensively. Photochemistry offers many advantages over conventional heating, using light as a sustainable energy source together with less toxic organic reagents or catalysts. In this context, visible-light-induced approaches for synthesizing and functionalizing imidazopyridines have flourished in the last decade. Various photocatalytic reactions have been implemented with metallic or metal-free catalysts. As an overview of the recent updates in imidazopyridine chemistry, this review covers the literature up to April 2022. We detail original transformations of the imidazopyridine building blocks via C–H functionalization, multicomponent, or tandem reactions.
In 2015, the Hajra group developed the first metal-free C–H thiocyanation of imidazo[1,2- a ]pyridines, with eosin Y as the photocatalyst and air as a green oxidant, in acetonitrile (ACN) and under a blue LED (light-emitting diode). A wide range of substituted 3-(thiocyanato)imidazo[1,2- a ]pyridines was afforded, with high yields. The methodology was also extended to selenocyanation and trifluoromethylthiolation reactions. Control experiments indicated that the C–H functionalization could occur via the visible-light photoactivation of eosin Y, forming the thiocyanate radical. This latter intermediate could react with the imidazo[1,2- a ]pyridine to deliver the desired compound after an oxidation and deprotonation sequence. This first incursion into the C–H functionalization of imidazopyridines has paved the way for diverse transformations applying this approach. 2.1. Formation of C–C Bonds The construction of C–C bonds through radical pathways represents one of the significant tools in organic chemistry. In this frame, numerous methodologies have arisen over the last decade for C–C bond elaboration at position C 3 of imidazopyridines. 2.1.1. Fluoroalkylation of Imidazopyridines Fluorine constitutes a highly privileged bioisostere of the hydrogen atom due to its metabolic stability and lipophilicity. These interesting properties have promoted the incorporation of fluorinated motifs into organic substrates, potential biologically active compounds, or drug candidates. In 2020, Cui et al. detailed the visible-light-mediated metal-free C 3 –H trifluoromethylation of imidazo[1,2- a ]pyridines, using an acridinium derivative as the photoredox catalyst and Langlois' reagent (CF 3 SO 2 Na) as the fluorinating agent in dichloroethane (DCE). This straightforward procedure is very compatible with electron-rich or electron-poor substituted substrates (up to 84% yield). By TEMPO ((2,2,6,6-tetramethylpiperidin-1-yl)oxyl) radical capture, the authors proved the involvement of a fluoroalkyl radical intermediate engendered by single electron transfer (SET) via the acridinium photocatalyst. Another synthetic method consists of a trifluoromethylation with Langlois' reagent, 4,4′-dimethoxybenzophenone as the photocatalyst, and HFIP (hexafluoroisopropanol) as an additive in dry ACN. In this manner, Lefebvre, Hoffmann, and Rueping accessed a C 3 -substituted imidazo[1,2- a ]pyridine scaffold with a 42% yield. The Zhang team proposed a regioselective C–H trifluoromethylation at position C 3 of imidazo[1,2- a ]pyridines. The investigation of the reaction conditions showed that anthraquinone-2-carboxylic acid (AQN) was the best photocatalyst, employed together with Langlois' reagent, trifluoroacetic acid (TFA), and potassium carbonate in DMSO (dimethyl sulfoxide). This method allowed for access to 21 trifluoromethylated imidazo[1,2- a ]pyridine derivatives, with moderate-to-good yields. The process's applicability was validated by the C 3 -trifluoromethylation of Zolimidine, an antiulcer drug, with 55% yield. Zhang and co-workers demonstrated the radical reaction process through mechanistic studies with radical-trapping experiments. Deng and co-workers conceptualized an efficient process for the regioselective C 3 -trifluoromethylation and perfluoroalkylation of imidazo[1,2- a ]pyridines. By visible-light photoactivation, a broad array of functionalized imidazo[1,2- a ]pyridines were prepared, with satisfactory results.
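Before turning to the mechanistic details of these protocols, it is useful to recall the energy carried by the visible photons that drive them; the figures below are generic textbook estimates, not data from the cited studies:

\[
E_{\mathrm{photon}} = \frac{N_{A}\,h\,c}{\lambda} \approx 266~\mathrm{kJ\,mol^{-1}}\ (\approx 2.8~\mathrm{eV}) \quad \text{for } \lambda = 450~\mathrm{nm}\ \text{(blue LED)},
\]
\[
E^{*}_{1/2}(\mathrm{PC}^{\bullet +}/\mathrm{PC}^{*}) = E_{1/2}(\mathrm{PC}^{\bullet +}/\mathrm{PC}) - E_{0,0},
\qquad
E^{*}_{1/2}(\mathrm{PC}^{*}/\mathrm{PC}^{\bullet -}) = E_{1/2}(\mathrm{PC}/\mathrm{PC}^{\bullet -}) + E_{0,0}.
\]

A single blue photon thus delivers roughly 2.8 eV, enough to convert a mild ground-state photocatalyst (PC) into a strongly oxidizing or reducing excited species, which is why the SET and EDA pathways invoked for Deng's base-mediated protocol, discussed next, become thermodynamically accessible under such gentle conditions.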
The main advantage of this method relies on the use of only an organic base (DBU: 1,8-Diazabicyclo[5.4.0]undec-7-ene) with the fluorinating agent in ACN or N -methyl-2-pyrrolidone (NMP). Light on/off experiments and radical trapping reactions suggested that an electron-donor–acceptor (EDA) complex could be formed between DBU with trifluoromethyl (or perfluoroalkyl) iodide. The blue-LED irradiation of the EDA complex led to the generation of CF 3 • radicals, which could react with the imidazo[1,2- a ]pyridine substrate, producing the corresponding radical intermediate. This latter compound could undergo an oxidation-deprotonation sequence (Path A, ) or a hydrogen abstraction by iodine radicals, delivered from the EDA complex and iodide (Path B, ) . The same year, Wu and his colleagues developed a similar idea, using DMSO as a solvent instead of NMP for the C 3 -perfluoroalkylation of imidazo[1,2- a ]pyridines. This modified approach contributed to the synthesis of 27 C 3 -fluorinated imidazo[1,2- a ]pyridines . A good tolerance is observed for both the electron-withdrawing and electron-donating groups (21 to 96% yield) . The C 3 -trifluoroethylation of imidazo[1,2- a ]pyridines by Xu and Fu was carried out with fac -[Ir(ppy) 3 ] (ppy: 2-phenylpyridinato), 1,1,1-trifluoro-2-iodoethane, and K 2 CO 3 in DMSO . This visible-light-promoted reaction resulted in the preparation of a broad range of C 3 -fluorinated imidazopyridines, exhibiting electron-poor or -rich substituents . Inhibition of this transformation was performed with TEMPO as a radical scavenger; the expected compound was not detected, implying a radical path. The mechanism of this functionalized C–H could thus be rationalized: the oxidation of the excited photocatalyst by CF 3 CH 2 I could lead to the CF 3 CH 2 • radical species. Addition of the latter radical could be accomplished on the imidazo[1,2- a ]pyridine motif. Oxidation and base-mediated deprotonation could induce the formation of the desired product. Huang and Zhu went further with the C 3 -perfluoroalkylation of imidazo[1,2- a ]pyridines with TMEDA (tetramethylethylenediamine) as a radical initiator and K 3 PO 4 as the base . The transformation displayed a good tolerance with diversely substituted imidazo[1,2- a ]pyridines (74 to 92% yield). Functional groups in the meta - or para - position provided the wanted compounds with better results than the imidazo[1,2- a ]pyridines featuring substituents in the ortho position. Under modified conditions, the procedure was also attempted for a C 3 -difluoroacetylation, giving the expected N -heterocycle with 61% yield . Mechanistic control experiments with radical scavengers (TEMPO, 1,1-diphenylethene, and hydroquinone) jeopardized the reaction since the expected C 3 -perfluoroalkylated imidazo[1,2- a ]pyridine was obtained in low yields. With TEMPO, 2,2,6,6-tetramethyl-1-(perfluorobutoxy)piperidine was identified by GC-MS (gas chromatography–mass spectrometry) analysis, confirming the radical character of the process. To enlarge the diversity of fluorinated imidazopyridines, the Fu team conceived access to (phenylsulfonyl) difluoromethylated structures in the presence of PhSO 2 CF 2 I, K 2 CO 3, and fac -[Ir(ppy) 3 ] . The adopted protocol allowed for the preparation of 15 C 3 -functionalized imidazo[1,2- a ]pyridines with good-to-high yields . The same group established a related approach for introducing a difluoroacetyl motif in the C 3 position of the imidazo[1,2- a ]pyridine skeleton with BrCF 2 CO 2 Et. 
Substrates exhibiting electron-donating groups led to the desired products in higher yields than the electron-withdrawing ones . Xu and co-workers reported the C–H difluoroalkylation of imidazo[1,2- a ]pyridines mediated by visible light. The protocol requires the use of bromodifluoroaryl ketones as a co-substrate, TMEDA as the organic base in acetonitrile, and a 33 W compact fluorescent light (CFL). These mild and straightforward conditions yielded a wide range of imidazo[1,2- a ]pyridines displaying various functional groups . Difluoromethylenephosphonation of imidazo[1,2- a ]pyridine, realized by the Hajra team, provides functionalized N -heterocycles by employing rose bengal (RB) as a photocatalyst, bis(pinacolato)diboron as an additive, and NaHCO 3 as the base . The exploration of the substrate’s scope revealed that highly substituted imidazo[1,2- a ]pyridines could be synthesized through this method. The expected products were not observed by attempting the standard reaction with different radical inhibitors (TEMPO, BHT: Butylated hydroxytoluene, para -benzoquinone, and 1,1-diphenylethylene), confirming the radical process. Without bis(pinacolato)diboron (B 2 pin 2 ), the reaction did not proceed, indicating the crucial role of this additive. With all these findings and cyclic voltammetry measurements, the authors proposed the activation of imidazo[1,2- a ]pyridine by bis(pinacolato)diboron, generating a cationic intermediate. This intermediate could then undergo the addition of CF 2 PO(OEt) 2 radicals (formed by RB* oxidation). Hydrogen abstraction by NaHCO 3 could deliver the difluoromethylenephosphonated imidazo[1,2- a ]pyridine . In summary, fluoroalkylation of imidazopyridines could be reached in several reaction conditions, with moderate-to-good yields (up to 96% yield). All the approaches described here required polar aprotic solvents (mainly ACN, DMSO) and organic bases or acids under an inert atmosphere. The photocatalysts employed were organophotocatalysts or fac -[Ir(ppy) 3 ]. These strategies allowed access to diversely fluorinated compounds. 2.1.2. Alkylation of Imidazopyridines Alongside the fluoroalkylation of imidazopyridines, the introduction of various moieties by alkylation reactions has also arisen during the last five years. In 2017, inspired by the above-mentioned C 3 -trifluoroethylation of imidazo[1,2- a ]pyridines by Xu, Fu, and coworkers , Liu and Sun developed the C 3 -cyanomethylation of imidazo[1,2- a ]pyridines using an analogous photocatalytic system . With the inexpensive bromoacetonitrile as a cyanomethyl source, the group efficiently developed a large array of substituted imidazopyridines (up to 96% yield). It should be outlined that a significant yield enhancement was noted for some substrates by employing iodoacetonitrile rather than bromoacetonitrile. This robust method was also applied to the synthesis of Zolpidem and Alpidem, drugs used in anxiety treatment. Once the cyanomethylated imidazo[1,2- a ]pyridines were isolated, they were converted into the corresponding ethyl esters. These intermediates were then hydrolyzed with KOH and amidified in dichloromethane (DCM) following standard procedures , to afford the expected biologically active compounds . Aminoalkylation has also drawn attention in the topic of imidazopyridines’ functionalization. In 2018, Hajra and co-workers disclosed the metal-free coupling between tertiary amines and imidazo[1,2- a ]pyridines . 
With rose bengal as the organocatalyst under aerobic conditions, they combined N -phenyltetrahydroisoquinoline with imidazopyridines in a regioselective manner . A broad range of highly substituted imidazopyridines was thus produced. Good to excellent yields were obtained with electron-donating or -withdrawing groups. The approach was also extended to N , N -dimethylaniline derivatives with success. Control experiments implemented the elucidation of the mechanism with radical scavengers (TEMPO, BHT) and a singlet oxygen quencher (DABCO: 1,4-diazabicyclo[2.2.2]octane). The suggested pathway could pass through an energy transfer between the excited state of the photocatalyst (RB*) and the ground-state oxygen ( 3 O 2 ). The generated singlet oxygen could undergo an SET for the tertiary amine to deliver the amine radical cation. By hydrogen capture, an iminium is then formed. This latter molecule could be implied in an electrophilic addition with the imidazopyridine. A final proton abstraction could then give the target compound. More recently, Yu et al. conceptualized a sustainable procedure for the aminomethylation of imidazo[1,2- a ]pyridines by using N -arylglycines as the amine sources and an original metallated photocatalyst (CsPbBr 3 ) . The principal advantage of this procedure is the possible re-utilization of the perovskite catalyst for at least five times and with excellent yields (more than 88%). As the reaction is in a heterogeneous system, the recovery of CsPbBr 3 was facilitated by simple centrifugation. Good compatibility was remarked for substrates featuring donor (Me, OMe, NH 2 ) or acceptor (F, Cl, Br, CN, CF 3 , CO 2 Me) substituents. It should be emphasized that aminomethylation is applicable for a gram-scale synthesis with sunlight irradiation. The inhibition of the transformation by radical scavengers suggested a radical reaction mechanism. CsPbBr 3 could release an electron (e−) and a hole (h+) by absorbing a photon. A SET could then be realized from N -arylglycine to the hole, leading to the corresponding radical. This intermediate could then be added to the imidazopyridine scaffold. The oxidation by O 2 provided the heterocyclic cation, which could evolve to the final product by deprotonation . The same team went one step further by improving the protocol in a greener way. In 2021, Lv and Yu established an eco-compatible carbon nitride nanosheet (NM-g-C 3 N 4 ), which could catalyze under blue-LED irradiation the aminomethylation of imidazopyridines . To fulfill the criteria of green chemistry, dimethyl carbonate was employed this time as the reaction solvent. Again, a set of aminomethylated imidazo[1,2- a ]pyridines displaying diverse functional groups (18 examples) was elaborated smoothly . As previously, the NM-g-C 3 N 4 photocatalyst could be reused after the reaction workup by centrifugation. The recycling experiments showed that the photocatalyst’s efficiency is maintained after seven transformation cycles. Zhu and Le monitored the C–H aminomethylation reaction with N -arylglycine derivatives in an analogous eco-compatible way . The reaction occurred efficiently under photocatalyst-free conditions . A wide range of functionalized imidazopyridines was provided with good results (40 to 95% yields). The group unraveled the aminomethylation path with various control experiments, namely, reaction under a nitrogen atmosphere or in an open-air flask, or radical trapping with TEMPO. 
By blue-LED irradiation, a singlet oxygen could be formed and interact with the N -arylglycine substrate to generate a radical cation. This latter intermediate could evolve in an alkyl radical by proton transfer and decarboxylation. Subsequently, the amino radical could undergo a proton transfer, leading to the corresponding imine. The final electrophilic addition of the imine to the imidazopyridine motif allows access to the target product. The C–H alkylation of imidazo[1,2- a ]pyridines could be performed with N -hydroxyphthalimide esters as alkylating reagents. Jin and his colleagues conceived this original strategy for the C–H functionalization of the aryl part of the imidazopyridine platform . The organic photoredox catalysis implied eosin Y as the photocatalyst and TfOH (triflic acid) as the additive. The reaction was well-tolerated with a wide array of imidazopyridine substrates (up to 86% yield). By checking the N -hydroxyphthalimide esters’ scope, satisfactory results were afforded for the primary, secondary, and tertiary alkyl groups. The alkylation pathway was unraveled with radical trapping experiments: an adduct with BHT was identified with HRMS (high-resolution mass spectrometry) analysis, validating the radical mechanism. An SET could occur from the excited state of eosin Y to the protonated N -hydroxyphthalimide ester. The formed radical species could be decomposed into an alkyl radical, which could be introduced into the imidazopyrine’s nucleus. The oxidation of the imidazopyridine radical by an SET with eosin Y •+ produced the corresponding cation. Finally, the expected compound is obtained via deprotonation . The Hajra group deepened the concept of C–H alkylation by exploring the three-component carbosilylation of alkenes in the imidazopyridine scaffold . The combination of a metal catalyst (FeCl 2 ) and blue-LED photocatalysis enabled the C–C and C–Si bond formation. The reaction involving an imidazopyridine substrate, a styrene derivative, and (TMS) 3 SiH gave a wide array of silylated imidazo[1,2- a ]pyridines (26 compounds) in 45 to 88% yields . After the scope study, the authors examined the transformation pathway with radical scavengers. The reaction did not occur in the presence of TEMPO, BHT, or benzoquinone, reflecting a radical mechanism. The same result was observed without a photocatalyst or light source. Considering these control experiments, the proposed path could proceed via an SET between the iron(II) catalyst and the excited state of eosin Y. The radical anion eosin Y •− could then realize an SET with the di- tert -butyl peroxide, affording the radical t BuO • . The formed silyl radical will be added to the styrene by hydrogen abstraction. An SET could then be accomplished from the generated radical styrene to iron(III). An electrophilic addition could be reached with imidazopyridine, allowing access to the desired product. Summarily, the alkylation methodologies reported herein provided a wide library of functionalized imidazo[1,2- a ]pyridines with a broad susbtrate scope and satisfactory yields. The strategies involved organic or organometallic catatylic systems, but also innovative techniques such as the use of perovskite catalysts and carbon nitride nanosheets, or photocatalyst-free conditions. 2.1.3. Carbonylalkylation and Carbonylation of Imidazopyridines As a continuation of the visible-light C–H alkylation of imidazopyridines, the addition of carbonyl groups and their derivatives was also widely studied. 
In 2018, Zhu and Le conducted the visible-light-mediated carbonylalkylation of imidazo[1,2- a ]pyridines with N -arylglycine esters . The coupling reaction between these two molecules was carried out with a copper catalyst (Cu(OTf) 2 ) in acetonitrile. The imidazo[1,2- a ]pyridine scope investigation indicated that electron-poor substituents increased the transformation efficiency more than the methyl groups. Studying the N-arylglycine esters showed good suitability with a large array of substrates. The same authors recently extended their synthetic method by coupling imidazo[1,2- a ]pyridines with α-amino ketones and α-amino acid derivatives . Some improvements were applied: the metal catalyst was replaced by an organophotocatalyst (Eosin Y), with citric acid monohydrate as an additive. Ethanol was employed as a greener solvent, and the visible-light irradiation was monitored with an 18 W blue-LED light . The scope examination of N -arylglycine ethyl esters indicated the high reaction efficiency with electron-donating groups on the aryl motif, while various esters (methyl, isopropyl, tert -butyl, and benzyl esters) displayed good compatibility with moderate-to-good yields. α-amino ketones delivered the expected imidazo[1,2- a ]pyridines with low yields. Regarding the scope of imidazo[1,2- a ]pyridines, a similar trend was observed with a better reactivity of electron-rich substrates. The authors performed control experiments, including radical trapping, reactions with imine substrates, and cyclic voltammetry, to understand the mechanism. The possible path could proceed by an SET between the excited state of eosin Y and the α-amino carbonyl derivative. The formed radical cation could be oxidized into the iminium intermediate, which could undergo an electrophilic addition from the imidazo[1,2- a ]pyridine. A final oxidation step could provide the desired product. In 2022, Jiang and Yu realized the ethoxy-carbonyl methylation of imidazo[1,2- a ]pyridines with α-bromoesters in water, employing rhodamine B (RhB) as the photocatalyst, dilauroyl peroxide as the oxidant, and potassium ethyl xanthogenate as an additive . This method allowed for the preparation of three imidazopyridines with moderate yields. The photochemical reaction was also successfully applied to the preparation of Zolpidem in one step, with 2-bromo- N , N -dimethylacetamide as the substrate partner . Another application of the carbonylalkylation reaction was performed by the Chaubey group, with the total synthesis of Zolpidem . After a detailed methodology for the C 3 -carbonylation of imidazo[1,2- a ]pyridines in the presence of dialkyl malonates, the authors discovered a rapid multi-step synthetic route to Zolpidem in high yields. This sequence was based on the visible-light-promoted C–H carbonylalkylation of the corresponding imidazopyridine, followed by a Krapcho decarboxylation at 160 °C, hydrolysis, and condensation . An analogous idea came out in Hajra’s group: by changing the carbonylalkylated source (ethyl diazoacetate) and the photocatalyst ([Ru(bpy) 3 ]Cl 2 , with bpy: 2,2′-bipyridyl), they accomplished the C 3 -ethoxycarbonyl methylation of imidazo[1,2- a ]pyridines . By studying the scope of imidazo[1,2- a ]pyridines, a good compatibility was noticed with electron-rich substituents. Surprisingly, the reaction did not occur in the presence of electron-withdrawing groups. 
A slight modification of the optimized conditions was thus applied: by adding 10 mol% of N , N -dimethyl- m -toluidine, a redox-active additive, the C–H carbonylalkylation ran smoothly with satisfactory results (up to 92% yield). The viability of the methodology was confirmed with the gram-scale preparation of ethyl 2-(2-phenylimidazo[1,2-a]pyridin-3-yl)acetate with a 70% yield ( , Equation (1)) and the late-stage amidation of a C 3 -substituted compound ( , Equation (2)). Similarly, Yu, Tan, and Deng expanded Hajra’s methodology to a wide range of diazo derivatives and imidazo[1,2- a ]pyridine substrates (28 examples) . Subsequently, the reaction showed its applicability with a gram-scale reaction ( , Equation (1)) and the Zolpidem preparation ( , Equation (2)). This strategy allowed shorter and more efficient access to synthetic drugs than Chaubey’s approach ( cf . ). In 2019, Guan and He moved one step beyond the concept of imidazo[1,2- a ]pyridines’ carbonylation . The direct addition of a carbonyl motif on the imidazo[1,2- a ]pyridine skeleton was conducted under 32 W CFL irradiation with an oxygen balloon and 9-mesityl-10-methylacridinium perchlorate (Acr + -Mes). Using a nitrone derivative as a co-substrate, an aryl entity could be included in the carbonyl group. With the optimized conditions in hand, the scope was scrutinized: imidazo[1,2- a ]pyridines bearing bromo- or chloro-substituents in position C7 exhibited a higher reaction efficiency (65–66% yield) than the C6-substituted ones (50–54% yield). Concerning the nitrone screening, higher yields were noted with the meta - and para -substitution on the aryl part than the ortho- substitution, probably due to steric hindrance. Next, the reaction mechanism was elucidated with control experiments (radical inhibition, 18 O-labeling reaction, and Stern–Volmer quenching fluorescence) and the X-ray crystal structure of the N -hydroxylamine intermediate. The pathway started from the SET between the excited photocatalyst (Acr + -Mes*) and the imidazo[1,2- a ]pyridine. The nitrone could then be introduced in the imidazo[1,2- a ]pyridine. Two possible paths could then be identified. The first path could involve deprotonation and nitrosobenzene releasing. The resulting radical could react with the radical oxygen species O 2 •− (formed by an SET with the radical photocatalyst) to generate the carbonylated product. The second path could imply an SET from the radical photocatalyst to the radical nitroso, giving the corresponding N -hydroxylamine. A second SET could then occur, leading to a radical N -hydroxylamine. As the first path, the decomposition of the N -hydroxylamine delivered a nitrosobenzene and the corresponding radical, which could be transformed into the target compound . Carbonylalkylation and carbonylation of imidazopyridines enabled the introduction of amino acid derivatives in the imidazopyridine’s core. The employed approaches consisted in the use of metal catalysts (Cu(OTf) 2 , [Ru(bpy) 3 ]Cl 2 ) or organophotocatalysts, in apolar (DCM) or polar protic and aprotic solvents (ACN, Dioxane, MeOH, EtOH). Eco-friendly methods demonstrated their efficiency in aqueous media. 2.1.4. Sulfonylmethylation of Imidazopyridines In the same way, Zhang and Cui exploited an extension of the imidazopyridines’ alkylation for the sulfonylmethylation reaction . By utilizing bromomethyl sulfones with an iridium photocatalyst ([Ir(ppy) 3 ]), a broad range of imidazo[1,2- a ]pyridines could be functionalized efficiently with satisfactory yields . 
The transformation is also well suited for diversely substituted bromomethyl sulfones. The mechanism investigation by radical trapping revealed that the transformation path could imply radical intermediates. From this observation, the authors suggested an SET from the excited state of the photocatalyst to the bromomethyl sulfone, to deliver a corresponding sulfomethyl radical. The addition of the latter intermediate to the imidazo[1,2- a ]pyridine’s core provided the corresponding radical, which could be oxidized via an SET with [Ir(ppy) 3 ] + . The formed cation could be converted into the expected compound by deprotonation. 2.1.5. Formylation of Imidazopyridines Formyl functional groups constitute a major moiety in N -heterocycles, since they could be key building blocks for synthesizing highly complex molecules. In this frame, the visible-light-induced formylation of imidazo[1,2- a ]pyridines has recently gained interest. The Hajra team developed mild conditions for the regioselective formylation of imidazo[1,2- a ]pyridines in position C 3 , with rose bengal as the photoredox catalyst, KI as an additive, and TMEDA as the formylating agent . This reaction is suitable with substrates featuring electron-poor, -rich, or halogenated substituents (up to 95% yield). The transformation was entirely inhibited by achieving control experiments with TEMPO or benzoquinone. The same result was noted by replacing O 2 (from the air) with an argon atmosphere. With all these observations, the authors proposed the following pathway: by excitation of the photocatalyst (RB), a singlet oxygen ( 1 O 2 ) could be generated, inducing the formation of the iodine radical. This latter intermediate could oxidize the TMEDA as a radical cation. With the superoxide radical anion, the TMEDA-derived radical cation could be turned into an iminium ion. The electrophilic addition with the imidazo[1,2- a ]pyridine could then occur, followed by a re-aromatization. Iodine could thus oxidize the TMEDA motif, releasing an iminium ion. Consequently, hydrolysis of the iminium ion could afford the desired formylated imidazo[1,2- a ]pyridine . 2.1.6. Arylation of Imidazopyridines Recently, Cui and Wu conducted the visible-light C(sp 2 )–H arylation of heterocycles with hypervalent iodine ylides as the arylating agents, eosin Y as the photocatalyst, and potassium carbonate as the base . Among the synthesized heterocyclic scaffolds, five examples of imidazo[1,2- a ]pyridines were depicted with satisfactory yields . Sun et al. reported their research on the regioselective azolylation of imidazo[1,2- a ]pyridines . The installation of the azole nucleus was mediated by 2-bromoazoles under blue-LED irradiation. The photocatalytic process involved Cy 2 NMe ( N , N -dicyclohexylmethylamine) as an organic base and an iridium photocatalyst ([Ir(ppy) 2 (dtbbpy)]PF 6 , with dtbbpy: 4,4′-di-tert-butyl-2,2′-dipyridyl). This synthetic approach furnished 29 C 3 -substituted imidazo[1,2- a ]pyridines with 28 to 79% yield . Electron-poor groups on the imidazopyridine scaffold diminished the heterocycle’s reactivity, whereas electron-rich substituents favored the reaction’s efficiency. In addition, the authors reported good suitability with diversely substituted bromoazoles, i.e., bromothiadiazole, bromothiophene, and bromofuraldehyde. A radical inhibition of the C 3 -azolylation was also conducted with TEMPO: the target molecule was not detected, pointing out the radical character of the transformation. 
An oxidative quenching of the bromoazole by the excited state photocatalyst could lead to the corresponding heterocyclic radical, which could be added to the imidazopyridine skeleton. Simultaneously, Ir(IV) could reduce the organic base into a radical amine cation. This latter one could then catch hydrogen radicals to release the desired product. These two examples showed the wide possibility for functionalizing the imidazopyridine scaffold. The introduction of aryl and heteroaryl motifs in good-to-moderate yields was provided by organocatalyst (Eosin Y) or an iridium complex in aprotic polar solvents. 2.2. Formation of C–N Bonds With the presence of heteroarylamines in a plethora of natural products, C–H amination of heterocyclic structures constitutes a long-lasting interest for organic chemists . Efficient, mild, and regioselective methodologies were addressed, especially the eco-friendly C–H functionalization induced by visible light . Within this frame, Adimurthy and co-workers published in 2017 the metal-free C 3 amination of imidazo[1,2- a ]pyridines . This synthetic strategy allowed for the introduction of aza-heteroarenes (benzotriazole, pyrazole, imidazole, 1 H -1,2,4-triazole, 1 H -benzo[ d ]imidazole, and 1 H -indazole) to the imidazo[1,2- a ]pyridine platform. Satisfactory yields were obtained, even with halogenated substituents on both reaction substrates . Similarly, Zhang and Lei introduced an azole motif in imidazo[1,2- a ]pyridines at position C 3 . In contrast with the previous method, the C–N bond formation additionally required a metal catalyst ([Co(dmgH)(dmgH 2 )]Cl 2, with dmg: dimethylglyoximato) . The corresponding C 3 -functionalized imidazo[1,2- a ]pyridines were generated with good-to-excellent yields. The scope examination with azoles demonstrated good reaction tolerance by employing pyrazoles, imidazoles, or triazoles. A thorough mechanistic study, including the light on/off experiments, radical trapping, cyclic voltammetry measurements, and DFT (density functional theory) calculations, validated the radical reaction path. The excited state of the organophotocatalyst could allow for an SET to the imidazo[1,2- a ]pyridine. The generated radical cation species could then undergo a nucleophilic attack of the azole substrate, giving the corresponding radical. Simultaneously, a Co(III) catalyst could oxidize the reduced photocatalyst, releasing back the photocatalyst to its fundamental state. The subsequently formed Co(II) could realize an SET to the radical imidazo[1,2- a ]pyridine. The target aza-heterocycle could be engendered by deprotonation. Co(I) could be converted back into Co(III) by proton capture and dehydrogenation . The regioselective C–N bond formation could also be extended to incorporate sulfonamide groups on imidazo[1,2- a ]pyridines. The Sun group outlined the light-mediated C 3 -sulfonamidation reaction with an iridium photocatalyst ([Ir(ppy) 2 (dtbbpy)]PF 6 ) and NaClO as the oxidant . The process was very compatible with imidazo[1,2- a ]pyridines featuring electron-poor or -rich substituents. By contrast, a significant electronic effect could be remarked with the sulfonamides: methyl, methoxy, and tert -butyl derived sulfamides enhanced the yields compared to the chlorinated or brominated ones. Control experiments with TEMPO or 1,1-diphenylethene corroborated the radical mechanism. The oxidative quenching of the photocatalyst’s excited state by NaClO could result in an Ir(IV) complex. 
This organometallic species could be involved in an SET with the sulfamide to deliver a sulfamido radical, which could react with the imidazo[1,2- a ]pyridine. Oxidation and deprotonation will transform the produced radical into the desired N -heterocyle . In 2020, Braga and his co-workers performed the azo-coupling of imidazo[1,2- a ]pyridines with aryl diazonium salts under green LED irradiation . By this strategy, 18 functionalized imidazopyridines were prepared with good-to-excellent yields (up to 99%). The reaction’s viability was validated with a gram-scale synthesis of a diazo derivative ( , Equation (1)) and the reduction of a diazo imidazo[1,2- a ]pyridine by zinc in acidic conditions ( , Equation (2)). More recently, the visible-light-induced C–H amination of imidazo[1,2- a ]pyridines was exploited in an environmentally friendly manner with micellar catalysis. Li’s approach was based on the use of amphiphilic surfactants in water, which could constitute micelles by hydrophobic interaction . The core of the micelles could be employed as a micro-reactor, where substrates could be activated. This green procedure needs a hydrophilic cationic N -aminopyridinium salt as the amine transfer reagent. The “head” of the pyridinium salt (pyridinium nucleus) could interact with the micelle surface, whereas the amine “tail” was localized in the core ( vide infra ). Sodium dodecyl sulfate (SDS) was chosen as the surfactant, yielding better results during the optimization step. With 2,4,5,6-tetrakis(9 H -carbazol-9-yl) isophthalonitrile (4CzIPN) as the photocatalyst under blue-LED irradiation, a series of C 3 -aminated imidazo[1,2- a ]pyridines were provided with good-to-excellent yields (up to 92% yield). The reaction path was unraveled by conducting complementary experiments (radical trapping, light-off procedure, process without surfactant, photocatalyst, or N 2 ). In the micelle hydrophobic core, an SET from the excited state of 4CzIPN to the pyridinium salt could lead to the amino radical. A radical addition could then occur on the imidazo[1,2- a ]pyridine. A second SET could furnish the corresponding cation, which could undergo pyridine-mediated deprotonation . The formation of C–N bonds in the imidazopyridine’s structure allowed the incorporation of aza-heterocyclic nuclei, sulfonamides, amines, and diazo groups on the C 3 position. The implied reactions needed organophotocatalysts (Acr + -Mes, eosin Y-Na 2 , 4CzIPN) or metal complexes (Co- or Ir-derived catalysts). In a more sustainable way, a micellar system was employed instead of conventional organic solvents. In all the examples, the desired products were obtained with excellent yields. 2.3. Formation of C–O Bonds With the major occurrence of the C–O bond in natural or biologically active compounds, the construction of this motif is highly sought by researchers. Among the developed strategies, Hajra and co-workers investigated a metal-free methodology for the C–H alkoxylation of imidazo[1,2- a ]pyridines . With an organophotocatalyst (rose bengal) and alcohol under visible-light LED irradiation, the group constructed a C–O bond on the position C 3 of the imidazo[1,2- a ]pyridine’s nucleus. Twenty-seven examples of functionalized imidazo[1,2- a ]pyridines were synthesized, bearing various alcohols. Good-to-excellent yields were obtained with N -heterocycles displaying electron-poor or -rich substituents without any electronic effect. 
Control experiments in the dark or with a radical inhibitor gave insight into the reaction mechanism: by an SET with excited rose bengal, an imidazopyridine radical cation could be engendered. This latter intermediate could react with the alcohol to yield the corresponding radical. The desired alkoxylated product could then be formed by hydrogen abstraction with HO 2 • . More recently, Singh and his colleagues developed a directing-group-assisted C–H activation strategy for the oxygenation of heterocyclic scaffolds. This original transformation required 1,2,3,5-tetrakis(carbazol-9-yl)-4,6-dicyanobenzene (4CzIPN) as the organic photocatalyst, palladium acetate as the metal catalyst, and potassium trifluoroacetate as the base, in a solvent mixture (CF 3 CO 2 H/DMF: dimethylformamide) and under an oxygen atmosphere. It should be highlighted that a high temperature is needed to oxidize Pd(II) to Pd(IV). In this context, the authors reported one example of imidazopyridine C–H activation, using the imidazopyridine itself as the directing group. In contrast with the previous approach, the imidazopyridine platform is herein functionalized on the aryl part, with a 66% yield. These two examples illustrate the broad possibilities for C–O functionalization of imidazopyridines. By varying the conditions, the oxygenated motif could be introduced regiospecifically at different positions of the imidazopyridine's structure. Under metal-free conditions, C 3 -functionalization was realized; in contrast, the palladium-catalyzed reaction allowed for C–H activation on the aryl part of the scaffold. 2.4. Formation of C–P Bonds The functionalization of imidazo[1,2- a ]pyridines with phosphorus-containing motifs was also envisioned. In 2020, Sun, Chen, and Yu studied the C 3 -phosphorylation of imidazo[1,2- a ]pyridines under visible-light irradiation. By using RhB as a photoredox catalyst, lauroyl peroxide (LPO) as an oxidant, and diethyl carbonate as the reaction solvent, imidazo[1,2- a ]pyridines were efficiently phosphorylated in position C 3 with good yields. The transformation presented a good tolerance in the presence of electron-withdrawing or -donating groups on the imidazo[1,2- a ]pyridine scaffold. With respect to the phosphine oxides, variously substituted diaryl phosphine oxides could be employed under the reaction conditions. After the scope investigation, the authors decided to deepen their knowledge of the C 3 -phosphorylation by examining its pathway. Mechanistic insights (radical trapping, Stern–Volmer fluorescence quenching, and variation of the standard conditions) suggested an energy transfer (ET) from RhB* to the imidazo[1,2- a ]pyridine substrate. The substrate could then evolve to a triplet state, capable of reacting with LPO to generate a phosphorus-centered radical through a radical cascade. Addition of this radical to the imidazo[1,2- a ]pyridine could thus occur. Final deprotonation could deliver the expected phosphorylated aza-heterocycle. 2.5. Formation of C–S Bonds As sulfur-containing molecular architectures are widely found in drugs, biologically active molecules, and natural compounds, the construction of C–S bonds has received growing attention from the chemistry community. Complementary to the conventional metal-catalyzed cross-coupling methods, researchers have sought milder procedures for C–S bond creation.
Following the examples mentioned above in C–C and C–heteroatom bond formation, Yang and Wang envisaged the C 3 -sulfenylation of imidazo[1,2- a ]pyridines under visible-light irradiation. By utilizing eosin B as the organophotocatalyst, tert -butyl hydroperoxide (TBHP) as the oxidant, and aryl sulfinic acids as the sulfur source, functionalized imidazo[1,2- a ]pyridines were delivered smoothly (64 to 87% yield). No electronic influence was observed for sulfinic acids bearing either electron-withdrawing or -donating substituents. The deciphering of the reaction path by control experiments brought to light the radical character of the transformation. The photoexcited species eosin B* could realize an SET with t BuOOH. The t BuOO • radical could then abstract a hydrogen atom from the aryl sulfinic acid; the resulting sulfonyl radical could evolve into a thiyl radical by reduction. This intermediate could thus be added to the imidazo[1,2- a ]pyridine. Finally, the target product could be obtained by an SET and deprotonation. In 2018, Barman and co-workers proposed an analogous synthetic methodology for the C 3 -sulfenylation of imidazo[1,2- a ]pyridines. Compared to the preceding example, thiols replaced sulfinic acids as the sulfenylating agent. The procedure is also operationally simpler, since the oxidation step was accomplished by ambient air. A good reaction efficiency was noticed for many imidazo[1,2- a ]pyridines and thiols exhibiting electron-poor or -rich groups. The method's viability was checked in a gram-scale transformation with 2-phenylimidazo[1,2- a ]pyridine and thiophenol, affording the expected compound in 87% yield. Motivated by the C–H thiocyanation promoted by Hajra et al., Tang and Yu combined photochemistry with heterogeneous catalysts. As microporous polymer catalysts can be efficiently recycled, the authors employed the benzo[1,2- b :4,5- b' ]dithiophene-4,8-dione conjugated microporous polymer (CMP-BDD) as a heterogeneous photocatalyst. With this strategy, the C–H thiocyanation of imidazo[1,2- a ]pyridines has been revived in a greener manner. Similar to the Hajra group's results, a good tolerance was noticed for imidazopyridines featuring electron-donating or -withdrawing substituents. Following the same approach, Chen and Yu carried out the C(sp 2 )–H thiocyanation of heterocyclic compounds with carbon nitride (g-C 3 N 4 ) as the heterogeneous photocatalyst. Under blue-LED irradiation and with a green solvent (dimethyl carbonate), thiocyanated imidazo[1,2- a ]pyridines were provided in good yields (up to 96% yield). As previously mentioned, the transformation was very compatible with substrates displaying methyl, methoxy, fluorine, or thienyl groups. More oxidized sulfur functionalities could also be incorporated into the imidazo[1,2- a ]pyridine's structure. Piguel et al. accomplished the light-induced regioselective sulfonylation of imidazopyridines in the presence of DABCO- bis (sulfur dioxide) and an aryl iodonium salt. Aside from forming the C–S bond, this reaction allowed the integration of an aryl group on the sulfone part. The scope examination with respect to the imidazo[1,2- a ]pyridine substrates indicated negligible electronic effects of the N -heterocycle substituents, considering the good yields obtained. The same trend was found by varying the aryl iodonium hexafluorophosphates. A radical path was suggested by mechanistic insights (Stern–Volmer fluorescence quenching experiments, light-off reactions, and radical trapping).
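Since Stern–Volmer analysis recurs throughout these mechanistic studies, it is worth recalling the underlying relation (a textbook expression, independent of the specific systems cited):

\[
\frac{I_{0}}{I} = 1 + K_{\mathrm{SV}}[Q] = 1 + k_{q}\,\tau_{0}\,[Q],
\]

where I 0 and I are the photocatalyst emission intensities in the absence and presence of the quencher Q, τ 0 is the excited-state lifetime, and k q is the bimolecular quenching rate constant. A linear plot of I 0 /I against [Q] for a given reaction component identifies that species as the one intercepting the excited photocatalyst, which is how such quenching data support the sulfonylation pathway detailed below.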
Green-LED activation could favor the formation of the excited species eosin Y*. An SET could occur with the aryl iodonium salt, producing an aryl radical, which could be trapped by the DABCO- bis (sulfur dioxide). Transfer of the resulting sulfonyl radical to the imidazo[1,2- a ]pyridine could thus follow. A second SET and deprotonation could afford the final sulfonylated N -heterocycle. The formation of C–S bonds in the imidazopyridine skeleton permitted the synthesis of highly functionalized imidazopyridines. By the methods presented here, thioethers, thiocyanates, and sulfonyl derivatives could be provided in good yields. In addition to the sulfur sources (e.g., sulfinic acids, thiols), these reactions involved organophotocatalysts (eosin B, eosin Y, rose bengal). Some transformations were optimized by using a greener solvent (dimethyl carbonate) combined with recyclable catalysts (microporous polymer or carbon nitride). 2.6. Formation of C–Se Bonds Selenylated molecules have attracted great interest in chemistry, namely for their biological and medicinal properties, including anticancer or anti-Alzheimer's activities. They also represent key synthetic intermediates/substrates in total synthesis or asymmetric catalysis. With this rising importance, the synthesis of organoselenium compounds has become an important topic in organic chemistry. In the pursuit of eco-compatible processes, Liu et al. described in 2017 the first visible-light-promoted C 3 -selenylation under aerobic conditions. Three selenylated imidazo[1,2- a ]pyridines were elaborated, in the presence of diphenyl diselenide and FIrPic (bis[2-(4,6-difluorophenyl)pyridinato-C2, N ](picolinato) iridium(III)). Because of their low electron density, the imidazo[1,2- a ]pyridines were only afforded in moderate yields. Mechanistic investigations with TEMPO and photoluminescence experiments pointed out the SET from the excited state of FIrPic to the diphenyl diselenide, leading to the PhSe • radical. PhSe • was then oxidized into PhSe + , which could undergo an electrophilic addition to the imidazo[1,2- a ]pyridine's structure. The expected molecule could be furnished by deprotonation. In 2018, Braga et al. optimized Liu's protocol by replacing FIrPic with an organophotocatalyst (rose bengal). The group slightly improved the reaction yield with the imidazo[1,2- a ]pyridine scaffold. In the same trend, the Kumaraswamy team envisaged the C 3 -selenylation of an imidazo[1,2- a ]pyridine motif with diphenyl selenide as the selenylating reagent, LiCl as an additive, and 2-methyl-1-propanol as the solvent. Photoactivation of diphenyl selenide provided the desired N -heterocycle with a 55% yield. In 2019, Yasuike and co-workers conceptualized an alternative light-induced method with ammonium iodide instead of FIrPic as the photocatalyst. A series of 3-(arylselanyl)imidazopyridines were prepared with good-to-excellent yields under aerobic conditions. The reaction was very compatible with variously substituted diarylselenides. The incorporation of selenylated motifs was thus accomplished with various visible-light-induced methodologies. A narrow scope of substrates was explored, owing to the low diversity of the selenylating sources. An iridium-based photocatalyst or rose bengal could be employed for these reactions. As an alternative metal-free approach, halide salts (LiCl or NH 4 I) were used for the selenylation of imidazopyridines, with moderate-to-good yields.
2.7. Formation of C–Br Bonds In 2019, Lee, Jung, and Kim reported the C 3 -bromination of imidazo[1,2- a ]pyridines under visible light. The transformation required CBr 4 as a bench-stable bromine source and an iridium-derived photocatalyst. The scope examination showed that imidazo[1,2- a ]pyridines presenting electron-withdrawing or -donating groups allowed for the preparation of brominated aza-heterocycles with satisfactory yields. The process's viability was validated with a gram-scale reaction, providing the expected functionalized imidazo[1,2- a ]pyridine with an 83% yield. Overall, the C–H functionalization of imidazopyridines constitutes a powerful approach for the synthesis of highly substituted imidazopyridines. All the strategies proved their efficiency with broad scopes, wide functional group tolerance, and high yields. The viability of these methods was validated with gram-scale reactions and the preparation of compounds of biological interest. These methodologies mainly used organophotocatalysts or ruthenium or iridium complexes under blue-LED irradiation. Some of these transformations have been improved in an eco-compatible way by employing green solvents or sustainable materials as heterogeneous catalysts. Several methodologies have emerged in parallel with the C–H functionalization strategies to elaborate these scaffolds, especially multi-component one-pot reactions.
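As a closing note on the mechanistic arguments used throughout this section, the feasibility of the proposed SET steps is generally judged by combining ground-state redox potentials (from cyclic voltammetry) with the excitation energy of the photocatalyst through a simplified Rehm–Weller estimate. The sketch below is illustrative only; the numerical inputs are hypothetical and would have to be taken from the relevant papers.

```python
# Illustrative only: simplified Rehm-Weller estimate of the driving force for
# photoinduced electron transfer (PET), the criterion commonly combined with
# cyclic voltammetry data to justify the SET steps proposed in this section.
# Potentials are in V (measured against the same reference electrode), E00 in eV;
# the Coulombic work term is neglected.
def delta_g_pet_ev(e_ox_donor: float, e_red_acceptor: float, e00: float) -> float:
    """Return the PET driving force in eV; a negative value means exergonic SET."""
    return (e_ox_donor - e_red_acceptor) - e00

# hypothetical numbers, purely for illustration (not taken from the cited papers)
print(delta_g_pet_ev(e_ox_donor=1.10, e_red_acceptor=-0.80, e00=2.30))  # -> -0.40
```

A negative value indicates an exergonic electron transfer; the Coulombic work term, usually small in polar solvents such as ACN or DMSO, is omitted in this simplified form.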
The C–C bond construction in a radical pathway represents one of the significant tools in organic chemistry. In this frame, numerous methodologies have risen over the last decade for the C–C bond’s elaboration at position C 3 in imidazopyridines. 2.1.1. Fluoroalkylation of Imidazopyridines Fluorine constitutes a highly privileged bioisostere of the hydrogen atom due to its metabolic stability and lipophilicity . These interesting properties have promoted the incorporation of fluorinated motifs into organic substrates, potential biologically active compounds, or drug candidates . In 2020, Cui et al. detailed the visible-light-mediated metal-free C 3 –H trifluoromethylation of imidazo[1,2- a ]pyridines, using an acridinium derivative as the photoredox catalyst and Langlois’ reagent (CF 3 SO 2 Na) as the fluorinating agent in dichloroethane (DCE) . This straightforward procedure is very compatible with electron-rich or electron-poor substituted substrates (up to 84% yield). By TEMPO ((2,2,6,6-Tetramethylpiperidin-1-yl)oxyl) radical capture, the authors proved the involvement of a fluoroalkyl radical intermediate engendered by single electron transfer (SET) via the acridinium photocatalyst. Another synthetic method consists of a trifluoromethylation with Langlois’ reagent, 4,4′-dimethoxybenzophenone as the photocatalyst, and HFIP (hexafluoroisopromanol) as an additive in dry ACN. In this manner, Lefebvre, Hoffmann, and Rueping developed a C 3 -substituted imidazo[1,2- a ]pyridine scaffold with a 42% yield . The Zhang team proposed a regioselective C–H trifluoromethylation in position C 3 of imidazo[1,2- a ]pyridines. The investigation of the reaction conditions showed that anthraquinone-2-carboxylic acid (AQN) was the best photocatalyst, employed simultaneously with Langlois’ reagent, trifluoroacetic acid (TFA), and potassium carbonate in DMSO (dimethyl sulfoxide) . This method allowed for access to 21 trifluoromethylated imidazo[1,2- a ]pyridine derivatives, with moderate-to-good yields. The process’s applicability was validated by the C 3 -trifluoromethylation of Zolimidine, an antiulcer drug, with 55% yield. Zhang and co-workers demonstrated the radical reaction process through mechanistic studies with radical-trapping experiments . Deng and co-workers conceptualized an efficient process for the regioselective C 3 -trifluoromethylation and perfluoroalkylation of imidazo[1,2- a ]pyridines. By visible-light photoactivation, a broad array of functionalized imidazo[1,2- a ]pyridines were prepared, with satisfactory results. The main advantage of this method relies on the use of only an organic base (DBU: 1,8-Diazabicyclo[5.4.0]undec-7-ene) with the fluorinating agent in ACN or N -methyl-2-pyrrolidone (NMP). Light on/off experiments and radical trapping reactions suggested that an electron-donor–acceptor (EDA) complex could be formed between DBU with trifluoromethyl (or perfluoroalkyl) iodide. The blue-LED irradiation of the EDA complex led to the generation of CF 3 • radicals, which could react with the imidazo[1,2- a ]pyridine substrate, producing the corresponding radical intermediate. This latter compound could undergo an oxidation-deprotonation sequence (Path A, ) or a hydrogen abstraction by iodine radicals, delivered from the EDA complex and iodide (Path B, ) . The same year, Wu and his colleagues developed a similar idea, using DMSO as a solvent instead of NMP for the C 3 -perfluoroalkylation of imidazo[1,2- a ]pyridines. 
This modified approach contributed to the synthesis of 27 C 3 -fluorinated imidazo[1,2- a ]pyridines . A good tolerance is observed for both the electron-withdrawing and electron-donating groups (21 to 96% yield) . The C 3 -trifluoroethylation of imidazo[1,2- a ]pyridines by Xu and Fu was carried out with fac -[Ir(ppy) 3 ] (ppy: 2-phenylpyridinato), 1,1,1-trifluoro-2-iodoethane, and K 2 CO 3 in DMSO . This visible-light-promoted reaction resulted in the preparation of a broad range of C 3 -fluorinated imidazopyridines, exhibiting electron-poor or -rich substituents . Inhibition of this transformation was performed with TEMPO as a radical scavenger; the expected compound was not detected, implying a radical path. The mechanism of this functionalized C–H could thus be rationalized: the oxidation of the excited photocatalyst by CF 3 CH 2 I could lead to the CF 3 CH 2 • radical species. Addition of the latter radical could be accomplished on the imidazo[1,2- a ]pyridine motif. Oxidation and base-mediated deprotonation could induce the formation of the desired product. Huang and Zhu went further with the C 3 -perfluoroalkylation of imidazo[1,2- a ]pyridines with TMEDA (tetramethylethylenediamine) as a radical initiator and K 3 PO 4 as the base . The transformation displayed a good tolerance with diversely substituted imidazo[1,2- a ]pyridines (74 to 92% yield). Functional groups in the meta - or para - position provided the wanted compounds with better results than the imidazo[1,2- a ]pyridines featuring substituents in the ortho position. Under modified conditions, the procedure was also attempted for a C 3 -difluoroacetylation, giving the expected N -heterocycle with 61% yield . Mechanistic control experiments with radical scavengers (TEMPO, 1,1-diphenylethene, and hydroquinone) jeopardized the reaction since the expected C 3 -perfluoroalkylated imidazo[1,2- a ]pyridine was obtained in low yields. With TEMPO, 2,2,6,6-tetramethyl-1-(perfluorobutoxy)piperidine was identified by GC-MS (gas chromatography–mass spectrometry) analysis, confirming the radical character of the process. To enlarge the diversity of fluorinated imidazopyridines, the Fu team conceived access to (phenylsulfonyl) difluoromethylated structures in the presence of PhSO 2 CF 2 I, K 2 CO 3, and fac -[Ir(ppy) 3 ] . The adopted protocol allowed for the preparation of 15 C 3 -functionalized imidazo[1,2- a ]pyridines with good-to-high yields . The same group established a related approach for introducing a difluoroacetyl motif in the C 3 position of the imidazo[1,2- a ]pyridine skeleton with BrCF 2 CO 2 Et. Substrates exhibiting electron-donating groups led to the desired products in higher yields than the electron-withdrawing ones . Xu and co-workers reported the C–H difluoroalkylation of imidazo[1,2- a ]pyridines mediated by visible light. The protocol requires the use of bromodifluoroaryl ketones as a co-substrate, TMEDA as the organic base in acetonitrile, and a 33 W compact fluorescent light (CFL). These mild and straightforward conditions yielded a wide range of imidazo[1,2- a ]pyridines displaying various functional groups . Difluoromethylenephosphonation of imidazo[1,2- a ]pyridine, realized by the Hajra team, provides functionalized N -heterocycles by employing rose bengal (RB) as a photocatalyst, bis(pinacolato)diboron as an additive, and NaHCO 3 as the base . The exploration of the substrate’s scope revealed that highly substituted imidazo[1,2- a ]pyridines could be synthesized through this method. 
The expected products were not observed by attempting the standard reaction with different radical inhibitors (TEMPO, BHT: Butylated hydroxytoluene, para -benzoquinone, and 1,1-diphenylethylene), confirming the radical process. Without bis(pinacolato)diboron (B 2 pin 2 ), the reaction did not proceed, indicating the crucial role of this additive. With all these findings and cyclic voltammetry measurements, the authors proposed the activation of imidazo[1,2- a ]pyridine by bis(pinacolato)diboron, generating a cationic intermediate. This intermediate could then undergo the addition of CF 2 PO(OEt) 2 radicals (formed by RB* oxidation). Hydrogen abstraction by NaHCO 3 could deliver the difluoromethylenephosphonated imidazo[1,2- a ]pyridine . In summary, fluoroalkylation of imidazopyridines could be achieved under several reaction conditions, with moderate-to-good yields (up to 96% yield). All the approaches described here required polar aprotic solvents (mainly ACN, DMSO) and organic bases or acids under an inert atmosphere. The photocatalysts employed were organophotocatalysts or fac -[Ir(ppy) 3 ]. These strategies allowed access to diversely fluorinated compounds.

2.1.2. Alkylation of Imidazopyridines

Alongside the fluoroalkylation of imidazopyridines, the introduction of various moieties by alkylation reactions has also arisen during the last five years. In 2017, inspired by the above-mentioned C 3 -trifluoroethylation of imidazo[1,2- a ]pyridines by Xu, Fu, and coworkers , Liu and Sun developed the C 3 -cyanomethylation of imidazo[1,2- a ]pyridines using an analogous photocatalytic system . With the inexpensive bromoacetonitrile as a cyanomethyl source, the group efficiently prepared a large array of substituted imidazopyridines (up to 96% yield). It should be noted that a significant yield enhancement was observed for some substrates by employing iodoacetonitrile rather than bromoacetonitrile. This robust method was also applied to the synthesis of Zolpidem and Alpidem, drugs used in the treatment of insomnia and anxiety, respectively. Once the cyanomethylated imidazo[1,2- a ]pyridines were isolated, they were converted into the corresponding ethyl esters. These intermediates were then hydrolyzed with KOH and amidified in dichloromethane (DCM) following standard procedures , to afford the expected biologically active compounds . Aminoalkylation has also drawn attention in the context of imidazopyridines’ functionalization. In 2018, Hajra and co-workers disclosed the metal-free coupling between tertiary amines and imidazo[1,2- a ]pyridines . With rose bengal as the organocatalyst under aerobic conditions, they combined N -phenyltetrahydroisoquinoline with imidazopyridines in a regioselective manner . A broad range of highly substituted imidazopyridines was thus produced. Good to excellent yields were obtained with electron-donating or -withdrawing groups. The approach was also extended to N , N -dimethylaniline derivatives with success. Control experiments with radical scavengers (TEMPO, BHT) and a singlet oxygen quencher (DABCO: 1,4-diazabicyclo[2.2.2]octane) supported the elucidation of the mechanism. The suggested pathway could pass through an energy transfer between the excited state of the photocatalyst (RB*) and the ground-state oxygen ( 3 O 2 ). The generated singlet oxygen could undergo an SET with the tertiary amine to deliver the amine radical cation. By hydrogen abstraction, an iminium ion is then formed. This latter species could be involved in an electrophilic addition with the imidazopyridine.
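The aerobic rose bengal pathway just described can be condensed into the following hedged sketch, which simply restates the proposal above (R 3 N = the tertiary amine partner, e.g., N -phenyltetrahydroisoquinoline; EnT = energy transfer; ImPyH = C 3 –H imidazo[1,2- a ]pyridine).

```latex
% Hedged sketch of the proposed aerobic aminoalkylation
\begin{align*}
\mathrm{RB} \xrightarrow{\;h\nu\;} \mathrm{RB}^{*}, \qquad
\mathrm{RB}^{*} + {}^{3}\mathrm{O_{2}} &\xrightarrow{\;\mathrm{EnT}\;} \mathrm{RB} + {}^{1}\mathrm{O_{2}}\\
{}^{1}\mathrm{O_{2}} + \mathrm{R_{3}N} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{O_{2}^{\bullet-}} + \mathrm{R_{3}N^{\bullet+}}\\
\mathrm{R_{3}N^{\bullet+}} &\xrightarrow{\;-\mathrm{H^{\bullet}}\;} \text{iminium ion}\\
\text{iminium ion} + \mathrm{ImPyH} &\longrightarrow \mathrm{C_{3}}\text{-adduct} \xrightarrow{\;-\mathrm{H^{+}}\;} \text{product}
\end{align*}
```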
A final proton abstraction could then give the target compound. More recently, Yu et al. conceptualized a sustainable procedure for the aminomethylation of imidazo[1,2- a ]pyridines by using N -arylglycines as the amine sources and an original metallated photocatalyst (CsPbBr 3 ) . The principal advantage of this procedure is that the perovskite catalyst could be re-used at least five times with excellent yields (more than 88%). As the reaction is in a heterogeneous system, the recovery of CsPbBr 3 was facilitated by simple centrifugation. Good compatibility was observed for substrates featuring donor (Me, OMe, NH 2 ) or acceptor (F, Cl, Br, CN, CF 3 , CO 2 Me) substituents. It should be emphasized that the aminomethylation is applicable to a gram-scale synthesis with sunlight irradiation. The inhibition of the transformation by radical scavengers suggested a radical reaction mechanism. CsPbBr 3 could release an electron (e−) and a hole (h+) by absorbing a photon. An SET could then be realized from the N -arylglycine to the hole, leading to the corresponding radical. This intermediate could then be added to the imidazopyridine scaffold. The oxidation by O 2 provided the heterocyclic cation, which could evolve to the final product by deprotonation . The same team went one step further by improving the protocol in a greener way. In 2021, Lv and Yu established an eco-compatible carbon nitride nanosheet (NM-g-C 3 N 4 ), which could catalyze the aminomethylation of imidazopyridines under blue-LED irradiation . To fulfill the criteria of green chemistry, dimethyl carbonate was employed this time as the reaction solvent. Again, a set of aminomethylated imidazo[1,2- a ]pyridines displaying diverse functional groups (18 examples) was elaborated smoothly . As previously, the NM-g-C 3 N 4 photocatalyst could be reused after the reaction workup by centrifugation. The recycling experiments showed that the photocatalyst’s efficiency was maintained after seven transformation cycles. Zhu and Le carried out the C–H aminomethylation reaction with N -arylglycine derivatives in an analogous eco-compatible way . The reaction occurred efficiently under photocatalyst-free conditions . A wide range of functionalized imidazopyridines was provided with good results (40 to 95% yields). The group unraveled the aminomethylation path with various control experiments, namely, reaction under a nitrogen atmosphere or in an open-air flask, or radical trapping with TEMPO. By blue-LED irradiation, a singlet oxygen could be formed and interact with the N -arylglycine substrate to generate a radical cation. This latter intermediate could evolve into an alkyl radical by proton transfer and decarboxylation. Subsequently, the amino radical could undergo a proton transfer, leading to the corresponding imine. The final electrophilic addition of the imine to the imidazopyridine motif allows access to the target product. The C–H alkylation of imidazo[1,2- a ]pyridines could be performed with N -hydroxyphthalimide esters as alkylating reagents. Jin and his colleagues conceived this original strategy for the C–H functionalization of the aryl part of the imidazopyridine platform . The organic photoredox catalysis involved eosin Y as the photocatalyst and TfOH (triflic acid) as the additive. The reaction was well tolerated by a wide array of imidazopyridine substrates (up to 86% yield). By checking the N -hydroxyphthalimide esters’ scope, satisfactory results were afforded for the primary, secondary, and tertiary alkyl groups.
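The perovskite-mediated generation of the aminomethyl radical can be summarized as below. This is a hedged sketch based on the description above; the decarboxylation step is made explicit, as in the photocatalyst-free variant, and ArNHCH 2 • denotes the N -arylglycine-derived radical.

```latex
% Hedged sketch of hole-mediated radical generation from an N-arylglycine
\begin{align*}
\mathrm{CsPbBr_{3}} &\xrightarrow{\;h\nu\;} \mathrm{e^{-}} + \mathrm{h^{+}}\\
\mathrm{ArNHCH_{2}CO_{2}H} + \mathrm{h^{+}} &\xrightarrow{\;\mathrm{SET},\ -\mathrm{H^{+}},\ -\mathrm{CO_{2}}\;} \mathrm{ArNHCH_{2}^{\bullet}}\\
\mathrm{ArNHCH_{2}^{\bullet}} + \mathrm{ImPyH} &\longrightarrow \mathrm{adduct^{\bullet}}\\
\mathrm{adduct^{\bullet}} &\xrightarrow{\;\mathrm{O_{2}\ (ox.)},\ -\mathrm{H^{+}}\;} \text{aminomethylated product}
\end{align*}
```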
The alkylation pathway was unraveled with radical trapping experiments: an adduct with BHT was identified with HRMS (high-resolution mass spectrometry) analysis, validating the radical mechanism. An SET could occur from the excited state of eosin Y to the protonated N -hydroxyphthalimide ester. The formed radical species could be decomposed into an alkyl radical, which could be introduced into the imidazopyridine’s nucleus. The oxidation of the imidazopyridine radical by an SET with eosin Y •+ produced the corresponding cation. Finally, the expected compound is obtained via deprotonation . The Hajra group deepened the concept of C–H alkylation by exploring the three-component carbosilylation of alkenes with the imidazopyridine scaffold . The combination of a metal catalyst (FeCl 2 ) and blue-LED photocatalysis enabled the C–C and C–Si bond formation. The reaction involving an imidazopyridine substrate, a styrene derivative, and (TMS) 3 SiH gave a wide array of silylated imidazo[1,2- a ]pyridines (26 compounds) in 45 to 88% yields . After the scope study, the authors examined the transformation pathway with radical scavengers. The reaction did not occur in the presence of TEMPO, BHT, or benzoquinone, reflecting a radical mechanism. The same result was observed without a photocatalyst or light source. Considering these control experiments, the proposed path could proceed via an SET between the iron(II) catalyst and the excited state of eosin Y. The radical anion eosin Y •− could then realize an SET with the di- tert -butyl peroxide, affording the radical t BuO • . Hydrogen abstraction from (TMS) 3 SiH by t BuO • could then generate a silyl radical, which could add to the styrene. An SET could then be accomplished from the styrene-derived radical to iron(III). An electrophilic addition of the resulting cation could be reached with the imidazopyridine, allowing access to the desired product. In summary, the alkylation methodologies reported herein provided a wide library of functionalized imidazo[1,2- a ]pyridines with a broad substrate scope and satisfactory yields. The strategies involved organic or organometallic catalytic systems, but also innovative techniques such as the use of perovskite catalysts and carbon nitride nanosheets, or photocatalyst-free conditions.

2.1.3. Carbonylalkylation and Carbonylation of Imidazopyridines

As a continuation of the visible-light C–H alkylation of imidazopyridines, the addition of carbonyl groups and their derivatives was also widely studied. In 2018, Zhu and Le conducted the visible-light-mediated carbonylalkylation of imidazo[1,2- a ]pyridines with N -arylglycine esters . The coupling reaction between these two molecules was carried out with a copper catalyst (Cu(OTf) 2 ) in acetonitrile. The imidazo[1,2- a ]pyridine scope investigation indicated that electron-poor substituents increased the transformation efficiency more than the methyl groups. Studying the N -arylglycine esters showed good suitability with a large array of substrates. The same authors recently extended their synthetic method by coupling imidazo[1,2- a ]pyridines with α-amino ketones and α-amino acid derivatives . Some improvements were applied: the metal catalyst was replaced by an organophotocatalyst (eosin Y), with citric acid monohydrate as an additive. Ethanol was employed as a greener solvent, and the visible-light irradiation was provided by an 18 W blue-LED light .
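The three-component carbosilylation can be summarized as the radical relay below. The sketch is a hedged restatement of the proposal above, not the authors’ original scheme (DTBP = di- tert -butyl peroxide, written as ( t BuO) 2 ; the styrene-derived radical is drawn as the benzylic intermediate expected from silyl-radical addition).

```latex
% Hedged sketch of the proposed Fe/eosin Y carbosilylation relay
\begin{align*}
\mathrm{Eosin\,Y} \xrightarrow{\;h\nu\;} \mathrm{Eosin\,Y}^{*}, \qquad
\mathrm{Eosin\,Y}^{*} + \mathrm{Fe^{II}} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Eosin\,Y}^{\bullet-} + \mathrm{Fe^{III}}\\
\mathrm{Eosin\,Y}^{\bullet-} + (t\mathrm{BuO})_{2} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Eosin\,Y} + t\mathrm{BuO^{\bullet}} + t\mathrm{BuO^{-}}\\
t\mathrm{BuO^{\bullet}} + \mathrm{(TMS)_{3}SiH} &\xrightarrow{\;\mathrm{HAT}\;} t\mathrm{BuOH} + \mathrm{(TMS)_{3}Si^{\bullet}}\\
\mathrm{(TMS)_{3}Si^{\bullet}} + \text{styrene} &\longrightarrow \text{benzylic radical} \xrightarrow{\;\mathrm{Fe^{III}},\ \mathrm{SET}\;} \text{benzylic cation} + \mathrm{Fe^{II}}\\
\text{benzylic cation} + \mathrm{ImPyH} &\xrightarrow{\;-\mathrm{H^{+}}\;} \mathrm{C_{3}}\text{-carbosilylated product}
\end{align*}
```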
The scope examination of N -arylglycine ethyl esters indicated the high reaction efficiency with electron-donating groups on the aryl motif, while various esters (methyl, isopropyl, tert -butyl, and benzyl esters) displayed good compatibility with moderate-to-good yields. α-Amino ketones delivered the expected imidazo[1,2- a ]pyridines in low yields. Regarding the scope of imidazo[1,2- a ]pyridines, a similar trend was observed with a better reactivity of electron-rich substrates. The authors performed control experiments, including radical trapping, reactions with imine substrates, and cyclic voltammetry, to understand the mechanism. The possible path could proceed by an SET between the excited state of eosin Y and the α-amino carbonyl derivative. The formed radical cation could be oxidized into the iminium intermediate, which could undergo an electrophilic addition from the imidazo[1,2- a ]pyridine. A final oxidation step could provide the desired product. In 2022, Jiang and Yu realized the ethoxycarbonyl methylation of imidazo[1,2- a ]pyridines with α-bromoesters in water, employing rhodamine B (RhB) as the photocatalyst, dilauroyl peroxide as the oxidant, and potassium ethyl xanthogenate as an additive . This method allowed for the preparation of three imidazopyridines with moderate yields. The photochemical reaction was also successfully applied to the preparation of Zolpidem in one step, with 2-bromo- N , N -dimethylacetamide as the substrate partner . Another application of the carbonylalkylation reaction was performed by the Chaubey group, with the total synthesis of Zolpidem . After a detailed methodology for the C 3 -carbonylation of imidazo[1,2- a ]pyridines in the presence of dialkyl malonates, the authors discovered a rapid multi-step synthetic route to Zolpidem in high yields. This sequence was based on the visible-light-promoted C–H carbonylalkylation of the corresponding imidazopyridine, followed by a Krapcho decarboxylation at 160 °C, hydrolysis, and condensation . An analogous idea emerged from Hajra’s group: by changing the carbonylalkylated source (ethyl diazoacetate) and the photocatalyst ([Ru(bpy) 3 ]Cl 2 , with bpy: 2,2′-bipyridyl), they accomplished the C 3 -ethoxycarbonyl methylation of imidazo[1,2- a ]pyridines . By studying the scope of imidazo[1,2- a ]pyridines, a good compatibility was noticed with electron-rich substituents. Surprisingly, the reaction did not occur in the presence of electron-withdrawing groups. A slight modification of the optimized conditions was thus applied: by adding 10 mol% of N , N -dimethyl- m -toluidine, a redox-active additive, the C–H carbonylalkylation ran smoothly with satisfactory results (up to 92% yield). The viability of the methodology was confirmed with the gram-scale preparation of ethyl 2-(2-phenylimidazo[1,2-a]pyridin-3-yl)acetate with a 70% yield (Equation (1)) and the late-stage amidation of a C 3 -substituted compound (Equation (2)). Similarly, Yu, Tan, and Deng expanded Hajra’s methodology to a wide range of diazo derivatives and imidazo[1,2- a ]pyridine substrates (28 examples) . Subsequently, the reaction showed its applicability with a gram-scale reaction (Equation (1)) and the Zolpidem preparation (Equation (2)). This strategy allowed shorter and more efficient access to synthetic drugs than Chaubey’s approach ( cf . ). In 2019, Guan and He moved one step beyond the concept of imidazo[1,2- a ]pyridines’ carbonylation .
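Chaubey’s route can be laid out as a linear sequence. The outline below only re-orders the steps already named in the text; ImPyH stands for the appropriately substituted imidazo[1,2- a ]pyridine precursor, and the final condensation is drawn with dimethylamine because Zolpidem carries an N , N -dimethylacetamide group (treat the exact reagents as illustrative).

```latex
% Hedged outline of the malonate-based route to Zolpidem
\begin{align*}
\mathrm{ImPyH} &\xrightarrow{\;\text{visible-light C--H carbonylalkylation with } \mathrm{CH_{2}(CO_{2}R)_{2}}\;} \mathrm{ImPy{-}CH(CO_{2}R)_{2}}\\
&\xrightarrow{\;\text{Krapcho decarboxylation, } 160\,^{\circ}\mathrm{C}\;} \mathrm{ImPy{-}CH_{2}CO_{2}R}\\
&\xrightarrow{\;\text{hydrolysis}\;} \mathrm{ImPy{-}CH_{2}CO_{2}H}\\
&\xrightarrow{\;\text{condensation with } \mathrm{HNMe_{2}}\;} \text{Zolpidem}
\end{align*}
```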
The direct addition of a carbonyl motif on the imidazo[1,2- a ]pyridine skeleton was conducted under 32 W CFL irradiation with an oxygen balloon and 9-mesityl-10-methylacridinium perchlorate (Acr + -Mes). Using a nitrone derivative as a co-substrate, an aryl entity could be included in the carbonyl group. With the optimized conditions in hand, the scope was scrutinized: imidazo[1,2- a ]pyridines bearing bromo- or chloro-substituents in position C7 exhibited a higher reaction efficiency (65–66% yield) than the C6-substituted ones (50–54% yield). Concerning the nitrone screening, higher yields were noted with the meta - and para -substitution on the aryl part than with the ortho -substitution, probably due to steric hindrance. Next, the reaction mechanism was elucidated with control experiments (radical inhibition, 18 O-labeling reaction, and Stern–Volmer fluorescence quenching) and the X-ray crystal structure of the N -hydroxylamine intermediate. The pathway started from the SET between the excited photocatalyst (Acr + -Mes*) and the imidazo[1,2- a ]pyridine. The nitrone could then be introduced in the imidazo[1,2- a ]pyridine. Two possible paths could then be identified. The first path could involve deprotonation and the release of nitrosobenzene. The resulting radical could react with the radical oxygen species O 2 •− (formed by an SET with the radical photocatalyst) to generate the carbonylated product. The second path could imply an SET from the radical photocatalyst to the radical nitroso, giving the corresponding N -hydroxylamine. A second SET could then occur, leading to a radical N -hydroxylamine. As in the first path, the decomposition of the N -hydroxylamine delivered a nitrosobenzene and the corresponding radical, which could be transformed into the target compound . Carbonylalkylation and carbonylation of imidazopyridines enabled the introduction of amino acid derivatives in the imidazopyridine’s core. The employed approaches consisted of the use of metal catalysts (Cu(OTf) 2 , [Ru(bpy) 3 ]Cl 2 ) or organophotocatalysts, in apolar (DCM) or polar protic and aprotic solvents (ACN, Dioxane, MeOH, EtOH). Eco-friendly methods demonstrated their efficiency in aqueous media.

2.1.4. Sulfonylmethylation of Imidazopyridines

In the same way, Zhang and Cui exploited an extension of the imidazopyridines’ alkylation for the sulfonylmethylation reaction . By utilizing bromomethyl sulfones with an iridium photocatalyst ([Ir(ppy) 3 ]), a broad range of imidazo[1,2- a ]pyridines could be functionalized efficiently with satisfactory yields . The transformation is also well suited for diversely substituted bromomethyl sulfones. The mechanism investigation by radical trapping revealed that the transformation path could involve radical intermediates. From this observation, the authors suggested an SET from the excited state of the photocatalyst to the bromomethyl sulfone, to deliver the corresponding sulfonylmethyl radical. The addition of the latter intermediate to the imidazo[1,2- a ]pyridine’s core provided the corresponding radical, which could be oxidized via an SET with [Ir(ppy) 3 ] + . The formed cation could be converted into the expected compound by deprotonation.

2.1.5. Formylation of Imidazopyridines

Formyl functional groups constitute a major moiety in N -heterocycles, since they could be key building blocks for synthesizing highly complex molecules. In this frame, the visible-light-induced formylation of imidazo[1,2- a ]pyridines has recently gained interest.
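The iridium-based sulfonylmethylation cycle just outlined can be sketched as follows; this is a hedged restatement, not the published scheme (ArSO 2 CH 2 Br = bromomethyl sulfone; Ir III /Ir IV denote the resting and oxidized states of [Ir(ppy) 3 ]).

```latex
% Hedged sketch of the proposed Ir(ppy)3-mediated sulfonylmethylation cycle
\begin{align*}
\mathrm{Ir^{III}} \xrightarrow{\;h\nu\;} \mathrm{Ir^{III*}}, \qquad
\mathrm{Ir^{III*}} + \mathrm{ArSO_{2}CH_{2}Br} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Ir^{IV}} + \mathrm{ArSO_{2}CH_{2}^{\bullet}} + \mathrm{Br^{-}}\\
\mathrm{ArSO_{2}CH_{2}^{\bullet}} + \mathrm{ImPyH} &\longrightarrow \mathrm{adduct^{\bullet}}\\
\mathrm{adduct^{\bullet}} + \mathrm{Ir^{IV}} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{adduct^{+}} + \mathrm{Ir^{III}}\\
\mathrm{adduct^{+}} &\xrightarrow{\;-\mathrm{H^{+}}\;} \text{sulfonylmethylated product}
\end{align*}
```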
The Hajra team developed mild conditions for the regioselective formylation of imidazo[1,2- a ]pyridines in position C 3 , with rose bengal as the photoredox catalyst, KI as an additive, and TMEDA as the formylating agent . This reaction is suitable with substrates featuring electron-poor, -rich, or halogenated substituents (up to 95% yield). In control experiments, the transformation was entirely inhibited by TEMPO or benzoquinone. The same result was noted by replacing O 2 (from the air) with an argon atmosphere. With all these observations, the authors proposed the following pathway: by excitation of the photocatalyst (RB), a singlet oxygen ( 1 O 2 ) could be generated, inducing the formation of the iodine radical. This latter intermediate could oxidize TMEDA to the corresponding radical cation. With the superoxide radical anion, the TMEDA-derived radical cation could be turned into an iminium ion. The electrophilic addition with the imidazo[1,2- a ]pyridine could then occur, followed by a re-aromatization. Iodine could then oxidize the installed TMEDA-derived motif, releasing a second iminium ion. Consequently, hydrolysis of this iminium ion could afford the desired formylated imidazo[1,2- a ]pyridine .

2.1.6. Arylation of Imidazopyridines

Recently, Cui and Wu conducted the visible-light C(sp 2 )–H arylation of heterocycles with hypervalent iodine ylides as the arylating agents, eosin Y as the photocatalyst, and potassium carbonate as the base . Among the synthesized heterocyclic scaffolds, five examples of imidazo[1,2- a ]pyridines were reported with satisfactory yields . Sun et al. reported their research on the regioselective azolylation of imidazo[1,2- a ]pyridines . The installation of the azole nucleus was mediated by 2-bromoazoles under blue-LED irradiation. The photocatalytic process involved Cy 2 NMe ( N , N -dicyclohexylmethylamine) as an organic base and an iridium photocatalyst ([Ir(ppy) 2 (dtbbpy)]PF 6 , with dtbbpy: 4,4′-di-tert-butyl-2,2′-dipyridyl). This synthetic approach furnished 29 C 3 -substituted imidazo[1,2- a ]pyridines in 28 to 79% yield . Electron-poor groups on the imidazopyridine scaffold diminished the heterocycle’s reactivity, whereas electron-rich substituents favored the reaction’s efficiency. In addition, the authors reported good suitability with diversely substituted bromo-heteroarenes, e.g., bromothiadiazole, bromothiophene, and bromofuraldehyde. A radical inhibition of the C 3 -azolylation was also conducted with TEMPO: the target molecule was not detected, pointing out the radical character of the transformation. An oxidative quenching of the excited-state photocatalyst by the bromoazole could lead to the corresponding heterocyclic radical, which could be added to the imidazopyridine skeleton. Simultaneously, Ir(IV) could oxidize the organic base into an amine radical cation, regenerating the Ir(III) photocatalyst. This latter species could then capture a hydrogen atom from the radical adduct to release the desired product. These two examples showed the broad possibilities for functionalizing the imidazopyridine scaffold. The introduction of aryl and heteroaryl motifs in good-to-moderate yields was achieved with an organophotocatalyst (eosin Y) or an iridium complex in polar aprotic solvents.
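A hedged sketch of this azolylation cycle, matching the description above (Az–Br = 2-bromoazole; the amine radical cation acts as the hydrogen-atom acceptor in the last step):

```latex
% Hedged sketch of the proposed Ir-catalyzed C3-azolylation
\begin{align*}
\mathrm{Ir^{III*}} + \mathrm{Az{-}Br} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Ir^{IV}} + \mathrm{Az^{\bullet}} + \mathrm{Br^{-}}\\
\mathrm{Az^{\bullet}} + \mathrm{ImPyH} &\longrightarrow \mathrm{adduct^{\bullet}}\\
\mathrm{Ir^{IV}} + \mathrm{Cy_{2}NMe} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Ir^{III}} + \mathrm{Cy_{2}NMe^{\bullet+}}\\
\mathrm{adduct^{\bullet}} + \mathrm{Cy_{2}NMe^{\bullet+}} &\xrightarrow{\;\mathrm{HAT}\;} \mathrm{ImPy{-}Az} + \mathrm{Cy_{2}N(Me)H^{+}}
\end{align*}
```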
With the presence of heteroarylamines in a plethora of natural products, C–H amination of heterocyclic structures constitutes a long-lasting interest for organic chemists . Efficient, mild, and regioselective methodologies were addressed, especially the eco-friendly C–H functionalization induced by visible light . Within this frame, Adimurthy and co-workers published in 2017 the metal-free C 3 -amination of imidazo[1,2- a ]pyridines . This synthetic strategy allowed for the introduction of aza-heteroarenes (benzotriazole, pyrazole, imidazole, 1 H -1,2,4-triazole, 1 H -benzo[ d ]imidazole, and 1 H -indazole) onto the imidazo[1,2- a ]pyridine platform. Satisfactory yields were obtained, even with halogenated substituents on both reaction substrates . Similarly, Zhang and Lei introduced an azole motif in imidazo[1,2- a ]pyridines at position C 3 . In contrast with the previous method, the C–N bond formation additionally required a metal catalyst ([Co(dmgH)(dmgH 2 )]Cl 2 , with dmg: dimethylglyoximato) . The corresponding C 3 -functionalized imidazo[1,2- a ]pyridines were generated with good-to-excellent yields. The scope examination with azoles demonstrated good reaction tolerance by employing pyrazoles, imidazoles, or triazoles. A thorough mechanistic study, including the light on/off experiments, radical trapping, cyclic voltammetry measurements, and DFT (density functional theory) calculations, validated the radical reaction path. The excited state of the organophotocatalyst could undergo an SET with the imidazo[1,2- a ]pyridine, generating the corresponding radical cation. This species could then undergo a nucleophilic attack by the azole substrate, giving the corresponding radical. Simultaneously, a Co(III) catalyst could oxidize the reduced photocatalyst, returning the photocatalyst to its ground state. The subsequently formed Co(II) could realize an SET with the radical imidazo[1,2- a ]pyridine intermediate. The target aza-heterocycle could be engendered by deprotonation. Co(I) could be converted back into Co(III) by proton capture and dehydrogenation . The regioselective C–N bond formation could also be extended to incorporate sulfonamide groups on imidazo[1,2- a ]pyridines. The Sun group outlined the light-mediated C 3 -sulfonamidation reaction with an iridium photocatalyst ([Ir(ppy) 2 (dtbbpy)]PF 6 ) and NaClO as the oxidant . The process was very compatible with imidazo[1,2- a ]pyridines featuring electron-poor or -rich substituents. By contrast, a significant electronic effect could be remarked with the sulfonamides: methyl-, methoxy-, and tert -butyl-derived sulfonamides enhanced the yields compared to the chlorinated or brominated ones. Control experiments with TEMPO or 1,1-diphenylethene corroborated the radical mechanism. The oxidative quenching of the photocatalyst’s excited state by NaClO could result in an Ir(IV) complex. This organometallic species could be involved in an SET with the sulfonamide to deliver a sulfonamidyl radical, which could react with the imidazo[1,2- a ]pyridine. Oxidation and deprotonation could transform the produced radical into the desired N -heterocycle . In 2020, Braga and his co-workers performed the azo-coupling of imidazo[1,2- a ]pyridines with aryl diazonium salts under green LED irradiation . By this strategy, 18 functionalized imidazopyridines were prepared with good-to-excellent yields (up to 99%).
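The dual photoredox/cobaloxime amination described above can be condensed as follows. This is a hedged sketch (PC = organophotocatalyst, AzH = azole, Co = the cobaloxime catalyst) that restates the proposal rather than reproducing the published scheme.

```latex
% Hedged sketch of the photoredox/cobaloxime C3-amination
\begin{align*}
\mathrm{PC}^{*} + \mathrm{ImPyH} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{PC}^{\bullet-} + \mathrm{ImPyH^{\bullet+}}\\
\mathrm{ImPyH^{\bullet+}} + \mathrm{AzH} &\longrightarrow \mathrm{[ImPy(H)Az]^{\bullet}} + \mathrm{H^{+}}\\
\mathrm{Co^{III}} + \mathrm{PC}^{\bullet-} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{Co^{II}} + \mathrm{PC}\\
\mathrm{Co^{II}} + \mathrm{[ImPy(H)Az]^{\bullet}} &\xrightarrow{\;\mathrm{SET},\ -\mathrm{H^{+}}\;} \mathrm{Co^{I}} + \mathrm{ImPy{-}Az}\\
\mathrm{Co^{I}} + 2\,\mathrm{H^{+}} &\longrightarrow \mathrm{Co^{III}} + \mathrm{H_{2}}
\end{align*}
```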
The reaction’s viability was validated with a gram-scale synthesis of a diazo derivative (Equation (1)) and the reduction of a diazo imidazo[1,2- a ]pyridine by zinc under acidic conditions (Equation (2)). More recently, the visible-light-induced C–H amination of imidazo[1,2- a ]pyridines was exploited in an environmentally friendly manner with micellar catalysis. Li’s approach was based on the use of amphiphilic surfactants in water, which could form micelles through hydrophobic interactions . The core of the micelles could be employed as a micro-reactor, where substrates could be activated. This green procedure needs a hydrophilic cationic N -aminopyridinium salt as the amine transfer reagent. The “head” of the pyridinium salt (pyridinium nucleus) could interact with the micelle surface, whereas the amine “tail” was localized in the core ( vide infra ). Sodium dodecyl sulfate (SDS) was chosen as the surfactant, as it yielded the best results during the optimization step. With 2,4,5,6-tetrakis(9 H -carbazol-9-yl) isophthalonitrile (4CzIPN) as the photocatalyst under blue-LED irradiation, a series of C 3 -aminated imidazo[1,2- a ]pyridines was provided with good-to-excellent yields (up to 92% yield). The reaction path was unraveled by conducting complementary experiments (radical trapping, light-off procedure, and reactions without surfactant, photocatalyst, or N 2 ). In the hydrophobic core of the micelle, an SET from the excited state of 4CzIPN to the pyridinium salt could lead to the amino radical. A radical addition could then occur on the imidazo[1,2- a ]pyridine. A second SET could furnish the corresponding cation, which could undergo pyridine-mediated deprotonation . The formation of C–N bonds in the imidazopyridine’s structure allowed the incorporation of aza-heterocyclic nuclei, sulfonamides, amines, and diazo groups on the C 3 position. These reactions relied on organophotocatalysts (Acr + -Mes, eosin Y-Na 2 , 4CzIPN) or metal complexes (Co- or Ir-derived catalysts). In a more sustainable way, a micellar system was employed instead of conventional organic solvents. In all the examples, the desired products were obtained with excellent yields.
With the major occurrence of the C–O bond in natural or biologically active compounds, the construction of this motif is highly sought after by researchers. Among the developed strategies, Hajra and co-workers investigated a metal-free methodology for the C–H alkoxylation of imidazo[1,2- a ]pyridines . With an organophotocatalyst (rose bengal) and an alcohol under visible-light LED irradiation, the group constructed a C–O bond at the C 3 position of the imidazo[1,2- a ]pyridine’s nucleus. Twenty-seven examples of functionalized imidazo[1,2- a ]pyridines were synthesized from various alcohols. Good-to-excellent yields were obtained with N -heterocycles displaying electron-poor or -rich substituents, without any electronic effect. Control experiments in the dark or with a radical inhibitor gave insight into the reaction mechanism: by an SET with the excited rose bengal, an imidazopyridine radical cation could be engendered. This latter intermediate could react with the alcohol to yield the corresponding radical. The desired alkoxylated product could then be formed by hydrogen abstraction with HO 2 • . More recently, Singh and his colleagues developed a directing-group-assisted C–H activation strategy for the oxygenation of heterocyclic scaffolds . This original transformation required 1,2,3,5-tetrakis(carbazol-9-yl)-4,6-dicyanobenzene (4CzIPN) as the organic photocatalyst, palladium acetate as the metal catalyst, and potassium trifluoroacetate as the base, in a solvent mixture (CF 3 CO 2 H/DMF: dimethylformamide) and under an oxygen atmosphere. It should be highlighted that a high temperature is needed to oxidize Pd(II) to Pd(IV). In this context, the authors reported one example of imidazopyridine C–H activation, using the imidazopyridine itself as the directing group. In contrast with the previous approach, the imidazopyridine platform is herein functionalized on the aryl part, with a 66% yield . These two examples showed the broad possibilities of C–O functionalization of imidazopyridines. By varying the conditions, the oxygenated motif could be introduced regiospecifically at different positions of the imidazopyridine’s structure. Under metal-free conditions, C 3 -functionalization was realized. In contrast, the palladium-catalyzed reaction allowed for the C–H activation on the aryl part of the scaffold.
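A hedged sketch of the alkoxylation proposal above is given below (ROH = the alcohol). The second line, in which HO 2 • arises from reduction of O 2 by the reduced photocatalyst followed by protonation, is an assumption added for completeness; the text only states that HO 2 • performs the final hydrogen abstraction.

```latex
% Hedged sketch of the rose bengal-mediated C3-alkoxylation
\begin{align*}
\mathrm{RB}^{*} + \mathrm{ImPyH} &\xrightarrow{\;\mathrm{SET}\;} \mathrm{RB}^{\bullet-} + \mathrm{ImPyH^{\bullet+}}\\
\mathrm{RB}^{\bullet-} + \mathrm{O_{2}} &\xrightarrow{\;\mathrm{SET},\ +\mathrm{H^{+}}\;} \mathrm{RB} + \mathrm{HO_{2}^{\bullet}} \quad (\text{assumed origin of } \mathrm{HO_{2}^{\bullet}})\\
\mathrm{ImPyH^{\bullet+}} + \mathrm{ROH} &\longrightarrow \mathrm{[ImPy(H)OR]^{\bullet}} + \mathrm{H^{+}}\\
\mathrm{[ImPy(H)OR]^{\bullet}} + \mathrm{HO_{2}^{\bullet}} &\xrightarrow{\;\mathrm{HAT}\;} \mathrm{ImPy{-}OR} + \mathrm{H_{2}O_{2}}
\end{align*}
```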
The functionalization of imidazo[1,2- a ]pyridines with phosphorus motifs was also envisioned. In 2020, Sun, Chen, and Yu studied the C 3 -phosphorylation of imidazo[1,2- a ]pyridines under visible-light irradiation . By using RhB as a photoredox catalyst, lauroyl peroxide (LPO) as an oxidant, and diethyl carbonate as the reaction solvent, imidazo[1,2- a ]pyridines were efficiently phosphorylated in position C 3 in good yields. The transformation presented a good tolerance in the presence of electron-withdrawing or -donating groups on the imidazo[1,2- a ]pyridine scaffold. With respect to the phosphine oxides, variously substituted diaryl phosphine oxides could be employed under the reaction conditions. After the scope investigation, the authors decided to deepen their knowledge of the C 3 -phosphorylation by examining its pathway. Mechanistic insights (radical trapping, Stern–Volmer fluorescence quenching, voltammetry, and variation of the standard conditions) suggested an energy transfer (ET) from RhB* to the imidazo[1,2- a ]pyridine substrate. The substrate could then evolve to a triplet state (T), capable of reacting with LPO to generate a phosphorus-centered radical through a radical cascade. The addition of this phosphorus radical to the imidazo[1,2- a ]pyridine could thus occur. Final deprotonation could deliver the expected phosphorylated aza-heterocycle .
As sulfur-containing molecular architectures are widely available in various drugs, biologically active molecules, or natural compounds , the conception of C–S bonds has received growing attention from the chemistry community. Complementary to the conventional metal-catalyzed cross-coupling methods , researchers have sought milder procedures for C–S bond creation . Following the examples mentioned above in C–C and C–heteroatom bond formation, Yang and Wang envisaged the C 3 -sulfenylation of imidazo[1,2- a ]pyridines under visible light irradiation . By utilizing eosin B as the organophotocatalyst, tert -butyl hydroperoxide (TBHP) as the oxidant, and aryl sulfinic acids as the sulfur source, functionalized imidazo[1,2- a ]pyridines were delivered smoothly (64 to 87% yield). No electronic influence was remarked for either electron-withdrawing or -donating substituted sulfinic acids. The deciphering of the reaction path by control experiments brought to light the radical character of the transformation. The photoexcited species eosin B* could realize an SET with t BuOOH. Radical t BuOO • could then abstract a hydrogen atom to the sulfinic aryl acid, evolving into a thiyl radical by reduction. This intermediate could thus be added to the imidazo[1,2- a ]pyridine. Finally, the target product could be obtained by an SET and deprotonation . In 2018, Barman and co-workers proposed an analogous synthetic methodology for the C 3 -sulfenylation of imidazo[1,2- a ]pyridines . Compared to the preceding example, thiols replaced sulfinic acids as the sulfenylating agent. The easier procedure was reported, since the oxidation step was conducted by ambient air. A good reaction efficiency was noticed for many imidazo[1,2- a ]pyridines and thiols exhibiting electron-poor or -rich groups . The method’s viability was checked in a gram-scale transformation with 2-phenylimidazo[1,2- a ]pyridine and thiophenol, affording the expected compound an 87% yield. Motivated by the C–H thiocyanation promoted by Hajra et al. , Tang and Yu combined photochemistry with heterogeneous catalysts. As the microporous polymer catalysts could be efficiently recycled in reactions, the authors employed the benzo[1,2- b :4,5- b’ ]dithiophene-4,8-dione conjugated microporous polymer (CMP-BDD) as a heterogeneous photocatalyst. With this strategy, the C–H thiocyanation of imidazo[1,2- a ]pyridines has been revived in a greener manner . Similar to the Hajra group’s results, a good tolerance was noticed for imidazopyridines featuring electron-donating or withdrawing substituents . Following the same approach, Chen and Yu carried out the C(sp 2 )–H thiocyanation of heterocyclic compounds with carbon nitride (g-C 3 N 4 ) as the heterogeneous photocatalyst . Under blue-LED irradiation and with a green solvent (dimethyl carbonate), thiocyanated imidazo[1,2- a ]pyridines were provided in good yields (up to 96% yields). As previously mentioned, the transformation was very compatible, with substrates displaying methyl, methoxy, fluorine, or thienyl groups . Oxidative sulfur forms could also be incorporated into the imidazo[1,2- a ]pyridine’s structure. Piguel et al. accomplished the light-induced regioselective sulfonylation of imidazopyridines in the presence of DABCO- bis (sulfur dioxide) and an aryl iodonium salt . Aside from forming the C–S bond, this reaction allowed the integration of an aryl group on the sulfone part. 
The scope examination with respect to the imidazo[1,2- a ]pyridine substrates indicated negligible electronic effects of the N -heterocycle substituents, considering the good yields obtained. The same trend was found by varying the aryl iodonium hexafluorophosphates. A radical path was suggested by mechanistic insights (Stern–Volmer fluorescence quenching experiments, light-off reactions, and radical trapping). Green-LED activation could favor the formation of the excited species eosin Y*. An SET could occur with the aryl iodonium salt, producing the aryl radical, which could be trapped by the DABCO- bis (sulfur dioxide). The transfer of the sulfonyl radical onto the imidazo[1,2- a ]pyridine could thus follow. The second SET and deprotonation could afford the final sulfonylated N -heterocycle . The formation of C–S bonds in the imidazopyridine skeleton permitted the synthesis of highly functionalized imidazopyridines. By these presented methods, thioethers, thiocyanates, and sulfonyl derivatives could be provided in good yields. In addition to the sulfur sources (e.g., sulfinic acids, thiols), these reactions involved organophotocatalysts (eosin B, eosin Y, rose bengal). Some transformations were optimized by using a greener solvent (dimethyl carbonate) combined with recyclable catalysts (microporous polymer or carbon nitride).
Selenylated molecules have attracted great interest in chemistry, notably for their biological and medicinal properties, including anticancer or anti-Alzheimer’s activities. They also represent key synthetic intermediates/substrates in total synthesis or asymmetric catalysis . With their rising importance, the synthesis of organoselenium compounds has become an important topic in organic chemistry. In the pursuit of eco-compatible processes, Liu et al. described in 2017 the first visible-light-promoted C 3 -selenylation under aerobic conditions . Three selenylated imidazo[1,2- a ]pyridines were elaborated, in the presence of diphenyl diselenide and FIrPic (bis[2-(4,6-difluorophenyl)pyridinato-C2, N ](picolinato) iridium(III)). Because of their low electron density, the imidazo[1,2- a ]pyridines were only afforded in moderate yields. Mechanistic investigations with TEMPO and photoluminescence experiments pointed out the SET from the excited state of FIrPic to the diphenyl diselenide, leading to the PhSe • radical. PhSe • was then oxidized into PhSe + , which could undergo an electrophilic addition to the imidazo[1,2- a ]pyridine’s structure. The expected molecule could be furnished by deprotonation . In 2018, Braga et al. optimized Liu’s protocol by replacing FIrPic with an organophotocatalyst (rose bengal) . The group slightly improved the reaction yield with the imidazo[1,2- a ]pyridine scaffold . In the same vein, the Kumaraswamy team envisaged the C 3 -selenylation of an imidazo[1,2- a ]pyridine motif with diphenyl selenide as the selenylating reagent, LiCl as an additive, and 2-methyl-1-propanol as a solvent . Photoactivation of diphenyl selenide provided the desired N-heterocycle with a 55% yield . In 2019, Yasuike and co-workers conceptualized an alternative light-induced method with ammonium iodide instead of FIrPic as the photocatalyst . A series of 3-(arylselanyl)imidazopyridines were prepared with good-to-excellent yields under aerobic conditions . The reaction was very compatible with variously substituted diarylselenides. The incorporation of selenylated motifs was accomplished with various visible-light-induced methodologies. A narrow scope of substrates was explored due to the low diversity of the selenylating sources. An iridium-based photocatalyst or rose bengal could be employed for these reactions. As an alternative metal-free approach, halogenated salts (LiCl or NH 4 I) were used for the selenylation of imidazopyridines, with moderate-to-good yields.
In 2019, Lee, Jung, and Kim reported the C 3 -bromination of imidazo[1,2- a ]pyridines under visible light . The transformation required CBr 4 as a bench-stable bromine source and an iridium-derived photocatalyst . The scope examination showed that imidazo[1,2- a ]pyridines presenting electron-withdrawing or -donating groups allowed for the preparation of brominated aza-heterocycles with satisfactory yields. The process’s viability was validated with a gram-scale reaction, providing the expected functionalized imidazo[1,2- a ]pyridine with an 83% yield. The C–H functionalization of imidazopyridines constitutes a powerful approach for the synthesis of highly substituted imidazopyridines. All the strategies proved their efficiency with broad scopes, wide functional group tolerances, and high yields. The viability of these methods was validated with gram-scale reactions and the preparation of compounds with biological interest. These methodologies mainly used organophotocatalysts, ruthenium, or iridium complexes under blue-LED irradiation. Some of these transformations have been improved in an eco-compatible way by employing green solvents or sustainable materials as heterogeneous catalysts. Several methodologies have emerged in parallel with the C–H functionalization strategies to elaborate these scaffolds, especially the multi-component one-pot reactions.
In 2018, Siddiqui et al. reported a “green” Groebke–Blackburn–Bienaymé reaction to prepare imidazo[1,2- a ]pyridines, in the presence of 2-amino-pyridines, aldehydes, and isocyanides. The main benefit of this strategy relies on the absence of solvents and metals. The transformation only requires the use of a CFL delivering visible light, affording the corresponding N -heterocycles (11 examples) with high yields. A good tolerance was observed for electron-poor and -rich substituents. The pathway suggested the formation of an imine intermediate, which would undergo a nucleophilic addition by the isocyanide. The generated amine would then undergo a photoactivated intramolecular cyclization . More recently, the Singh team established a similar procedure for constructing imidazo[1,2- a ]pyridines, with 2-amino-pyridines, benzyl amines, and tert -butyl isocyanide. In contrast with the previous strategy, eosin Y was employed as a photocatalyst, and a mixture of eco-friendly solvents (EtOH and water) was needed for this multi-component reaction. By irradiation under visible light at room temperature with 22 W white LEDs, a wide range of substituted imidazo[1,2- a ]pyridines were obtained with excellent yields . The cyclization was very compatible with benzylamines displaying electron-withdrawing or -donating groups. However, the scope exploration with respect to aminopyridines was only limited to 2-aminopyridine and 5-bromo-2-aminopyridine. Mechanistic insights confirmed the radical pathway and the importance of oxygen in the open flask transformation. Based on these observations, the reaction path would proceed through a hydrogen atom transfer (HAT) induced by the photoexcitation of eosin Y. A hydrogen atom would be abstracted from the benzylamine, producing the corresponding benzylic radical. This latter intermediate would be oxidized with O 2 into benzylimine. The nucleophilic addition of aminopyridine to benzylimine would form the aminal and subsequently the imine. As before, the imine would undergo a nucleophilic attack by the isocyanide and a light-promoted cyclization to deliver the expected imidazo[1,2- a ]pyridine . In a solvent-free medium, the same group pursued their research with a greener methodology and with styrene derivatives and 2-aminopyridines . By adding tert -butyl isocyanide and eosin Y under blue-LED irradiation, they obtained a series of diversely substituted imidazo[1,2- a ]pyridines with 92 to 95% yields . The transformation was well suited for electron-poor and -rich groups. Das and Thomas developed a one-pot synthesis of imidazo[1,2- a ]pyridines involving alkenes, N -bromosuccinimide (NBS), and 2-aminopyridine . The substrate scope only included the variation of the styrene substrates. Higher yields were noticed with electron-rich substituents (R = Me or OMe) compared to the electron-withdrawing ones (Br, Cl, NO 2 , CO 2 H, and CO 2 Et). The viability of the process was validated with a gram-scale reaction starting from styrene (R=H), furnishing the desired compound with a 74% yield. A pathway examination established the influence of the photoactivation on the reaction and the bromoketone formation as a key intermediate. The suggested mechanism would consist of a double addition of a bromine radical, provided by NBS under photoactivation, on the styrene. The formed bromoketone would then be subjected to a nucleophilic attack by the 2-aminopyridine. By light induction, the carbonyl moiety would serve as a photosensitizer, allowing the production of a diradical intermediate.
After the radical cyclization, the loss of water and the tautomerization of the imidazopyridine would afford the suitable isomer . In brief, multicomponent reactions demonstrated their applicability under eco-compatible conditions. By using organocatalysts and mainly solvent-free conditions, the preparation of diversely substituted imidazopyridines was achieved with good yields (up to 95% yield). Complementary to the multi-component transformations, various methods have been established to synthesize the imidazo[1,2- a ]pyridines.
In 2016, Singh and co-workers reported an oxidative photoredox catalysis approach to construct 2-nitro-3-arylimidazo[1,2- a ]pyridines in a regioselective manner. To that end, nitrostyrenes and 2-aminopyridines were used as substrates and eosin Y as a photoredox catalyst . The irradiation with a green LED at room temperature, under an open atmosphere, and in acetonitrile led to the expected 2-nitro-3-arylimidazo[1,2- a ]pyridines with good yields (67 to 78% yield). It should be highlighted that nitrostyrenes exhibiting electron-withdrawing or -donating substituents afforded imidazo[1,2- a ]pyridines in satisfactory yields. In contrast, the desired compounds were not obtained with aliphatic nitrostyrenes. The scope for the 2-aminopyridines was only limited to the methyl-substituted substrates. However, the substituent position was studied, demonstrating an insignificant influence of the methyl group on the reaction yield. Mechanistic investigations indicated that the transformation should involve a Michael addition between the 2-aminopyridine and the nitrostyrene, followed by an SET induced by eosin Y under visible light, an intramolecular cyclization, and an oxidation step . Kamal et al. described the visible-light-induced coupling of α-keto vinyl azides and 2-aminopyridines, with [Ru(bpy) 3 ]Cl 2 ·6H 2 O as a photocatalyst . The scope concerning α-keto vinyl azides provided diversely substituted imidazo[1,2- a ]pyridines with excellent yields. 2-Aminopyridines bearing chloro, methyl ester, or methyl substituents also led to good yields for the expected N -heterocycles. Complementary studies hinted that the pathway would involve a photo-decomposition of the vinyl azides into an azirine intermediate. Moreover, the prepared imidazo[1,2- a ]pyridines were biologically evaluated on different cancer cell lines: A549 (lung cancer), DU-145 (prostate cancer), MCF-7 (breast cancer), and HeLa (cervical cancer). Some of these molecules presented encouraging cytotoxic activities against A549, DU-145, and MCF-7 cell lines . The Chuah group reported a visible-light-induced cyclization to form the imidazo[1,2- a ]pyridine motif. The reaction required β-ketoesters and 2-aminopyridines as substrates, erythrosin B as a photoredox catalyst, and KBr as a halogenating agent. A wide range of β-ketoesters was well tolerated, affording the corresponding products in good yields. The variation of the 2-aminopyridines allowed access to imidazo[1,2- a ]pyridines with 59 to 85% yields. The mechanism was investigated with control experiments and cyclic voltammetry. The results indicated that the light irradiation would promote an SET from the bromide ion (Br − ) to the excited photocatalyst, generating a bromine radical. Alongside the photoredox catalysis, a condensation reaction would occur between the β-ketoester and the 2-aminopyridine to obtain the enamine intermediate. This latter compound would react with the bromine radical. By proton abstraction with O 2 •− , an α-bromo ketone would then be generated, which would undergo an intramolecular cyclization . Recently, the Sun team described the photoredox synthesis of C 3 -alkylated imidazo[1,2- a ]pyridines with α-bromocarbonyls and 2-aminopyridines . The methodology consisted of a one-pot condensation and alkylation with an iridium-based photocatalyst under blue-light activation. Satisfactory yields were obtained with a wide range of α-bromocarbonyls ( , Equation (1)).
The extension of the strategy was explored with multi-component reactions involving α-bromocarbonyls, alkyl bromides, and 2-aminopyridine substrates. Again, the elaboration of “unsymmetrical” C 3 -alkylated imidazo[1,2- a ]pyridines was achieved with good yields ( , Equation (2)). The whole approach was validated with a gram-scale reaction and the preparation of Zolpidem, a drug used for insomnia treatment. In light of control experiments, the reaction path would proceed via an alkylation–condensation sequence, followed by the addition of an acetophenone-derived radical, generated by an SET, onto the imidazo[1,2- a ]pyridine’s nucleus. Imidazo[1,5- a ]pyridine derivatives could also be built through the photocyclization of imidazoles at room temperature and in NMP . This approach was based on the light irradiation of imidazoles displaying strongly electron-rich substituents in position 2 . A total of five imidazo[1,5-a]pyridine-5,8-diones were synthesized with moderate yields, probably due to photodegradation. The reaction path suggested a ring contraction of the pyrone ring followed by a cyclopentanedione ring opening. The resulting biradical intermediate would undergo decarbonylation, intramolecular cyclization, and oxidation, yielding the expected heterocycle. All these syntheses illustrated the broad range of possible methods for constructing the imidazopyridine’s structure. Satisfactory yields were obtained for the prepared highly substituted imidazopyridines. The applicability of some strategies was confirmed with the elaboration of imidazo[1,2- a ]pyridines with biological interest.
In summary, all the presented studies for the construction of imidazopyridines have reflected the strong emergence of photochemistry over the last decade. Under visible-light photocatalysis, efficient access to scaffolds with a high biological interest is achievable, with good yields and a wide functional group compatibility. These methodologies also represent promising alternatives to classical approaches for elaborating these motifs, owing to their mild and eco-friendly conditions (such as room temperature or green solvents). The transformations involved in the synthesis of these N -heterocycles could be grouped into three categories: (1) the C–H functionalizations, which mainly occur in the C 3 -position of the imidazo[1,2- a ]pyridines, for the formation of carbon–carbon or carbon–heteroatom bonds, (2) the multi-component reactions, essentially based on the Groebke–Blackburn–Bienaymé reaction, and (3) the cyclization reactions, allowing for the preparation of highly substituted imidazo[1,2- a ]pyridines and imidazo[1,5- a ]pyridines. Complementary studies, especially radical trapping, shed light on the radical mechanisms of these reaction pathways. Therefore, this review constitutes a valuable tool for synthetic chemists working in this exciting field. We also hope that this overview will inspire novel synthetic strategies using visible-light activation, paving the way for the construction of similarly original heterocyclic moieties. In parallel, the development of more environmentally benign procedures should be pursued in the future, notably through the employment of reusable catalysts, easy-to-handle substrates, and a reduction in organic waste.
|
Natural products from food sources can alter the spread of antimicrobial resistance plasmids in Enterobacterales | f60be4a6-0fbc-4d3b-847c-3a8c27c0fe30 | 11541548 | Microbiology[mh] | Antimicrobial resistance (AMR) is a growing global problem, with resistant bacteria causing increasing numbers of difficult-to-treat infections, leading to increased morbidity and mortality. Recently, the World Health Organization has designated third-generation cephalosporin-resistant and carbapenem-resistant Enterobacterales as a critical priority for the development of novel therapies . Mobile genetic elements, such as plasmids, have resulted in the widespread dissemination of extended-spectrum β-lactams (ESBLs) and carbapenemases amongst these organisms. Plasmids are self-replicating pieces of DNA that can carry a variety of accessory genes, including multiple AMR and/or virulence genes . Conjugative plasmids encode all the necessary machinery to mediate their transmission from one bacterial cell to another through a process called conjugation . Conjugative transfer of a plasmid into a new host has the potential to turn a drug-susceptible strain into a multidrug-resistant strain and, in worst-case scenarios, also a hypervirulent strain . Therefore, research into these mobile genetic elements is of considerable importance. The most common plasmid type isolated from animal and human sources is IncF plasmids . These plasmids commonly carry multiple genes encoding AMR determinants, such as ESBLs and carbapenemases . One example is the 114 kb IncFII plasmid pKpQIL, which carries the bla KPC-3 carbapenemase, bla TEM-1 β-lactamase, and heavy metal resistance genes . The KPC-3 carbapenemase confers resistance to penicillin, cephalosporin, and carbapenem antibiotics, and is resistant to standard β-lactamase inhibitors, such as clavulanic acid, tazobactam, and sulbactam . pKpQIL was first identified in an extensively drug-resistant epidemic strain of Klebsiella pneumoniae isolated in Israel between 2006 and 2008 . Since then, pKpQIL and its variants have been reported worldwide in various species of Enterobacterales . In addition to IncF plasmids, another important vector of ESBLs in Escherichia coli is IncK plasmids like pCT, which carries the bla CTX-M-14 ESBL gene . The CTX-M-14 ESBL confers resistance to several clinically important third-generation cephalosporins, such as cefotaxime, ceftriaxone, and cefpodoxime . pCT-like plasmids have been identified in human and animal E. coli isolates from Australia, Asia, and Europe . Furthermore, IncK plasmids have contributed to the dissemination of the bla CTX-M-14 ESBL gene in the UK and Spain . Some approaches have focused on different methods to remove plasmids from bacterial hosts (curing agents), and others have focused on preventing plasmid transfer (conjugation inhibitors); broadly speaking, such anti-plasmid approaches are gaining interest . The search for and use of anti-plasmid compounds capable of curing plasmids or inhibiting conjugative plasmid transfer are ongoing . Natural products have historically played a significant role in drug discovery owing to the extensive structural diversity and complexity of chemical compounds . Certain food products and their bioactive constituents possess diverse physiological effects. For example, ginger has been reported to have protective effects on gastrointestinal, nervous, and cardiovascular systems , displays antimicrobial effects , and has been associated with improved outcomes in fatty liver diseases . 
Similarly, black pepper and turmeric extracts have been reported to have diverse physiological effects in vitro and in vivo , including anti-tumorigenic, anti-diarrhoeal, antioxidant, and antimicrobial effects . The kamala tree ( Mallotus philippensis ) and its fruit have been traditionally used to treat parasitic infections and are reported to have antimicrobial, antioxidant, and anti-inflammatory properties . Therefore, the wealth of diverse phytochemicals in natural products could offer compounds with anti-plasmid activity. Here, we performed a screen for natural products with anti-plasmid activity (with either plasmid curing or conjugation inhibitor activity) by measuring the effects of bioactive plant extracts and bioactive compounds from black pepper, ginger, cashew nuts, and kamala on plasmid conjugation in E. coli and K. pneumoniae .
Natural product extracts and compounds
Extracts were produced by extracting 10 g of powdered plant material with chloroform (200 ml) overnight at room temperature. Extracts were then concentrated under vacuum and stored in a freezer at −20 °C until use. The compounds 6-gingerol, capsaicin, anacardic acid, and rottlerin were purchased from Merck, UK. Extracts or compounds were dissolved in DMSO, and DMSO vehicle controls were used at the same volume throughout.

Bacterial strains
The bacterial strains used are described in . Unless stated otherwise, all strains were grown in Luria–Bertani (LB) broth supplemented with the appropriate antibiotics and incubated at 37 °C with aeration.

High-throughput screening of extracts and compounds
The transmission of pCT gfp in E. coli EC958c and pKpQIL gfp in K. pneumoniae Ecl8 in the presence of natural product extracts and compounds was measured by flow cytometry as previously described . Briefly, 1 ml of the overnight cultures of the donor ( E. coli EC958c with pCT gfp or K. pneumoniae Ecl8 with pKpQIL gfp ) and the recipient ( E. coli EC958c or K. pneumoniae Ecl8 with chromosomal mCherry ) strains were pelleted, washed in sterile PBS, and diluted to an OD 600 of 0.5. Equal volumes of donor and recipient strains were mixed to give a donor-to-recipient ratio of 1 : 1. A 20 µl volume of the donor–recipient mix was inoculated into 180 µl of LB broth supplemented with a final concentration of natural product extract or compound in a 96-well round bottom plate (Corning, USA). The same volume of DMSO was also added to 180 µl of LB broth as vehicle control. The plate was incubated at 37 °C with gentle agitation (∼100 r.p.m.) for 24 h ( E. coli ) or 6 h ( K. pneumoniae ). Following incubation, 20 µl was removed and serially diluted 1 : 1000 in filter-sterilised Dulbecco’s PBS (Merck, UK). Samples were analysed on the Attune NxT acoustic focusing flow cytometer with Autosampler (Thermo Scientific, USA). GFP emission was collected using the BL1-H channel and the mCherry emission was collected using the YL2-H channel. For each sample, 10 000 bacterial events were recorded. Plasmid conjugation was measured by quantifying the number of green fluorescent protein (GFP)-positive bacteria (donor), mCherry-positive bacteria (recipient), and GFP-positive/mCherry-positive bacteria (transconjugants). Gating strategies were exactly as previously described . The conjugation frequency was calculated as the number of dual fluorescent bacterial events (transconjugants) divided by the number of mCherry-positive-only events (recipients). Three independent experiments were carried out, each one consisting of four biological replicates.
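For readers who want to reproduce the read-out arithmetic, the short Python sketch below mirrors the calculation described above (dual-fluorescent transconjugant events divided by mCherry-positive-only recipient events). The event counts are hypothetical placeholders and are not data from this study.

```python
# Minimal sketch: conjugation frequency from flow cytometry event counts.
# Event counts below are hypothetical placeholders, not data from this study.

def conjugation_frequency(dual_positive_events: int, recipient_only_events: int) -> float:
    """Transconjugant (GFP+/mCherry+) events per recipient (mCherry+ only) event."""
    if recipient_only_events == 0:
        raise ValueError("No recipient events recorded; frequency is undefined.")
    return dual_positive_events / recipient_only_events

# Example with made-up counts from a 10 000-event acquisition:
gfp_only = 4200        # donor events (GFP+ only)
mcherry_only = 5100    # recipient events (mCherry+ only)
dual = 55              # transconjugant events (GFP+/mCherry+)

freq = conjugation_frequency(dual, mcherry_only)
print(f"Conjugation frequency: {freq:.2e} transconjugants per recipient")
```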
Antimicrobial susceptibility testing
The antimicrobial susceptibility of bacterial strains to natural product compounds was determined using the broth microdilution method . Briefly, overnight cultures grown in LB broth (~1×10 9 c.f.u. ml −1 ) were diluted to 1×10 6 c.f.u. ml −1 in LB broth. A 10 mg ml −1 stock solution of 6-gingerol, anacardic acid, capsaicin, and rottlerin was prepared fresh on the day of each experiment using DMSO as a diluent. A 1024 µg ml −1 working stock solution of each compound was prepared in LB broth. In a round bottom 96-well plate, 100 µl of the working stock solution was inoculated into the first column and 50 µl of LB broth was dispensed into the rest of the wells. A 50 µl volume was removed from the first well and added to the second and mixed, and this process was repeated across the columns to give a concentration range from 1024 down to 1 µg ml −1 . For each strain, a 50 µl volume of the 1×10 6 c.f.u. ml −1 suspension was dispensed into wells of a single row. This resulted in a final concentration range of 512 down to 0.5 µg ml −1 . The plates were incubated for 18 h at 37 °C. The minimum inhibitory concentration was determined as the lowest concentration of a compound that visibly reduced the growth of bacteria.

Growth kinetic assays
Overnight cultures of bacteria grown in LB broth (~ 1×10 9 c.f.u. ml −1 ) were diluted to a starting inoculum of 1×10 6 c.f.u. ml −1 in a 96-well flat bottom plate (Corning, USA). Where appropriate, the test strains were diluted in LB broth supplemented with pure compounds. Concentrations of the compounds tested were 256 µg ml −1 6-gingerol, 256 µg ml −1 anacardic acid, 128 µg ml −1 capsaicin, and 128 µg ml −1 rottlerin. The same volume of DMSO was used as a vehicle control to ensure that DMSO did not adversely affect bacterial growth. The optical density at 600 nm (OD 600 ) was measured every 30 min for 24 h at 37 °C with shaking (200 r.p.m.) using the FLUOstar OMEGA plate reader (BMG Labtech, Germany). Three independent experiments were carried out, each consisting of three biological replicates.

Liquid broth conjugation with clinical isolate
The K. pneumoniae clinical isolate carrying the IncF plasmid pCPE16_3 bla NDM-1 (KP10) was paired with the hygromycin-resistant K. pneumoniae ATCC 43816R recipient strain (KP20) in liquid broth as previously described . Briefly, KP10 and KP20 cultures were grown overnight, and sub-cultures were prepared in 5 ml LB broth (1% inoculum) and grown to an OD 600 of ∼0.5. Then, 1 ml of cultures were pelleted, and media were replaced with LB broth to adjust the OD 600 to 0.5. The donor (KP10) and the recipient (KP20) were mixed at a 1 : 10 ratio alongside control single strains. The donor, recipient, and mixed cultures were separately diluted 1 : 5 in LB broth containing a final concentration of 100 µg ml −1 of the natural compound or the same volume of DMSO as vehicle control, and these were incubated statically at 37 °C for 1 h. Corresponding dilutions were plated onto LB agar to assess cell viability and selective media to determine donor-to-recipient ratios and transconjugant production. Plates were incubated at 37 °C overnight. Transconjugant colonies carrying pCPE16_3 bla NDM-1 were selected on LB agar supplemented with 300 µg ml −1 hygromycin B (PhytoTech Labs, USA) and 2 µg ml −1 doripenem (Merck, Germany). Conjugation frequencies were calculated as the number of transconjugants per recipient. Data shown are the mean±standard deviation of three independent experiments, each carried out with four biological replicates.

Statistical analysis
In , the mean of each treatment group was compared to the mean of the DMSO control using one-way ANOVA followed by Dunnett’s test to correct for multiple comparisons. In , the mean of the DMSO control was compared to the mean of the treatment group using two-tailed unpaired t -tests. All statistical analyses were performed using GraphPad Prism version 10 for MacOS (GraphPad, San Diego, CA, USA) http://www.graphpad.com . Only P -values ≤0.05 were considered statistically significant.
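As an illustration of how the same comparisons could be run with open-source tooling rather than GraphPad Prism, the sketch below uses SciPy (version 1.11 or later is assumed for the Dunnett test). The conjugation-frequency arrays are invented solely to show the workflow and are not data from this study.

```python
# Illustrative re-implementation of the statistical comparisons in SciPy.
# All replicate values below are made up for demonstration purposes.
import numpy as np
from scipy import stats

# Hypothetical conjugation frequencies (four biological replicates each)
dmso    = np.array([2.1e-3, 2.4e-3, 1.9e-3, 2.2e-3])   # vehicle control
treat_a = np.array([9.0e-4, 1.1e-3, 8.5e-4, 1.0e-3])   # e.g. extract/compound A
treat_b = np.array([2.0e-3, 2.3e-3, 2.1e-3, 2.2e-3])   # e.g. extract/compound B

# One-way ANOVA across all groups
f_stat, anova_p = stats.f_oneway(dmso, treat_a, treat_b)
print(f"ANOVA: F = {f_stat:.2f}, P = {anova_p:.4f}")

# Dunnett's test: each treatment compared against the DMSO control
dunnett_res = stats.dunnett(treat_a, treat_b, control=dmso)
print("Dunnett-adjusted P values:", dunnett_res.pvalue)

# Two-tailed unpaired t-test for a single pairwise comparison
t_stat, t_p = stats.ttest_ind(dmso, treat_a)
print(f"t-test vs DMSO: t = {t_stat:.2f}, P = {t_p:.4f}")
```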
Extracts from natural products can reduce plasmid conjugation
Given the highly bioactive properties of certain food products, it was decided to test the activity of extracts from black pepper, ginger, turmeric, and kamala using a previously developed flow cytometry assay for monitoring the conjugation of pCT and pKpQIL in E. coli EC958c and K. pneumoniae Ecl8, respectively . In this assay, the donor and recipient cells can be differentiated using flow cytometry based on their GFP and mCherry fluorescence, respectively. As a result, transconjugant cells can be identified based on their dual fluorescence of GFP and mCherry. For E. coli , the conjugation assays consisted of a 1 : 1 mix of the donor EC24 ( E. coli EC958c pCT gfp ) and the recipient EC25 ( E. coli EC958c mCherry ), followed by a 24 h incubation at 37 °C. All extracts and pure compounds used in this study were dissolved in DMSO; therefore, their effects on plasmid conjugation were compared to treatment with the same volume of DMSO as vehicle controls. The conjugation frequency of pCT in E. coli was significantly reduced in the presence of black pepper ( P <0.001), ginger ( P =0.007), and kamala ( P <0.001) extracts compared to the DMSO vehicle control . Turmeric extract did not have a significant impact ( P =0.986) on pCT conjugation in E. coli . For K. pneumoniae , the conjugation assays consisted of a 1 : 1 ratio of the recipient KP18 ( K. pneumoniae Ecl8 mCherry ) and the donor KP19 ( K. pneumoniae Ecl8 pKpQIL gfp ), followed by a 6 h incubation at 37 °C. In K. pneumoniae , the conjugation frequency of pKpQIL was significantly reduced in the presence of black pepper ( P =0.001), turmeric ( P =0.003), and kamala extracts ( P =0.007) compared to the DMSO vehicle control. However, ginger extract had no significant impact ( P =0.063) on pKpQIL conjugation in K. pneumoniae . Comparing the effects of the natural product extracts on pCT and pKpQIL conjugation frequencies, black pepper and kamala extracts were effective against both plasmids, whilst ginger and turmeric had a plasmid-specific effect. As the flow cytometry assay relies on the fluorescent markers to identify donor, recipient, and transconjugant cells, the impact of the extracts on GFP and mCherry fluorescence was also determined. All tested extracts significantly increased the number of non-fluorescent EC24 and EC25 cells after incubation compared to the DMSO control (Fig. S1a and Table S1, available in the online version of this article). This suggested that the decrease in the conjugation frequency of pCT in E. coli by the extracts could also be due to fluorescence interference. For KP18 and KP19, the black pepper, ginger, and kamala extracts had no significant effect on the number of fluorescent cells, while the turmeric extract significantly increased the number of non-fluorescent cells (Fig. S1b). Turmeric extract reduced the number of fluorescent KP18 and KP19 cells, suggesting the presence of compounds that could be interfering with the fluorescence of GFP and mCherry. However, it should be noted that the conjugation frequency is calculated as the proportion of dual fluorescent cells per fluorescent recipient cell.

Pure compounds from natural product extracts have a moderate effect on plasmid conjugation
Based on the literature, some pure compounds with known bioactive effects found in the food product extracts, or compounds with anticipated activity, were tested using the high-throughput conjugation assay . All compounds were tested at 100 µg ml −1 .
As with the whole extracts, the conjugation of pCT in E. coli was more susceptible to inhibition than pKpQIL conjugation in K. pneumoniae . In E. coli , 6-gingerol ( P =0.01), capsaicin ( P =0.001), and rottlerin ( P <0.001) significantly reduced the conjugation frequency of pCT compared to the DMSO control . In K. pneumoniae , only rottlerin ( P =0.004) significantly reduced the conjugation frequency of pKpQIL, whilst 6-gingerol had no significant impact ( P =0.962) and capsaicin significantly increased pKpQIL conjugation ( P <0.001) . At 100 µg ml −1 , anacardic acid had no significant effect on either pCT or pKpQIL conjugation frequencies ( P= 0.57 and P =0.823). At 100 µg ml −1 , none of the pure compounds significantly affected the fluorescence of E. coli and K. pneumoniae cells compared to DMSO control (Fig. S2 and Table S2). To look further at the impacts of the compounds with the greatest activity in E. coli EC958c carrying pCT, dose–response curves were performed with 1–256 µg ml −1 of 6-gingerol and capsaicin concentrations. 6-gingerol significantly reduced pCT conjugation frequency at 128 and 256 µg ml −1 ( ; P <0.001). However, at these concentrations, 6-gingerol also significantly reduced the overall number of fluorescent cells recorded in the bacterial population compared to the DMSO control (Fig. S3 and Table S3). Therefore, the reduction in the conjugation frequency of pCT treated with higher concentrations of 6-gingerol could be due to interference with the expression of GFP and mCherry proteins. Capsaicin produced a dose-dependent reduction in the conjugation frequency of pCT at 2 µg ml −1 and above . At concentrations of 64 µg ml −1 and above, capsaicin also significantly reduced the overall number of fluorescent cells detected within the bacterial population compared to the DMSO control (Fig. S3). Therefore, capsaicin concentrations of 2–32 µg ml −1 were effective in reducing pCT conjugation without affecting the fluorescence of the donor and recipient cells. For K. pneumoniae Ecl8 carrying pKpQIL, two compounds were selected: rottlerin, because it caused a significant decrease in pKpQIL conjugation frequency, and anacardic acid, because it had no effect at 100 µg ml −1 and was therefore tested over a wider concentration range to look for an effect. At lower concentrations (2, 4, and 16 µg ml −1 ), anacardic acid significantly increased the conjugation frequency of pKpQIL in K. pneumoniae Ecl8 . However, at higher concentrations, anacardic acid had no significant impact on pKpQIL conjugation frequency . Additionally, none of the anacardic acid concentrations tested had a significant effect on the overall number of fluorescent cells compared to DMSO control (Fig. S3c). Rottlerin significantly reduced the conjugation frequency of pKpQIL in K. pneumoniae Ecl8 at 32 µg ml −1 and above compared to the DMSO control . The greatest reduction was seen upon treatment with 128 or 256 µg ml −1 of rottlerin. However, at 128 and 256 µg ml −1 , rottlerin significantly reduced the number of fluorescent bacterial cells compared to the DMSO control (Fig. S3d). Nonetheless, 32 and 64 µg ml −1 rottlerin caused a significant decrease in pKpQIL conjugation frequency without affecting the number of fluorescent K. pneumoniae Ecl8 cells (Fig. S3d). It is possible that potential anti-plasmid compounds could be inhibiting the growth of the donor or recipient strains, which would alter the population density and the number of cells able to donate and/or receive the plasmid.
Antimicrobial susceptibility testing showed that none of the compounds inhibited the growth of the E. coli and K. pneumoniae strains (>512 µg ml −1 ) at concentrations above those tested in the conjugation assays . To ensure that the pure compounds were not affecting the growth of the strains over the 6 and 24 h incubation used in the flow cytometry-based conjugation assays, E. coli and K. pneumoniae strains were grown in LB broth supplemented with the highest concentration of the pure compounds tested in the dose–response experiments. For KP18 and KP19, 256 µg ml −1 anacardic acid and 128 µg ml −1 rottlerin were tested because 256 µg ml −1 rottlerin adversely affected bacterial growth . At 256 µg ml −1 , anacardic acid had no impact on the growth of KP18 or KP19 over 24 h compared to the DMSO control . At 128 µg ml −1 , rottlerin did not affect the growth of KP18 and KP19 during the mid-log phase, up to ~6 h, which is the duration of the conjugation assay. However, it delayed the transition of both strains from the late log phase to the stationary phase, although both strains still reached the same final OD 600 value as the DMSO control at the end of 24 h . For EC24 and EC25, 256 µg ml −1 6-gingerol and 128 µg ml −1 capsaicin were tested because 256 µg ml −1 capsaicin reduced the growth of both strains. Neither 128 µg ml −1 capsaicin nor 256 µg ml −1 6-gingerol affected bacterial growth during the log phase, but neither EC24 nor EC25 reached the same final density . Therefore, the reduction in pCT conjugation frequency in E. coli EC958c treated with high concentrations of capsaicin and 6-gingerol could be confounded by the high concentration’s impact on growth and cell density.

Compounds’ impact on plasmid conjugation in a carbapenem-resistant K. pneumoniae clinical isolate
Next, the effect of the four compounds (capsaicin, anacardic acid, 6-gingerol, and rottlerin) was tested on KP10 (a clinical urine isolate of K. pneumoniae ), which we have shown carries and readily transmits a 120 kb IncF plasmid with a bla NDM-1 carbapenem resistance gene, termed pCPE16_3 . The recipient strain was KP20, a previously generated hygromycin-resistant K. pneumoniae ATCC 43816R strain . A 1 : 10 donor-to-recipient ratio of KP10 and KP20 was used for the conjugation assays. Each compound was tested at 100 µg ml −1 , with a 1 h co-incubation of donor and recipient strains. Three parameters were explored: the number of transconjugants produced at the end of the 1 h incubation, the donor-to-recipient ratio after 1 h, and the conjugation frequency calculated as the number of transconjugants generated per recipient after 1 h. For capsaicin, there was no change in the number of transconjugant bacteria, the donor-to-recipient ratio, or the pCPE16_3 conjugation frequency ( ; P =0.4713, 0.2513, and 0.4446, respectively). This is in contrast with what was seen with pKpQIL, where the conjugation frequency was significantly higher compared to the DMSO control . Anacardic acid significantly reduced the number of transconjugant bacteria ( P =0.0375), but had no significant effect on the donor-to-recipient ratio or conjugation frequency ( ; P =0.2302 and 0.1937, respectively). This was comparable to pKpQIL, where anacardic acid also did not affect conjugation frequency . Comparable to pKpQIL, 6-gingerol did not affect any of the parameters for pCPE16_3 conjugation ( ; P =0.3988, 0.6255, 0.6850, respectively). Interestingly, rottlerin significantly reduced all three tested parameters.
It significantly reduced the total number of transconjugant bacteria from 1.32×10 5 in DMSO to 2.25×10 4 ( P =0.006), the donor-to-recipient ratio from 0.0169 to 0.00783 ( P =0.045), and the conjugation frequency of pCPE16_3 from 2.313×10 −4 in DMSO to 8.83×10 −5 . Antimicrobial susceptibility testing showed that both KP10 and KP20 had an MIC of >512 µg ml −1 for rottlerin . However, since rottlerin reduced the ratio of KP10 to KP20, and impacted on the growth kinetics of KP18 and KP19, the impact of 100 µg ml −1 rottlerin or an equal volume of DMSO on the viability of KP10 and KP20 cells was determined following 1 h incubation. Compared to DMSO control, KP10 formed fewer colonies after 1 h of incubation with 100 µg ml −1 rottlerin; however, the difference was not statistically significant. This suggested that sub-MIC rottlerin may have affected KP10 growth during the conjugation time frame (Fig. S4).
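For context, the reported rottlerin values correspond to roughly a 2.6-fold drop in conjugation frequency and a 5.9-fold drop in transconjugant numbers relative to the DMSO control; the short calculation below simply restates this arithmetic using the figures quoted above.

```python
# Quick arithmetic on the reported rottlerin vs DMSO values, expressing the
# reductions as approximate fold changes (numbers taken from the text above).
dmso_freq, rottlerin_freq = 2.313e-4, 8.83e-5   # conjugation frequency (transconjugants per recipient)
dmso_tc, rottlerin_tc = 1.32e5, 2.25e4          # total transconjugant bacteria

freq_fold = dmso_freq / rottlerin_freq
tc_fold = dmso_tc / rottlerin_tc

print(f"Conjugation frequency reduced ~{freq_fold:.1f}-fold")   # ~2.6-fold
print(f"Transconjugant numbers reduced ~{tc_fold:.1f}-fold")    # ~5.9-fold
```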
Given the highly bioactive properties of certain food products, it was decided to test the activity of extracts from black pepper, ginger, turmeric, and kamala using a previously developed flow cytometry assay for monitoring the conjugation of pCT and pKpQIL in E. coli EC958c and K. pneumoniae Ecl8, respectively . In this assay, the donor and recipient cells can be differentiated using flow cytometry based on their GFP and mCherry fluorescence, respectively. As a result, transconjugant cells can be identified based on their dual fluorescence of GFP and mCherry. For E. coli , the conjugation assays consisted of a 1 : 1 mix of the donor EC24 ( E. coli EC958c pCT gfp ) and the recipient EC25 ( E. coli EC958c mCherry) , followed by a 24 h incubation at 37 °C. All extracts and pure compounds used in this study were dissolved in DMSO, therefore, their effects on plasmid conjugation were compared to treatment with the same volume of DMSO as vehicle controls. The conjugation frequency of pCT in E. coli was significantly reduced in the presence of black pepper ( P <0.001), ginger ( P =0.007), and kamala ( P <0.001) extracts compared to the DMSO vehicle control . Turmeric extract did not have a significant impact ( P =0.986) on pCT conjugation in E. coli . For K. pneumoniae , the conjugation assays consisted of a 1 : 1 ratio of the recipient KP18 ( K. pneumoniae Ecl8 mCherry) and the donor KP19 ( K. pneumoniae Ecl8 pKpQIL gfp ) , followed by a 6 h incubation at 37 °C. In K. pneumoniae , the conjugation frequency of pKpQIL was significantly reduced in the presence of black pepper ( P =0.001), turmeric ( P =0.003), and kamala extracts ( P =0.007) compared to the DMSO vehicle control. However, ginger extract had no significant impact ( P =0.063) on pKpQIL conjugation in K. pneumoniae . Comparing the effects of the natural product extracts on pCT and pKpQIL conjugation frequencies, black pepper, and kamala extracts were effective against both plasmids, whilst ginger and turmeric had a plasmid-specific effect. As the flow cytometry assay relies on the fluorescent markers to identify donor, recipient, and transconjugant cells, the impact of the extracts on GFP and mCherry fluorescence was also determined. All tested extracts significantly increased the number of non-fluorescent EC24 and EC25 cells after incubation compared to the DMSO control (Fig. S1a and Table S1, available in the online version of this article). This suggested that the decrease in the conjugation frequency of pCT in E. coli by the extracts could also be due to fluorescence interference. For KP18 and KP19, the black pepper, ginger, and kamala extracts had no significant effect on the number of fluorescent cells, while the turmeric extract significantly increased the number of non-fluorescent cells (Fig. S1b). Turmeric extract reduced the number of fluorescent KP18 and KP19 cells, suggesting the presence of compounds that could be interfering with the fluorescence of GFP and mCherry. However, it should be noted that the conjugation frequency is calculated as the proportion of dual fluorescent cells per fluorescent recipient cell.
Based on the literature, some pure compounds with known bioactive effects found in the food product extracts, or compounds with anticipated activity, were tested using the high-throughput conjugation assay . All compounds were tested at 100 µg ml −1 . As with the whole extracts, the conjugation of pCT in E. coli was more susceptible to inhibition than pKpQIL conjugation in K. pneumoniae . In E. coli , 6-gingerol ( P =0.01), capsaicin ( P =0.001), and rottlerin ( P <0.001) significantly reduced the conjugation frequency of pCT compared to the DMSO control . In K. pneumoniae , only rottlerin ( P =0.004) significantly reduced the conjugation frequency of pKpQIL, whilst 6-gingerol had no significant impact ( P =0.962) and capsaicin significantly increased pKpQIL conjugation ( P <0.001) . At 100 µg ml −1 , anacardic acid had no significant effect on either pCT and pKpQIL conjugation frequencies ( P= 0.57 and P =0.823). At 100 µg ml −1 , none of the pure compounds significantly affected the fluorescence of E. coli and K. pneumoniae cells compared to DMSO control (Fig. S2 and Table S2). To look more at the impacts of some compounds with the greatest activity in E. coli EC958c carrying pCT, dose–response curves were performed with 1–256 µg ml −1 of 6-gingerol and capsaicin concentrations. 6-gingerol significantly reduced pCT conjugation frequency at 128 and 256 µg ml −1 ( ; P <0.001). However, at these concentrations, 6-gingerol also significantly reduced the overall number of fluorescent cells recorded in the bacterial population compared to the DMSO control (Fig. S3 and Table S3). Therefore, the reduction in the conjugation frequency of pCT treated with higher concentrations of capsaicin could be due to interference with the expression of GFP and mCherry proteins. Capsaicin produced a dose-dependent reduction in the conjugation frequency of pCT at 2 µg ml −1 and above . At concentrations of 64 µg ml −1 and above, capsaicin also significantly reduced the overall number of fluorescent cells detected within the bacteria population compared to the DMSO control (Fig. S3). Therefore, capsaicin concentrations of 2–32 µg ml −1 were effective in reducing pCT conjugation without affecting the fluorescence of the donor and recipient cells. For K. pneumoniae Ecl8 carrying pKpQIL, two compounds were selected, rottlerin because it caused a significant decrease in pKpQIL conjugation frequency, and anacardic acid because it had no effect at 100 µg ml −1 and therefore was tested at a wider concentration range to look for an effect. At lower concentrations (2, 4, and 16 µg ml −1 ), anacardic acid significantly increased the conjugation frequency of pKpQIL in K. pneumoniae Ecl8 . However, at higher concentrations, anacardic acid had no significant impact on pKpQIL conjugation frequency . Additionally, none of the anacardic acid concentrations tested had a significant effect on the overall number of fluorescent cells compared to DMSO control (Fig. S3c). Rottlerin significantly reduced the conjugation frequency of pKpQIL in K. pneumoniae Ecl8 at 32 µg ml −1 and above compared to the DMSO control . The greatest reduction was seen upon treatment with 128 or 256 µg ml −1 of rottlerin. However, at 128 and 256 µg ml −1 , rottlerin significantly reduced the number of fluorescent bacterial cells compared to the DMSO control (Fig. S3d). Nonetheless, 32 and 64 µg ml −1 rottlerin caused a significant decrease in pKpQIL conjugation frequency without affecting the number of fluorescent K. pneumoniae Ecl8 cells (Fig. S3d). 
It is possible that potential anti-plasmid compounds could be inhibiting the growth of the donor or recipient strains, which would alter population density and both/either the number of cells able to donate or receive the plasmid. Antimicrobial susceptibility testing showed that none of the compounds inhibited the growth of the E. coli and K. pneumoniae strains (>512 µg ml −1 ) at concentrations above those tested in the conjugation assays . To ensure that the pure compounds were not affecting the growth of the strains over the 6 and 24 h incubation used in the flow cytometry-based conjugation assays, E. coli and K. pneumoniae strains were grown in LB broth supplemented with the highest concentration of the pure compounds tested in the dose–response experiments. For KP18 and KP19, 256 µg ml −1 anacardic acid and 128 µg ml −1 rottlerin were tested because 256 µg ml −1 rottlerin adversely affected bacterial growth . At 256 µg ml −1 , anacardic acid had no impact on the growth of KP18 or KP19 over 24 h compared to the DMSO control . At 128 µg ml −1 , rottlerin did not affect the growth of KP18 and KP19 during the medium log phase, up to ~6 h, which is the duration of the conjugation assay. However, it delayed the transition of both strains from the late log phase to the stationary phase, although both strains still reached the same final OD 600 value as the DMSO control at the end of 24 h . For EC24 and EC25, 256 µg ml −1 6-gingerol and 128 µg ml −1 capsaicin were tested because 256 µg ml −1 capsaicin reduced the growth of both strains. Neither 128 µg ml −1 capsaicin nor 256 µg ml −1 6-gingerol affected bacterial growth during the log phase, but neither EC24 nor EC25 reached the same final density . Therefore, the reduction in pCT conjugation frequency in E. coli EC958c treated with high concentrations of capsaicin and 6-gingerol could be confounded by the high concentration’s impact on growth and cell density.
K. pneumoniae clinical isolate Next, the effect of the four compounds (capsaicin, anacardic acid, 6-gingerol, and rottlerin) was tested on KP10 (a clinical urine isolate of K. pneumoniae ), which we have shown carries and readily transmits a 120 kb IncF plasmid with a bla NDM-1 carbapenem resistance gene, termed pCPE16_3. The recipient strain was KP20, a previously generated hygromycin-resistant K. pneumoniae ATCC 43816R strain. A 1 : 10 donor-to-recipient ratio of KP10 and KP20 was used for the conjugation assays. Each compound was tested at 100 µg ml −1 , with a 1 h co-incubation of donor and recipient strains. Three parameters were explored: the number of transconjugants produced at the end of the 1 h incubation, the donor-to-recipient ratio after 1 h, and the conjugation frequency calculated as the number of transconjugants generated per recipient after 1 h. For capsaicin, there was no change in the number of transconjugant bacteria, the donor-to-recipient ratio, or the pCPE16_3 conjugation frequency (P = 0.4713, 0.2513, and 0.4446, respectively). This contrasts with what was seen with pKpQIL, where the conjugation frequency in capsaicin-treated cultures was significantly higher compared to the DMSO control. Anacardic acid significantly reduced the number of transconjugant bacteria (P = 0.0375), but had no significant effect on the donor-to-recipient ratio or conjugation frequency (P = 0.2302 and 0.1937, respectively). This was comparable to pKpQIL, where anacardic acid also did not affect conjugation frequency. Comparable to pKpQIL, 6-gingerol did not affect any of the parameters for pCPE16_3 conjugation (P = 0.3988, 0.6255, and 0.6850, respectively). Interestingly, rottlerin significantly reduced all three tested parameters. It significantly reduced the total number of transconjugant bacteria from 1.32×10 5 in DMSO to 2.25×10 4 (P = 0.006), the donor-to-recipient ratio from 0.0169 to 0.00783 (P = 0.045), and the conjugation frequency of pCPE16_3 from 2.313×10 −4 in DMSO to 8.83×10 −5 . Antimicrobial susceptibility testing showed that both KP10 and KP20 had an MIC of >512 µg ml −1 for rottlerin. However, since rottlerin reduced the ratio of KP10 to KP20, and affected the growth kinetics of KP18 and KP19, the impact of 100 µg ml −1 rottlerin or an equal volume of DMSO on the viability of KP10 and KP20 cells was determined following 1 h incubation. Compared to the DMSO control, KP10 formed fewer colonies after 1 h of incubation with 100 µg ml −1 rottlerin; however, the difference was not statistically significant. This suggested that sub-MIC rottlerin may have affected KP10 growth during the conjugation time frame (Fig. S4).
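The three parameters reported for the KP10/KP20 matings are all simple functions of colony counts from selective plating. The R sketch below illustrates the arithmetic only; the plate counts, dilution factors and plated volumes are invented, not values from the study.

# Minimal sketch: converting colony counts to CFU/ml and deriving the three
# reported parameters; all counts and dilution factors are invented
cfu_per_ml <- function(colonies, dilution, plated_ml) colonies / plated_ml * dilution

donor_cfu  <- cfu_per_ml(colonies = 152, dilution = 1e5, plated_ml = 0.1)  # donor plates (e.g. KP10)
recip_cfu  <- cfu_per_ml(colonies = 178, dilution = 1e6, plated_ml = 0.1)  # recipient plates (e.g. KP20)
transc_cfu <- cfu_per_ml(colonies =  66, dilution = 1e2, plated_ml = 0.1)  # double-selection plates

transc_cfu              # 1) transconjugants per ml after the 1 h mating
donor_cfu / recip_cfu   # 2) donor-to-recipient ratio
transc_cfu / recip_cfu  # 3) conjugation frequency (transconjugants per recipient)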
The growing threat of AMR necessitates the search for alternative strategies to combat the spread and prevalence of AMR genes. Anti-plasmid compounds that interfere with plasmid conjugation or stability are being explored as a potential way to address AMR. Since natural products possess a wide range of biological activities, they offer a promising source of anti-plasmid compounds. To that end, we investigated the effect of natural product extracts and some of their reported bioactive constituents on plasmid conjugation. We found that most natural product extracts reduced plasmid conjugation in both E. coli and K. pneumoniae . To identify the possible compounds responsible for reducing plasmid conjugation within the natural product extracts, we tested the major and widely reported bioactive compounds present in some of these extracts. We found that capsaicin and 6-gingerol significantly reduced pCT transmission in E. coli EC958c. Previous work showed that 6-gingerol reduced the transfer of the IncN pKM101, IncP pUB307, and IncI2 TP114 plasmids, while capsaicin reduced the transfer of the IncI2 TP114, IncW R7K, IncP pUB307 and IncN pKM101 plasmids in E. coli K12 J53, without antibacterial effects on Gram-negative bacteria . Based on plasmid replicon typing, the IncI2 plasmid TP114 and the IncK plasmid pCT belong to the I complex . Therefore, the impact of 6-gingerol and capsaicin on pCT conjugation is comparable to the impact on TP114 conjugation. In a different study, rottlerin reduced the conjugation of the IncN pKM101, IncI2 TP114, IncP pUB307, and IncX2 R6K plasmids in E. coli K-12 J53 . In agreement, our data showed that in a clinical E. coli EC958c isolate with a veterinary plasmid , rottlerin also reduced the conjugation of the IncK plasmid pCT, which belongs to the same I complex as TP114. The effect of anacardic acid on plasmid conjugation has not been reported before, and we found that it did not significantly affect plasmid conjugation. However, at low concentrations, it increased pKpQIL conjugation in K. pneumoniae Ecl8. Some anti-plasmid compounds have plasmid-specific effects , therefore, anacardic acid may have activity (increase or decrease conjugation) in other plasmid types. This study used a fluorescent reporter assay to monitor plasmid conjugation by flow cytometry. Using fluorescent reporters to monitor plasmid conjugation increases throughput; however, consideration must be given to the potential impact of compounds on the expression and function of fluorescent proteins. For example, during conjugation, black pepper, ginger, turmeric, and kamala extracts significantly increased the number of non-fluorescent E. coli cells, whereas K. pneumoniae cells were less prone to reduction in fluorescence. Therefore, the apparent decrease in pCT conjugation frequency in E. coli treated with the natural product extracts could partly be due to the significant reduction of fluorescent cells. At 100 µg ml −1 , none of the pure compounds significantly affected the fluorescence of E. coli and K. pneumoniae cells. However, at higher concentrations (128 and 256 µg ml −1 ), capsaicin, 6-gingerol, and rottlerin significantly reduced the fluorescence of E. coli and K. pneumoniae cells. Certain natural product compounds display intrinsic fluorescence or quenching , hence, the apparent reduction in conjugation frequency by higher concentrations of natural product compounds is likely due to fluorescence quenching. 
The activity of anti-plasmid compounds could also be influenced by plasmid–host combination. Some bacterial host strains can acquire and maintain certain plasmids without a fitness cost . These successful host–plasmid pairings contribute significantly to the global spread of AMR genes . Despite their prevalence, there is still limited knowledge of what factors influence plasmid transfer in these successful plasmid–host combinations . Therefore, understanding the intricacies of the plasmid–host relationship is important in developing effective strategies to combat plasmid-mediated antibiotic resistance. The ideal anti-plasmid compound would have broad-range activity against different host strains and plasmids; however, owing to the diversity of plasmids and their relationship with the host, this may prove difficult. Nonetheless, identifying compounds that target globally disseminating plasmid–host combinations could be an attractive strategy to prevent the spread of AMR plasmids. Overall, we found that the potency of the natural product compounds was low as the density and number of transconjugant cells were still too high following treatment with the compounds. Therefore, the natural product compounds investigated in this study are unlikely to curb the spread of AMR plasmids in bacterial populations. Nonetheless, the data in this study suggests that certain natural product compounds like rottlerin could provide a chemical scaffold for further developing novel anti-plasmid compounds using structure–activity relationship studies.
10.1099/mic.0.001496 Supplementary Material 1.
|
Laboratory transmission potential of British mosquitoes for equine arboviruses | cb53ff46-413c-45cd-ad33-1c579e6a08ff | 7425075 | Pathology[mh] | Globalisation and climate change are expected to change the level of risk for emergence of vector-borne diseases in previously unaffected regions. In the last fifty years, the geographical range of a number of mosquito-borne arboviral diseases has increased, including Zika, dengue, chikungunya and West Nile. Mosquito-borne arboviral infections which affect both horses and people include, amongst others, the flaviviruses West Nile virus (WNV), Japanese encephalitis virus (JEV) and Murray Valley encephalitis virus (MVEV), and the alphaviruses Venezuelan equine encephalitis virus (VEEV), Eastern equine encephalitis virus (EEEV), Western equine encephalitis virus, and Ross River virus (RRV) . Whilst the emergence in Europe of dengue and chikungunya has been associated with Aedes aegypti and the invasive mosquito Aedes albopictus , for most of the equine viruses Culex mosquitoes are significantly involved in transmission. Expansion of the range of some arboviruses (West Nile virus (WNV) for example) has demonstrated vector competence of previously naïve mosquito species or populations . Other emerging diseases that affect equines include Peruvian horse sickness virus and Bunyamwera virus . Both are mosquito-borne viruses that have emerged as fatal equine diseases, in Peru and Argentina respectively, within the last 25 years. Sindbis and Middelburg viruses, circulating in Europe and/or Africa have also been recently associated with neurological disease in horses . There has been much discussion of the risk of equine arbovirus introduction to Europe in the last decade . The equine arboviruses generally have complex enzootic transmission cycles involving wildlife as reservoir hosts and ‘bridge vectors’ with broad feeding preferences which can carry virus from the reservoir host to other hosts; including humans and horses, both of which are clinically affected. The three viruses investigated in the present study (VEEV, RRV, JEV) have significant impacts on the health of people and horses (summarised in ) in endemic areas. Venezuelan equine encephalitis virus circulates in enzootic cycles between rodent hosts and mosquito vectors in Mexico, Central and South America and has a complex transmission cycle involving regular mutation of the virus, facilitating transmission to humans and horses through broadening of the vector and host ranges. This results in an epizootic cycle during which, virus amplification in the horse is sufficient to result in mosquito infection and this is thought to significantly increase the risk of human infection . VEEV infection causes neurological signs in humans and horses and significant infection and mortality rates in horses . Ross River virus is active seasonally in Australia with a number of vectors implicated. Epidemic polyarthritis due to RRV infection is regularly encountered in people in Australia , and related signs are seen in infected horses including synovial effusion, muscle stiffness and exercise intolerance . In Australia, RRV is maintained in a transmission cycle between mosquito vectors and marsupial hosts. However, a large outbreak occurred in the South Pacific in 1979–1980 , and other outbreaks consistent with human-mosquito-human transmission provide evidence that regions without native marsupial hosts may be at risk of limited epizootic outbreaks. 
The predominance of marsupials as reservoirs of RRV has been called into question and horses are suggested as potentially significant reservoirs by some authors . These factors raise the possibility that the potential for RRV to spread globally may be greater than previously thought. Japanese encephalitis virus outbreaks have occurred from Asia to Oceania and the virus infects a broad range of species although the primary transmission cycle involves ardeid birds . JEV infection causes neurological disease and mortality in equines and humans. JEV has several secondary vectors as well as the main vector Cx. tritaeniorhynchus , and has been identified in numerous species of wild-caught mosquitoes including Cx. pipiens , in which JEV RNA was discovered in Italy in 2011 . Culex pipiens has also been shown to be a laboratory competent vector, as has the invasive mosquito Ae. albopictus which is widespread in southern Europe . None of these three viruses have been identified in the UK; however, to estimate the risk of autochthonous transmission (post-introduction) of these viruses in an unaffected country it is necessary to consider potential native vectors. Several studies have investigated vector competence of European mosquitoes for WNV , including UK populations , and for JEV . While some mosquito species present both in Europe and the Americas or Oceania have had their vector competence assessed for equine alphaviruses such as VEEV and RRV , to our knowledge no field-collected European mosquito populations have been experimentally evaluated for alphaviruses affecting equines. The aim of this study was to investigate British wild-caught mosquito species for laboratory transmission potential (detection of viral RNA in saliva) of selected equine arboviruses, at temperatures which occur in the UK now, or may in the future. Viruses (an epidemic strain of VEEV, RRV and JEV) were selected based on their effects on equine health . Mosquito species were selected based on the potential exposure of British equines to the candidate vector. During a previous study, UK equine premises were sampled for candidate mosquito vectors and Cx. pipiens , Culiseta annulata and Oc. detritus were collected on a significant number of sites . The mosquito-virus combinations tested were JEV in Cs. annulata and Cx. pipiens , and RRV and VEEV in Oc. detritus . Ochlerotatus detritus has previously been shown to be a potential laboratory vector of JEV and so was not further tested. None of these mosquito-virus combinations have been tested before, except for JEV and Cx. pipiens , which was examined here at a significantly lower temperature than previously . The presence of viral RNA in saliva is a pre-requisite for a species being a vector, although this alone does not prove that a species is able to transmit under natural conditions. Hence, where viral RNA is detected in saliva, we refer to this as (laboratory) transmission potential to differentiate our results from laboratory vector competence demonstrated by transmission to vertebrates, and from natural transmission. Additionally, we use the term candidate vector to describe mosquito species with ecological characteristics such as host-preference and habitat type which make them of interest for vector competence evaluation. Mosquitoes Experiments were conducted on adult female mosquitoes originating from egg rafts or larvae collected on the Wirral Peninsula, northwest England. 
Ochlerotatus detritus were collected as third- or fourth-instar larvae, or pupae from brackish marshland by Little Neston (53°16′37.2″N, 3°04′06.4″W) between May and October. Culex pipiens egg-rafts were collected from container habitats on farmland at University of Liverpool, Leahurst Campus, Neston (53°17′25.6″N, 3°01′29.9″W), between May and August. Culiseta annulata egg-rafts were collected from container habitats (black 15 litre buckets were placed to catch rainwater and organic debris, for the purpose of attracting ovipositing Cs. annulata ) in woodland at Ness Botanic Gardens, Little Neston (53°16′11.5″N, 3°02’48.3″W) between May and August. Individual egg-rafts were allowed to hatch in covered larval trays. Culiseta annulata egg rafts were initially differentiated from Cx. pipiens complex rafts based on size, and emerged adults were identified morphologically. To separate Cx. pipiens from the morphologically identical species Cx. torrentium , a small number of larvae hatched from each egg raft were identified to species level using restriction fragment length polymorphism analysis and larval trays containing larvae identified as Cx. pipiens were retained. Immature mosquitoes were reared in a brick-built, unheated, non-insulated outbuilding (during May to November), thereby approximating outdoor shaded conditions. Larvae were reared in water collected from their larval habitat, supplemented with tap water as necessary. Where supplementary food was required Brewer’s Yeast was provided. Adults were allowed to emerge and mate in 30 × 30 × 30 cm BugDorms (BugDorm, Taichung, Taiwan). Adults were kept in ambient conditions (as for larvae) and were offered 10% sucrose solution on cotton wool ad libitum , then transferred to an indoor (temperature controlled) insectary on the same day as the virus-containing blood meal was offered. Viruses Viruses used were the JEV strain CNS138-11 , RRV (National Collection of Pathogenic Viruses (NCPV) catalogue number 0005281v) and VEEV P676 (NCPV catalogue number 0605153v). All viruses were cultured and titre assayed in Vero cells. Final virus titre in blood meals was 1 × 10 6 plaque forming units (pfu)/ml for JEV, 5.6 × 10 6 50% tissue culture infectious dose (TCID 50 )/ml for RRV, and 9.5 × 10 6 pfu/ml for VEEV. Titres were chosen based on information about viraemia in amplification or transport hosts and previously published studies investigating laboratory transmission. Titres were limited by the stock concentration provided by the respective institutions (measured using plaque assay (JEV, VEEV) or endpoint dilution assay TCID 50 (RRV)). Virus stocks were aliquoted on the day of receipt and stored at -80 °C, with aliquots discarded after use to minimise freeze-thaw before infection experiments. Infection At 10–21 days post-emergence female mosquitoes were transferred into 1-litre polypropylene Dispo-safe containers (The Microbiological Supply Company, Luton, UK), with a fine mesh covering and were starved of sugar for 24 h. They were then allowed to feed for up to 3 h, in low light conditions at 21 °C, on heparinised human blood (NHS transfusion service, Speke, UK) containing the virus. A Hemotek membrane feeding apparatus (Discovery Workshops, Lancashire, UK) heated to 39 °C was used with the membrane provided by the manufacturer. Immediately before use this was worn next to human skin for 15–20 min, to impart human odour, and encourage feeding. Blood-fed females were incubated at 18 °C, 21 °C or 24 °C. 
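The final blood-meal titres quoted above imply a straightforward dilution of virus stock into blood. As a rough sketch of that calculation in R, only the target titres are taken from the text; the stock titres and blood-meal volume below are assumptions for illustration.

# Minimal sketch: C1*V1 = C2*V2 dilution of stock virus into blood to reach the
# stated final blood-meal titres; stock titres and meal volume are assumed values
blood_meal_volume_ml <- 3.0
meals <- data.frame(
  virus        = c("JEV", "RRV", "VEEV"),
  stock_titre  = c(1e8, 5.6e8, 9.5e8),   # pfu/ml or TCID50/ml of stock (assumed)
  target_titre = c(1e6, 5.6e6, 9.5e6)    # final titre in the blood meal (from the text)
)
meals$stock_vol_ml <- meals$target_titre * blood_meal_volume_ml / meals$stock_titre
meals$blood_vol_ml <- blood_meal_volume_ml - meals$stock_vol_ml
meals

With assumed stock titres of this order, the spiked volume is small relative to the meal, so the blood itself is barely diluted.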
Mosquitoes were maintained at this temperature for 7–35 days and were provided with 10% sucrose. On the day of testing, mosquitoes were immobilised with triethylamine (FlyNap, Carolina Biological Supply Company, Burlington, USA), and their saliva was extracted by inserting each mosquito’s proboscis into a capillary tube containing mineral oil for 30 min. Each mosquito and its expectorate were placed in a separate 1.5 ml microcentrifuge tube containing 200 µl TRIzol reagent (Thermo Fisher Scientific, Waltham, USA), kept at room temperature for 2 h to inactivate virus and then stored at − 20 °C. Repeat infections were carried out for each experimental condition if 30 surviving mosquitoes were not available for testing at all time points. In this case another batch of mosquitoes was infected, until no further mosquitoes of under 22 days post-emergence were available. Our intention was to analyse at least 30 surviving mosquitoes for each condition. Total numbers infected were not recorded due to accidental mortality. Measuring viral RNA in body and saliva Semi-quantitative qPCR was used to estimate viral RNA quantities in mosquito saliva and bodies. Samples were run in duplicate and the mean of these two C q values was used in further analysis (see Additional file : Table S1). RNA was extracted using TRIzol reagent as per the manufacturer’s instructions. Samples were stored at − 20 °C for up to 14 days before cDNA generation. cDNA was generated using Superscript™ Vilo™ (Thermo Fisher Scientific). Each 20 µl reaction consisted of 4 µl Superscript™ Vilo™ MasterMix, 6 µl RNase-free water, and 10 µl of sample. PCR plates were incubated at 25 °C for 10 min, then 42 °C for 90 min and the reaction was terminated at 85 °C for 5 min. cDNA was stored at − 20 °C. TaqMan (Thermo Fisher Scientific) quantitative polymerase chain reaction (qPCR) was used to detect the presence of viral RNA in the samples. Primer and probe sets are shown in Table . TaqMan qPCR assays were performed in a reaction volume of 20 µl. The reaction contained 1 × TaqMan Gene Expression Master Mix (with ROX passive reference), TaqMan probe (500 nM for VEEV and RRV assays; 150 nM for JEV assay), primers (1 µM for VEEV and RRV assays; 400 nM for JEV assay) and 2 µl of cDNA or control substance. Thermocycler conditions for VEEV and RRV assays were: 1 cycle of 95 °C for 10 min, then 45 cycles of 95 °C for 15 s, 55 °C for 30 s and 60 °C for 30 s. For the JEV assay thermocycler conditions were: 1 cycle of 95 °C for 10 min, then 45 cycles of 95 °C for 15 s, and 60 °C for 1 min. Amplification and detection were performed using an Agilent Mx3005P qPCR System (Agilent Technologies, Santa Clara, USA). Analysis For each cDNA generation, a no-template control (nuclease-free water), and a positive control (viral RNA) were assayed. For each TaqMan assay, a positive control (cDNA generated from neat virus RNA) and negative controls (nuclease-free water, and cDNA generated from a mosquito infected with JEV for VEEV and RRV assays or infected with VEEV for JEV assays) were included. For each virus, a standard curve for the PCR was generated using 3 replicates of 10-fold serial dilutions with a dynamic range of 7 logs using the stock virus in order to allow calculation of estimated PCR efficiency (see Additional file : Text S1): JEV – 103.19%, RRV – 95.04%, VEEV – 91.66%. The copy number of viral RNA in the stock virus was not known and therefore viral copy number cannot be estimated from C q value. 
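The quoted PCR efficiencies follow from the slope of the standard curve (Cq regressed on the log10 dilution) via the standard relation E = 10^(-1/slope) - 1. A minimal R sketch with simulated Cq values is given below; the three replicates and 7-log dynamic range mirror the description above, but the numbers are not the study's data.

# Minimal sketch (assumptions): simulated Cq values for a 10-fold dilution series
# of stock virus, 3 replicates over 7 logs
set.seed(42)
dilution_log10 <- rep(0:-6, each = 3)                     # log10 relative concentration
cq <- 15 - 3.32 * dilution_log10 + rnorm(21, sd = 0.15)   # Cq rises as the template is diluted

fit        <- lm(cq ~ dilution_log10)
slope      <- unname(coef(fit)["dilution_log10"])
efficiency <- 10^(-1 / slope) - 1
round(100 * efficiency, 2)   # percent efficiency, cf. the 91.66-103.19% reported above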
Samples were considered positive for viral RNA if the C q value obtained from the sample was ≤ 40. To aid the interpretation of C q values on plots, an 'estimated relative RNA quantity' is represented for each viral RNA, on a scale showing orders of magnitude, relative to a sample producing a C q value of 40. The method used here is semi-quantitative and the scales presented on plots correspond to transformed C q values and not to absolute quantification of virus or RNA quantity (see Additional file : Text S1). In this study, for percentage of mosquitoes with detectable viral RNA in bodies or saliva the denominator was the total number of mosquitoes successfully feeding on infected blood and surviving until the point of sampling. All statistical analyses were performed using the statistical programming language R . The difference in two proportions was analysed using Fisher's exact test (fisher.test); the Shapiro-Wilk test was used to test whether data were normally distributed (shapiro.test). The Kruskal-Wallis rank sum test (kruskal.test) was used to test for significant differences in C q values between groups, and pairwise Mann-Whitney U tests (wilcox.test) with a Holm correction were used to test for significant differences between each pair of groups.
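Two of the conventions just described, the Cq ≤ 40 positivity rule and the 'estimated relative RNA quantity' scale, together with the comparison of two proportions, can be sketched as follows in R. The assumption that one Cq unit corresponds to a doubling (roughly 100% efficiency) and all example values are illustrative, not the study's data or code.

# Minimal sketch (assumptions): ~100% amplification efficiency, so relative
# quantity is expressed as orders of magnitude above a hypothetical Cq = 40 sample
cq <- c(28.4, 31.0, 36.7, 39.2, 41.5, NA)
positive <- !is.na(cq) & cq <= 40                 # positivity rule from the text
rel_quantity_log10 <- (40 - cq) * log10(2)        # orders of magnitude relative to Cq 40

# Fisher's exact test on the proportion of saliva-positive mosquitoes at two
# temperatures (counts invented)
saliva <- matrix(c(18, 12,
                   10, 20),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("positive", "negative"), c("21C", "24C")))
fisher.test(saliva)$p.value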
Detection of JEV RNA in Cs. annulata Culiseta annulata was evaluated at 3 time points after challenge by ingestion with JEV and incubation at 21 °C and 24 °C (Table ). The trend in percentage of mosquitoes with viral RNA in bodies and saliva was a reduction over time, and both parameters were reduced at 24 °C compared with 21 °C. The range of estimated relative JEV RNA quantity for mosquito bodies and saliva is presented in Fig. . Detection of JEV RNA in Cx. pipiens A small number of Cx. pipiens were tested at one temperature (18 °C) and one time point (21 days). All 18 mosquitoes tested positive for viral RNA in bodies, and 13 (72.2%) had viral RNA in saliva. Median C q values produced from mosquito bodies and saliva were 33.85 and 36.78 respectively.
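For small groups such as the 18 Cx. pipiens above, an exact binomial interval puts the 72.2% saliva-positive figure in context. A one-line R check is shown below; the use of a 95% interval is our choice, not stated in the text.

# Exact binomial 95% confidence intervals for the Cx. pipiens results quoted above
binom.test(13, 18)$conf.int   # 13 of 18 with viral RNA in saliva
binom.test(18, 18)$conf.int   # 18 of 18 with viral RNA in bodies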
Detection of RRV RNA in Oc. detritus Ochlerotatus detritus exposed to a blood meal containing RRV were evaluated at 5 time points after challenge, and incubation at 21 °C and 24 °C (Table ). The percentage of mosquitoes expectorating viral RNA was highest after 7 days with an incubation temperature of 24 °C. At both temperatures, by 21 days, the percentage of mosquitoes expectorating RRV RNA and the proportion with detectable viral RNA in their bodies had dropped significantly compared to those at 7 days ( P < 0.001 in both cases). This observation correlates with the drop in estimated quantity of RNA detected in these bodies seen at 21 °C (Fig. ). C q values were not normally distributed (Shapiro-Wilk test) and variances were significantly different between groups. There were significant differences between incubation periods in the C q values of mosquito bodies maintained at 21 °C ( χ 2 = 13.67, df = 3, P = 0.003). Pairwise tests indicated a significant difference in C q values between mosquito bodies after a 14 day incubation period, compared to a 7 day incubation period at 21 °C ( P = 0.012). Detection of VEEV RNA in Oc. detritus Ochlerotatus detritus was evaluated at 4 time points after challenge by ingestion with VEEV and incubation at 18 °C, 21 °C and 24 °C (Table ; only 3 time points for 18 °C). In general, the trend over time is for increased proportions of mosquitoes with VEEV RNA detected in the body and higher estimated relative RNA quantities in bodies (Fig. ), but few mosquitoes expectorated viral RNA.
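The body-Cq comparison reported for RRV at 21 °C (a Kruskal-Wallis test across incubation periods followed by Holm-corrected pairwise Mann-Whitney tests) has the following shape in R. The Cq values are simulated, and pairwise.wilcox.test is used here as a convenience wrapper for the wilcox.test calls named in the Methods.

# Minimal sketch (assumptions): simulated body Cq values for mosquitoes held at
# 21 C, grouped by incubation period; the tests mirror those named in the Methods
set.seed(1)
cq_data <- data.frame(
  day = factor(rep(c(7, 14, 21, 28), each = 10)),
  cq  = c(rnorm(10, 24, 2), rnorm(10, 28, 2), rnorm(10, 33, 3), rnorm(10, 34, 3))
)
shapiro.test(cq_data$cq)                                                  # normality check
kruskal.test(cq ~ day, data = cq_data)                                   # overall difference across periods
pairwise.wilcox.test(cq_data$cq, cq_data$day, p.adjust.method = "holm")  # Holm-corrected pairwise tests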
Here we present the first demonstration of laboratory transmission potential of any European mosquito population for VEEV and RRV, with RNA of both viruses detected in Oc. detritus saliva. Ochlerotatus detritus has previously been shown to be able to produce flaviviruses in saliva; to our knowledge, this is the first study demonstrating it can produce viral RNA of an alphavirus (RRV). Detection of VEEV RNA in the saliva of Oc. detritus was infrequent, despite high proportions of mosquito bodies being infected, marking this species as a potential but inefficient laboratory transmitter. We found that over 70% of Cx. pipiens produced RNA of JEV in saliva after being maintained at 18 °C, similar to the mean temperature of some recent UK summer months. Culex pipiens has previously been shown to be capable of laboratory transmission of JEV after 11 days when maintained at 27 °C. Lower incubation temperatures had not been previously used for this mosquito-virus pair. Further work to confirm laboratory competence and estimate the lower temperature limit for replication of JEV in Cx. pipiens is warranted, as Cx. pipiens is a widespread vector of West Nile virus and has been considered a potential vector of JEV in Europe. We add Cs. annulata to the number of European species shown to produce JEV RNA in saliva. To our knowledge, Cs. annulata has only been tested once before for vector competence for an arbovirus: it was shown to be competent but was not an efficient vector of Tahyna virus (Bunyaviridae; Orthobunyavirus). The results of this study for RRV and Oc. detritus, and for JEV and Cs. annulata, differ from the familiar pattern of the number of mosquitoes positive for viral RNA increasing over time and with temperature, in two respects: (i) in some instances, a decrease in viral RNA titre at later time points; and (ii) a decrease in viral RNA titre with higher maintenance temperature. It seems plausible that for these non-naturally occurring mosquito-virus interactions we are able to generate infections that are unstable over time, with the virus killing the mosquito or the mosquito clearing the virus, both processes facilitated by higher temperatures or longer incubation. Unfortunately, survival rates in infected or non-infected mosquitoes were not investigated in this study. Thus, we are unable to ascertain whether viral infection was causing mosquito mortality, which could account for the detection of more uninfected mosquitoes at later time points. An important question in mosquito infection studies is whether the titre of virus in the blood meal reflects the virus titre in natural hosts. If the experimental titres are much higher than occur naturally, demonstrating infectivity to the mosquitoes may not indicate true transmission potential. Estimated blood-meal virus titres of RRV (5.6 × 10 6 TCID 50 /ml) and VEEV (9.5 × 10 6 pfu/ml) used in this study were generally comparable to host viraemias. Reported titres of RRV include 1 × 10 5.5 TCID 50 /ml in humans and 1 × 10 6.3 50% suckling mouse intracerebral lethal dose (SMICLD 50 )/ml in horses. Reported titres of VEEV in horses range from 1 × 10 5.3 to 1 × 10 8.5 SMICLD 50 /ml.
The estimated bloodmeal titre of JEV used here (1 × 10 6 pfu/ml) exceeds that reported for natural hosts by one to two logs, such as pigs (1 × 10 4.5 TCID 50 /ml or 1 × 10 4 SMICLD 50 /ml ), ardeid birds (1 × 10 4.3 SMICLD 50 /ml ) and non-ardeid birds (1 × 10 5.4 pfu/ml ). In the present study, titres in blood meals were estimated from frozen stock solution maximum titres. A previous study by our group using the same method of titre estimation for JEV with similar storage conditions, overestimated the titre in blood meals by 2 logs ; therefore it was considered likely that the final titre in blood meals used in this study would be, in reality, closer to 1 ×10 4 pfu/ml than 1 × 10 6 pfu/ml and therefore, would approximate the JEV titre in natural hosts. Confirmation that a mosquito species is a laboratory competent vector ideally involves demonstration of transmission from one vertebrate host to another. The use of vertebrate hosts in vector competence experiments has diminished in recent years, due to animal welfare considerations, and therefore alternative methods have been developed as a proxy for natural transmission . These include mosquito infection by artificial blood meal with a comparable viral titre to viraemias seen in vertebrate hosts, and transmission estimated by saliva extraction through forced salivation. Quantification of infectious virus in the expectorate of mosquitoes can be achieved using cell culture, however this is technically challenging and was not possible in this study. The present study uses detection of viral RNA in saliva, thus the results should be interpreted with caution: while detection of viral RNA in saliva is an important finding , we have not yet demonstrated the production of infectious virus. For demonstration of the potential of a mosquito species to become an ecologically significant vector in the event of virus introduction, other factors affecting vectorial capacity need to be evaluated : these factors include the presence of suitable hosts, mosquito longevity and biting rates and the impact of environmental temperatures. All three viruses have complex enzootic cycles involving more than one vertebrate host and more than one mosquito vector. Importantly, epizootic outbreaks of VEEV and RRV may occur involving equines or humans as the major vertebrate hosts. Humans are considered potential transport hosts for RRV and rarely, VEEV . By contrast, humans and equines are considered to be dead-end host of JEV, not able to produce a high enough viraemia to infect mosquitoes. Pigs are considered amplification hosts for JEV and experimental pig-pig transmission has been observed . Therefore, it is at least theoretically possible for a mammal-biting mosquito such as Oc. detritus to be a vector of all three viruses in an epizootic outbreak. The risk of transmission of equine arboviruses in the UK, including consideration of the potential for virus introduction or emergence as well as the ecology of British mosquito species which may be considered candidate vectors, is discussed elsewhere . Here, we focus on the mosquito species used in this study in relation to the ecological attributes which make them of interest as candidate vectors for the viruses tested. Ochlerotatus detritus is considered the primary species associated with brackish water that causes biting nuisance for humans in the UK and was trapped on seven of nine saltmarsh associated equine premises in the UK in a recent study . Natural exposure of horses to Oc. 
detritus on the Wirral Peninsula, UK, has been used for testing of mosquito repellents, confirming regular blood-feeding from horses . Culiseta annulata can be a locally significant nuisance species, noted particularly in early spring and late autumn in the UK, breeds in a variety of natural and artificial habitats, both shaded and unshaded and is widespread . Culiseta annulata has been shown to bite horses and other large animals in the UK and engorged females have also been captured in horse baited traps in France and Switzerland . This species was found on 75% (24/32) equine premises sampled in a previous study in the UK . Biting nuisance was experienced on two such premises during sampling, which was strongly suspected to be related to poorly drained muck-heaps which contained high densities of larvae identified as Cs. annulata/alaskaensis/subochrea and this species is associated with manure and water with a high nitrogen content . Culiseta annulata takes blood meals from birds as well as mammals including swine (an amplification host for JEV), both in the UK and elsewhere and has been considered a potential bridge-vector for WNV . Culex pipiens are considered abundant and widespread in the UK and were found on 65% (15/23) of equine premises where suitable water sources were found in a previous study . On all but four of these sites, mammal biting (candidate bridge-vector) species such as Cs. annulata or Oc. detritus were also trapped. In addition, Culex pipiens and/or torrentium mosquitoes were identified in horse-baited traps in the UK during testing for efficacy of repellents in a rural location, although these were not blood-fed . In the UK, Culex pipiens ecoforms pipiens and molestus , and hybrids are present. The complexity of taxonomy of Cx. pipiens is discussed elsewhere and here we use the term Cx. pipiens as mosquitoes were differentiated from Cx. torrentium but no attempt was made to define which ecoform they represented. For JEV, bird-mosquito-bird and bird-mosquito-mammal (bridge vector) transmission are likely to be required for ongoing transmission in the event of virus emergence, therefore vector competence studies of UK populations of ornithophilic species such as Cx. pipiens would provide important information. Overall, due to their widespread distribution and relative abundance, Cx. pipiens, whether ornithophilic or more catholic in their feeding preferences, must be considered a candidate vector (enzootic or as a bridge-vector) for JEV and here we demonstrate its ability to produce viral RNA in saliva at lower temperatures than previously shown. Considering the current UK climate, the risk of enzootic establishment of these viruses appears low, however if climate change substantially alters factors such as the distribution, density and vectorial capacity of potential mosquito vectors then the risk of epizootic transmission may increase. The lowest maintenance temperature used in this study was 18 °C at which temperature, Cx. pipiens was able to produce JEV RNA in saliva. Assessment of vector potential at lower temperatures would be required in order to inform risk assessment for current UK climate conditions, although it is important to note that since the year 2000 there have been 10 years where July or August, or both, have had a mean temperature > 18 °C in East Anglia and 8 years for south east and the central south of England . 
The 2.2 km convection permitting model (part of UKCP18) using the high emission scenario (RCP8.5) suggests that UK summer temperatures will rise by 3.6–5 °C for 2061–2080 therefore assessment, at 21 °C was considered relevant to predicted future climate conditions. We detected viral RNA in saliva in mosquitoes incubated at 21 °C for all three virus-mosquito combinations. As discussed previously, investigation of mosquito species’ potential for laboratory transmission of arboviruses by detection of viral RNA in saliva must be treated with a degree of caution and evaluation of vector competence and (even more so) estimation of potential vectorial capacity require additional information. Limitations of the present study include the relatively low numbers of Cx. pipiens used, lack of survival data and limited range of temperatures and time-points. Further work which would be useful in evaluating the ability of UK populations of these mosquito species to transmit RRV, JEV or VEEV would include (but is not limited to): confirmation of production of infectious virus in saliva using cell titration methods; investigation of lower temperatures and shorter incubation times than those used in this study; and investigation of the apparent instability of mosquito infections, including at lower temperatures than those used in this study, to assess infection dynamics at temperatures to which these populations are adapted. The present study demonstrated that mosquito populations present in the UK are able to produce viral RNA in saliva after feeding on blood containing arboviruses which affect people and equines, and which are associated with significant morbidity and / or mortality in both groups. For all mosquito-virus pairs viral RNA was produced in the saliva of some mosquitoes. Ochlerotatus detritus demonstrated the ability to produce RRV RNA in saliva and low numbers produced VEEV RNA in saliva. Culiseta annulata and Cx. pipiens produced JEV RNA in saliva. For some mosquito-virus pairs there was evidence that infections were unstable and viral RNA decreased over time. Further work on the lower temperature limit for replication of JEV in Cx. pipiens , and confirmation that the RNA in saliva is indicative of infectious virus is warranted. Additional file 1: Table S1. Table of C q values for all mosquito virus pairs. Additional file 2: Text S1. Derivation of ‘estimated relative quantity’ of RNA from C q values. |
Association Between Participation in Clinical Trials and Overall Survival Among Children With Intermediate- or High-risk Neuroblastoma | 214cd210-c586-4e25-bbe9-092d176b599c | 8267607 | Pediatrics[mh] | Therapeutic clinical trials have enabled the development of new approaches that have improved the survival of patients with cancer. , , , Although patients receiving experimental regimens in therapeutic clinical trials may experience benefits associated with new treatment strategies, those randomly assigned to receive the standard of care may also experience benefits associated with strict adherence to treatment schedules, dosing, and supportive care required by study protocols. A single-institution study demonstrated that children with cancer treated in clinical trials showed a trend toward improved outcomes. We therefore hypothesized that children with neuroblastoma would also experience benefits associated with clinical trial enrollment. Throughout the world, treatment of neuroblastoma is tailored according to the risk of relapse and death based on a combination of clinical and genetic prognostic biomarkers. In the Children’s Oncology Group (COG), patients with low-risk disease have excellent outcomes, and current studies are evaluating whether subsets of these children may be cured with observation alone ( NCT02176967 ). A series of COG and legacy North American cooperative groups (Pediatric Oncology Group [POG] and Children’s Cancer Group [CCG]) studies have established that patients with intermediate-risk disease also have excellent outcomes with surgery and moderate-dose chemotherapy. Successive studies (POG 9243, COG A3961, and COG ANBL0531 ) have demonstrated that therapy reduction approaches effectively maintain excellent outcomes for these patients. Similar results have been observed in European protocols for low-risk and intermediate-risk neuroblastoma. , , For patients with high-risk disease, successive randomized COG (CCG 3891, COG A3973, and ANBL0532) and European clinical trials testing increasingly intensive, multimodality treatments have led to new standards of care and improved survival. , , , , , Despite these successes, participation in an unproven therapeutic trial carries risk, and the experimental nature of trials may cause anxiety for patients and families. Although a substantially larger proportion of pediatric oncology patients are enrolled in clinical trials compared with adults with cancer, more than half of all patients with neuroblastoma are not treated in a clinical trial owing to many factors, including family and clinician preference and receiving a diagnosis when no open trial is available. For these patients, treatment is generally based on the regimen demonstrating the best outcome in the most recently completed clinical trial. The International Neuroblastoma Risk Group (INRG) Data Commons includes clinical phenotype, tumor biology, and outcome data on patients enrolled in COG (ANBL00B1) or legacy (9047) biology studies. Since 2000, the ANBL00B1 biology study has served as the infrastructure for rapid and reliable acquisition of tumor prognostic markers for risk classification and enrollment in COG clinical trials. Approximately 500 to 600 patients per year are enrolled in this biology study, representing 70% to 80% of all patients with neuroblastoma diagnosed in North America. 
Clinical trial registration numbers for the subset of patients with neuroblastoma enrolled in up-front COG or legacy North American cooperative group clinical trials are also available in the INRG Data Commons. To investigate the potential benefit associated with participating in a clinical trial, we compared the outcome of 3986 patients with intermediate- or high-risk neuroblastoma in the INRG Data Commons who were enrolled in a clinical trial with the outcome of 5101 patients not enrolled in a trial but treated with standard of care. Because patient selection bias is known to affect the outcome of a trial, we assessed the clinical features and tumor biomarkers of the cohort enrolled in a clinical trial vs a biology study only. Potential racial/ethnic disparities in trial participation were also evaluated.
Patients and Variables In this cohort study, data from patients with intermediate- or high-risk neuroblastoma in the INRG Data Commons were assessed. The study cohort included patients who received a diagnosis between January 1, 1991, and March 1, 2020. Survival analyses were conducted among the subset of patients with known outcomes who received a diagnosis prior to January 1, 2017, to ensure at least 3 years of follow-up. Patients were evaluated according to enrollment in a cooperative biology study (POG 9047 or COG ANBL00B1, which centrally collected patient and tumor data and outcomes) but not an up-front clinical trial vs those enrolled in a risk-based (CCG, POG, or COG) clinical trial for patients with a new diagnosis. Because only COG and POG also collected data from patients enrolled in a biology trial but not an up-front clinical trial, the study cohort was limited to patients in North America. Patient data abstracted from the INRG Data Commons included age at diagnosis, sex, race, ethnicity, International Neuroblastoma Staging System (INSS) stage, year of diagnosis, MYCN (GenBank 4613 ) amplification status, ploidy, grade of differentiation, histologic characteristics, aberrations of 1p and 11q, and the mitosis-karyorrhexis index (MKI). The INRG Data Commons and data use are approved by the University of Chicago institutional review board, which waived consent as all data were deidentified. This study followed the reporting requirements of the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline. Risk group was assigned according to the 2006 COG classification system using INSS stage, age, histologic characteristics, ploidy, and MYCN status. As the classification system changed over time, all patients were assigned a risk group based on features available in the INRG Data Commons and analyzed accordingly. Thus, all comparisons were between patients meeting identical criteria for risk assignment. Because outcome data for half of the patients enrolled in ANBL0532 were not available in the INRG Data Commons, these patients were excluded from survival analyses. Statistical Analysis The χ 2 test and the Wilcoxon rank sum test compared characteristics of patients according to clinical trial enrollment status. Event-free survival (EFS) and overall survival (OS) were estimated by Kaplan-Meier methods, and the differences between groups were evaluated using the log-rank test. Point estimates of EFS and OS were calculated at 10 years from diagnosis because patients treated in older trials were often lost to follow-up after this time. , In addition, we conducted univariate and multivariate analyses of established prognostic markers (age, INSS stage, MYCN amplification status, histologic characteristics, and ploidy) within subsets of patients with intermediate- or high-risk disease included in estimates of EFS and OS using Cox proportional hazards regression models. In multivariable models, we adjusted potentially confounded factors with outcomes among patients’ characteristics significant at P < .05 in univariate analysis. Factors were dropped if more than 20% of patients had missing data. The proportional hazards assumption was validated for all models. To assess the association of changes in standard of care resulting from successive clinical trials, the OS of patients with high-risk disease who were not treated in an up-front clinical trial was analyzed over time. 
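The risk-group assignment described in the Patients and Variables subsection above is, in effect, a lookup over INSS stage, age, MYCN status, histology and ploidy. The following R sketch is a deliberately simplified illustration of how such a rule set can be encoded; it is not the full 2006 COG classification table, and the specific rules shown are assumptions for illustration only.

# Deliberately simplified sketch of a stage/age/MYCN/histology-based classifier;
# NOT the complete 2006 COG risk table, which contains many more strata
assign_risk <- function(stage, age_months, mycn_amplified, histology = "favorable") {
  if (mycn_amplified && stage != "1")                                  return("high")
  if (stage == "4" && age_months >= 18)                                return("high")
  if (stage == "3" && age_months >= 18 && histology == "unfavorable")  return("high")
  if (stage %in% c("3", "4", "4S"))                                    return("intermediate")
  "low"
}

assign_risk(stage = "4", age_months = 30, mycn_amplified = FALSE)   # "high"
assign_risk(stage = "3", age_months = 10, mycn_amplified = FALSE)   # "intermediate"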
Statistical analyses were performed using Stata, version 16 (StataCorp LLC) and R, version 3.6.0 (R Group for Statistical Computing). All P values were from 2-sided tests, and results were deemed statistically significant at P < .05.
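In R (one of the two environments named above), the survival analyses described, namely Kaplan-Meier estimation with 10-year point estimates, log-rank comparisons, Cox proportional hazards models and a check of the proportional hazards assumption, take roughly the following form using the survival package. The data frame, variable names and effect sizes are invented for illustration; this is not the authors' code.

# Minimal sketch with an invented one-row-per-patient data set; the survfit/
# survdiff/coxph/cox.zph calls mirror the analyses named in the Methods
library(survival)
set.seed(2)
n <- 400
d <- data.frame(
  years    = rexp(n, rate = 0.05),     # time to event or censoring (years)
  event    = rbinom(n, 1, 0.25),       # 1 = event, 0 = censored
  on_trial = rbinom(n, 1, 0.5),        # enrolled in an up-front clinical trial
  mycn_amp = rbinom(n, 1, 0.3),
  age_ge18 = rbinom(n, 1, 0.6)
)

km <- survfit(Surv(years, event) ~ on_trial, data = d)
summary(km, times = 10)                                # 10-year point estimates
survdiff(Surv(years, event) ~ on_trial, data = d)      # log-rank test

cox <- coxph(Surv(years, event) ~ on_trial + mycn_amp + age_ge18, data = d)
summary(cox)
cox.zph(cox)                                           # proportional hazards check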
The χ 2 test and the Wilcoxon rank sum test compared characteristics of patients according to clinical trial enrollment status. Event-free survival (EFS) and overall survival (OS) were estimated by Kaplan-Meier methods, and the differences between groups were evaluated using the log-rank test. Point estimates of EFS and OS were calculated at 10 years from diagnosis because patients treated in older trials were often lost to follow-up after this time. , In addition, we conducted univariate and multivariate analyses of established prognostic markers (age, INSS stage, MYCN amplification status, histologic characteristics, and ploidy) within subsets of patients with intermediate- or high-risk disease included in estimates of EFS and OS using Cox proportional hazards regression models. In multivariable models, we adjusted potentially confounded factors with outcomes among patients’ characteristics significant at P < .05 in univariate analysis. Factors were dropped if more than 20% of patients had missing data. The proportional hazards assumption was validated for all models. To assess the association of changes in standard of care resulting from successive clinical trials, the OS of patients with high-risk disease who were not treated in an up-front clinical trial was analyzed over time. Statistical analyses were performed using Stata, version 16 (StataCorp LLC) and R, version 3.6.0 (R Group for Statistical Computing). All P values were from 2-sided tests, and results were deemed statistically significant at P < .05.
Cohort Characteristics

There were 3058 patients with intermediate-risk neuroblastoma (1533 boys [50.1%]; mean [SD] age, 10.7 [14.7] months) and 6029 patients with high-risk neuroblastoma (3493 boys [57.9%]; mean [SD] age, 45.8 [37.4] months) in the final analytic cohort. We identified 14 723 patients who received a diagnosis between 1991 and 2020 and were treated at COG, POG, or CCG institutions. Patients with low-risk neuroblastoma (n = 4791) were excluded. In addition, we excluded 845 patients for whom risk group assignment could not be determined owing to unknown stage (n = 263), unknown MYCN status (n = 497), or unknown histologic characteristics (n = 85) for those older than 18 months with MYCN-nonamplified, INSS stage 3 tumors. Between 1991 and 2000, a median of 399 patients (interquartile range [IQR], 327-410 patients) were enrolled in the POG 9047 biology study per year. After activation of the COG biology study (ANBL00B1) in 2001, a median of 561 patients (IQR, 542-629 patients) were enrolled each year between 2001 and 2019. In the United States and Canada, approximately 700 to 800 new cases of neuroblastoma are diagnosed annually. Thus, approximately 50% to 57% of all patients with neuroblastoma who received a diagnosis in the 1990s and 70% to 80% of patients who received a diagnosis since 2001 are included in the INRG Data Commons. Of the 3058 patients with intermediate-risk disease, 41 (1.3%) were enrolled in a high-risk clinical trial and were excluded from survival analyses. Similarly, 68 patients with high-risk disease (1.1%), 56 of whom had MYCN-amplified tumors, were enrolled only in an intermediate-risk trial and were excluded from survival analyses.

Characteristics of Patients With Intermediate-risk Disease

Of the 3058 patients with intermediate-risk disease, 1513 (49.5%) were enrolled in an up-front clinical trial, and 1545 were enrolled in biology trials only. A total of 132 of 1330 patients enrolled in a clinical trial (9.9%) and 135 of 1325 patients enrolled in a biology study (10.2%) were Black. Hispanic patients made up 13.3% (332 of 2489) of the group of patients with intermediate-risk disease. Compared with patients enrolled only in a biology study, those enrolled in an intermediate-risk clinical trial were more likely to have favorable risk features, including non–stage 4 disease (1064 of 1499 [71.0%] vs 980 of 1527 [64.2%]; P < .001) and tumors with low MKI (715 of 984 [72.7%] vs 655 of 1007 [65.0%]; P < .001) and/or hyperdiploidy (912 of 1168 [78.1%] vs 925 of 1240 [74.6%]; P = .04). Conversely, patients in clinical trials were less likely than those in biology studies to have favorable histologic characteristics (1154 of 1252 [92.2%] vs 1157 of 1225 [94.4%]; P = .02). There were no differences according to age (>18 months), sex, race/ethnicity, or MYCN amplification.

Outcomes for Patients With Intermediate-risk Disease

To assess differences in outcomes according to enrollment in a clinical trial vs a biology study alone, we focused on studies COG A3961, ANBL0531, CCG 3881, and POG 9243, each of which enrolled more than 100 patients. No difference in EFS was observed between patients enrolled in a clinical trial (n = 1231) between 1991 and 2011 (excluding 2006 because no studies were open that year) and those enrolled in a biology study alone in those same years (n = 710) (85% [95% CI, 83%-87%] vs 87% [95% CI, 84%-90%] at 10 years; P = .08) ( A). The median follow-up time of survivors was 8.5 years (IQR, 6.2-10.6 years) for patients enrolled in a clinical trial and 8.4 years (IQR, 5.2-10.6 years) for those enrolled in a biology study. A Cox proportional hazards regression model showed no difference in the hazard ratio (HR) for EFS according to clinical trial enrollment (HR, 1.36; 95% CI, 0.97-1.92; P = .07) when accounting for stage, histologic characteristics, and ploidy (eTables 1 and 2 in the ). Overall survival was significantly higher for patients with intermediate-risk disease who were enrolled in a clinical trial than for those enrolled in a biology study (95% [95% CI, 94%-96%] vs 91% [95% CI, 89%-93%]; P = .002) ( B and ). However, in a multivariable model accounting for age, disease stage, and ploidy, enrollment in a clinical trial vs a biology study did not retain a statistically significantly higher OS (HR, 0.68; 95% CI, 0.45-1.03; P = .07) (eTables 1 and 2 in the ).

Characteristics of Patients With High-risk Disease

Of the 6029 patients with high-risk neuroblastoma, 2473 (41.0%) were enrolled in an up-front clinical trial, and 3556 were enrolled in biology studies only. Similar to both US census data and cancer prevalence percentages, 316 of 2154 patients with high-risk neuroblastoma in clinical trials (14.7%) and 446 of 3132 patients with high-risk neuroblastoma in biology studies (14.2%) were Black. Hispanic patients made up 11.7% (586 of 5014) of the group of patients with high-risk disease. Compared with patients enrolled only in biology studies, those enrolled in a clinical trial were more likely to be older than 18 months at diagnosis (2149 [86.9%] vs 2953 [83.0%]; P < .001), have INSS stage 4 disease (2151 of 2427 [88.6%] vs 2945 of 3512 [83.9%]; P < .001), have unfavorable histologic characteristics (1734 of 1819 [95.3%] vs 2429 of 2610 [93.1%]; P = .002), have hypodiploidy or diploidy (806 of 1486 [54.2%] vs 1357 of 2665 [50.9%]; P = .04), and have undifferentiated or poorly differentiated tumors (1697 of 1737 [97.7%] vs 2247 of 2347 [95.7%]; P = .001). There were no detectable differences in enrollment according to sex, race/ethnicity, MYCN amplification status, or MKI.

Outcomes for High-risk Patients

Clinical trial outcomes data for this analysis were limited to patients enrolled in CCG 3891, conducted between 1991 and 1997, and COG A3973, conducted between 2001 and 2006, each of which enrolled at least 100 patients. A significantly lower EFS was observed for patients who participated in COG A3973 and CCG 3891 (n = 922) compared with those enrolled only in a biology study who received a diagnosis between 1991 and 1997 or between 2001 and 2006 (n = 807) (32% [95% CI, 29%-35%] vs 38% [95% CI, 35%-41%] at 10 years; P < .001) ( A). The median follow-up time of survivors was 11 years (IQR, 7.5-13.4 years) in clinical trials and 10.2 years (IQR, 5.5-12.5 years) in biology studies. In the Cox proportional hazards regression model, clinical trial enrollment remained significantly associated with inferior EFS (HR, 1.16; 95% CI, 1.02-1.33; P = .02) compared with biology study enrollment when accounting for stage and MYCN status (eTables 1 and 2 in the ). However, no significant difference in OS was observed between the 2 groups (38% [95% CI, 35%-41%] vs 41% [95% CI, 38%-44%]; P = .23) ( B). Similarly, there was no difference in OS according to clinical trial enrollment (HR, 1.01; 95% CI, 0.89-1.16; P = .81) when accounting for stage and MYCN status (eTables 1 and 2 in the ). To investigate whether the differences in EFS may be due to a delay in reporting events other than death for patients enrolled in biology studies, we compared the time between the reported event and death among the patients enrolled in clinical trials and those enrolled in biology studies only. Among the 807 patients treated in a biology study only, the event and death were reported on the same day for 224 of 464 deceased patients (48.3%) compared with 76 of 587 deceased patients (13.0%) enrolled in COG A3973 or CCG 3891 (P < .001). To evaluate how outcomes changed over time for patients not enrolled in a clinical trial, we analyzed the EFS and OS of 2447 patients with high-risk disease enrolled in a biology study but not an up-front clinical trial according to 3 eras (1991-1999, 2000-2008, and 2009-2016) corresponding to changes in standards of care. Both EFS and OS were superior for patients treated in more recent eras, suggesting that all patients with high-risk disease are experiencing benefits associated with the advances made in clinical trials.
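The same-day event and death comparison above (224 of 464 vs 76 of 587 deceased patients) is the kind of contrast that a standard two-sample test of proportions can reproduce. The article does not state which test produced the reported P value, so the single R call below, using the counts given in the text, is only one reasonable way to check a comparison of this kind.

# Two-sample test of proportions for same-day reporting of event and death
# (counts from the text: biology-study-only cohort vs COG A3973/CCG 3891)
prop.test(x = c(224, 76), n = c(464, 587))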
In this study, we analyzed 9087 patients with neuroblastoma in the INRG Data Commons to investigate whether participation in an up-front clinical trial was associated with superior outcomes. Most patients in North America who received a diagnosis of neuroblastoma during the past 3 decades were enrolled in the COG ANBL00B1 or legacy biology study, and demographic information, tumor biomarkers, and outcome data on these patients are included in the INRG Data Commons. A total of 43.9% of the patients with intermediate- or high-risk disease were also enrolled in a clinical trial; for these patients, the number that identifies the clinical trial is captured in the INRG Data Commons. These data provide a unique opportunity to compare the outcomes of North American patients with neuroblastoma enrolled in a clinical trial with a representative cohort of "real-world" patients who were treated off trial. Because of the advances in neuroblastoma treatment that have been made based on sequential clinical trials testing new therapeutic approaches, we expected to find a benefit associated with clinical trial participation for patients with high-risk disease. However, our analysis demonstrated that OS was not higher for patients with high-risk disease enrolled in an up-front clinical trial compared with those treated off trial. The reasons for the lack of survival benefit remain unclear but may reflect the common practice of treating patients not enrolled in a clinical trial according to the therapeutic and supportive care regimens used in a previous clinical trial. We also found that EFS was inferior for patients with high-risk disease who were enrolled in an up-front clinical trial compared with those who were treated off trial. Comparison of the 2 cohorts demonstrated that the patients enrolled in clinical trials had a higher prevalence of high-risk features, including older age, metastatic disease, and unfavorable biological features. These differences in clinical features and tumor biology suggest that there may be physician bias regarding enrollment of patients with high-risk disease in clinical trials. To assess other possible reasons for the improved EFS in the biology study cohort, we evaluated the time from event to death and found that a significantly larger proportion of patients in biology studies had 0 days between event and death compared with those in clinical trials. These findings suggest that the superior EFS observed in the children enrolled only in a biology study may be due, in part, to a failure to report events other than death. In contrast to the cohort of patients with high-risk disease, significantly improved OS but not EFS was observed for the patients with intermediate-risk disease who were enrolled in up-front clinical trials. This observation suggests that salvage treatments after relapse were more effective in the clinical trial cohort, which may reflect differences in tumor biology. Analysis of the 2 cohorts demonstrated that patients enrolled in an intermediate-risk clinical trial were significantly more likely to have favorable prognostic markers, including localized disease and tumors with favorable biological features. In a multivariable analysis accounting for age, disease stage, and ploidy, enrollment in a clinical trial was not significantly associated with OS, suggesting that differences in these features were associated with the observed difference in OS.
Thus, there appears to be physician bias toward off-trial treatment of patients with intermediate-risk disease with more unfavorable tumor biology. In contrast to studies identifying discrepancies in clinical trial enrollment according to demographic features, such as older age and race/ethnicity, we found no evidence of bias in recruitment across demographic groups. Of the 3986 patients enrolled in studies, 12.9% were Black and 11.3% were Hispanic, mirroring the prevalence of Black and Hispanic individuals in the US population and in the overall neuroblastoma population in North America. Previous studies have shown that Black and Native American children have a higher prevalence of high-risk disease, and there may be factors genetically predisposing these groups to have more aggressive tumors. Our study suggests that differences in outcomes are not likely due to whether or not a patient is enrolled in a clinical trial. Although we are unable to assess how other social determinants of health that disproportionally affect minority populations may be associated with adherence to protocol therapy and outcomes, virtually all chemotherapy regimens for neuroblastoma are administered intravenously in a hospital or outpatient clinic and closely monitored.

Limitations

This study has some limitations. Information about treatment received is not available in the INRG Data Commons. Although postconsolidation immunotherapy has been shown to improve survival for patients with high-risk neuroblastoma, outcome data for patients enrolled in the nonrandomized immunotherapy expansion group of the ANBL0032 clinical trial are not currently available in the INRG Data Commons. Specifically, the biology study–only cohort did not include 423 patients who received a diagnosis between 2009 and 2016 who were not enrolled in an up-front therapeutic trial but enrolled in ANBL0032 and were nonrandomly assigned to receive postconsolidation immunotherapy. Thus, the actual EFS and OS of the patients enrolled in a biology-only study during this era are likely higher than reported in this study.
To learn from every pediatric oncology patient, there is a culture among pediatric oncologists of asking every parent or legal guardian to consider enrolling their child in an up-front clinical trial. This INRG Data Commons analysis found that, among patients with intermediate- or high-risk neuroblastoma diagnosed in North America, there was a high prevalence of population-wide clinical trial participation. Advances in neuroblastoma treatment during the past decades have resulted from the development of new standards of care based on the results of successive, risk-based clinical trials, improving survival rates of patients with high-risk disease. Our results suggest that there may be some physician bias regarding clinical trial enrollment associated with tumor biology. However, no evidence of bias in recruitment across demographic groups was observed, enabling assessment of treatment response and toxic effects across racial/ethnic groups. The decision to enroll in clinical trials can be fraught with tension but must continue to be supported and encouraged.
ARC/Arg3.1 expression in the lateral geniculate body of monocular form deprivation amblyopic kittens

Previous studies suggest that visual developmental plasticity is the basis of amblyopia treatment and that it is closely related to synaptic plasticity. During the occurrence and development of amblyopia, the synaptic density in retinal ganglion cells, the lateral geniculate body, and the visual cortex of animals changes. The synapse, as the structural basis of information transmission between neurons, is a key element of visual developmental plasticity. Among the many proposed mechanisms of amblyopia, synaptic plasticity is considered the most critical link and may represent the final common pathway of the others. According to its time course, synaptic plasticity can be divided into long-term potentiation (LTP) and long-term depression (LTD). Immediate early genes also have the effect of coupling short-term signals with long-term changes. As one of the immediate early genes, activity-regulated cytoskeleton-associated protein (ARC/Arg3.1) is induced in neurons in response to neural activity and is necessary for activity-induced forms of synaptic plasticity. Moreover, it is a crucial regulator of memory and cognitive flexibility. However, there has been no study on the correlation between the expression of ARC/Arg3.1 and amblyopia. Therefore, we examined changes in ARC/Arg3.1 in the lateral geniculate body in amblyopia to investigate the significance of this structure in the pathogenesis of amblyopia and to provide theoretical support for understanding the occurrence and development of amblyopia.
Animals

We used 20 healthy 3-week-old kittens weighing between 250 and 350 g, regardless of coat color and sex. All kittens were examined to rule out congenital and developmental abnormalities such as opacity of the refractive media and fundus abnormalities, and their refractive errors were +2.25 to +3.50 D. All kittens were kept in a room with an ambient temperature of 24 ± 1 °C and a relative humidity of 50 ± 10%, with good ventilation and natural light. Up to the age of 5 weeks, all kittens were fed milk powder and water 5 times a day, as they were unable to feed on their own. After 5 weeks of age, the kittens were fed solid food and water three times a day. This study was approved and supervised by the Experimental Animal Ethics Committee of North Sichuan Medical College (NSMC Appl. No. 2021 [66]), and all animals in this study were provided by the Experimental Animal Center of North Sichuan Medical College.

Animal model establishment

The 20 kittens were divided into an experimental group (n = 10) and a control group (n = 10) using a random number table. The kittens were anesthetized by intraperitoneal injection of 1% pentobarbital sodium (35 mg/kg). The right eye of each kitten in the experimental group was covered with an opaque black cloth, while the kittens in the control group were only anesthetized. Pattern visual evoked potential (PVEP) examination was performed on both groups of kittens every week. The criterion for successful establishment of the amblyopia model was that the amplitude of the P100 wave in the right (covered) eye of the experimental group was lower than that of the other three groups and its latency was higher than that of the other three groups.

PVEP detection

All kittens were anesthetized intraperitoneally with 1% pentobarbital sodium (35 mg/kg). The refractive errors of all kittens were measured by retinoscopy and corrected with lenses. Then, animal electrode needles (RL-1223000030-RC-D, Roland Consult Stasche Finger Gmbh) were inserted into the middle of the forehead, the occiput, and the posterior part of the ear tip of the kittens. Each kitten was placed 40 cm from the vertical line through the center of the display screen, and the head position was adjusted so that its visual axis was perpendicular to the screen. The Reti-Port/Scan 21B system (Roland Consult Stasche Finger Gmbh) was set to the checkerboard reversal mode with a temporal frequency of 1 Hz, a spatial frequency of 0.3 cpd, a contrast of 97%, 64 superimposed sweeps, and a sampling time of 300 ms. The test was repeated 3 times for each eye of each kitten and the average value was obtained.

Dissection of the lateral geniculate body

At the age of 7 weeks, after PVEP detection, all kittens were euthanized by intraperitoneal injection of 2% pentobarbital sodium (100 mg/kg) according to the American Veterinary Medical Association (AVMA) Guidelines for the Euthanasia of Animals (2020). According to the Atlas of Feline Anatomy For Veterinarians, the left lateral geniculate body of each kitten was isolated and fixed in 4% paraformaldehyde (Fig. ). The tissue was then embedded in paraffin and sections were prepared; the whole tissue block was sectioned, with the section thickness set to 4 μm. The expression of ARC/Arg3.1 was detected by immunohistochemistry (IHC) and in situ hybridization (ISH), and apoptosis of neurons in the lateral geniculate body was detected by TdT-mediated dUTP nick-end labeling (TUNEL).
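The model criterion described under "Animal model establishment" above reduces to a simple comparison of the weekly P100 measurements. The short R sketch below illustrates that check; the function name and the numerical values are hypothetical and are not part of the Roland Consult software or of the authors' records.

# Illustrative check of the amblyopia-model criterion:
# the covered eye's P100 amplitude must be below, and its latency above,
# the corresponding values of the three comparison eye groups
is_amblyopic <- function(deprived, controls) {
  all(deprived$amplitude < controls$amplitude) && all(deprived$latency > controls$latency)
}

deprived <- list(amplitude = 4.1, latency = 92)                               # hypothetical weekly means
controls <- data.frame(amplitude = c(7.8, 8.0, 7.6), latency = c(71, 69, 72)) # other three eye groups
is_amblyopic(deprived, controls)   # TRUE under these illustrative values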
Immunohistochemical staining

Sections were dewaxed to water and placed in 3% hydrogen peroxide solution and phosphate buffer saline (pH 7.4) (Boster Biological Technology Co., Ltd., China, AR0030) in turn to block endogenous peroxidase. The slices were placed in a repair box containing citric acid (pH 6.0) antigen retrieval buffer (Boster Biological Technology Co., Ltd., China, AR0024) for antigen retrieval. The tissue was then evenly covered with 5% BSA blocking solution in the culture dish for serum blocking. This was followed by the addition of the primary antibody against ARC/Arg3.1 (dilution 1:100) (Proteintech Group, Inc, China, 16,290–1-AP), and then the secondary antibody (biotin-conjugated goat anti-rabbit IgG) and strept avidin–biotin complex (SABC) (Boster Biological Technology Co., Ltd., China, SA1022). Diaminobenzidine (DAB) (Boster Biological Technology Co., Ltd., China, AR1022) was used to develop color, and positive staining ranged from yellow to brownish yellow. Nuclei stained with hematoxylin (Beijing Solarbio Science & Technology Co., Ltd., China, G1080) were blue. The tissue was then dehydrated, and microscopic examination, image acquisition, and analysis were performed. Three fields of view were randomly selected from each slice for statistical analysis.

In situ hybridization staining

Paraffin sections were dewaxed in water and digested with proteinase K (20 μl/ml) at 37 °C for 30 min. Then 3% methanol-hydrogen peroxide was added, and the slides were placed in phosphate buffer saline (pH 7.4) (Boster Biological Technology Co., Ltd., China, AR0033) to block endogenous peroxidase. After pre-hybridization, hybridization solution containing the ARC/Arg3.1 mRNA probe (5`-CGCTG GGTCA AGCGT GAGAT GCACG TGTGG AGGGA-3`; 5`-TATTG GCTGT CCCAG ATCCA GAACC ACATG AATGG-3`; 5`-TGGCG TAAGC GGGAC CTGTA CCAGA CACTG TATGT-3`) (Boster Biological Technology Co., Ltd., China, MK1612) was added at a volume of 20 μl. Hybridization was conducted overnight at 37 °C in an incubator, and then the hybridization solution was washed away. BSA blocking solution was then added, followed by a drop of mouse anti-digoxigenin-labeled peroxidase (Boster Biological Technology Co., Ltd., China, MK1748). DAB (Boster Biological Technology Co., Ltd., China, AR1022) was used to develop color, and positive staining ranged from yellow to brownish yellow. Nuclei stained with hematoxylin (Beijing Solarbio Science & Technology Co., Ltd., China, G1080) were blue. The tissue was then dehydrated, and microscopic examination, image acquisition, and analysis were performed. Three fields of view were randomly selected from each slice for statistical analysis.

TUNEL staining

Paraffin sections were dewaxed in water and digested with proteinase K, and 3% hydrogen peroxide was added to block endogenous peroxidase. Labeling buffer, 5% BSA, anti-DIG-biotin, and SABC (Boster Biological Technology Co., Ltd., China, MK1015) were added in sequence. DAB (Boster Biological Technology Co., Ltd., China, AR1022) was used for color development, and the DAB-positive reaction was brown-yellow; the nuclei appeared blue after hematoxylin staining. The tissue was then dehydrated, and microscopic examination, image acquisition, and analysis were performed. Three fields of view were randomly selected from each slice for statistical analysis.

Statistical analysis

The statistical analysis software was Stata/SE 16.0. All data are expressed as mean ± standard deviation (x̄ ± s). One-way ANOVA (LSD) was used to analyze the baseline data of each eye of the two groups of kittens, including the axial length, diopter, and P100 wave of each eye. One-way ANOVA (LSD) was also used to compare P100 waves within the same group at different times and between different groups at the same time. The paired-sample t test was used to compare differences between the two eyes within the same group, and the two-independent-sample t test was used to compare differences between the right eyes of the control group and the experimental group. The results of IHC, ISH, and TUNEL were compared by independent-sample t test. Pearson correlation coefficient analysis was performed on the IHC, ISH, and TUNEL data of the control group and the experimental group.
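The authors ran these comparisons in Stata; an equivalent, illustrative sketch in R is given below for reference. The data frame and variable names (pvep, eye_group, p100_amplitude, right_eye, left_eye, right_eyes, group, tunel_density, arc_protein_density) are hypothetical placeholders, and the unadjusted pairwise t tests only approximate the LSD procedure applied after a significant ANOVA.

# One-way ANOVA across the four eye groups, followed by unadjusted pairwise comparisons (LSD-like)
anova_fit <- aov(p100_amplitude ~ eye_group, data = pvep)
summary(anova_fit)
pairwise.t.test(pvep$p100_amplitude, pvep$eye_group, p.adjust.method = "none")

# Paired t test between the two eyes of the same kittens
t.test(right_eye, left_eye, paired = TRUE)

# Independent-samples t test between experimental and control right eyes
t.test(p100_amplitude ~ group, data = right_eyes, var.equal = TRUE)

# Pearson correlation, e.g., TUNEL staining intensity vs ARC/Arg3.1 optical density
cor.test(tunel_density, arc_protein_density, method = "pearson")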
Baseline condition

There was no significant difference in diopter (F = 0.483, P = 0.696), axial length (F = 0.509, P = 0.679), or PVEP amplitude (F = 0.013, P = 0.998) and latency (F = 0.629, P = 0.601) between the two groups at 3 weeks of age (Table ).

P100 wave of PVEP

At the age of 3 weeks, there was no statistical difference in the latency (F = 0.629, P = 0.601) or amplitude (F = 0.013, P = 0.998) of the P100 wave between the right eye of the experimental group, the left eye of the experimental group, the right eye of the control group, and the left eye of the control group. With increasing age, the latency of the P100 wave showed a downward trend and the amplitude an upward trend in all four groups. Although the latency and amplitude of the P100 wave in the right eye of the experimental group also changed, the overall change was smaller than that of the other three groups. At postnatal ages of 5, 6, and 7 weeks, statistical differences were observed in the latency and amplitude of the P100 wave among the four groups. The latency of the P100 wave in the right (covered) eye of the experimental group was significantly higher than that in the left eye of the experimental group and the right or left eye of the control group (for F and P values, please see Tables and ). However, the amplitude of the P100 wave in the right (covered) eye of the experimental group was significantly lower than that in the left eye of the experimental group and the right or left eye of the control group (for F and P values, please see Tables and ). These results showed that, after 5 weeks of age, monocular form deprivation amblyopia had formed in the right eye of the kittens in the experimental group (Figs. and , Tables and ) (relevant data is available at https://figshare.com/s/41efd7f80329308c0d0a ).

Immunohistochemical staining

The results of IHC showed that ARC/Arg3.1 protein was expressed in sections from both the experimental group and the control group. ARC/Arg3.1 protein was expressed in the cytoplasm, which stained brown-yellow, while the nuclei were blue. At the age of 7 weeks, the average optical density of positive cells in the experimental group was lower than that in the control group (P < 0.001). The number of positive cells in the experimental group was also lower than that in the control group (P < 0.001) (Fig. , Table ) (relevant data is available at https://figshare.com/s/41efd7f80329308c0d0a ).

In situ hybridization staining

The results of ISH showed that ARC/Arg3.1 mRNA was expressed in sections from both the experimental group and the control group. ARC/Arg3.1 mRNA was expressed in the cytoplasm, which stained brown-yellow, while the nuclei were blue. At the age of 7 weeks, the average optical density of positive cells in the experimental group was lower than that in the control group (P < 0.001). The number of positive cells in the experimental group was also lower than that in the control group (P < 0.001) (Fig. , Table ) (relevant data is available at https://figshare.com/s/41efd7f80329308c0d0a ).

TUNEL staining

The results of TUNEL showed that there were positive cells in the slices of both the experimental group and the control group. The nuclei of positive cells were brown-yellow, while those of negative cells were blue. At the age of 7 weeks, the average optical density of positive cells in the experimental group was higher than that in the control group (P < 0.001). The number of positive cells in the experimental group was also greater than that in the control group (P < 0.001) (Fig. , Table ) (relevant data is available at https://figshare.com/s/41efd7f80329308c0d0a ).

The analysis of the correlation

Pearson correlation coefficient analysis indicated that, at the age of 7 weeks, the intensity of the TUNEL-positive reaction was negatively correlated with the mean optical density of positive cells for ARC/Arg3.1 protein (PCCs = -0.415, P = 0.001) and mRNA (PCCs = -0.409, P = 0.001) expression, and that ARC/Arg3.1 protein expression was positively correlated with ARC/Arg3.1 mRNA expression (PCCs = 0.958, P < 0.001) (Table ).
The use of kittens as models of form deprivation amblyopia can be traced back to the studies of Wiesel & Hubel (1963) and Hubel & Wiesel (1970). Studies have shown that, in kittens, the effect of form deprivation on vision usually peaks from birth to 4 weeks of age, and that after 12 weeks of age the form deprivation caused by occlusion can hardly interfere with visual development. Therefore, 3-week-old kittens were selected in this study; this ensured that their covered eyes were exposed to form deprivation throughout the period of visual development, with maximal interference with visual development, so that a monocular form deprivation amblyopia kitten model could be established successfully. Some studies have shown that when one eye of a kitten is deprived by occlusion, a shift of the ocular dominance columns can occur within a short time, and with prolonged deprivation this shift becomes more pronounced. To evaluate the form-deprived eyes of the kittens and determine whether form deprivation amblyopia had formed, the results of PVEP detection were used in this study as the basis for establishing the amblyopia model; this method is well established and has been widely applied to amblyopia animal models. When the retina is stimulated by external light, potential changes are produced and nerve impulses are formed. These nerve impulses pass through the optic nerve, optic chiasm, optic tract, lateral geniculate body, and optic radiation in turn, and finally reach the visual center of the cerebral cortex. Under conditions of normal visual development in kittens, the conduction velocity and pattern of vision-related cells generate consistent potential activity, forming coordinated synchronous oscillations along the visual pathway and in the visual cortex, thereby producing a regular waveform. The results showed that, with the development of the visual system of the kittens, after 4 weeks of covering the latency of the P100 wave decreased and the amplitude increased in both eyes of the experimental group and the control group. However, although the right eye of the experimental group also showed a certain degree of change compared with the other three groups, the overall range of change remained smaller. Moreover, after 2 weeks of occlusion (at the fifth week of age), the latency of the P100 wave in the covered eyes of the experimental group was significantly higher than that of the other three groups, and the amplitude of the P100 wave was significantly lower than that of the other three groups. Therefore, we consider that form deprivation amblyopia had developed in the right eye of the experimental group by the fifth week of age, a result consistent with previous findings. The balance of excitation and inhibition at the axonal level of the visual cortex is a condition for maintaining the normal development and function of the visual cortex, and it is also an important factor affecting the plasticity of the visual system. The first synaptic relay of the optic nerve impulse in the brain occurs in the lateral geniculate body; each lateral geniculate body receives projections from temporal optic nerve fibers of the ipsilateral retina and nasal optic nerve fibers of the contralateral retina, and this projection has a strict point-to-point regional correspondence.
Because the optic nerves of cats undergo partial decussation, the optic nerve fibers of kittens pass through the optic chiasm to the lateral geniculate body, which in kittens can be divided into layers A, A1, C, and C1-C3. Layers A, C, and C2 receive projections from contralateral ocular fibers, while layers A1 and C1 receive projections from ipsilateral ocular fibers. Some studies have found that C-fos, GABA, and brain-derived neurotrophic factor (BDNF) in the lateral geniculate body contralateral to the amblyopic eye are down-regulated compared with the side ipsilateral to the amblyopic eye in kittens with monocular form deprivation amblyopia. Synapses, as the structural basis of information transmission between neurons, are a key element of visual developmental plasticity, and some studies have suggested that synaptic plasticity is the most critical link in the pathogenesis of amblyopia. Information storage in neural networks is considered to depend partly on the plasticity of synapses. Synaptic plasticity refers to changes in the synaptic connections between neurons when synapses are used or disused. In terms of time course, it can be divided into two types: LTP and LTD. LTP is a form of information storage at the synaptic level; its induction mechanism mainly involves Ca2+ influx caused by changes in the NMDAR channel of the postsynaptic membrane, which triggers a series of intracellular biochemical processes. LTD refers to activity-dependent persistent attenuation of potentials induced in an unstimulated pathway, which is the opposite of LTP; for example, the change in visual cortex responses caused by visual deprivation is an LTD phenomenon. The induction and expression mechanisms of LTP and LTD in the lateral geniculate body are considered similar to those in the hippocampus. For example, NMDAR needs to be activated, which leads to the activation of a series of intracellular kinases and the redistribution of AMPA receptors. The first change in the plasticity of the lateral geniculate body is in synaptic efficiency, which does not require the synthesis of new proteins; longer-term changes in neural pathways then follow. Such long-term changes in neural pathways require gene expression and new protein synthesis, so kinase activation leads to gene expression, which may be realized through the activation of transcription factors. Some studies have shown that NMDAR is related to the changes in the visual cortex of amblyopic animals. However, Ziburkus et al. found that the expression of NMDAR in the lateral geniculate body of amblyopic animals did not seem to be affected, which is contrary to some research results in recent years. Arc/arg3.1 encodes a protein of about 400 amino acids that has no catalytic or other known functional motifs. The Arc/Arg3.1 protein interacts directly or indirectly with many proteins, indicating that it functions as a hub protein, and most of its functions are thought to occur at postsynaptic sites. Biochemical and electron microscopic studies show that Arc/Arg3.1 protein is present in the postsynaptic density of activated neurons. As an archetypal immediate-early gene, ARC/Arg3.1 is generally considered a reliable marker of neuronal activity. ARC/Arg3.1 is also necessary for various forms of learning and memory, and some studies suggest that it is related to synaptic plasticity. For example, ARC/Arg3.1 can regulate the expression of AMPAR during homeostatic plasticity and LTD and contribute to the maintenance of LTP.
ARC/Arg3.1 plays a key role in the long-term synaptic plasticity of excitatory synapses, in memory, and in postnatal cortical development. Wang et al. showed a physiological function of Arc in enhancing the overall orientation specificity of visual cortical neurons during the post-eye-opening life of an animal. Some studies have shown that LTP and LTD, heterosynaptic LTD (inverse synaptic tagging), and homeostatic scaling all require the synthesis of ARC/Arg3.1, but the specific mechanism that determines these synaptic changes is still unclear. When BDNF was injected into the hippocampus, researchers found that a transcription-dependent LTP, together with ARC/Arg3.1 mRNA in the granule cell bodies and dendrites, could be induced. Another study showed that the expression of BDNF is associated with the maintenance of high-frequency stimulation long-term potentiation (HFS-LTP). Injection of Arc antisense oligodeoxynucleotides (Arc-AS) before injection of BDNF inhibits the induction of LTP, which indicates that this process lies entirely downstream of the BDNF signaling pathway. When Arc-AS was injected again 2 h after BDNF injection, the LTP was restored to the baseline level; however, when Arc-AS was injected 4 h after BDNF, it had no effect. Therefore, some researchers believe that newly synthesized ARC/Arg3.1 regulates the expression and consolidation of LTP induced by HFS and by exogenous BDNF. In addition, Qi et al. showed that ARC/Arg3.1 can be activated and up-regulated by the PKA/CREB and ERK/CREB signaling pathways, and they found a significant increase in the number of apoptotic neurons in the model group after knockout of the ARC/Arg3.1 gene. Based on these studies and our results, we speculate that the number of apoptotic neurons in the lateral geniculate body increases under the influence of form deprivation. The resulting change in neuron number in the lateral geniculate body further reduces ARC/Arg3.1 expression, which leads to abnormalities in activities such as LTP and LTD and finally promotes the further development of amblyopia. However, this study still has some limitations. We demonstrated that ARC/Arg3.1 protein and mRNA expression were down-regulated in the lateral geniculate body of amblyopic kittens, but we did not examine the dynamic changes of ARC/Arg3.1 or set up different groups to observe its changes with age. In addition, we only compared the expression of ARC/Arg3.1 in the lateral geniculate body of amblyopic and normal kittens and did not study changes in its expression in the different layers of the lateral geniculate body. Furthermore, the quantification of the sections in this study was performed on three randomly selected high-power fields per slice. Although we adopted measures to reduce the risk of bias as much as possible, this method still has some limitations.
In summary, the expression of ARC/Arg3.1 is associated with monocular form deprivation amblyopia and with apoptosis of lateral geniculate body cells. This study suggests that the ARC/Arg3.1 gene plays an important role in visual development.
The 6th International Workshop of the Asian Society of Gynecologic Oncology, December 19th to 20th, 2020

The 6th International Workshop of the Asian Society of Gynecologic Oncology (ASGO) was held in the Chang Yung-Fa Foundation International Convention Center, Taipei, Taiwan, on 19th to 20th December 2020. The ASGO Workshop 2020 in Taiwan was the first ASGO congress held outside of Japan and South Korea since ASGO commenced in 2008. In addition, the ASGO Workshop 2020, which included virtual participation, was also the first ASGO hybrid meeting, a format adopted because of the coronavirus disease 2019 (COVID-19) pandemic. There were 239 participants from 13 countries who attended the meeting. Opening remarks commenced with the current president of ASGO, Dr. Daisuke Aoki, Professor of the Department of Obstetrics and Gynecology, Keio University School of Medicine, Japan, followed by the vice president of the International Gynecologic Cancer Society, Dr. Jae-Weon Kim, who joined our congress from South Korea. Dr. Chih-Ming Ho, the president of the Taiwan Association of Gynecologic Oncologists (TAGO), concluded the opening remarks by welcoming participants from all over Asia on behalf of the organizing committee.

1. COVID-19 pandemic session

The ASGO workshop featured 22 presentations and 3 Young Doctor sessions. The first session discussed the practice of gynecologic oncology in five Asian countries during the COVID-19 pandemic. Dr. Sokbom Kang shared the situation and management guidelines adopted in South Korea, which minimize risks to patients and prioritize patient care. The second speaker, Dr. Yusuke Kobayashi, shared the treatment policy for gynecological tumors in Japan and emphasized triage, minimization of hospital visits, and maximization of the use of telemedicine. Dr. Neerja Bhatla then presented the conditions in India, where decreased numbers of hospital visits, cessation of registration of new patients, and delays in the initiation of treatment occurred as resources were diverted to COVID-19 care. Dr. Suresh Kumarasamy followed and showed how the Movement Control Order, a restriction on the movement of people implemented by the Malaysian government in response to COVID-19, affected the practice of gynecological oncology, especially in hospitals treating COVID-19 patients. At the same time, other patients had to be transferred to private hospitals and were given subsidies from the government. Finally, Dr. Cheng-Chang Chang shared Taiwan's policy of assessing the risk of COVID-19 infection in anyone entering medical care facilities, achieved by linking their health insurance card with their past 14-day travel history. All speakers agreed on limiting patients' hospital visits and the risk of exposure to COVID-19. After the first session, there was an industrial section presentation by Dr. Chia-Yen Huang on novel immunotherapy for advanced endometrial cancer. He highlighted a phase 2 trial (KEYNOTE-146) that led to the approval of lenvatinib plus pembrolizumab for the treatment of advanced endometrial carcinomas that are not microsatellite instability-high or mismatch repair deficient.

2. Cervical session

To begin the cervical cancer session, Dr. Se-Ik Kim discussed the Laparoscopic Approach to Cervical Cancer (LACC) trial, a trial that has sparked discussion and influenced our clinical practice.
This trial reported higher recurrence rates and worse overall survival with minimally invasive surgery than with open surgery in stage IA1 to IB1 cervical cancer. Dr. Kim called for robust scientific evidence from randomized controlled trials and suggested that optimal candidate selection and the development of surgical techniques may improve the outcomes of patients with early cervical cancer who undergo minimally invasive surgery. The next presentation, by Dr. Pao-Ling Torng, reviewed the latest treatments for metastatic or relapsed cervical cancer. She pointed out the recent approval of pembrolizumab for patients with programmed death-ligand 1 (PD-L1)-positive tumors and recurrent or metastatic cervical cancer in progression on or after chemotherapy. Clinical research on immune checkpoint inhibitors is showing promising results in improving the poor outcomes of metastatic or relapsed cervical cancer. Dr. Takayuki Enomoto followed and demonstrated radical trachelectomy for early-stage cervical cancer at 15–17 weeks of gestation. He also reported the impressive outcomes of eight cases with a gestational age of over 30 weeks. This session was concluded with a presentation by Dr. Wu-Chou Lin, who gave a lecture on anatomic dissection in nerve-sparing radical hysterectomy for early-stage cervical cancer. He clearly illustrated, step by step, the preservation of the nerves innervating the bladder, vagina, and rectum during radical hysterectomy.

3. Endometrial session

The endometrial cancer session began with Dr. Ting-Chang Chang, who shared the molecular characterization of endometrial cancer, including POLE mutations, mismatch repair deficiency, estrogen receptors, progesterone receptors, and p53 status, and their clinical implications. POLE mutations were noted in 11.1% of his study group, on exons 9, 23, 14, and 11, and patients with somatic POLE-mutated tumors showed a tendency toward better survival. Dr. Sang-Wun Kim then demonstrated a two-step sentinel lymph node mapping strategy in endometrial cancer staging. Compared with conventional cervical injection, this two-step sentinel lymph node mapping improved the paraaortic sentinel lymph node detection rate (18.7% vs. 5.7%, p<0.001). Dr. Kimio Ushijima later discussed adjuvant therapy for early endometrial cancer, suggesting that the microcystic, elongated, and fragmented pattern would be a new pathologic risk factor, and proposed that chemotherapy is a reasonable treatment strategy for early endometrial cancer in intermediate high-risk patients according to Japanese Gynecologic Oncology Group (JGOG) 2033 and JGOG 2043. Dr. Hee-Seung Kim concluded the session by reviewing the treatment of stage IV and metastatic endometrial cancer. He emphasized that optimal cytoreduction remains a favorable prognostic factor in advanced endometrial cancer. In addition, he showed the future direction of using different immune checkpoint inhibitors in treating advanced or recurrent endometrial cancer.

4. Ovarian session

The ovarian cancer session started with Dr. Masaki Mandai, who discussed primary and secondary debulking operations in ovarian cancer. He reviewed the conflicting results of Gynecologic Oncology Group (GOG)-0213 and Arbeitsgemeinschaft Gynaekologische Onkologie (AGO) DESKTOP III, which assessed the role of secondary cytoreductive surgery. In addition, he pointed out that neo-adjuvant chemotherapy may not always be a substitute for primary debulking surgery, as JGOG 0602 suggested.
He concluded by emphasizing personalization rather than standardization in the surgical treatment of ovarian cancer patients. Dr. Ka-Yu Tse then gave a lecture on personalized medicine in the maintenance therapy of ovarian cancer. She discussed the current status and introduced methods that guide treatment in ovarian cancer. Dr. Heng-Cheng Hsu followed and reviewed the treatment of relapsed ovarian cancer. To conclude this session, Dr. Yi-Jen Chen discussed the current role of minimally invasive surgery for ovarian cancer, emphasizing the need for optimal debulking and that the use of minimally invasive procedures should be limited to selected patients.

5. Big data and shared decision-making (SDM)

There were two concurrent special sessions before the closing of day 1 in room 601. Dr. Cheng-I Liao shared big data analysis in population-level gynecologic oncology. He demonstrated useful methods and primary results in gynecologic oncology from different databases, which help us identify the inadequacies of the current medical record system, and provided suggestions for further improvement. Dr. Chen-Hsuan Wu then demonstrated SDM in real practice. She showed the implementation of SDM in clinical situations and emphasized the core elements of SDM, which are risk communication and value clarification.

6. Poly(ADP-ribose) polymerase (PARP) inhibitor and test

At the same time, room 603 focused on precision medicine and immuno-oncology on day 1. The precision medicine session began with Dr. Shu-Jen Chen, who gave an overview of precision medicine in gynecologic cancers. She illustrated how next-generation sequencing-based genetic testing for BRCA1/2 mutations and homologous recombination deficiency (HRD) has been used for the management of gynecologic cancers. Two PARP inhibitors, olaparib and niraparib, have been approved as maintenance therapy for newly diagnosed advanced primary ovarian cancer as well as for platinum-sensitive, relapsed ovarian cancer patients who are in a complete or partial response to platinum-based chemotherapy. Dr. Katsutoshi Oda reviewed the mechanisms of targeting HRD in ovarian cancer. He pointed out that, in addition to blocking the enzymatic activity of PARP, PARP inhibitors trap PARP1 and PARP2 at the sites of DNA damage and thereby cause cytotoxicity in cells. He suggested that the distinct PARP-trapping potencies may be associated with the different anti-tumor activities and adverse events of each PARP inhibitor. SOLO-1 and PAOLA-1 respectively demonstrated a substantial progression-free survival benefit in ovarian cancer patients with BRCA1/2-mutated and HRD-positive tumors. Dr. Ya-Min Cheng then gave a lecture on biomarker and genetic testing for PARP inhibitors and emphasized the predictive value of HRD tests for the magnitude of benefit from PARP inhibitors.

7. Target therapies

Before the Society of Gynecologic Oncology, Republic of China (SGO-ROC) 10th Council Meeting and the TAGO 9th Council Meeting, an industrial section reviewing the safety and quality-of-life outcomes of bevacizumab for gynecological cancer was presented by Dr. Hung-Hsueh Chou. The efficacy of bevacizumab, a humanized monoclonal immunoglobulin G antibody that targets vascular endothelial growth factor (VEGF), has been demonstrated in ovarian cancer through ICON-7 and GOG-0218 as well as in cervical cancer through GOG-0240. Dr. Chou also discussed the safety and efficacy of bio-similar drugs entering the market.
Dr. Ying-Cheng Chiang followed and summarized the evolving area of maintenance therapy in recurrent ovarian cancer. He reviewed the targeted therapies that have emerged as maintenance therapy, including bevacizumab and PARP inhibitors. Dr. Shih-Tien Hsu later summarized treatment with PARP inhibitors in ovarian cancer. He summarized the published data and proposed the best sequence or combination of treatments with regard to different molecular profiles, including BRCA mutation, HRD, and HR proficiency.

8. Immunotherapies

Dr. Jen-Ruei Chen started the immuno-oncology session by reviewing immuno-oncology in gynecological malignancies. He gave a lecture on the concept of the cancer immunity cycle and on cancer immunotherapies in gynecological malignancies. Dr. David SP Tan followed and discussed biomarkers in immuno-oncology for gynecological cancers. He focused on the predictive biomarkers, including programmed cell death protein 1 and PD-L1, in response to immunotherapeutic approaches. This session ended with Dr. Yin-Yi Chang, who gave an overview of cell therapy in gynecologic cancers and introduced the application of cellular immunotherapy in gynecologic cancers.

9. New drugs

The two industrial sections towards the end of day 1 started with Dr. Wen-Shiung Liou reviewing the use of trabectedin in gynecological cancer. Trabectedin is approved for the treatment of recurrent platinum-sensitive ovarian cancer in combination with pegylated liposomal doxorubicin, as well as for the treatment of advanced uterine leiomyosarcoma. The second industrial section, on PARP inhibitor resistance and ways to overcome it, was presented by Dr. Chyong-Huey Lai. She suggested potential strategies to overcome PARP inhibitor resistance caused by DNA replication fork protection, reversion mutations, epigenetic modification, restoration of ADP-ribosylation, and pharmacological alteration.
A total of eight topics were presented in the Young Doctor session on day 1. Dr. Chia-Lin Chou shared her experience of curative resection of the ovaries for ovarian metastasis from colorectal cancer. The second speaker, Dr. Malika Kengsakul, shared a rare case of extra-nodal involvement of diffuse large B-cell lymphoma mimicking locally advanced cervical cancer. Dr. Se-Ik Kim followed, revealing the low adoption rate of iRECIST (60%) in the real world and demonstrating the necessity of imaging follow-up to confirm treatment responses. He also presented a second study, on the implementation of adjuvant radiotherapy to reduce the disease recurrence rate in patients with intermediate-risk, stage IB–IIA cervical cancer treated primarily with radical hysterectomy. Dr. Alka Dahiya then highlighted the distinct differences in the clinicopathological profiles of primary peritoneal carcinoma and ovarian carcinoma. Dr. Anusha Kamath followed and shared the knowledge, attitudes, and perceptions of medical graduates in India about human papillomavirus infection and its vaccine. This session ended with Dr.
Rahul Deepak Modi's presentations on the insight into gynecological oncology training in India from the perspectives of in-training candidates and the experience of virtual learning during the COVID-19 pandemic in India. To conclude the first day of the ASGO Workshop 2020, there was a banquet in the Shing-Peng-Lai, a Taiwanese seafood restaurant. Most of the attendees gathered at this dinner and enjoyed the food in this MICHELIN Guide restaurant. The second day of ASGO Workshop 2020 commenced with a lecture on the origin of ovarian cancer species and precancerous landscape by Dr. Ie-Ming Shih. He suggested the precancerous landscape in fallopian tubes, which contains multiple concurrent precursor lesions including serous tubal intraepithelial carcinoma with genetic heterogeneity, provides a platform for the evolution of high-grade serous carcinoma . The last 2 industrial sessions began with Dr. Chien-Feng Li sharing the identification of BRCAness in clinical practice. In his presentation, he detailed the prevalence of BRCA1/2 and other HR gene mutations as well as HRD in Taiwanese high-grade serous ovarian cancer patients. Dr. Hung-Hsueh Chou then gave a talk on optimizing chemotherapy in platinum-sensitive recurrent ovarian cancer and shared real-world data from Taiwan. He suggested that carboplatin-pegylated liposomal doxorubicin-bevacizumab as a new standard regimen for patients with recurrent ovarian cancer suitable for platinum-based and antiangiogenic treatment according to the latest phase 3 ENGOT-OV 18 trial. Dr. MiKio Mikami next presented the update in the treatment for uterine sarcoma. He answered the clinical questions on the treatment and adjuvant therapy for uterine leiomyosarcoma, endometrial stromal sarcoma, and recurrent uterine sarcoma. This was followed by Dr. Kuan-Gen Huang, who demonstrated and described laparoscopic hyperthermic intraperitoneal chemotherapy. He shared experiences and suggested that laparoscopic hyperthermic intraperitoneal chemotherapy was feasible and safe in ovarian cancer patients with optimal cytoreduction at completeness of cytoreduction score 0 and 1. The last session of the congress was presented by Dr. Sarikapan Wilailak, who reviewed cancer during pregnancy. The physiological changes during pregnancy posed a big challenge in identifying cancer, and emphasized the importance of multidisciplinary care . Meanwhile, two Young Doctor sessions commenced in room 603 with a total of 16 topics presented . Dr. Roopjit Kaur Sahi started the second session and shared her experience of a tertiary care center in India using IOTA logistic regression models, the Risk of Malignancy Index, and IOTA ADNEX to characterize adnexal masses. Dr. Wen-Hsuan Lin followed and analyzed malignant ovarian germ cell tumors. She reported better survival when comparing chemotherapy with bleomycin, etoposide, and cisplatin to chemotherapy with bleomycin, etoposide, and carboplatin. Dr. Anila Tresa Alukal suggested that the chance of residual disease in loop electrosurgical excision procedure was less if the specimen had a minimum length of 0.775 cm and a minimum thickness of 0.65 cm. She also reported a significant association between p53 expression and poorer outcomes of endometrial cancer. Dr. Aswathy G Nath then reported a good long-term quality of life and a survival outcome of 37.7% at 7 years after pelvic exenteration for gynecological malignancies. 
She also shared her 10 years of experience in a tertiary care center in India on vulvo-vaginal melanoma and found a median survival of only 11 months. Dr. Deepak Bose suggested that directed risk models using significant risk factors such as grade 3, non-endometrioid tumors, and deep myometrial invasion can better predict the risk of nodal metastasis in endometrial cancer. He also concluded this session by suggesting adjuvant therapy was the only significant factor affecting the outcomes of surgically treated vulvar malignancies. The third Young Doctor session started with Dr. Pallavi Verma, who shared a case of peripheral primitive neuroectodermal tumor of pelvis in pregnancy. Dr. Anandita followed and shared her pilot study that suggested that postoperative coffee consumption resulted in an earlier return of gastrointestinal function when compared to tea consumption. Dr. Madhavi Dokku shared the outcomes of medical management of atypical endometrial hyperplasia in her institute and concluded that the efficacy of any progesterone therapy appears similar. Next, Dr. Sue-Jar Chen analyzed the recurrent pattern of epithelial ovarian cancer with metastatic lymph node and concluded that the rate of treatment failure in lymph node and isolated lymph node relapse appeared to be more frequent in patients with initial nodal involvement. Dr. Anjana JS reported that neoadjuvant chemotherapy in advanced malignant germ cell tumor made complete cytoreduction possible and preserved fertility. She also shared her experience on a significant correlation between results of colposcopic biopsy and final histopathology after loop electrosurgical excision procedure in patients with atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion and high-grade cytology. Dr. Sarita Kumari reported the role of cancer testis antigen POTE-E in preoperative prognosis of epithelial ovarian tumors, which also showed remarkable good diagnostic accuracy. Dr. Amulya B concluded the session by sharing 5 cases of nonepithelial ovarian cancers. The 6th ASGO International Workshop, which was originally planned to be held in May 2020, was postponed due to the COVID-19 pandemic. The organizing committee hosted this workshop in a hybrid meeting with 104 participants joining this meeting online from overseas. With the effort and contribution of all participants, the 6th ASGO Workshop meeting was a remarkable success. The 7th Biennial Meeting of ASGO will be held at the Shangri-La Hotel in Bangkok, Thailand, from November 25th to 27th, 2021. We hope to meet you all in person in Thailand after the COVID-19 pandemic . |
Political attitudes and efficacy of health expert communication on the support for COVID-19 vaccination program: Findings from a survey in Hong Kong

Introduction

Since early 2020, the spread of coronavirus disease 2019 (COVID-19) has severely affected the world. The World Health Organization declared nine vaccines to be safe and effective as of January 2022 and has recommended vaccination as soon as possible. Vaccination can reduce the chances of infection, the severity of illness, and death. Thus, vaccine uptake is crucial to combat COVID-19, resume normal activities, and recover from the economic downturn. An estimated 60%–70% vaccine uptake rate is required for herd immunity. However, populations in different countries show varying support for vaccination programs. In particular, some people in developed countries are cautious about new vaccines, making participation in vaccination programs a continuous challenge. Against this backdrop, it is essential to understand the factors associated with support for vaccination programs.

Owing to shortages of vaccine supplies, COVID-19 vaccination programs in many countries are administered as state public health programs. However, they do not necessarily garner enough public support. Studies on support for public health programs have explored various programs such as disease reporting, smoke-free workplace laws, and water fluoridation. One reason public health programs do not gain enough support is the "prevention paradox": the substantial social benefits outweigh the social costs, but individual incentives to participate are inadequate because people do not experience immediate benefits. This problem is similar to that faced by vaccination programs owing to vaccine hesitancy. Vaccine hesitancy is defined as refusal, reluctance, or delay in receiving vaccination, leading to below-target coverage and behind-schedule vaccination programs. A large-scale retrospective analysis across 149 countries using data collected between 2015 and 2019 established that vaccine confidence was low in some Asian and Middle Eastern countries. Further, a previous study that reviewed 31 COVID-19 vaccine hesitancy studies reported acceptance rates of 23.6% to 91.3%, with low rates in the Middle East, Russia, Africa, and several European countries. These findings suggest that gathering support for public vaccination programs and boosting vaccination rates is a crucial task.

This study considers the COVID-19 vaccination program in Hong Kong and examines the relationships between political attitudes and support for the vaccination program. Several studies have investigated the association between political attitudes and vaccine hesitancy. A partisan effect has been found in studies on COVID-19 measures; many of these studies, using US data, found Republicans to be more vaccine-hesitant. Other studies based on data from democracies found that the divide was not between the left and the right but rather depended on how far one was from the center, with the far right and far left showing more hesitancy. A co-partisan effect has also been found, in which supporters follow advice from the leaders of their political parties. For example, a study based in Brazil found that supporters of the President strongly rejected Chinese vaccines because the President was critical of China.
However, few studies examined data in countries under authoritarian regimes, partly because political censorship is common among authoritarian regimes, and genuine opposition parties and partisan division may be non-existent. Hong Kong is in a transitory phase from being an open society to a more authoritarian one, and individuals can still report their political stances anonymously, which makes this study possible. Another research question examined in the current study is the effect of health expert communication on directing pandemic responses. Many governments have used this strategy to boost support for their vaccination programs. There were complaints of distrust in science and the sidelining of scientists ; nonetheless, the pandemic provided an opportunity for scientists to gain significance in driving mainstream discourse in many countries. COVID-19 research has found that people trust experts more than the government and are more interested in expert sources than government ones . Previous evidence also shows that experts can help increase compliance with COVID-19 health measures , and induce changes in knowledge . Further, how people respond to health expert advice has become an active research topic. The first research question examines the political attitude variables associated with support for the government vaccination program. These factors include political stance, trust in government, preference for pandemic control over freedom, view on China’s influence on the policymaking in Hong Kong, and political attentiveness. The second research question addresses the efficacy of health expert communication in increasing support for the government vaccination program. In addition, although they serve as controls, some of the previous findings on demographics, socio-economic factors, knowledge, and experience of COVID-19 were examined to determine whether they are associated with the tendency to support.
Materials and methods An online survey was conducted from May 26 to June 3, 2021, among the general population of Hong Kong aged 18 years or above. The Hong Kong government began the COVID-19 vaccination program at the end of February 2021. On May 30, the vaccine uptake rate for people completing the first dose was 18% , which was lower than that in many developed countries, including Israel (62%), Canada (56%), the United States (50%), Italy (39%), and France (38%). At the time of the survey, there were no significant outbreaks in Hong Kong, with only 55 confirmed cases in May. Further, vaccination was voluntary except for workers of catering businesses and recreational or entertainment venues . There were no material benefits or conveniences for the vaccinated people in most other circumstances. Occupational requirements and incentives for vaccination were only announced after the survey period. Thus, the results were unaffected by these events. Ethics approval of this study was obtained from the corresponding author’s affiliated institution. 2.1 Study sample and data collection The data were part of a larger project. Participants were recruited by an online survey company (Dynata) using quota sampling to mimic the general Hong Kong population by age and sex. Vaccinated participants were not filtered, their responses were included in this study. Electronic consent was obtained from participants before the survey began. Participants could discontinue the study at any time if they desired. The collected data were retrieved from the online survey platform and protected by passwords. 2.2 Treatment Participants were randomly assigned to three groups of similar sizes: the conflict treatment group, the control group, and the aligned treatment group. Each group of participants viewed different excerpts of the government vaccination program. The excerpt viewed by the control group only contained a neutral government announcement. The aligned treatment group received the same government announcement and positive communication from a health expert, Prof. Kwok-Yung Yuen. Further, the conflict treatment group received the government announcement and a hesitant remark given by Prof. Yuen on vaccination. Both were direct quotes from newspapers. The quote supporting vaccination was delivered on March 6, 2021, when Prof. Yuen vaccinated himself ; the quote indicating hesitancy was delivered on May 4, 2020, during the early phase of the pandemic . Furthermore, Yuen was the Chair of Infectious Disease in the Department of Microbiology at the University of Hong Kong. He was one of the most frequently interviewed health experts in the media on the issue of COVID-19, personally worked on related research, and was rated to be among the top 1% of researchers in the world by the Essential Science Indicator . The health expert whose quotes were used in the excerpt was selected among the four members of the expert advisory panel for pandemic control appointed by the government to combat COVID-19 when the pandemic began . All four members were academics and held positions in the two medical schools of Hong Kong. The reason for selecting this panel is to ensure that the chosen health expert in this study was widely recognized. Furthermore, although the panel was officially given the task to advise the government and communicate with the public, the members occasionally held views different from the government, thus portraying a certain degree of professionalism and independence. Prof. 
Yuen had been selected out of the four members based on two criteria: popularity and political neutrality. Prof. Yuen was a renowned scholar and an applauded SARS hero in Hong Kong due to his involvement in containing the SARS infections in 2003 . Among the four members selected, Prof. Yuen appeared in local newspapers the most number of times. From the beginning of 2020 until the survey period, his name appeared in Hong Kong-based printed media and their web versions approximately 5,300 times, based on search results in Wisenews , an online news database. The number of appearances of the other three members was approximately 4,800 (Prof. David Shu-Cheong Hui), 1,500 (Prof. Gabriel Matthew Leung), and 100 (Prof. Keiji Fukuda). Evaluating with a more extended period, Prof. Yuen also had more media exposure before the pandemic than Prof. David Hui, who had the second most number of appearances as mentioned above. From 2003 (SARS year) to 2019, Prof. Yuen had approximately 7,000 mentions in Wisenews , and Prof. Hui had only about 1,600. This shows that Prof. Yuen has been a familiar figure for a long time and was well-known, not only because of the COVID-19 pandemic but also before it. Regarding political neutrality, Prof. Yuen had not been in any political appointment or joined the government. His other public positions were related to his microbiologist and academic expertise. Contrastingly, a different member (Prof. Leung) had worked in a government position. The study used the “mirror experiment” method of survey research, which employed a real-world vignette instead of a hypothetical one . Using a hypothetical health expert is beneficial because it can isolate the effect tested from other factors. However, its disadvantage is the cognitive burden due to the lack of familiarity with a hypothetical person. Additionally, results may exaggerate the treatment effect in the real world. It is questionable whether the effect size is generalizable because a hypothetical question may produce only a hypothetical answer . This study chose a real person for the vignette because social-ecological validity is critical when dealing with real-world problems and prescribing potential solutions . Health experts during the pandemic were largely known to the public. A hypothetical vignette is unnecessary because the research question did not isolate a precise casual mechanism. The excerpts were as follows:. Control group “The government offers to members of the public the vaccination programs free of charge. The government’s goal is to provide vaccines for the majority of the population within this year.” Aligned treatment group “The government offers to members of the public the vaccination programs free of charge. The government’s goal is to provide vaccines for the majority of the population within this year. Prof. Yuen Kwok-Yung, Chair of Infectious Diseases, Department of Microbiology at the University of Hong Kong, said that no adverse reactions have been observed for currently available vaccines after one year of testing. Sufficient time has been given to prove that the vaccine is safe and effective. Therefore, he vaccinates himself to set an example and urges the public to vaccinate as soon as possible.” Conflict treatment group “The government offers to members of the public the vaccination programs free of charge. The government’s goal is to provide vaccines for the majority of the population within this year. Prof. 
Yuen Kwok-Yung, Chair of Infectious Diseases, Department of Microbiology at the University of Hong Kong said that it is the first time in human history to undergo mass vaccination, side effect is unknown. “If there is vaccine for me now, I would say no. It would be better to keep mask on and wait until others have vaccinated.” 2.3 Support for the vaccination program After viewing the excerpts, participants were presented with the question “To what extent do you support the government vaccination program?” and were asked to rate the answer on an 11-point scale, from “0” (least support) to “10” (strongly support). A rating of 6–10 was classified as “supporting the vaccination program,” following an earlier COVID-19 vaccination study in Hong Kong . 2.4 Political attitudes Before presenting the excerpts, political attitudes were assessed using four items. The first two items asked participants the extent to which they agreed with the statement “The Hong Kong government is willing to cope with the effect of COVID-19” and “Freedom is more important than pandemic control.” These items were rated on an 11-point scale, from “0” (strongly disagree) to “10” (strongly agree). The other two questions were “In general, do you pay attention to politics?” and “To what extent do you see Hong Kong under the influence of China in policymaking?” Moreover, the answers to these questions were rated on an 11-point scale, with 0 indicating the least attention and lowest perceived influence of China, respectively, and 10 indicating the most for both answers. The participants were also asked to self-report their political stance. In Hong Kong, the major political cleavage is not between the left and right but between pro-government and pro-democracy. The choices provided in the survey were “pro-establishment,” “moderate/center,” “pan-democratic,” “pan-localist,” “others,” and “don’t know.” For statistical analysis, these responses were grouped into three categories: pro-government (pro-establishment), opposition (democrat and localist), and others (not included in the above categories). 2.5 COVID-19 experience We examined participants’ quarantine experience, COVID-19 knowledge, attention, and interest in the pandemic. Quarantine experience was assessed using a yes/no answer to the statement, “I have been in quarantine because of COVID-19.” Participants’ knowledge of COVID-19 was tested using five questions. Sample statements include “COVID-19 can remain in aerosols (particles suspended in the air) for up to 3 h” and “a vaccinated person will not be infected.” Participants were asked to indicate whether the item was correct or not by choosing “true,” “false,” or “not sure.” Participants received one mark for each correct answer. Total scores ranged from 0 to 5, with higher scores reflecting better knowledge. Attention on COVID-19 was assessed using the question “I am wary of COVID-19”. Interest in the pandemic was measured with the statement “I pay attention and follow the news of COVID-19 closely.” The two items were rated on an 11-point scale, from “0” (strongly disagree) to “10” (strongly agree). Higher scores indicated greater attention or interest. 2.6 Demographic factors Data on participants’ demographic background, including sex, education, age, self-declared social class (grassroots, lower-middle, middle, upper-middle, and upper), and origin, were collected. 2.7 Statistical analysis Descriptive analyses were performed for all study variables divided into two treatment groups and one control group. 
The Chi-squared test and one-way ANOVA were used to confirm that the independent variables across the three groups were not statistically different. Logistic regression models were used to investigate factors associated with support for the government vaccination program. COVID-19 experience, demographic factors, and political attitudes were independent variables. COVID-19 experience and demographic factors were used in Model 1. Political stances were added for Model 2. Model 3 included all the political attitude variables. Further, Model 4 tested for the interaction effect between the treatment groups and political stances. Finally, Model 5 tested the interaction effect between the treatment groups and political attentiveness. Odds ratios were adjusted using other variables in the regression models. Furthermore, all statistical analyses were performed using Stata version 16.1 (StataCorp LLC, College Station, Texas, USA). Statistical significance was set at p < 0.05.
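As a concrete illustration of the model-building sequence just described, the following minimal sketch shows how the five nested logistic regression models could be specified with Python and statsmodels. The study itself fitted these models in Stata 16.1; the data file and every column name used here (support_rating, treatment, stance, trust_gov, and so on) are hypothetical placeholders for the constructs defined in this section, not the authors' actual variables.

```python
# Minimal sketch (not the authors' Stata code): nested logistic regression models
# described above, specified with Python/statsmodels. All column names are
# hypothetical placeholders for the survey constructs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical respondent-level data set

# Dichotomize the 0-10 support rating: 6-10 = support (1), 0-5 = otherwise (0)
df["support"] = (df["support_rating"] >= 6).astype(int)

base = ("support ~ C(treatment) + quarantine + covid_knowledge + covid_wary"
        " + covid_news + sex + age + education + C(social_class) + origin")
attitudes = (" + trust_gov + control_over_freedom + china_influence"
             " + political_attention")

models = {
    "Model 1": base,                              # COVID-19 experience + demographics
    "Model 2": base + " + C(stance)",             # + political stance
    "Model 3": base + " + C(stance)" + attitudes, # + all political attitude variables
    "Model 4": base + " + C(stance)" + attitudes + " + C(stance):C(treatment)",
    "Model 5": base + " + C(stance)" + attitudes + " + political_attention:C(treatment)",
}

for name, formula in models.items():
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(name)
    print(np.exp(fit.params).round(3))  # adjusted odds ratios
```

The interaction terms for Models 4 and 5 are added with the ":" operator so that the main effects already present in the base formula are not duplicated.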
Results 3.1 Sample characteristics A total of 1,079 respondents completed all the questions of the survey. Participants were randomly assigned to three groups, and each group contained approximately the same number of participants. shows the characteristics of the samples in the control and two treatment groups. Overall, more women (53.5%) and individuals with university education (56.5%) completed the survey. The average age of the participants was 39.9 years. Compared with the age distribution of the Hong Kong population at the end of 2020, the 18–59-year-old group was better represented, and the ≥ 60-year-old group was underrepresented, despite strategies being employed to encourage more respondents from the ≥ 60-year-old group. This was expected, given the accessibility of the online survey. Additionally, each of the eight age groups under 60 years made up 9.18%–15.48% of the overall sample, which shows a sufficient representation of each age group. More than half of the participants (55.1%) reported that they belonged to the grassroots and lower-middle class, 39.6% reported being middle class, and 5.4% reported being upper-middle and upper class, resembling the class structure of Hong Kong society. Using the Chi-squared test and ANOVA, we could not reject the null hypothesis that the characteristics listed are not systematically different across groups (all p-values > 0.05). Thus, we were confident that the independent variables of samples randomly assigned to the three groups were not statistically different. 3.2 Difference in support for the vaccination program across the control and treatment groups shows the percentage of supporting respondents by political stance and treatment group. A Chi-squared test was conducted to test whether the difference in support for the vaccination program was statistically significant. Support varied widely across political stances regardless of the treatment given. Within the pro-government stance, 80.8% of the respondents supported the vaccination program, and the corresponding figure was 54.2% for other political stances. Support in the opposition stance was the lowest at 38.7%. The overall treatment result matched the expected pattern, with the aligned treatment group having the highest support percentage (55.4%), followed by the control group (52.7%) and the conflict treatment group (48.7%). However, the difference was only statistically significant within the opposition political stance (p = 0.017). This may be because the efficacy of expert advice may vary across respondents with different political stances. People of different political stances have starkly distinct political attitudes in other dimensions, as summarized in . Opposition supporters had the highest percentage of university education, had the best knowledge of COVID-19, demonstrated the lowest trust in the government, had the strongest preference for freedom over pandemic control, were most politically attentive, and strongly agreed that China influenced Hong Kong’s COVID-19 policymaking. Other political stances came second in all dimensions except political attentiveness, for which they had the lowest scores. The differences were statistically significant (p ≤ 0.001). The three political stances did not differ statistically in terms of class (p = 0.483) and age (p = 0.237). 3.3 Association of political attitude and support for the government vaccination program The results of the logistic regression analyses are shown in .
Model 1 included COVID-19 experience and demographic factors but excluded variables on political attitudes. Without controlling for political variables, the model explained 14.5% of the variance in support for the vaccination program. Expert advice did not affect the support. Quarantine experience (odds ratio [OR]: 2.098, p = 0.006) and awareness of COVID-19 news (OR: 1.248, p < 0.001) positively predicted support. In terms of demographics, men showed higher support than women (OR: 1.38; p = 0.014). An increase in age increased the likelihood of supporting the program (p < 0.001). University education was not a predictor in this dataset. People belonging to the grassroots and lower-middle class were 47.2% (p < 0.001) less likely to support the vaccination program than those belonging to the middle class. People of origins other than Hong Kong were 5.165 times (p = 0.001) more likely to show support. Model 2 examined the association of political stance and support for the program using pro-government and opposition dummy variables. Pro-government respondents were more likely to support vaccination (OR: 2.820, p < 0.001), whereas opposition respondents were less likely (OR: 0.523, p < 0.001). The difference in support for the vaccination program between the two groups was 5.4 (=2.820/0.523) times. Other variables such as quarantine experience, awareness of COVID-19 news, and demographic variables remained statistically significant and of a similar magnitude. Model 3 included all variables related to political attitude, and the explained variance was 25.9 percentage points higher than that of Model 1. Trust in the government (OR: 1.307, p < 0.001), preference for pandemic control over freedom (OR: 1.404, p < 0.001), and political attentiveness (OR: 1.114; p = 0.014) increased participants’ likelihood of supporting the vaccination program. A region-specific factor of China’s influence on Hong Kong policymaking was included. Hong Kong is a special administrative region in China. This variable reflected the perceived relationship between Beijing and the local government. Hong Kong residents who agreed that China had influenced Hong Kong policymaking had a lower tendency to support the vaccination program (OR: 0.884; p = 0.007). After controlling for political attitudes, the political stance of pro-government (p = 0.352) or opposition (p = 0.976) did not significantly influence support for the vaccination program. Model 4 added the interaction terms between treatment and political stance. In this model, health expert advice remained statistically insignificant unconditionally in explaining support for the vaccination program. Concerning the interaction terms, a pro-government stance did not have an interaction effect with the treatment groups assigned. However, an opposition stance had a significant interaction with health expert advice. The opposition respondents showed less support than the pro-government respondents (OR: 0.479, p = 0.019). When compared with the conflicting health expert treatment, the opposition respondents assigned to the control group showed higher support (OR: 2.847; p = 0.012), and when given the aligned health expert treatment, the support increased further (OR: 3.245, p = 0.006). shows the predictive margins of support for the vaccination program across treatment groups and political stances with a 95% confidence interval.
Opposition supporters who received the conflicting treatment showed statistically lower support for the vaccination program than those who received the aligned treatment. In contrast, there was no statistical difference regarding the treatment group for pro-government and other political stances. Wariness of COVID-19 was a statistically significant negative predictor of support for the vaccination program (OR: 0.880, p = 0.048). Model 5 tested the interaction effect between the health expert advice and political attentiveness. Results show that politically attentive respondents were more affected by the treatment effect in both comparisons: between the conflict treatment and control groups (OR: 1.190; p = 0.036) and between the conflict and aligned treatment groups (OR: 1.242; p = 0.010).
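For readers less familiar with odds ratios, the percentage differences quoted in this section follow directly from the reported ORs; illustratively, using figures quoted above (changes are expressed on the odds scale):

```latex
\[
\%\Delta\,\text{odds} = (\mathrm{OR}-1)\times 100\%, \qquad
\text{e.g., } \mathrm{OR}=1.248 \Rightarrow +24.8\%, \quad \mathrm{OR}=0.523 \Rightarrow -47.7\%,
\]
\[
\frac{\mathrm{OR}_{\text{pro-government}}}{\mathrm{OR}_{\text{opposition}}} = \frac{2.820}{0.523} \approx 5.4 .
\]
```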
Discussion This study primarily aimed to understand the extent to which support for the government vaccination program is related to political attitude and the efficacy of health expert communication. Political stance was strongly associated with support. Trust in the government, preference for pandemic control over freedom, political attentiveness, and perception of China’s influence on Hong Kong policymaking were explanatory variables for the support. The effect of health expert communication was pronounced in the opposition stance and politically attentive respondents, and this may help inform strategies to boost support. It is essential to understand the political and social contexts before and at the time the study was conducted to interpret the results. Before the COVID-19 pandemic, 2019 was a year of political instability in Hong Kong, triggered by the introduction of an extradition bill that would allow criminal suspects to be arrested in the Hong Kong jurisdiction and transferred to the jurisdiction of mainland China for trial. At their peak, anti-government protests attracted 1.5–2 million people (or more than one-fifth of all residents) to the streets. After the social unrest, China’s National People’s Congress Standing Committee passed the Hong Kong National Security Law to restore stability. With extraordinary measures, including the disqualification of opposition lawmakers and the postponement of the Legislative Council election, the swift change in the political landscape undermined political trust in the government. Satisfaction with the Hong Kong government dropped to an all-time low in March 2020, with 82.5% expressing dissatisfaction. In May 2021, when this survey was conducted, 62.8% of the respondents were still dissatisfied with the government. As reported in the literature, low trust in health authorities and government institutions correlates with low compliance with public health policies, which could undermine COVID-19 control measures. After the COVID-19 outbreak, the government implemented several pandemic control measures that opposition supporters viewed as having the hidden aim of silencing anti-government voices. For example, the social gathering ban meant that demonstrators would be subject to fines. The opposition supporters saw the government using pandemic control as a reason to postpone the Legislative Council election. Given the low satisfaction with the government amidst social unrest, the involvement of independent health experts in providing professional advice and disseminating information about pandemic control measures was a potential way to encourage compliance. Scientists have gained high levels of public trust in many countries. Health experts could help frame the issue as a public health problem rather than a political problem, and package the choice of measures as informed by scientific evidence rather than driven by bureaucratic or political concerns. Consequently, this could depoliticize COVID-19 measures. The first research question of this study examined the association between political attitude and government vaccination program support. This study measured political attitudes using different variables and examined them individually. Further, Model 2 found the partisan effect to be strong. In authoritarian regimes, the major political cleavage is between the government and the opposition. Support for the vaccination program among the pro-government participants was 5.4 times higher than among opposition participants.
A possible explanation for this is a co-partisan effect: an endorsement from the government or politicians from the same parties can enhance compliance. Another possible explanation is “affective polarization,” meaning that people choose positions different from the party they distrust or dislike. For instance, opposition supporters may refuse to vaccinate to show defiance against the government. This tendency has been supported by evidence on adherence to COVID-19 measures. Partisanship was found to be a strong predictor of vaccine acceptance/hesitancy in other studies. Conservatives in the US, Brazil, and globally were more anti-science and had less perceived risk of COVID-19 than liberals, which led to their lower support for COVID-19 measures. Interestingly, in Hong Kong, the more liberal opposition supporters had lower support for the vaccine program than conservative pro-government supporters. The political attitudes of different partisan supporters in multiple dimensions can help us understand the results of this study. In Model 3, when trust in government, preference for pandemic control over freedom, China’s influence on Hong Kong policymaking, and political attentiveness were statistically significant, political stance became statistically insignificant. In the following sections, each of these political attitude variables is discussed. For each unit increase in trust in the government on a scale of 0–10, support for the vaccination program increased by 30.1%. A strong association is observed in the existing literature on COVID-19 vaccination. Political trust in a government is the belief that the government will take care of citizens’ interests; it can be issue-specific and relational, concerning whether one can trust a party to perform a specific job to a certain standard, and it influences the extent to which government actions are supported. Respondents were asked whether they trusted the government to perform the specific task of coping with the effects of COVID-19. This question was taken as institutional and heuristic, as no separate government authorities, departments, or officials were asked about. The trust level measured in this study was 4.93/10, similar to that in another study conducted in 2020, which measured 3.75/7. Trust in the Hong Kong government is reflected in the common ratings of “satisfaction with the government” (from −100% to +100%) in a rolling survey conducted since 1997. The net value dropped dramatically in April 2019, from −18.3% to its lowest value of −73.7%, and had not returned to previous levels by August 2021. Preference for pandemic control over freedom is another robust predictor of program support. For each unit increase in preference for pandemic control over freedom on a scale of 0–10, support increased by 40.8%. Other recent studies of the Hong Kong population have yielded seemingly contradictory results. One study found that perceived infringement of freedom had no statistical association with social distancing behavior. Another study found that people from Hong Kong disagreed more than they agreed that the requirement of vaccination for travel was an infringement of personal freedom. Notably, this study refers to the freedom to choose to vaccinate, whereas the other studies referred to the freedom to gather or travel. Acceptance rates depend heavily on the type and degree of freedom being restricted.
For example, people living in Hong Kong were more resistant to privacy infringement by digital contact tracing than to travel restrictions. Similar results were obtained in the US and the UK. The second explanation is the different time frames in which the surveys were conducted. In 2020, when people first became aware of COVID-19, they were more cautious and willing to sacrifice freedom to help control the pandemic. As the pandemic continued, with more information about the nature of the pandemic and the effectiveness of counter-virus measures, people may have reassessed their risk perception and reevaluated the need for vaccination. This time-variation argument is supported by the different COVID-19 vaccine acceptance in two waves of the same study in Hong Kong. China’s influence on Hong Kong COVID-19 policymaking is a region-specific variable. However, it could also be understood as the influence of the central government on local government and, given the political context of Hong Kong, influence from an authoritarian source. Hong Kong has been a special administrative region with the autonomy of its policymaking granted by its mini constitution. The 2014 Umbrella Movement and the 2019 anti-extradition law movement were clear signs that parts of the society were resistant to interventions from Beijing. The results show that each unit increase in this belief (on a scale from 0 to 10) reduced support for the vaccination program by 11.6%. The result suggests that sentiment about China’s intervention in Hong Kong affects public health as well as political issues. Politically attentive respondents were 11.4% more likely to support the vaccination program for each unit increase on the 11-point scale. This may be because politically attentive respondents are more civically engaged. Thus, they are more likely to support pro-social measures for the public good. Notably, a study in the early phase of the pandemic in Hong Kong found that a robust civil society that was politically engaged in the 2019 political movement helped cope with the shortage of masks and fill gaps in government measures. To the best of our knowledge, few studies have examined the association between political attentiveness and pandemic control. This factor was included in one US and UK comparative study on COVID-19 but only served as a control. Another US study included related questions on political interest and political knowledge, which were not associated with changes in behavior and policy support during COVID-19. Therefore, the present study is the first to identify the association of political attentiveness and vaccination program support. The second research question concerns the efficacy of health expert communication. Health expert communication about vaccination did not affect overall support for the government vaccination program. Nonetheless, it affected opposition supporters and politically attentive respondents. Previous studies have revealed the enhancing effect of expert advice on compliance during COVID-19 and other epidemics. However, the result was assumed to be general, and the expert communication effect was found to be similar across partisanship in a cross-country comparative study. In contrast, this study revealed that the expert communication strategy has a differentiating impact on specific political groups, which is novel and needs further exploration. In Model 4, the effect of health expert communication was significant only within the opposition stance.
It increased support for the vaccination program by 185% and 225% for the control and aligned treatment groups, respectively, compared to the conflict treatment, but not among the pro-government supporters. This shows that the strategy of health expert communication worked well for the opposition and could compensate for their low program support. In contrast, there was no statistically significant difference regarding the treatment given between pro-government and other political stances. One possible explanation for the strong expert communication effect in the opposition respondents was the need for an outside source to verify the government’s claims about the vaccine. Similar to another Hong Kong study, people critical of an authoritarian government found the information provided by a non-government source to be more credible than that provided by a government source. The result also supports the finding that liberals are more pro-science and, hence, more receptive to scientific advice from health experts. Moreover, the results point to the importance of health experts not publicly providing advice that conflicts with the government’s position, as it may reduce support for public health measures among people with certain political stances. However, ethical issues may arise when scientific evidence is inconsistent with the government’s position. Finally, Model 5 examines the interaction effect between the treatment group and political attentiveness. For each unit increase in political attentiveness on a scale of 0–10, expert communication increased support for the vaccination program by 19.0% and 24.2% for the control and aligned treatment groups, respectively, compared to the conflict treatment group; this is another novel finding of the expert communication effect. Public health experts come from the epistemic community and represent a source of expertise independent of the government. The explanation may be that politically attentive people are more civically engaged. Therefore, they are more receptive to advice from other members of society. This study had several limitations. First, the survey was not conducted using random representative sampling. It should be stressed that the result was not meant to infer the population support rate. The study’s main contributions are the political explanations and the effects of health expert advice on support for vaccination programs, which do not strictly require representative sampling. Second, the data were cross-sectional and not temporal. This study focused on a single point in time to determine support for the vaccination program. Third, there might be a possible question order effect. The appearance of political attitude questions before the treatment may have affected participants’ responses. However, considering a reverse order, if respondents were given different excerpts before they answered the political attitude questions, there may have been a treatment effect on the independent variables of political attitudes. Fourth, Prof. Yuen’s words were used in the excerpts. Respondents’ perception of him and other past exposures may have reduced the size of the effect. However, a few reasons suggest that this may not seriously affect the soundness of the results. First, Prof. Yuen appeared in the media frequently before the survey period. The respondents would also have heard about different recommendations given by various people and authorities throughout the vaccine debate.
Therefore, it is not likely that a respondent would remember the specific excerpts used in this survey. Second, this study partially captured the pretreatment exposure by including variables such as knowledge of COVID-19 and awareness of COVID-19 news. Moreover, participants were randomly allocated to the experimental groups. Thus, there is no compelling reason to believe that respondents assigned to different groups had different pretreatment effects. Future studies may examine the impact of expert advice over a more extended period. The prolonged use of public health experts by a weakly trusted government may, in turn, reduce people’s trust in experts over time.
Conclusion Political attitudes were associated with support for the government vaccination program in the authoritarian regime in Hong Kong. The support was divided across political stances. Further investigation found that trust in the government, preference for pandemic control over freedom, political attentiveness, and perception of China’s influence on policymaking in Hong Kong were associated with the tendency to support the vaccination program. Further, these associations were more robust than those of education and knowledge of COVID-19, which other studies have explored. Participants in the opposition political stance had significantly lower support rates. This could be countered by positive health expert communication about the government’s vaccination program. Politically attentive people were more receptive to health expert advice. This shows that governments could utilize health experts to encourage vaccination even in communities with low trust in the government. If forced vaccination imposes personal costs and reduces one’s welfare, then using health experts in a persuasive capacity could encourage voluntary vaccine uptake, maintaining individual, and thus collective, social welfare.
Ethics approval Ethics approval was obtained from the Human Research Ethics Committee of the University of Hong Kong (Ref. No. EA210118). All the participants gave informed consent before taking part.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
A trauma-informed approach to the pediatric COVID-19 response | 46ed7e75-a451-4314-b1aa-f570a4c95d1c | 7914025 | Pediatrics[mh] | The coronavirus disease (COVID-19) pandemic and its related public health measures have undoubtedly affected the psychological well-being of children. In the United States, the national survey, “Well-Being of Parents and Children During the COVID-19 Pandemic,” highlighted that 14% of respondents reported worsening behavioral health for their children since the start of the pandemic, a phenomenon also reported in China. In this commentary, we explain how pandemic-related well-being effects can be explained by the neurobiology of trauma and toxic stress. We then introduce a trauma-informed framework that pediatricians can use in clinical practice to promote the health, well-being, and safety of children during the COVID-19 pandemic and beyond.
The Substance Abuse and Mental Health Services Administration defines trauma as events or circumstances that are experienced by individuals as physically or emotionally harmful with lasting effects on health and well-being. Traumatic exposures can cause sustained activation of the body's stress response system, or “fight, flight, or freeze” response. When combined with the lack of important resiliency factors that can buffer tolerable levels of stress, the traumatic exposure can lead to toxic stress. In contrast to positive and tolerable levels of stress, the neuroendocrine shifts associated with the toxic stress response can disrupt the foundation for optimal brain growth and functioning. In the immediate period, this may manifest as a change in behaviors or symptoms of anxiety and depression. Over the long term, the downstream effects of the stress response can disturb whole-body homeostasis, leading to inflammation and activation of pathways causing disease. As the seminal adverse childhood experiences study demonstrated, exposure to childhood trauma can lead to an increased risk for a range of diseases such as asthma, cardiovascular disease, cancer, and diabetes, which can lead to increased morbidity and even premature death.
During COVID-19, uncertainty, concern, and isolation are common in the lives of many families, as they balance upholding wellness with the responsibilities of childcare and employment in a landscape of limited resources. These pandemic-related conditions can lead to high levels of stress, which can be even more pronounced in low-income and minority communities, for whom the stress of COVID-19 can be layered on additional adversities and in which lack of access to community resources can intensify the impact of the stressors related to COVID-19. While the toxic stress and traumatic effect of pandemics have been well established, COVID-19 has brought new and unforeseen challenges, not characteristic of prior pandemics. Children no longer engage in their daily routines, have lost access to many of their regular environments, and are not allowed to play, hug, be held, or visit with friends and family. The necessary public health measures related to COVID-19 effectively remove access to the precise resiliency and protective factors that help buffer traumatic experiences, thereby increasing the potential for the stress associated with COVID-19 to reach toxic levels. As stated, toxic stress related to traumatic exposures can lead to negative physical and mental health outcomes across the lifespan. With COVID-19, this effect has been underscored by the increase in behavioral health symptoms in children. If pediatricians recognize the potential traumatic effect of COVID-19 and respond appropriately with a trauma-informed approach to care, they may be able to intervene and safeguard against both the acute and lifelong negative health outcomes for our patients.
The central tenet of trauma-informed care is to consider what happened to a person, instead of what is wrong with a person. Trauma-informed care actively seeks to promote resilience to derail the pathways that lead from traumatic exposure to toxic stress and ultimately poor health outcomes. It strives to uphold the principles of respectful, patient-centered care by prioritizing trust and collaboration and promoting strengths. We propose a simple framework, “CARES,” to guide pediatricians in employing a trauma-informed approach to care in the response to the COVID-19 pandemic. C: Consider the context. This process involves employing the central tenet of trauma-informed care, thereby understanding a patient's presentation in the context of life events, i.e., considering how COVID-19 or the effects of other traumas may be playing a role in the patient's presentation. A: Ask. Pediatricians should openly discuss COVID-19 with patients/families. It is important to ask about rather than assume how COVID-19 may be affecting them, since the effects will likely differ by circumstances. Questions for discussion can include how the pandemic has affected access to school, employment, or community supports. To understand the impact, pediatricians can also discuss effects on sleep, self-care, and coping. Pediatricians can also use this opportunity to inquire about additional stressors, which may be compounding the stress of COVID-19. R: Resiliency and Resources. It is essential to remain strengths-focused and assets-based to reinforce resilience factors that may lessen the stress effect of COVID-19. Positive social interactions and supportive family environments have been shown to be protective against traumatic exposures and toxic stress. During COVID-19, parents can begin with actions as simple as scheduling regular family time for healthy engagement with appropriate touch/hugs, maintaining a daily schedule that includes consistency in regular play/outdoor activities, reaching out to loved ones via technology, validating difficult emotions, expressing honesty about their own struggles, and prioritizing their own well-being. Families can also be referred to community services and resources that have been known to lessen the impact of traumatic exposures, such as parenting support and mental health services. E: Educate. Pediatricians should educate patients and caretakers on the relationship between traumatic exposures, stress, and health. In this way, they can help our families reframe their child's behavioral responses and medical issues in the context of an acute reaction to stress. We cannot expect our patients/families to engage in discussions related to their stress, participate in related interventions, and be mindful of ways to lessen the impact of traumatic exposures if they do not appreciate the critical relationships among these concepts. As the word "trauma" may have the potential to be retraumatizing or difficult for some patients to relate to, pediatric providers can substitute more neutral terminology such as "(toxic) stress" that alludes to the neurobiological impact of trauma. Pediatricians can highlight for parents and patients that through engagement in healthy, resilience-building activities like those described above, they have the ability to support their own wellness and combat the potential health effects of the COVID-19–related stress they may be experiencing. S: Self-Care.
By recognizing the trauma and stressors in their own lives, pediatric providers can strengthen their ability to meet the challenges posed by COVID-19. The CARES framework can be used for a self-check. Concerns about our own health, risks of bringing the virus home to loved ones, social isolation, and personal and financial challenges have to be addressed. We cannot sacrifice our own health for the well-being of our patients or we will fail at both. The “CARES” approach can be applied during any number of clinical encounters, but can be especially helpful in targeting critical issues during this time. Recognizing that the families that may need this approach most may not be seeking care at this time, we encourage the creation of systems to reach out to all patients/families to offer them the opportunity to interface with their healthcare providers in innovative ways. The telehealth platform can help providers better envision what patients experience and can provide windows of opportunity for meaningful conversations and insights into stressors and barriers to healthy living.
To effectively care for children during COVID-19, pediatricians need to appreciate that the pandemic has the potential to be a traumatic exposure for children with lasting physical and mental health effects. This understanding is critical as it provides not only an explanation for the increase in well-being and mental health concerns already documented during the pandemic but also a clear way forward, with a trauma-informed approach. By employing the “CARES” framework, pediatricians can openly discuss the pandemic with families, collaborate to build resiliency and encourage engagement in activities and resources that are protective. In this way, pediatricians can proactively intervene to turn experiences that could have caused toxic stress into experiences that cause only tolerable levels of stress, which are less damaging. This approach could potentially prevent both the short- and long-term health consequences resulting from the traumatic effect and toxic stress exposure of COVID-19. Over time, COVID-19 can be as damaging as the traumatic exposures included in the adverse childhood experiences study, given the chronic stress that the pandemic and its related public health measures have incited for our pediatric patients and their families. Pediatricians are uniquely positioned to mitigate the extent to which the pandemic affects the well-being of the nation's children, and we believe it is our responsibility to do so, to uphold the health and wellness of pediatric patients across their lifespan.
Not applicable.
The authors have no financial relationships relevant to this article to disclose.
The authors have no conflicts of interest relevant to this article to disclose.
Evaluation of Turner Syndrome Knowledge among Physicians and Parents Turner syndrome (TS) is one of the most commonly observed chromosomal abnormalities, estimated at around 1 in 2500 live births. To the best of our knowledge, there are no studies related to the incidence of TS in Turkey. Nevertheless, in a multicenter study carried out in 2013-2014, 842 patients with TS aged 0-18 years were examined retrospectively in 35 different centers, and the average diagnosis age was determined as 10.2±4.4 years. It is thought that TS is diagnosed at a later age in Turkey. What this study adds? This study shows that physicians do not have adequate knowledge of TS. Poor knowledge about TS may increase diagnosis delays. The education program about TS should be revised and implemented to address this problem at the medical faculty and post-graduate levels.
Turner syndrome (TS) is a sex chromosome abnormality in females, characterized by partial or complete loss of one of the X chromosomes. In nearly half of patients, it can be diagnosed in infancy with the presence of typical clinical findings. While a limited number of patients with TS are diagnosed with short stature during childhood, the rest of them are diagnosed with primary amenorrhea in adolescence. With early diagnosis and appropriate treatment (growth hormone treatment, estrogen replacement, training and psychological support), these patients have a chance to participate in academic and social life, and they also achieve nearly normal adult height, bone density and sexual development. Several studies have focused on diagnosing TS earlier. Chronic complications may be prevented by earlier diagnosis and initiating treatment at birth or during infancy. Additionally, parents can deal with the situation more easily with early TS diagnosis. Although TS is common, the exact incidence of TS in Turkey is unknown, and awareness regarding this issue may be inadequate. Patients with TS are diagnosed late in Turkey. Therefore, in this study, we aimed to evaluate the TS knowledge and awareness levels of physicians and parents of children with TS.
This descriptive study was a questionnaire survey. The researchers designed two questionnaires: one to be administered to physician volunteers (n=140) and the other to parents (n=30). The questionnaire for the physicians was developed based on the current literature, guidelines, and expert opinions. The questionnaire for the parents was developed based on family information flyers from the internet and expert opinions (TS: a guide for families https://turnersyndromefoundation.org/wp-content/uploads/2017/08/New-Turner-Syndrome-Guide-for-Families-Patricia-Reiser-CFNP-and-Marsha-Davenport-MD.pdf and http://nhfv.org/wp-content/uploads/2016/02/Turner-Syndrome-A-Guide-for-Families.pdf). This study included pediatricians, gynecologists, family physicians and parents whose children were diagnosed with TS. Ethics committee approval for the surveys was obtained from the Katip Çelebi University Local Ethics Committee (date: 16 June 2016, ethics approval number: 194). The physicians and parents in question were informed about the questionnaire, and surveys were performed face-to-face by NK after the informed consent form had been signed. An attempt was made to word all questions in a neutral manner. All response data of the participants were analyzed anonymously. No incentives were provided to the respondents. The physician survey comprised 18 multiple-choice questions. The first four questions assessed the physicians' specialties, their proficiency regarding TS knowledge, the number of years working in their profession and the institutions where they worked. The other 14 questions covered TS epidemiology, clinical findings, diagnosis, treatment and follow-up recommendations. There were 19 "yes/no" questions in the survey designed for the parents. The first few questions concerned the demographic features (age, gender and education status) of the parent, and the remaining questions concerned the diagnosis, treatment and follow-up of TS. The patients' age at diagnosis was obtained from medical records. The answers were evaluated as correct or incorrect by the researchers. Examples of the questionnaires applied to both physicians and parents are given in the supplementary documents (Supplementary 1, 2). Statistical Analysis The sample size was calculated according to the estimated size within the sampling universe using the formula referred to as 'the formula to estimate the number of individuals for a known sample and width of population'. The analysis of physician survey data was carried out using the Statistical Package for the Social Sciences, version 22.0 (IBM Inc., Armonk, NY, USA), and comparisons of percentage distributions were performed using the chi-square test. Descriptive statistics for the family surveys, however, were presented as frequency, percentage, average, standard deviation, median, minimum, maximum and range values. In the analysis of differences between measurement values of the two groups, the Mann-Whitney U test or the independent samples t-test was used according to the distribution. A significance test for the difference in two proportions and a Pearson chi-square test were used. A p<0.05 was considered statistically significant.
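The parameter values used are not reported in this excerpt, but the finite-population sample-size formula referred to above is commonly written as follows (a statement of the standard formula, not a reproduction of the authors' calculation):

```latex
\[
n = \frac{N\, z^{2}\, p\,(1-p)}{d^{2}\,(N-1) + z^{2}\, p\,(1-p)}
\]
```

where N is the population size, z is the standard normal deviate for the chosen confidence level (e.g., 1.96 for 95%), p is the expected proportion, and d is the acceptable margin of error.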
A total of 140 physicians working at training and research hospitals, state hospitals, university hospitals and primary care clinics (PCCs) and 30 parents whose children had been diagnosed with TS were included in the study. Of the physicians, 62.9% were family physicians, 26.4% were pediatricians and 10.7% were gynecologists. A total of 50.7% of physicians were working at training and research hospitals, 15% at universities, 3.6% at state hospitals and 30.7% at PCCs and 62.9% of them had been working for 10 years or less. Physician Knowledge of Turner Syndrome Thirty-five percent of physicians self-reported that their knowledge level of TS was adequate, 49.3% indicated that their knowledge was insufficient and 15.7% reported having no knowledge of TS. When all the physicians were considered, the rate of correct answers was 50.71±16.17%. The percentages of correct answers among the 88 family physicians, 37 pediatricians and 15 gynecologists were 46.08±16.51, 56.2±19.02 and 58.3±20.4, respectively. Responses to several questions related to the frequency and findings of TS are presented in . For the question concerning chromosomal abnormality, the pediatricians’ accurate answer rate was higher than that of other specialties (p=0.023). The question that referred to the type of hypogonadism was answered incorrectly by 72.1% of all physicians; however, 53.3% of gynecologists answered it correctly. Gynecologists had the highest accurate answer rate related to fertility and malignancy questions (p=0.028); 64.3% of physicians mistakenly thought that the intelligence level of patients with TS was low. Pediatricians were significantly more well-informed regarding this issue (p=0.018). Approximately 63.6% of the physicians gave incorrect answers to the question regarding estrogen and growth hormone treatment . Knowledge of Parents About Turner Syndrome Thirty parents whose children were diagnosed with TS participated in the study. The mean age of girls with TS was 58.8±50.95 months. The median diagnostic age was 66 months (1-168 months). The parents’ percentage of correct answers was 68±15%, and no significant difference was found between mothers and fathers (mothers 74%, fathers 63%; p=0.063). The rate of correct responses among parents was higher than that of physicians, but the difference was not significant. Parent responses to several questions regarding TS are presented in . The median correct response rate of primary school graduates was 74% (range, 37-84%), and the median correct response rate of high school or university graduates was 66% (range, 32-89%). There was no significant difference between the parents according to their educational status (p=0.690). However, the median (range) age at diagnosis was significantly younger in children of parents who graduated from high school [21 months (1-120) vs. 90 months (1-168); p=0.008].
TS is one of the most commonly observed chromosomal abnormalities, with an incidence of 1 in 2500 live births, and affects nearly 1.5 million women in the world. There are no studies related to the incidence of TS in Turkey. Nevertheless, in a multicenter study carried out in 2013-2014, 842 patients with TS aged 0-18 years were examined retrospectively in 35 different centers, and the average diagnosis age was determined as 10.2±4.4 years. In a study carried out in England, it was estimated that there were 12,500 TS cases; however, it is known that there are approximately 1000 cases in TS support associations and expert hospital clinics. This means that a large number of cases cannot be diagnosed and do not receive medical care. Compared to developed countries, it is thought that TS is diagnosed at a later age in Turkey. This is most likely due to the lower awareness level of Turkish physicians. For this reason, we aimed to investigate TS knowledge and awareness levels of physicians and parents whose children were diagnosed with TS. In the survey, only just over half (50.71±16.17%) of all questions were correctly answered by physicians. It is not possible to compare our results with previous ones because, to the best of our knowledge, no study on this topic has been published in the past in Turkey or in other countries. The question related to short stature, the most common finding in TS, was not answered properly by 51.1% of family doctors and 29.7% of pediatricians. The growth curve and monitoring of children are important in primary care. The reason for the insufficient knowledge level of physicians may be that they do not encounter such patients because there is no referral chain system. When there is no referral chain, it is difficult for family physicians to maintain health care services, and this situation forms the weakest point of the family medicine practice. In the study by Kringos, this situation was cited as one of the most important reasons why Turkey ranks in the poor category for primary care health services. The final adult height of TS patients is positively related to a younger age at diagnosis and the duration of growth hormone treatment. If primary care physicians miss short stature, the result will be late diagnosis and insufficient benefit from growth hormone treatment for all children who would have benefitted, including girls with TS. When physicians were asked about the intelligence level of children with TS, 64.3% of them answered incorrectly. The inadequate knowledge level of physicians about this issue can cause children diagnosed with TS to be guided in the wrong way and perhaps lead to their exclusion from society. Although patients with TS tend to have some problems with mathematics, these can be overcome with additional time and adequate education. The overall education level of women with TS is equal to or better than that of the overall female population. When educational and psychological support is commenced early in TS, it can help academic success and social integration. The question about high gonadotropin levels in TS was answered more correctly by gynecologists, compared to the physicians in other specialties. This shows that undiagnosed and late-diagnosed girls sought the care of gynecologists with a primary amenorrhea complaint. Although family physicians and pediatricians had inadequate knowledge regarding fertility, 66.7% of gynecologists answered the question correctly.
This could be explained by the fact that patients diagnosed with TS consult them for infertility treatment. Most women with TS will be infertile; however, pregnancy has been achieved with oocyte donation and in vitro fertilization .

Today, many diseases can be detected with simple screening programs, and in this way more significant complications can be prevented. The standard approach for cardiac evaluation in TS is echocardiography and four-extremity blood pressure measurement, which should be performed on every patient at the time of diagnosis . Even if echocardiography is normal, every patient should be evaluated with magnetic resonance imaging as soon as it is feasible without the need for general anesthesia . Physicians responded incorrectly to the question on cardiovascular disease 58.6% of the time. This lack of knowledge can lead to late diagnosis of cardiovascular disease and increased mortality. There is an increased risk of gonadoblastoma in patients with TS who carry Y chromosome material, and removal of the streak gonads is known to be performed by obstetricians and gynecologists. It was therefore not surprising that gynecologists were more knowledgeable than other physicians about the association between TS and malignancy.

The TS knowledge level of physicians was unsatisfactory compared with that of the parents of children with TS. As families research TS in detail after diagnosis, it is reasonable to expect their knowledge level to be higher. Because TS is little known in society, there is a heightened level of concern in families, and parents want to obtain all information related to the disease. Parents gave 90% correct answers to the question about TS being a genetic disease. We can infer that this chronic condition led to desperation in families, which increased their solution-oriented searches. Nevertheless, parental knowledge of the conditions accompanying TS was not sufficient, and parents should be informed about these by specialists. In a study of children with chronic disease, the relationship between the families of hospitalized children and the nurses providing their care was reported to have an important effect; however, the physicians were not well informed about problems of psychosocial adjustment . Meeting the psychosocial and educational requirements, as well as the medical requirements, of the population affected by chronic disease will increase the children's and families' quality of life in both the acute and follow-up periods. Additionally, it will positively affect communication between health personnel and families.

The awareness of primary care family physicians, pediatricians and gynecologists regarding early diagnosis and treatment should be enhanced to decrease mortality and morbidity in patients with TS, and our study has revealed shortcomings in this area. Major public campaigns have been shown to be effective in refreshing the knowledge of both physicians and the families of sick children, with an apparent increase in the early diagnosis of diseases . In healthcare organizations, experts should regularly provide staff education programs on issues such as the requirements of ill children and their parents, and counselling services should be offered to all staff members.
Qualified staff should be employed in healthcare organizations to provide psychological support, social orientation, and special programs for sick children and their families. Therefore, education programs should also be maintained after graduation. New early diagnosis strategies should be developed to overcome delays in the treatment of patients with TS.

Study Limitations

The limitation of this study is the relatively small number of physicians and of parents of girls with TS. Accordingly, generalizations from these findings to the total population of physicians and families with children diagnosed with TS must be made cautiously.
This study indicates that physicians' knowledge of TS, especially that of family physicians, is insufficient, even though TS is a relatively common condition. To prevent late diagnosis, increased complications and inadequate treatment in patients with TS, postgraduate education programs for physicians should be expanded, and a patient referral chain must be implemented in the health system. The parents' answers showed that they were worried about TS and its associated problems (short stature, infertility, etc.) and that they sought information about TS from clinicians, brochures and the internet. Periodic information and training programs should be organized for families of children with TS. Cooperation between physicians and parents provides better follow-up for these children and better control of the conditions accompanying TS.